Strategic planning for tourism industry using SWOT and QSPM

Article history: Received September 28, 2014; Accepted January 15, 2015; Available online January 23, 2015

Tourism plays an essential role in today's economy, and Iran is well positioned in terms of tourism resources, whether natural, historical or cultural, although these resources have not been utilized properly. One region with considerable potential and capability for developing nature-based tourism is the district and city of Galugah. The purpose of this study is to provide a strategic assessment and optimization strategies for the development of the tourism industry, so as to reach sustainable tourism development in this city. The study uses two techniques, namely the Quantitative Strategic Planning Matrix (QSPM) and the strengths, weaknesses, opportunities and threats (SWOT) analysis, to determine the necessary guidelines for the development of tourism in the city of Galugah, Iran. The study first uses SWOT to categorize the different factors, and QSPM is then applied to prioritize them. The results of this study show that, in the initial stage of analyzing the external and internal factor assessment matrices, threat T2 (pollution of the environment and the river, and the extinction of plant species as a result of pollution) and opportunity O2 (a climate suitable for developing nature tourism in summer) were recognized as the highest-priority external factors. The intense cold of the region in summer (W3) and the existence of unique recreational places such as Amarg (S5) were recognized as the most effective and highest-priority internal factors for the tourism development of the city of Galugah. © 2015 Growing Science Ltd. All rights reserved.

Introduction

Tourism has an impact on economic, cultural and political issues (Simpson, 2001). Tourism may create jobs, increase foreign travel demand as well as security in the country, and constitute a reliable source of income for local residents (Inskeep, 1991). Many countries earn more income from tourism than from other industries such as natural resources. Tourism also plays an essential role in encouraging investment in infrastructure, generating revenue for the state, and providing direct and indirect employment across the world (Heath & Wall, 1991). The development of tourism in industrialized countries may diversify income and reduce imbalances in the economy (Getz, 1983). In several developing countries, it generates opportunities for exports, production and job creation. Moreover, the advantages of tourism are not limited to economic interests; it also creates an opportunity to introduce the culture of a country to other countries. The tourism characteristics of each location are affected by the validity, nature, role and function of its various religious, cultural, recreational, commercial and general attractions (Allen, 1998). They are also influenced by the social and cultural characteristics (including religious beliefs) of the local residents and by the tourism economy. Galugah is a county in Mazandaran Province, Iran, and the city of Galugah is its capital; the county was separated from Behshahr County in 2005. At the 2006 census, the county's population was 39,450 people in 10,365 families. Fig. 1 shows some of the remarkable areas of this city.

[Fig. 1. The region of Galugah]

The region has outstanding natural attractions, which give it good potential for the development of the tourism industry (Pak & Farajzadeh, 2007; Nouri et al., 2008; Farhoodi et al., 2009).
The proposed study

The purpose of this study is to provide a strategic assessment and optimization strategies for the development of the tourism industry, so as to reach sustainable tourism development in the city of Galugah. The study uses three techniques, namely the Quantitative Strategic Planning Matrix (QSPM), integrated environmental assessment (IEA), and the strengths, weaknesses, opportunities and threats (SWOT) analysis. The study first uses SWOT to categorize the different factors, IEA is applied to determine the internal as well as the external factors, and finally QSPM is applied to prioritize the various factors.

SWOT analysis

A SWOT analysis is a structured planning technique applied to evaluate the strengths, weaknesses, opportunities and threats involved in a particular problem. A SWOT analysis can also be applied to city development, which involves specifying the objective of the business venture or project and detecting the internal and external factors that are favorable or unfavorable to reaching that objective. The four perspectives of SWOT can be summarized as follows:

- Strengths: characteristics of the business which provide an advantage over others,
- Weaknesses: characteristics that place the business at a disadvantage compared with others,
- Opportunities: elements the project could exploit to its advantage,
- Threats: elements in the environment which could cause trouble for the business.

Identification of SWOTs is essential because they inform the later steps in planning to reach the objective (see Fig. 2).

[Fig. 2. The structure of SWOT]

The analysis consists of two major groups of factors, external and internal. Table 1 summarizes the opportunities and threats associated with the external factors. Let S_i be the Likert scale point assigned to response group i and F_{ij} be the frequency of group i observed for factor j. The score given to each factor j is then measured as follows,

R_j = \sum_i S_i F_{ij}. (1)

In addition, we may normalize Eq. (1) into weights that sum to one across factors,

w_j = R_j / \sum_{j'} R_{j'}. (2)

The proposed study designed a Likert-scale questionnaire and distributed it among a group of experts; using Eq. (1) and Eq. (2), the study calculated the weights of the internal and external factors (a small computational sketch of this scoring step is given below).
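For concreteness, the scoring and normalization in Eq. (1) and Eq. (2) can be expressed in a few lines of Python. This is a minimal illustrative sketch, not the authors' code: the Likert points and frequency counts below are hypothetical.

```python
import numpy as np

def factor_scores(likert_points, frequencies):
    """Eq. (1): score of factor j as R_j = sum_i S_i * F_ij, where S_i is the
    Likert point of response category i and F_ij the frequency of category i
    observed for factor j."""
    S = np.asarray(likert_points, dtype=float)   # shape (n_categories,)
    F = np.asarray(frequencies, dtype=float)     # shape (n_categories, n_factors)
    return S @ F                                 # shape (n_factors,)

def normalized_weights(scores):
    """Eq. (2): normalize the raw scores into factor weights summing to one."""
    scores = np.asarray(scores, dtype=float)
    return scores / scores.sum()

# Hypothetical expert responses: a 5-point Likert scale and three factors.
S = [1, 2, 3, 4, 5]
F = [[0, 1, 2],   # counts of "1" answers for factors 1..3
     [1, 2, 3],
     [4, 5, 2],
     [6, 4, 5],
     [9, 8, 8]]
print(normalized_weights(factor_scores(S, F)))  # ~[0.356, 0.326, 0.318]
```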
Discussion and Conclusion

Based on the results gathered from Table 3 and Table 4, we held a brainstorming discussion among several experts and extracted the SWOT matrix. Table 5 shows the details of our findings.

Table 5. The summary of SWOT strategies

Aggressive strategies (SO):
- SO1: Investment in and greater emphasis on the cultural, religious and historical places of this beautiful area to attract more tourists (s8, s9, s10, o6)
- SO2: Incentives for nature tourists traveling to the region through the construction of recreational and travel services (s1, s2, s3, s5, s6, o1, o2)
- SO3: Proper use of the potential of eco-tourism attractions and of tourism as the main foundation of the city (s1, s2, s3, s5, s6, o1, o2)
- SO4: Development of tourism resources and the establishment of ecotourism tours in the city (s2, s3, s5, s6, o1, o2, o4)
- SO5: Using the existing potential to develop mountain climbing and sport tourism (s1, s2, s3, s4, s5, s6, o1, o2, o3)

Review strategies (WO):
- WO1: Developing an appropriate communication network, given the current lack of one, and expanding public transport in the area
- WO2: Revision of publicity and awareness activities in the media, and creating suitable websites to introduce the eco-tourism attractions and capabilities as well as the religious and cultural aspects of the city (w4, w6, o4)
- WO3: Improvement of tourism products and joint public-private ventures (w5, o1, o5, o7, o11)
- WO4: Creating training programs (w11, w12, o1, o2, o7)
- WO5: Creating appropriate regulations to protect the environment and revising the regulations on urban land (w8, w13, o1, o2, o11)

Diversified strategies (ST):
- ST1: Development of information and education services and tourist information about the destruction of the natural environment (s2, s3, s5, s6, t1, t2, t5)
- ST2: Increased funding for the development of specialized tourist attractions (s1, s2, s3, s4, s5, s6, t10)
- ST3: Creation of new job opportunities based on natural potential and rich local culture, with the aim of protecting the region's natural and cultural identity (s8, s9, s10, s13, t1, t2, t5, t7, t11, t14)
- ST4: Prevention of environmental degradation and of the loss of vegetation and animal species (s2, s3, s5, s6, t1, t2, t5, t11, t13, t14)

Defensive strategies (WT):
- WT1: Identifying the negative effects of tourism development and trying to minimize them (w7, w8, w13, t1, t2, t5, t7, t11, t12, t14)
- WT2: Promoting health and the development of such centers in the city (w7, w10, t5, t6, t15)
- WT3: Holding meetings and seminars by the organizations responsible for developing ecotourism facilities, and offering privileges for investment in the construction of hotels, residential complexes and recreational facilities (s5, w11, w12, t8, t9, t10)
- WT4: Establishing appropriate rules and regulations to protect the environment (w7, w8, w13, t1, t2, t5, t11, t14)

Based on the scores given to each action, we summarize our SWOT strategic planning in Table 6. As we can observe from the results, the biggest advantages of the city include its virgin natural resources and beautiful landscape. Therefore, proper use of the potential of its eco-tourism attractions must be given serious consideration. The weak point identified by the survey is the poor management of the city. In our survey, 12 out of 15 threats are associated with external factors, and they are mostly related to the management of the city. The biggest external factors are environmental pollution and the lack of attention to taking care of the city.

The results of this research show that, in the initial stage of analyzing the external and internal factor assessment matrices, threat T2 (pollution of the environment and the river, and the extinction of plant species as a result of pollution) and opportunity O2 (a climate suitable for developing nature tourism in summer) were recognized as the highest-priority external factors. The intense cold of the region in summer (W3) and the existence of unique recreational places such as Amarg (S5) were recognized as the most effective and highest-priority internal factors for the tourism development of the city of Galugah. In the second stage, the final scores of the internal and external factor assessment matrices were 2.53 and 2.63, respectively, which shows that the tourism position of the city of Galugah is about average. According to the four-quadrant assessment matrix, SO strategies were given importance and priority over the other strategy types at this stage. The third stage, comparing the total attractiveness scores of the strategies, showed that strategy ST3, i.e., creating new job opportunities based on the natural potential of the region and the rich culture of its inhabitants while protecting the region's cultural identity and nature, obtained the highest score.

Table 1. The summary of the weights of external factors
Table 2. The summary of the weights of internal factors
Table 3 and Table 4 present the details of the external and internal factors.
Table 3. The summary of external factors
Table 4. The summary of internal factors
As Table 4 shows, the most important factors are concentrated on the environmental and natural factors affecting tourism development. In fact, 17 out of 52 factors are associated with this item, which shows the relative importance of environmental factors.
Table 6. The summary of different actions and strategies
High voltage vacuum-processed perovskite solar cells with organic semiconducting interlayers

In perovskite solar cells, the choice of appropriate transport layers and electrodes is of great importance to guarantee efficient charge transport and collection, minimizing recombination losses. The possibility of sequentially processing multiple layers by vacuum methods offers a tool to explore the effects of different materials and their combinations on the performance of optoelectronic devices. In this work, the effect of introducing interlayers and altering the electrode work function has been evaluated in fully vacuum-deposited perovskite solar cells. We compared the performance of solar cells employing common electron buffer layers such as bathocuproine (BCP) with other injection materials used in organic light-emitting diodes, such as lithium quinolate (Liq), as well as their combination. Additionally, high voltage solar cells were obtained using low work function metal electrodes, although with compromised stability. Solar cells with enhanced photovoltage and stability under continuous operation were obtained using BCP and BCP/Liq interlayers, resulting in an efficiency of approximately 19%, which is remarkable for simple methylammonium lead iodide absorbers.

Introduction

Organic-inorganic lead halide perovskites are being widely studied in thin-film optoelectronics and especially photovoltaics, 1 in view of their good semiconducting properties. 2 They typically exhibit a high absorption coefficient, long carrier diffusion length and high tolerance to chemical defects, and they can be prepared as high quality thin films through a variety of deposition techniques. 3 In particular, perovskite thin films can be readily prepared by solution processing or vacuum methods at low temperature, which is desirable when scaling up device fabrication. 4,5 In perovskite solar cells, the photogenerated charge carriers need to be efficiently and selectively transported to the electrodes, minimizing non-radiative charge recombination. For this reason, the perovskite film is typically sandwiched between organic or inorganic semiconductors acting as electron and hole transport layers (ETL and HTL, respectively). 6 Several studies have focused on the understanding of charge transfer and interfacial processes between the perovskite, the ETLs, and the electrode, with the ultimate goal of maintaining a high charge collection efficiency and abating non-radiative charge recombination. 7 Among ETLs, notable examples are n-type metal oxides, 8 in particular TiO2 and SnO2, while the most widely adopted organic semiconductors are fullerene derivatives. 9 Fullerenes can not only selectively transport electrons between the perovskite and the electrode, but are also capable of effectively passivating trap states and mitigating ionic migration at the perovskite surface and at the grain boundaries. 10 Efficient electron extraction, ensuring a high open-circuit voltage (Voc) and fill factor (FF), requires matching of the lowest unoccupied molecular orbital (LUMO) of the fullerene with the electrode work function. To a first approximation, this can be attained simply by using low work function electrodes such as calcium or barium, 11,12 although stable metals (Ag, Au) can also lead to ohmic contacts with fullerenes due to the formation of interfacial dipoles.
13,14 Another approach to reduce the energy mismatch between a semiconductor and an electrode is to increase the charge-carrier density in the organic semiconductor through doping. [15][16][17] In p-i-n perovskite solar cells employing fullerene ETLs, typically C60 or 1-[3-(methoxycarbonyl)propyl]-1-phenyl-[6.6]C61 (PCBM), ohmic injection is ensured by depositing a thin interlayer between the ETL and the electrode. Such interlayers include different molecules such as 1,3,5-tri(m-pyrid-3-yl-phenyl)benzene (TmPyPB) 18 or (2-(1,10-phenanthrolin-3-yl)naphth-6-yl)diphenylphosphine oxide (DPO), 19 inorganic salts such as LiF, 20 or n-type metal oxides. 21,22 The most widely adopted interlayer is a thin (5-10 nm) film of 2,9-dimethyl-4,7-diphenyl-1,10-phenanthroline (bathocuproine, BCP), which is sublimed onto C60 and covered with a silver or aluminium electrode. BCP is a wide band gap material with a deep highest occupied molecular orbital (HOMO, >6.5 eV) and a shallow LUMO (3-3.5 eV, Fig. 1a), which makes it suitable as an exciton-blocking layer in organic electronics. [23][24][25][26][27] By simply considering the flat band energy diagram in Fig. 1a, BCP would not appear to be a rational choice to match the energy levels of C60 and Ag, as its small electron affinity would hinder both electron injection and extraction at the C60/Ag interfaces. However, several reports have shown a strong chemical reaction occurring upon thermal vacuum deposition of Ag onto BCP, leading to the formation of Ag-BCP organometallic complexes. 24,28 These compounds would mediate charge transport due to the formation of new states well below the LUMO of pristine BCP, which justifies the efficient electron injection/extraction properties of BCP in optoelectronic devices. 25 This widely accepted view is challenged by recent reports where BCP was found to efficiently mediate electron transfer when placed between an indium tin oxide (ITO) electrode and C60, where the formation of organometallic species is unlikely. 29 In perovskite solar cells, non-radiative recombination is dominant at dislocations, grain boundaries and impurities, as well as at the contact interface, and in all cases it unavoidably diminishes the attainable open-circuit voltage. 30 Non-radiative recombination in the perovskite layer can be regulated through controlled film crystallization and processing, while interface recombination should be minimized through the choice of suitable transport materials and optimized device architectures. 31 The influence of interlayer chemical and electronic properties on transport and recombination in vacuum-deposited perovskite solar cells has not been fully investigated, although it is critical for modulating and maximizing the FF, Voc and stability. Here we studied the influence of interlayers and cathode work function at the C60 interface in vacuum-deposited p-i-n perovskite solar cells. In particular, we compared BCP with 8-hydroxyquinolinolato-lithium (Liq, Fig. 1a), a common electrode interlayer used in high efficiency organic light-emitting diodes (OLEDs). 32 Liq has HOMO and LUMO energies of −5.6 eV and −3.2 eV from the vacuum level, respectively, 33 and has the advantage of being easily processed by thermal evaporation or from solution. 34,35 As in the case of BCP, the electron injection mechanism of Liq is not completely understood, but the most accepted hypothesis is that it is able to release metallic lithium upon reaction with the metallic cathode, leading to interfacial reduction of the underlying ETL.
36,37 In this work, we compared fully vacuum-deposited MAPI solar cells employing organic semiconductors as the transport and injection materials. We examined the influence of different thin electron injection layers and of the metal work function on the performance of p-i-n solar cells, where the electron transport layer is deposited on top of the perovskite and before the metal electrode. We compared the performance of devices using BCP, Liq or combinations of them, with either Ag or Ba as the top electrode. We identified that while low work function metals can enhance the open-circuit voltage, they do so at the expense of the fill factor and especially of the stability. Using bare BCP or a combination of BCP and Liq led to solar cells with improved rectification, high photovoltage, and long-term stability.

Results and discussion

Details of the preparation and characterization of materials and devices are reported in the experimental section. Briefly, we processed a 650 nm thick MAPI film and employed it in the fabrication of p-i-n perovskite solar cells. All layers were prepared by vacuum sublimation of the corresponding inorganic or organic materials in high-vacuum chambers. A scheme of the device structure is reported in Fig. 1a. A thin layer of molybdenum oxide (MoO3, 5 nm) was deposited onto prepatterned ITO-coated glass slides, acting as the hole injection layer (HIL). As the HTL we used a 10 nm thick N4,N4,N4'',N4''-tetra([1,1'-biphenyl]-4-yl)-[1,1':4',1''-terphenyl]-4,4''-diamine (TaTm) film, while C60 was used in all cases as the ETL on top of the perovskite. We then finished the devices using 5 different variations of interlayers and metals (Fig. 1b), namely BCP (8 nm)/Ag, Liq (2 nm)/Ag, Ba (5 nm)/Ag, and the combinations BCP (8 nm)/Ba (5 nm)/Ag and BCP (8 nm)/Liq (2 nm)/Ag, where Ag is 100 nm thick in all cases. The latter two combinations were chosen to assess whether BCP can be used in combination with a low work function metal (Ba), or when not in contact with Ag (using a Liq interlayer). To ensure sufficient statistics, for each device configuration at least 2 different substrates, each containing 4 cells, were evaluated, while for the top performing configurations at least 5 different substrates with a total of 20 cells were characterized. The overlap area between the metal and ITO electrodes was 6.51 mm² (2.1 mm × 3.1 mm), and the solar cell characteristics were measured under illumination with a 4 mm² mask to accurately determine the short-circuit current (Jsc), and without a mask to avoid erroneous determination of both the open-circuit voltage and the fill factor. 38 We initially tested the optoelectronic properties of the devices by measuring the current density-voltage (J-V) curves under AM 1.5G simulated sun illumination at an intensity of 100 mW cm−2 (Fig. 2a and Table 1). Very small and negligible differences were observed between forward (from short to open circuit) and reverse (from open to short circuit) scans. The lack of hysteresis in the J-V curves suggests that no charge accumulation takes place at the perovskite/transport layer interfaces, indicating good energy level matching among the different materials. All solar cells were characterized by a similar Jsc of approximately 20.5 mA cm−2, which is reasonable as all devices share the exact same stack of materials at the front contact, and the internal field is sufficient to overcome eventual energy barriers induced by the use of different ETLs.
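As a brief aside on how the photovoltaic parameters quoted in the following paragraph are obtained from such J-V sweeps, here is a minimal sketch (not the authors' analysis code). It assumes a monotonic curve sampled on an increasing voltage grid, with photocurrent taken as positive.

```python
import numpy as np

def jv_parameters(v, j, p_in=100.0):
    """Extract photovoltaic parameters from an illuminated J-V sweep.
    v: voltage in V (increasing), j: current density in mA/cm^2
    (photocurrent positive), p_in: incident power in mW/cm^2."""
    v, j = np.asarray(v, float), np.asarray(j, float)
    jsc = np.interp(0.0, v, j)        # current density at V = 0
    voc = np.interp(0.0, -j, v)       # voltage where J crosses zero (-j is increasing)
    p_max = np.max(v * j)             # maximum power point, mW/cm^2
    ff = 100.0 * p_max / (jsc * voc)  # fill factor, %
    pce = 100.0 * p_max / p_in        # power conversion efficiency, %
    return jsc, voc, ff, pce

# Consistency check with rounded values reported in this work:
# Jsc ~ 20.5 mA/cm^2, Voc ~ 1.13 V, FF ~ 77.6% -> PCE ~ 0.205 * 1.13 * 77.6
print(0.205 * 1.13 * 77.6)  # ~18.0 (%)
```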
The internal reference device with a BCP/Ag top contact delivered a high Voc of 1.13 V with good rectification (FF of 77.6%), resulting in a PCE of 18.1% on average. When exchanging BCP for Liq, we observed that the photovoltaic parameters were essentially unvaried, leading to a Voc of 1.13 V and a PCE of 18.1% on average. Devices employing BCP in combination with the low work function Ba electrode exhibited a low FF (67.1%), suggesting hindered charge extraction despite the ohmic BCP/Ba electron transport interface. Most likely, with this device structure the electron extraction is limited by the large potential difference between the LUMOs of C60 and BCP, and there is no beneficial interaction between BCP and Ba, as often reported for BCP in combination with Ag and Al. This is additional indirect evidence that metals such as Ag and Al do indeed interact with BCP, leading to the formation of new species and of an additional density of states below the BCP LUMO, as described above. The Voc was also found to be slightly lower (1.11 V), and the overall efficiency was 15.7%. More interesting is the other device variation, where we included Liq as an interlayer between BCP and Ag. The diodes with a BCP/Liq/Ag top contact exhibited a Voc = 1.12 V, slightly higher compared to those with BCP/Ba, but with much better rectification, as the FF approached 80% with small pixel-to-pixel variation. The latter observation might suggest that the chemical interaction between BCP and Ag takes place even in the presence of the Liq interlayer, due to its very low thickness (2 nm). Finally, the solar cells employing the top Ba electrode deposited directly on the C60 ETL delivered the highest Voc of 1.15 V, even though at the price of a small decrease in FF. The observed trends in photovoltage were not related to the diode dark J-V characteristics (Fig. 2b), where all solar cells showed rather low and comparable leakage currents in the low voltage regime. In this regime, the minimum of the current density for devices with Ba and Liq appeared at a small negative voltage, indicating a small carrier accumulation within the device. Only devices finished with Ba were found to have a slightly higher leakage, but this is also not reflected in the measured Voc, which was the highest across the entire device series. While still a topic of debate, it is commonly accepted that good energy level alignment is favourable for carrier extraction (FF) and for limiting recombination (increasing Voc) at the transport layer/perovskite interface. 7 In our case, however, the MAPI/ETL interface is unvaried, as in all device variations we have employed C60 in contact with the perovskite. One effect responsible for the difference in Voc could be a variation of the diode built-in potential, which might help remove the majority carrier (electrons) from the MAPI/C60 interface. However, we did not observe appreciable differences in the built-in potential across the series of devices (Fig. S1). At open circuit all photogenerated charge carriers recombine, and non-radiative recombination will reduce the quasi-Fermi-level splitting (QFLS), and hence limit the attainable photovoltage. As recombination in perovskites takes place from free charge carriers, the QFLS can be directly related to the perovskite photoluminescence quantum yield. 39 Hence, we performed photoluminescence (PL) measurements on full devices with the different top contacts.
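The link made here between QFLS and PL quantum yield can be made quantitative with the standard reciprocity relation, in which the non-radiative photovoltage loss scales with the logarithm of the external PL quantum yield. The sketch below illustrates that textbook relation; it is not a calculation performed in this work, and the PLQY value used is hypothetical.

```python
import math

K_B = 1.380649e-23    # Boltzmann constant, J/K
Q = 1.602176634e-19   # elementary charge, C

def voc_nonradiative_loss(plqy, temperature=300.0):
    """Non-radiative Voc loss implied by an external PL quantum yield:
    Delta_Voc = (kT/q) * ln(1/PLQY)."""
    return (K_B * temperature / Q) * math.log(1.0 / plqy)

# A hypothetical PLQY of 1% costs about 119 mV of photovoltage at 300 K:
print(voc_nonradiative_loss(0.01))  # ~0.119 V
```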
To compare the PL, we characterized the solar cells in an integrating sphere, illuminating the pixel with a 522 nm laser. We adjusted the laser power so that the Jsc of the cells matched the one obtained under simulated solar illumination, ensuring the same carrier concentration for all devices. All the PL spectra were taken with an integration time of 1 s. The series of devices exhibited PL spectra with maxima centred at 1.58 eV (Fig. 3), independently of the top contact composition, but with different intensities. In general, Voc fluctuations can be ascribed to a variation of the non-radiative recombination rates (a change of the PL quantum yield) in the device. In particular, the more intense PL (and higher Voc) observed for solar cells capped with Ba indicates that the electrode work function does indeed have an important role in the charge carrier recombination dynamics of the perovskite film, even across the C60 film. On the other hand, devices with BCP and Liq showed the lowest PL intensity, while the combined BCP/Liq and BCP/Ba top contacts were found to lead to brighter PL. The solar cells with BCP and Liq interlayers were characterized by a high Voc that was not accompanied by a proportionally intense PL signal, suggesting other competing mechanisms determining the final photovoltage. It is important to note that the trends observed in the PL might be affected, at least partially, by transient phenomena, which will be discussed below. The fact that both the electrode work function and the type of interlayer can influence the MAPI PL, even when the adjacent C60 ETL is unvaried, reflects some of the unique properties of perovskites. It has been widely reported that the perovskite Fermi level can be drastically modulated by choosing the appropriate substrate. 31 In particular, p-type substrates (NiOx or MoO3) impose a p-type character on the perovskite itself, and the same but opposite mechanism holds for n-type materials, such as metal oxides or low work function surfaces in general. 40,41 From this perspective, we can reasonably envision that the Fermi level of the MAPI films is directly influenced also by the top surface (the top contacts studied here). The work function of these materials is, however, very difficult to probe, as the interface is buried within the device. We further investigated the characteristics of the series of solar cells by measuring their behaviour as a function of time, under continuous simulated solar illumination. The devices were encapsulated with a UV-curable resin and a glass slide, and kept at 25 °C under a nitrogen flow (max relative humidity 10%) to minimize the effect of environmental degradation. The maximum power point was continuously tracked and we also measured the device Voc every 10 minutes. The evolution of the PCE over time for the series of devices is depicted in Fig. 4a. We can distinguish three types of behaviour in the device series. The solar cells employing Ba as the electrode (either alone or in combination with BCP) showed an initial fast rise in efficiency (to about 19%) followed by a relatively fast decay of the device performance. After 2 days of continuous operation, the PCE of both devices was found to be already below 16%. This is expected, as Ba is extremely reactive, and its implementation requires very rigorous encapsulation to ensure the absence of oxygen and/or moisture.
On the contrary, the devices employing BCP and BCP/Liq in combination with the Ag electrode were found to be much more stable, with rather similar decay profiles. After 2 weeks of continuous operation, the device with a BCP/Liq top contact still delivered a PCE of 15.5%, while the reference cell with only BCP showed an efficiency of 14.5%. Also in this set of devices we noted an initial rise of the PCE to about 18% and 19% for cells with BCP and BCP/Liq top contacts, respectively. The solar cell employing the thin Liq interlayer between C60 and Ag showed a different behaviour, with an initial notable increase of the efficiency (up to 19.5% after 1 day) followed by a monotonic decay of the PCE, reaching about 16% after 6 days of operation. Apart from the intrinsic instability of the perovskite, the main processes driving the degradation of perovskite solar cells are the diffusion of halides to the electrode, 42 as well as the opposite migration of metal atoms from the electrode to the perovskite film. 43 With this in mind, we can reasonably ascribe the short lifetime observed for the cells with only a 2 nm thick layer of Liq between C60 and Ag to faster interdiffusion of species between the MAPI film and the electrode. When BCP is used, it can alleviate this effect simply because of the additional barrier it introduces, but likely also thanks to the ability of BCP to coordinate Ag atoms, as discussed above, which might slow down the metal diffusion. An interesting feature of the measurements under continuous illumination is the initial performance increase observed for the solar cells, independently of the top contact used (although Ba was found to speed up the device degradation, as compared to the other interlayers). In particular, the open-circuit voltage was found to increase in all cases by about 40 to 50 mV, as illustrated in Fig. 4b. Considering that the variations are similar and virtually independent of the type of top contact, their origin is most likely a reduction of the non-radiative recombination within the MAPI film. A similar behaviour was previously observed in efficient vacuum-deposited n-i-p perovskite solar cells. 15 It has been widely reported that, under continuous illumination, the density of shallow traps can be reduced, leading to a decrease of the non-radiative recombination rate. 44,45 Interestingly, here we observed a maximum Voc close to 1.16 V for solar cells with BCP, Liq or their combinations, and up to 1.185 V for devices employing Ba as the electrode. Although in the latter case the stability was found to be very limited, these voltage values are among the highest reported for vacuum-processed MAPI solar cells. 15,[46][47][48]

Conclusion

In summary, we have studied the influence of different interlayers, electrodes, and their combinations on the performance and especially the photovoltage of vacuum-processed perovskite solar cells. Organic semiconductors traditionally used in solar cells (BCP) and OLEDs (Liq) can lead to devices with high rectification, fill factor, and photovoltage. Furthermore, we have observed that the use of low work function metals, such as Ba, can be beneficial for the reduction of non-radiative recombination, although at the expense of device stability. Long-term device stability was observed only in the presence of BCP, with or without the Liq buffer layer. Future studies will address this aspect, trying to introduce low work function surfaces without undermining the device operation under continuous illumination.
Thin lm and device preparation To prepare the devices, the layers were deposited on ITO-coated glass substrates. The substrates were cleaned using soap, water, and subsequently isopropanol in an ultrasonic bath, followed by a UV-ozone treatment. The substrates were transferred into a nitrogen-lled glovebox (H 2 O and O 2 < 0.1 ppm) equipped with a vacuum chamber with pressure lower than 10 À6 mbar. MoO 3 and TaTm layers were deposited at rates of 0.1 and 0.4 A s À1 , respectively. The MAPI lms were deposited by coevaporation of MAI and PbI 2 precursors simultaneously at the rates of 1.0 and 0.6 A s À1 , respectively. The rates of the coevaporation and the thickness of each layer were controlled by using three quartz crystal microbalance sensors. Aer deposition of the perovskite lm, C 60 was evaporated at a rate of 0.4 A s À1 with the source temperature at 380 C, and subsequently a thin layer (8 nm) of BCP was sublimed at a rate of 0.3 A s À1 with a source temperature of 150 C. Liq (2 nm) was deposited at a rate of 0.1 A s À1 . Ba (5 nm) and Ag (100 nm) were evaporated in another vacuum chamber using molybdenum boats as sources by applying a current of 1 A and 3-4 A, respectively. Device characterization The J-V curves for the solar cells were recorded using a Keithley 2612A SourceMeter in a À0.2 and 1.2 V voltage range, with 0.01 V steps and integrating the signal for 20 ms aer a 10 ms delay, corresponding to a speed of about 0.3 V s À1 . The devices were illuminated under a Wavelabs Sinus 70 LED solar simulator. The light intensity was calibrated before every measurement using a calibrated Si reference diode. Solar cell stability measurements (photovoltaic parameters versus time) were recorded using a maximum power point tracker system, with a white LED light source under 1 sun equivalent, developed by candlelight. During the stability measurements, the encapsulated devices were exposed to a ow of N 2 gas; temperature was stabilized at 300 K during the entire measurement using a water-circulating cooling system controlled by a Peltier element; J-V curve measurements were performed every 10 min. Photoluminescence measurements The photoluminescence of the full devices was measured with an Avantes AvaSphere-50-REFL integrating sphere connected to a 600 nm long-pass lter and an Avantes Avaspec2048 spectrometer. The devices were illuminated with a diode laser of integrated optics, emitting at 522 nm. The laser power was adjusted so that the short-circuit current via laser illumination matched the short-circuit current obtained from the measurement with the solar simulator. All the spectra were taken with an integration time of 1 s. Conflicts of interest There are no conicts to declare.
Efficacy of Trichoderma spp. and fungicides against Lasiodiplodia theobromae

Experiments were carried out to determine the bio-efficacy of four Trichoderma species, viz. Trichoderma harzianum, T. koningii, T. viride (green strain) and T. viride (yellow strain), against the canker pathogen Lasiodiplodia theobromae. Bioassays of the antagonists against the test pathogen were conducted by the dual culture technique at different temperatures, and the volatile, non-volatile and naturally untreated metabolites of the isolates were examined. T. koningii and T. viride (yellow strain) exhibited maximum inhibition in controlling the pathogen. Of the fungicides used, viz. Bavistin and Dithane M-45, Bavistin was found to be somewhat effective, but Dithane M-45 showed no effect on the pathogen. Trichoderma viride showed better performance in controlling Lasiodiplodia theobromae than the commercial fungicides used in the present investigation. DOI: http://dx.doi.org/10.3329/bjsir.v49i2.22008 Bangladesh J. Sci. Ind. Res. 49(2), 125-130, 2014

Introduction

Species of the Botryosphaeriaceae have a cosmopolitan distribution and occur on a wide range of monocotyledonous, dicotyledonous and gymnospermous hosts, as well as on lichen thalli. Lasiodiplodia theobromae (syn. Botryodiplodia theobromae) belongs to the Botryosphaeriaceae and is associated with different symptoms such as shoot blights, stem cankers, fruit rots, die-back and gummosis (Ciesla et al., 1996), as well as canker and die-back followed by kino exudation and, in severe cases, tree death (Shearer et al., 1987; Smith et al., 1994; Old & Davison, 2000; Roux et al., 2001). Ciesla et al. (1996) reported that species of the Botryosphaeriaceae are generally regarded as weak pathogens that invade stressed or wounded plants after drought, hail, wind, frost or insect damage; it has also been noted that the Botryosphaeriaceae occur in asymptomatic tissue as latent pathogens in trees such as Eucalyptus, Pinus and Syzigium (Pavlic et al., 2004). Hence, the present investigation was carried out to examine the efficacy of biological agents, Trichoderma spp., and fungicides against L. theobromae causing disease in plants.

Material and Methods

Four species of Trichoderma, namely Trichoderma harzianum, T. koningii, T. viride (green strain) and T. viride (yellow strain), were isolated from spent (infected) mushroom spawn packets of Pleurotus ostreatus (Jacquin ex Fr.) Kummer during December 2010 to February 2011. Lasiodiplodia theobromae was isolated from wood samples (sawdust) used as raw material for the preparation of spawn packets for growing commercial mushrooms at the National Mushroom Development and Extension Centre, Savar, Dhaka. Surface-sterilized samples were inoculated on PDA plates and incubated at three different temperatures, viz. 20±2 °C, 28±2 °C and 35 °C, and the radial growth of the mycelium was measured. The mycelium of the pathogen had spread over the whole plate after 3 days and was sub-cultured on PDA slants and incubated for further growth. Cultural and microscopic characteristics were observed under a microscope.
In vitro assay of antagonists by the dual culture technique

The Trichoderma isolates were evaluated against Lasiodiplodia theobromae by the dual culture technique as described by Kunz (2007). A 5 mm diameter mycelial disc from the margin of a 7-day-old culture of each Trichoderma isolate and of Lasiodiplodia theobromae was placed on the PDA medium at opposite sides of the plate, at equal distances from the periphery. In control plates (without Trichoderma), a sterile agar disc was placed at the centre of the plate. Inoculated plates were incubated at 28±2 °C, 32±2 °C and 35 °C until the end of the 7-day incubation period. Percent inhibition was calculated (Kunz, 2007) by the following formula:

I (%) = ((C − T) / C) × 100,

where C = radial growth in control plates and T = radial growth in treated plates.

Volatile metabolites from antagonists against Lasiodiplodia theobromae

The effect of the volatile metabolites released by the Trichoderma isolates on the mycelial growth of the pathogen was evaluated by the method described by Dennis and Webster (1971). The test pathogen was inoculated as a 5 mm diameter mycelial disc at the centre of PDA plates, and Trichoderma-inoculated plates were inverted on top of the test pathogen plates and held together with adhesive tape. The radial growth of the pathogen was recorded at 24 hour intervals at room temperature (28±2 °C).

Effects of non-volatile metabolites on Lasiodiplodia theobromae

The method described by Kaur et al. (2006) was followed. Three mycelial agar blocks, each 5 mm in diameter, of each of the four fungal antagonists were cut from the advancing margins of 5-day-old cultures and inoculated into 500 ml conical flasks containing 250 ml of potato dextrose broth. The inoculated flasks were incubated for 15 days at 28±2 °C. After incubation, the culture broth of each antagonist was filtered through a double-ring filter paper (11 cm) and finally through a Millipore filter under a suction pump to obtain cell- and bacteria-free extracts under aseptic conditions. All plates were incubated at room temperature (28±2 °C) and the percent inhibition of mycelial growth was calculated. The effect of natural untreated metabolites was assessed by the dipping culture disc method as described by Ashrafuzzaman and Aminur (1992).

In vitro assay of fungicides

The fungicides Bavistin and Dithane M-45 were examined for their effectiveness against L. theobromae on PDA medium at concentrations of 30 ppm, 50 ppm and 70 ppm of each fungicide. Three replicate PDA plates were used for each dose of fungicide; a PDA plate receiving no fungicide served as the control. The inoculated plates were incubated at 28±2 °C and the percent inhibition was calculated.

Results and Discussion

T. harzianum was characterized based on morphology, including colonies, hyphae, conidiophores, phialides and conidia, according to Choi In-Young et al. (2003). The other Trichoderma strains in the present study were characterized as described by Bernet (1960) and Choi In-Young et al. (2010). The cultural and microscopic observations of the mycelia and spores of L. theobromae were confirmed as described by Kunz (2007). An off-white immature colony appeared, which turned black within 2-3 weeks. Colonies were luxuriant with regular, fast growth. Black septate mycelium with colourless, unicellular spores was found at the young stage. Upon maturity, the spores became brown, distichous and thick-walled. The spores were elliptical and comparatively large.
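The inhibition values reported next follow the percent-inhibition formula given in the methods. The sketch below is a hypothetical helper (not from the paper), with growth values chosen only to show that the reported 60-75% range corresponds to a strong reduction in radial growth.

```python
def percent_inhibition(control_growth_mm, treated_growth_mm):
    """Percent inhibition of radial growth in a dual-culture assay:
    I = ((C - T) / C) * 100, where C is the radial growth of the pathogen
    in the control plate and T its growth when paired with the antagonist."""
    C, T = float(control_growth_mm), float(treated_growth_mm)
    return (C - T) / C * 100.0

# Hypothetical example: the pathogen covers 80 mm alone but only 20 mm
# when dual-cultured with a Trichoderma isolate.
print(percent_inhibition(80, 20))  # 75.0 -> upper end of the reported range
```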
The findings of the dual culture tests demonstrated that all the Trichoderma isolates tested had inhibitory effects against Lasiodiplodia theobromae, ranging from 60-75% at 28±2 °C, whereas the maximum inhibition (80%) was exhibited by both T. koningii and T. viride (green strain) at 32±2 °C and 35 °C (Table I). In the case of volatile metabolites, T. viride (green strain) showed the maximum inhibition (33.3%), whereas the non-volatile and naturally untreated metabolites of the fungal cultures did not produce any significant reduction of the mycelial growth of L. theobromae (Table 1). The mode of action of the Trichoderma spp. in dual culture involved mycoparasitism and competition for space and nutrients, which is in agreement with Kotze (2008). The antagonistic potential of Trichoderma spp. against Lasiodiplodia theobromae was also reported by earlier workers (Mortuza and Ilag, 1999; Yadav & Majumdar, 2005; Kunz, 2007). Mortuza and Ilag (1999) noted that T. harzianum exhibited the greatest inhibition in dual culture, whereas Yadav & Majumdar (2005) reported that T. viride was more effective than T. harzianum. The present findings partially conform with the results of Kotze (2008), who reported 23.6% inhibition by T. atroviride. In the present study, non-volatile metabolites had no effect on Lasiodiplodia theobromae, which contradicts the results cited by John et al. (2004). In the present investigation, the fungicide Bavistin was found more effective in controlling Lasiodiplodia theobromae at 70 ppm than at the other doses used, whereas Dithane M-45 showed no significant effect at any concentration. The aggressiveness of the Trichoderma spp. studied varies somewhat from that reported by the previously mentioned workers, which might be due to differences in the characteristics of the Trichoderma isolates. Thus, controlling the pathogen using Trichoderma isolates is an environmentally friendly and non-hazardous alternative to chemical control.

[Fig.: Cultural and microscopic features of Trichoderma spp. (magnification 40X). a & b: microscopic features and colony of Trichoderma harzianum showing phialides and conidia; c & d: Trichoderma koningii showing phialides, conidia and coiling; e & f: Trichoderma viride (green strain) showing spores; g & h: Trichoderma viride (yellow strain) showing spores. Fig. (a): septate mycelium of L. theobromae.]
The Heavy Burden of "Dependent Children": An Italian Story

This paper analyses multidimensional fuzzy monetary and non-monetary deprivation in households with children by using two different definitions: households with children under 14 years old, and the EU definition of households with dependent children. Eight dimensions of non-monetary deprivation were found using 34 items from the EU-SILC 2016 survey. When dealing with subpopulations, it is essential to compute standard errors for the presented estimators; thus, a relevant added value of the paper is that the standard errors associated with the fuzzy poverty measures were also computed. Moreover, a comparison was made between the measures obtained for the two subpopulations across countries. With a focus on Italy, an analysis by Italian macro-region is also presented.

Introduction

Children are more vulnerable to poverty and deprivation, and the poverty that they experience can compromise their outcomes in future adult life. In 2018, one out of four children (aged 0-18) in the EU was at risk of poverty or social exclusion. However, as reported by Eurostat [1], child poverty rates vary significantly between member states. In Romania, Bulgaria, Greece, and Italy, one out of three children was found to be at risk of poverty or social exclusion, while in Denmark, the Netherlands, the Czech Republic, and Slovenia, only one out of six children was at risk in 2018. In most of the EU countries, the at-risk-of-poverty rate was highest for single persons with dependent children. Regarding Italy, there are several particular points to observe. Italy (with Spain and Greece) reported the highest at-risk-of-poverty or social exclusion rate (nearly 20%) among EU member countries for households with two adults and one dependent child, while nearly 40% of households with two adults and three or more dependent children are at risk of poverty (only Bulgaria and Romania report higher figures). It seems that the burden of dependent children weighs more heavily in Italy than in other member states. A consideration that could aid our understanding of this issue is an aspect of Italian culture in which the average age at which children leave home is much higher than in many other European countries; children therefore depend on their parents for a long time. Consequently, the first original contribution of this paper consists in carrying out a deeper analysis by considering two different definitions of households with children: the first is households with at least one child aged 0-14 years, and the second consists of households with at least one dependent child. A second original contribution of the paper is the computation of standard errors for the fuzzy measures, performed on data from complex sample surveys such as EU-SILC.

The rest of the paper is organized as follows. Section 2 presents the data used for the analysis and delineates the research methodology. Section 3 presents the findings of the study, while Section 4 reports some final remarks.

Data and Methodology

The aims of this section are the following: to introduce the data set used for the analysis and the variables involved, and to provide relevant information regarding the methodology, the approach, and the operationalization.

Data

This paper uses the 2016 wave of the European Union Statistics on Income and Living Conditions (EU-SILC), which provides multidimensional microdata on income and living conditions in the European Union.
In addition, the ad hoc module developed in 2016, "Access to Services", includes variables concerning access to childcare, home care, training, education, and healthcare. Access to education and healthcare services is important and closely linked to living conditions for all household members. Education has an important impact on an individual's income as well as on their knowledge and culture. Better access to healthcare can improve life expectancy in addition to well-being. Access to childcare, too, has an important impact on household income, in that the lack of access to childcare affects the work-family balance of women and reduces active female participation in the labor market. Moreover, childcare services improve the life chances of all children, especially those who are disadvantaged, by stimulating their learning, and they offer children the opportunity to become familiar with others from different backgrounds.

The target variables involved in the analysis relate to different types of units. Information on social exclusion, housing conditions, and material deprivation is collected mainly at the household level, while labor, education, and health information is collected at the individual level for everyone aged 16 and over. Detailed data are collected on income components, primarily on personal income, which is then aggregated at the household level to construct the household income. The income variables considered in the current analysis are the total disposable household income (HY020) and the total disposable household income before social transfers other than old-age and survivor's benefits (HY022). Both are adjusted for inflation and converted into equivalized household income using the so-called modified Organization for Economic Co-operation and Development (OECD) equivalence scale, which assigns a weight of 1.0 to the first adult, 0.5 to the second adult and each subsequent individual aged 14 and over, and 0.3 to each child under 14 years (OECD, 2009). Regarding the variables collected by the 2016 module on "Access to Services", the variables chosen for the analysis are those related to the affordability of services, specifically the affordability of formal education, of healthcare services, and of childcare services. These variables apply at the household level and refer to the household [2].

Our analysis considers the cross-sectional sample of households included in the 2016 wave of the EU-SILC. The countries involved in the analysis are as follows: Austria, Belgium, Bulgaria, Switzerland, Cyprus, Czech Republic, Denmark, Estonia, Greece, Spain, France, Hungary, Ireland, Iceland, Italy, Luxembourg, Latvia, Norway, Poland, Portugal, Serbia, Sweden, and Slovakia. (Some member states were removed from the analysis because of high proportions of missing values in the considered variables, because some variables were not collected at all, or because of sample size problems in households with children.) Specifically, we are interested in two sets of households: those with at least one child aged 0-14 years and households with at least one dependent child. A dependent child is any person below 18 years of age, as well as those who are 18 to 24 years old, living with at least one parent, and economically inactive. Using this criterion, the sets of households analyzed consist of 42,817 and 52,871 households, respectively (Table 1). A small sketch of the equivalization step implied by the modified OECD scale is given below.
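The following is a minimal illustration (not the paper's code) of the modified OECD equivalence scale described above; the household composition and income figures are hypothetical.

```python
def oecd_modified_scale(n_members_14_plus, n_children_under_14):
    """Modified OECD scale: 1.0 for the first adult, 0.5 for each further
    household member aged 14 and over, 0.3 for each child under 14."""
    assert n_members_14_plus >= 1
    return 1.0 + 0.5 * (n_members_14_plus - 1) + 0.3 * n_children_under_14

def equivalized_income(household_income, n_members_14_plus, n_children_under_14):
    """Total disposable household income divided by the equivalence scale."""
    return household_income / oecd_modified_scale(n_members_14_plus,
                                                  n_children_under_14)

# Two adults and two children under 14: scale = 1.0 + 0.5 + 0.6 = 2.1
print(equivalized_income(42000, 2, 2))  # 20000.0
```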
Methods

The consensus that poverty must be seen and measured as a multidimensional phenomenon is also recognized in the 2030 UN Agenda for Sustainable Development, which includes the reduction of poverty in "all its forms and dimensions" among the objectives to be achieved. The adopted methodology is based on cross-sectional fuzzy multidimensional measures of deprivation (monetary and non-monetary) that treat poverty as a matter of degree [3]. Defining poverty as a matter of degree has several advantages, as highlighted by [4]. First, non-monetary poverty reflects forced non-access to various facilities or possessions that determine basic living conditions, and an individual might have access to only some of them. Second, and no less important, the fuzzy approach provides more robust indicators [5], so it is particularly indicated for studying subpopulations or small domains, as in our case of households with children.

In treating monetary and non-monetary poverty with a fuzzy approach, the fundamental point is the choice of the membership function that quantifies the propensity of each person to poverty. We chose the membership function defined by [6], and further elaborated by [7], which includes the relative poverty measure of the so-called "Totally Fuzzy and Relative" (TFR) function [8]. In this way, two indicators are defined: the Fuzzy Monetary (FM, K = 1) indicator for monetary poverty and the Fuzzy Supplementary (FS, K = 2) indicator for non-monetary poverty. Accordingly, the propensity to poverty and deprivation of any individual i is specified through the "Integrated Fuzzy and Relative" (IFR) membership function, defined (in the notation of [7]) as

\mu_i^{(K)} = \left(1 - F_X(X_i)\right)^{\alpha_K - 1} \left(1 - L_X(X_i)\right),

where X is the equivalized income in the FM case or the overall score in the FS case; F_X and L_X are the weighted cumulative distribution function and the weighted Lorenz curve of X, both estimated from the ranked data using the sample weights w_γ of the statistical units of rank γ; and α_K are parameters corresponding to the monetary and non-monetary aspects of poverty. Each parameter α_K is estimated so that the mean of the corresponding membership function is equal to the head count ratio (HCR), officially known as the at-risk-of-poverty rate (ARPR), which is computed on the basis of the official poverty line (60% of the median national equivalized income). It is important to note that the two parameters α_K have a very precise economic interpretation: the means of the membership functions are expressible in terms of the generalized Gini measures G_{α_K}, a generalization of the standard Gini coefficient. In other words, such fuzzy poverty measures, being intrinsically highly relative, also constitute good inequality measures.

Reference [7] also proposed a step-by-step procedure for measuring the FS, which can be briefly summarized as follows:
1. Identification of items to describe non-monetary poverty and their transformation into the range [0, 1];
2. Exploratory and confirmatory factor analysis to identify the hidden dimensions of poverty;
3. Construction of the weights to be assigned within each dimension, based on the dispersion of each item and its correlation with the other items belonging to the same dimension;
4. Computation of the score within each dimension as a weighted mean of the items in the dimension and, finally, computation of the overall score as a simple average of the dimension scores.

In the present study, 34 items were identified from the EU-SILC 2016 database to investigate non-monetary deprivation within households with children under 14 years old or households with dependent children. (A computational sketch of the IFR membership function and of the calibration of α_K follows.)
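To fix ideas, here is a minimal computational sketch of the IFR membership function given above and of the calibration of α. It is an illustration under simplifying assumptions (no ties in income, uniform toy weights, a toy log-normal income distribution), not the estimation code used in the paper, and the tail-share normalization follows the form of the equation reconstructed above.

```python
import numpy as np
from scipy.optimize import brentq

def ifr_membership(x, w, alpha):
    """IFR fuzzy-poverty membership: for each unit i,
    mu_i = (1 - F(x_i))^(alpha - 1) * (1 - L(x_i)),
    where 1 - F(x_i) is the weighted share of units richer than i and
    1 - L(x_i) the weighted share of total income they hold, each
    normalized by the corresponding share above the poorest unit.
    Ties in x are ignored for simplicity."""
    x, w = np.asarray(x, float), np.asarray(w, float)
    F_bar = np.array([w[x > xi].sum() for xi in x])
    L_bar = np.array([(w * x)[x > xi].sum() for xi in x])
    return (F_bar / F_bar.max()) ** (alpha - 1.0) * (L_bar / L_bar.max())

def calibrate_alpha(x, w, hcr):
    """Choose alpha so that the weighted mean membership equals the HCR;
    the mean decreases monotonically in alpha, so a sign change is easy
    to bracket for brentq."""
    f = lambda a: np.average(ifr_membership(x, w, a), weights=w) - hcr
    return brentq(f, 1.0 + 1e-6, 50.0)

# Toy data: log-normal incomes, uniform weights, poverty line at 60% of median.
rng = np.random.default_rng(0)
x = rng.lognormal(mean=10.0, sigma=0.5, size=2000)
w = np.ones_like(x)
hcr = np.average(x < 0.6 * np.median(x), weights=w)
alpha = calibrate_alpha(x, w, hcr)
mu = ifr_membership(x, w, alpha)
print(alpha, np.average(mu, weights=w), hcr)  # mean membership matches the HCR
```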
After their transformation into the range [0, 1], the exploratory factor analysis enabled us to identify eight hidden dimensions of multidimensional non-monetary poverty, which are reported in Table 2. Most of the 34 items have already been used in the literature on multidimensional non-monetary poverty, and their strength in describing it has been proved (see, for example, [7]). However, in this study we decided to add a new dimension on service affordability, using three items from the EU-SILC 2016 ad hoc module, in addition to one item from the dimension on housing amenities (overcrowded house) and one from the dimension on the financial situation (financial burden of total housing costs). The construct validity was assessed through a confirmatory factor analysis, which confirmed the structure both for the subsample of households with at least one child aged 0-14 years and for households with at least one dependent child. Table 3 reports the main goodness-of-fit indexes, which are very similar for both samples and all of which are very good, again highlighting the suitability of the chosen items and dimensions for non-monetary poverty. The FS weights for each dimension and the overall weights were then computed. Fuzzy monetary poverty was implemented using three different incomes, namely, household equivalized income (HX090), household disposable income (HY020), and household disposable income before social transfers (HY022). This was done to compare the impact of different definitions of poverty but, most of all, to evaluate the impact of social transfers.

Results

To compare the deprivation status of households with at least one child aged 0-14 years and households with at least one dependent child, the analyses were conducted separately for the two sets of households. The results derive from the methodology explained in Section 2 and refer to the monetary and non-monetary dimensions of poverty for each EU country involved in the analysis.

Fuzzy Monetary Measures

Figures 1 and 2 report a comparison of monetary poverty using two different incomes, namely equivalized income and income before social transfers, for both subsamples. Observing Figure 1, which refers to households with children aged 0-14, the risk of poverty computed on the equivalized income is particularly widespread in Bulgaria, Croatia, and Romania, while in Mediterranean countries like Spain, Greece, Italy, and Portugal it remains substantial; at the other tail of the histogram, we observe significantly lower poverty rates for the Scandinavian systems, particularly Iceland, Norway, and Denmark. The situation is very different if we examine Figure 1 considering the risk of poverty computed on income before social transfers: it is evident that the poverty rates for the Scandinavian countries are then very similar to those registered by the Mediterranean ones.
Fuzzy monetary poverty was implemented by using three different incomes, namely, household equivalized income (HX090), household disposable income (HY020), and household disposable income before social transfers (HY022). This was done to compare the impact of different definitions of poverty, but most of all, to evaluate the impact of social transfers.

Results

To compare the deprivation status for households with at least one child aged 0-14 years and for households with at least one dependent child, the analyses were conducted separately by considering the two different sets of households. The results derive from the methodology explained in Section 2 and refer to the monetary and non-monetary dimensions of poverty for each EU country involved in the analysis.

Fuzzy Monetary Measures

Figures 1 and 2 report a comparison of monetary poverty using two different incomes, namely, equivalized income and income before social transfers, for both subsamples. Observing Figure 1, referring to households with children aged 0-14, the risk of poverty computed considering the equivalized income is particularly widespread in Bulgaria, Croatia, and Romania, while in Mediterranean countries such as Spain, Greece, Italy, and Portugal it remains substantial; on the other tail of the histogram, we observe significantly lower poverty rates for the Scandinavian systems, particularly for Iceland, Norway, and Denmark. The situation is very different if we examine Figure 1 considering the risk of poverty computed with income before social transfers: the poverty rates for the Scandinavian countries are now very similar to those registered by the Mediterranean ones. Considering income before social transfers, then, the difference in child poverty between Scandinavian and Mediterranean countries narrows significantly. This is consistent with a known situation: social transfer systems differ considerably, especially in an international context, and across countries there are different categories of people who, despite being poor, are not reached by cash transfers. As shown in [10], there are significant differences in the exclusion rates from social transfers among the European welfare regimes. Indeed, the Mediterranean system is the one with the highest exclusion rates for all socio-demographic groups of poor individuals considered; in particular, minors and single-parent households in a state of persistent poverty report higher non-receipt rates than employed persons. On the other side, Scandinavian countries aim to protect specific categories of the population regardless of poverty status, so that very small shares of the poor are excluded. The most protected categories of the poor population, across all welfare systems, are the poor and persistently poor disabled and elderly people.

Now, comparing Figures 1 and 2 and observing Spain, Greece, Italy, and Portugal, we can state that the monetary deprivation considering equivalized income and household disposable income before social transfers is very similar, whether for households with children aged 0-14 or for households with dependent children. It is also evident that in countries with a traditionally strong social assistance system, primarily the Scandinavian countries, the monetary deprivation considering household disposable income before social transfers is distinctly higher for households with children aged 0-14. In general, in such countries, the monetary deprivation computed using disposable income before social transfers is markedly higher for both samples. This remarkable difference confirms that a welfare state can greatly affect households through children's living conditions.
Fuzzy Supplementary Measures and Their Precision

As mentioned in the introduction, an added value of the present paper consists in reporting standard errors of fuzzy poverty measures for the subpopulations considered in the analysis. Estimation of variance for complex measures (such as fuzzy ones) from complex surveys (such as EU-SILC) is not a straightforward exercise, and it cannot be performed by the standard methods available in usual statistical packages such as SAS, SPSS, STATA, etc. Indeed, while the basic assumptions concerning sample design needed to use the variance estimation methods are generally met, or can be reasonably approximated, in most population-based surveys, there is an additional one that is often not met in practice [11]. This assumption concerns the availability of all essential information on the sample structure. As stated in [5], to compute accurate standard errors for fuzzy measures, it is necessary to have full access to the variables that define the structure of the sample. Here, we needed to adapt the original methodology proposed in [5], due to the lack of sufficient information for the purpose. In fact, the EU-SILC UDB (the user database available to researchers) does not contain information on sample structure, in particular concerning stratification and clustering. Therefore, we used an alternative method based on the design effect [12], which is the ratio of the variance under a given sample design to the variance under a simple random sample of the same size. By inverting this relationship, it is possible to estimate the variance by multiplying the variance under simple random sampling by the design effect. Reference [13] provides accurate estimates of design effects for child poverty for three EU-SILC countries: Austria, Belgium, and Poland. In Tables 4 and 5, we use these design effects for estimating standard errors for the fuzzy supplementary deprivation measures and their breakdown into the eight dimensions.
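The design-effect relationship described above translates directly into code. In the following minimal sketch, the deff value, the synthetic FS propensities, and the function names are illustrative assumptions, not values from [13].

```python
import numpy as np

def se_with_deff(mu, w, deff):
    """Approximate SE of a weighted mean fuzzy measure via a design effect:
    var_complex ~= deff * var_srs, with var_srs the simple-random-sampling
    variance of the (weighted) mean."""
    n = len(mu)
    m = np.average(mu, weights=w)
    var_srs = np.average((mu - m) ** 2, weights=w) / n   # SRS variance of the mean
    return np.sqrt(deff * var_srs)

# usage with synthetic FS propensities in [0, 1]; deff value is illustrative
rng = np.random.default_rng(0)
mu = rng.beta(2, 8, size=5000)
w = np.ones_like(mu)
est = np.average(mu, weights=w)
se = se_with_deff(mu, w, deff=1.5)
print(f"estimate = {est:.3f}, SE = {se:.4f}, CV = {100 * se / est:.1f}%")
```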
In most cases, the coefficient of variation (last column) is well below 5%, and in only a few cases is it between 5% and 10%. Poverty measures disaggregated for such population subgroups are, therefore, very precise, and this conclusion can be extended to other countries, since their sample sizes were designed so as to obtain similar standard errors across countries. These results are in line with the substantive finding of another study [5], according to which fuzzy measures tend to be subject to a smaller sampling error than conventional measures of poverty for a given sample size and design. The computation of the standard errors for the fuzzy supplementary deprivation measures adds real value to the analysis, considering the recommendations of [14], for which standard errors are essential when poverty measures are disaggregated for subpopulations such as children or other groups of interest.

Comparison of the Two Subpopulations

To compare the deprivation status of the households with children aged 0-14 years and the households with dependent children, a ratio of their scores was computed for each dimension (Tables 6 and 7). Each ratio shows the relative magnitude of the monetary and non-monetary deprivation computed for the two sets of households. Thus, a ratio close to 1 means that the deprivation level is similar for the two sets of households; a ratio greater than 1 indicates that the level of deprivation is higher in households with children aged 0-14 than in households with dependent children aged 0-24; and a ratio lower than 1 indicates that the level of deprivation is higher in households with dependent children than in households with children aged 0-14.

Observing Table 6, we can see that the ratio is generally greater than 1, meaning that the level of deprivation is higher in households with children aged 0-14 than in households with dependent children aged 0-24. For all countries, the ratio of the non-monetary dimension related to consumer durables is greater than 1, meaning that, in all countries, the deprivation level for durable goods is higher in households with children aged 0-14 than in households with dependent children aged 0-24. It is notable that in three countries, namely Greece, Ireland, and Italy, the ratios are generally lower than 1. This could be explained by considering that these countries have a tradition of large families with children who stay in the household until marriage.

Concerning monetary deprivation, the figures are similar to the non-monetary ones. The ratios are generally greater than 1, meaning that the level of deprivation is higher in households with children aged 0-14 than in households with dependent children aged 0-24 (Table 7). Again, a few countries, namely Greece, Ireland, Iceland, and Luxembourg, show ratios lower than 1 with regard to the measure of deprivation related to the household equivalized income (HX090), but only Italy shows ratios below 1 for all three monetary variables.
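As a toy illustration of how these ratios are read, consider the sketch below; the dimension names and scores are invented, not taken from Tables 6 and 7.

```python
import numpy as np

dims = ["Basic lifestyle", "Housing", "Consumer durables", "Financial", "Work and education"]
fs_age_0_14 = np.array([0.12, 0.20, 0.08, 0.15, 0.10])   # invented FS scores
fs_dependent = np.array([0.11, 0.21, 0.06, 0.15, 0.12])

for name, r in zip(dims, fs_age_0_14 / fs_dependent):
    tag = "higher for 0-14" if r > 1 else ("higher for dependent (0-24)" if r < 1 else "similar")
    print(f"{name:20s} ratio = {r:.2f} -> {tag}")
```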
Focus on Italy

According to the results presented, among the countries considered, Italy is the only one with all the ratios lower than 1 (except for the dimension of Consumer Durables), meaning that Italy is the only country presenting a higher deprivation for households with dependent children aged 0-24 than for households with children aged 0-14 (see Figure 3). A consideration that helps in clarifying and understanding this phenomenon refers to the Italian cultural and social model, in which young people are more likely to live at home with their parents and to accept being without a job or without being in education for a few months; in other words, they accept depending on their parents for a long time. Indeed, the average age at which they leave their parents' home is much higher than in several other European countries. This aspect raises concerns and certainly deserves to be investigated further.

The analyses presented for all countries were disaggregated by geographical NUTS1 macro-regions for Italy. Observing Table 8, two peculiar patterns are evident: (a) regarding dimension 6, Work and Education, the ratios are below 1 for all the macro-regions; (b) all the ratios are below 1 for all the dimensions (except, obviously, for the dimension Consumer Durables) for the Centre and for the South macro-regions.

Table 8. Italian non-monetary deprivation by NUTS1: ratios for macro-regions of households with children under 14 years old to households with dependent children.
Hypoglycemia Reduces Vascular Endothelial Growth Factor A Production by Pancreatic Beta Cells as a Regulator of Beta Cell Mass*

Background: Beta cell VEGF-A is critical for islet vascularization and insulin secretion. Results: VEGF-A release and synthesis in beta cells are regulated separately. Sustained hypoglycemia reduces beta cell mass through a decrease in Vegf-A signaling. Conclusion: Beta cell mass can be regulated via modulated Vegf-A signaling. Significance: Our data reveal a novel pathway for regulating beta cell mass physiologically.

VEGF-A expression in beta cells is critical for pancreatic development, formation of islet-specific vasculature, and insulin secretion. However, two key questions remain. First, is VEGF-A release from beta cells coupled to VEGF-A production in beta cells? Second, how is the VEGF-A response by beta cells affected by metabolic signals? Here, we show that VEGF-A secretion, but not gene transcription, in either cultured islets or purified pancreatic beta cells, was significantly reduced early on during low glucose conditions. In vivo, sustained hypoglycemia was induced in mice with insulin pellets, resulting in a significant reduction in beta cell mass. This loss of beta cell mass could be significantly rescued with continuous delivery of exogenous VEGF-A, which had no effect on beta cell mass in normoglycemic mice. In addition, an increase in apoptotic endothelial cells during hypoglycemia preceded an increase in apoptotic beta cells. Both endothelial and beta cell apoptosis were prevented by exogenous VEGF-A, suggesting a possible causative relationship between reduced VEGF-A and the loss of islet vasculature and beta cells. Furthermore, in none of these experimental groups did beta cell proliferation and islet vessel density change, suggesting a tightly regulated balance between these two cellular compartments. The average islet size decreased in hypoglycemia, which was also prevented by exogenous VEGF-A. Taken together, our data suggest that VEGF-A release from beta cells is independent of VEGF-A synthesis. Beta cell mass can be regulated through modulated release of VEGF-A from beta cells based on physiological need.

The vascular endothelial growth factor (VEGF) family is composed of six secreted proteins: VEGF-A, -B, -C, -D, -E, and placental growth factor. VEGF-A plays an important role in the reciprocal interaction between endothelial cells and surrounding tissues during development, regeneration, and carcinogenesis (1-3). By differential mRNA splicing, the murine Vegf-A gene can give rise to three protein isoforms, VEGF120, VEGF164, and VEGF188. Whereas VEGF188 is heparin-binding and mainly associated with the cell surface and with the extracellular matrix, VEGF120 is freely diffusible due to the lack of exons 6 and 7, which encode heparan-sulfate proteoglycan binding domains. The predominant isoform, VEGF164, appears to have the highest bioavailability and biological potency, and exhibits only partial binding to the cell surface and extracellular matrix (2,4,5). VEGF-A has two tyrosine kinase receptors, VEGF receptors 1 and 2 (VEGFR1 and VEGFR2) (2,3). VEGFR2 is expressed mainly by endothelial cells and mediates most of the biological effects of VEGF-A, including blood vessel growth and branching, endothelial cell survival, and vessel permeability. VEGFR1 is expressed by endothelial cells and many other cell types, and its functions and signaling properties are developmental stage- and cell type-dependent (2).
VEGFR1 binds VEGF-A with very high affinity, but only induces weak tyrosine autophosphorylation, suggesting a possible competitive inhibitor role in attenuating the biological activity of VEGF-A. VEGFR1 also binds placental growth factor and VEGF-B, which further complicates our understanding of the regulation of vascular networks (2,3). Although both VEGFR1 and VEGFR2 are expressed by islet endothelial cells (6-8), VEGFR1 may play a more important role than VEGFR2 in the intra-islet microvasculature (9). Because VEGF-A mRNA and protein levels have been shown to be closely correlated with each other in many biological systems (10-12), VEGF-A transcription levels have frequently been used to represent the levels of VEGF-A synthesis. The most well known and extensively studied regulator of VEGF-A is oxygen tension: hypoxia strongly increases Vegf-A transcription via up-regulation of hypoxia-inducible factor 1 (2,3,13,14).

Pancreatic islets contain a 5-fold denser capillary network than the exocrine pancreas, and have specialized capillary fenestrations. There is an intimate association between beta cells and the islet vasculature, with one cell domain abutting an afferent capillary, whereas another abuts an efferent capillary (9, 15-17). Although VEGF-A, -B, -C, -D, and placental growth factor are all expressed in pancreatic islets (8), VEGF-A, which is predominantly produced by beta cells, has been shown to play a critical role in mediating signaling from beta cells to islet endothelial cells for proper pancreatic organogenesis, islet-specific capillary formation, and beta cell function (6-8). Beta cells promote endothelial cell recruitment, proliferation, growth, and extensive islet vascularization through angiogenic factors like VEGF-A, whereas endothelial cells also appear to signal back to beta cells to promote islet development and maintain beta cell homeostasis (1, 18-20). VEGF-A has been reported to be essential for islet revascularization following islet transplantation (7,21,22). Gene deletion studies have shown that VEGF-A produced by beta cells is necessary for the maintenance of intra-islet endothelial cells and islet-specific capillary fenestrations, which are necessary for normal beta cell function and insulin secretion (7,8,19,23). Interestingly, genetic overexpression of Vegf-A in beta cells resulted in islet hypervascularization, but the effect on beta cell mass and beta cell function differed among studies (18, 24-26). In general, the physiological effects of VEGF-A are known to be dosage-dependent over a fairly narrow physiologic range (2,3). It was shown that a 2-fold deviation (increase or decrease) in Vegf-A levels could lead to significant defects in some developmental systems (27,28). In addition, absence or overexpression of Vegf-A may change the expression of other VEGF family members, or activate other compensatory pathways (2,3,8,13). These epiphenomena can diminish the power of VEGF-A gene deletion or overexpression models because the relatively extreme changes in VEGF-A levels in such studies do not normally occur physiologically, which may explain the discrepancies between the previous studies (18, 24-26). As a secreted peptide, VEGF-A has a surprisingly intense intracellular immunohistochemical signal in beta cells, suggesting that its secretion may be regulated (6-8).
However, although previous studies in beta cells have reported that VEGF-A production can be affected by glucose levels (29,30), a possible separate regulation of VEGF-A release and VEGF-A synthesis in beta cells has not been examined. In the current study, we show a reduction of VEGF-A release, but not production, by islets or purified beta cells in low glucose culture. To mimic the in vitro low glucose culture, insulin pellets were given to mice to induce sustained hypoglycemia for 1 month, resulting in a 22% reduction in beta cell mass. Importantly, exogenous physiologic doses of VEGF-A could partially rescue the loss of beta cell mass in these hypoglycemic mice. Further dissection of cell apoptosis, proliferation, and vessel area in this study allows us to propose a model in which beta cell mass is regulated by the glucose level via modulation of VEGF-A in beta cells.

EXPERIMENTAL PROCEDURES

Mouse Manipulation-All mouse experiments were performed in accordance with the guidelines from the Animal Research and Care Committee at the Children's Hospital of Pittsburgh and the University of Pittsburgh IACUC. Both C57BL/6 and mouse insulin promoter GFP reporter (MIP-GFP) (C57BL/6 background) mice (31) were purchased from the Jackson Laboratory. For both strains, only 8-week-old males were used for experiments. To induce sustained hypoglycemia, each mouse received subcutaneous implantation of 2 mouse insulin pellets (LinBit) at the back of the neck, according to the manufacturer's instructions. To provide continuous exogenous VEGF-A to the mice, each mini-osmotic pump (ALZET, model 2004; 4-week content release) was filled with 1.5 µg of recombinant mouse VEGF-A (R&D) 40 h before implantation into the abdomen of mice by surgery. Non-fasting blood glucose measurements of mice were performed at 8 a.m.

Isolation and Culture of Islets and Beta Cells-For islet isolation, pancreatic duct perfusion and subsequent digestion of the pancreas were performed with 0.3 mg/ml collagenase. Islets were hand-picked three times to avoid contamination by non-islet cells (32). Purity of the islets was confirmed by the absence of Amylase and Ck19 transcripts in the RNA samples extracted from the isolated islets. For isolation of beta cells by fluorescence-activated cell sorting (FACS), islets isolated from MIP-GFP mice were further dissociated into single cells with DNase (Roche Applied Science, 10 µg/ml) and trypsin (20 µg/ml) (Sigma), filtered at 30 µm, and sorted for beta cells with a FACSAria (BD Biosciences) based on green fluorescent protein, as described (33). Flow cytometry data were displayed with FlowJo (Tree Star Inc.). For isolation of beta cells by laser-capture microdissection (LCM), 1 week after the various treatments, MIP-GFP mouse pancreas was harvested, treated with RNAlater (Qiagen), and snap frozen in Tissue-Tek OCT (Sakura) under RNase-free conditions. Frozen pancreas blocks were sectioned at 10 µm and mounted on RNase-free membrane-coated microscopy slides (Molecular Machines and Industries, MMI). The sections were air dried and processed with an MMI CellCut Plus as described (34,35). LCM was performed by melting thermoplastic films mounted on transparent LCM MMI isolation caps onto beta cells, visualized by their direct green fluorescence. The system was set to the following parameters: 70% laser power, 42% laser focus, and 28% laser speed.
Purity of the beta cells obtained by FACS and LCM was confirmed by the absence of Amylase (acinar cell marker), Ck19 (duct cell marker), Vimentin (mesenchymal marker), CD31 (endothelial cell marker), Glucagon (α cell marker), Somatostatin (δ cell marker), and Pancreatic polypeptide (PP cell marker) transcripts, and by enrichment of Insulin transcript, in the extracted RNA samples, as described (33). Isolated GFP+ beta cells from FACS were suspended in a 24-well plate in Ham's F-10 medium (Invitrogen) supplemented with 0.5% BSA (Sigma), 2 mM glutamine, 2 mM calcium, and 5 mM glucose and re-aggregated for 2 h before overnight culture (37 °C, 95% air, 5% CO2). Isolated islets were kept in the same medium overnight. Thereafter, islets or beta cell aggregates were cultured in 2, 5, or 20 mM glucose. At 0.5, 1, and 25 h (fresh medium with the corresponding glucose was supplied at the 24th hour), cells or islets were harvested for RNA extraction to examine the levels of Vegf-A gene expression after the conditioned medium was collected for VEGF-A enzyme-linked immunosorbent assay (ELISA). The number of cells in the wells was manually counted, followed by total DNA extraction (Qiagen). DNA content was determined with a Nanodrop 1000 (Thermo Scientific), correlated with the manual counts, and used as an objective way to normalize released VEGF-A levels.

Quantitative Polymerase Chain Reaction-RNA was extracted from harvested cultured cells or islets with RNeasy (Qiagen) and quantified with a Nanodrop 1000 (Thermo Scientific), followed by cDNA synthesis (Qiagen) (32). Quantitative PCR primers were all purchased from Qiagen, including Pancreatic polypeptide (QT00103999). Quantitative PCR was performed as described (32,33), and gene values were normalized against Cyclophilin A, which proved to be stable across the samples.
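Normalization against the Cyclophilin A housekeeping gene can be sketched as follows. The paper does not state the exact quantification model, so the 2^-ΔΔCt form, the function name, and the Ct values below are illustrative assumptions only.

```python
import numpy as np

def relative_expression(ct_gene, ct_ref, ct_gene_ctrl, ct_ref_ctrl):
    """2^-ddCt relative expression of a target gene (e.g. Vegf-A) normalized
    to a reference gene (e.g. Cyclophilin A) and to a control condition
    (e.g. 5 mM glucose)."""
    d_ct = np.asarray(ct_gene) - np.asarray(ct_ref)            # normalize to reference
    d_ct_ctrl = np.mean(ct_gene_ctrl) - np.mean(ct_ref_ctrl)   # control baseline
    return 2.0 ** -(d_ct - d_ct_ctrl)

# illustrative Ct values for Vegf-A / Cyclophilin A in treated vs control wells
fold = relative_expression([24.1, 24.3, 24.0], [18.2, 18.1, 18.3],
                           [23.5, 23.6], [18.0, 18.1])
print(fold)  # fold change relative to control; ~1 means unchanged transcription
```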
Nuclear Run-on Assay-The nuclear run-on protocol was performed according to previous publications (36,37). Briefly, crude nuclei were prepared by detergent lysis, homogenization, and re-suspension in 100 µl of nuclear run-on buffer (50 mM Tris-HCl, pH 7.5, 5 mM MgCl2, 150 mM KCl, 0.1% Sarkosyl, and 10 mM DTT), with 1 µl each of 10 mM ATP, GTP, and CTP, 1 µl of 10 mM Br-UTP (Invitrogen), and 1 µl of RNaseOUT (Invitrogen). Reaction mixtures were preincubated on ice for 3 min and subsequently at 28 °C for 5 min. Run-on reactions were stopped by addition of 350 µl of RLT buffer (Qiagen). RNAs were isolated with the RNeasy kit (Qiagen), incubated with 2 µl of anti-BrU antibody (Sigma) in the presence of 2 µl of RNaseOUT at 4 °C for 2 h, and immunoprecipitated with 10 µl of protein G-agarose beads (Santa Cruz). Precipitated RNA was converted to cDNA and quantified by quantitative PCR. Nascent Vegf-A transcripts were normalized to Cyclophilin A.

FIGURE 1. VEGF-A release, but not transcription, from islets or beta cells was reduced in early low glucose culture. A, representative image of a histologic section from a MIP-GFP mouse pancreas showing co-localization of insulin (INS, red) and GFP. B, FACS of MIP-GFP mouse islets: beta cells were isolated based on green fluorescence. C-E, to examine whether VEGF-A release in beta cells is separate from VEGF-A synthesis, we analyzed the release and gene transcription of Vegf-A by either isolated islets or re-aggregated beta cells at 0.5, 1, and 25 h (for the latter, fresh medium was added at 24 h) in serum-free medium supplemented with 2, 5, or 20 mM glucose. C, quantitative RT-PCR was performed to check Vegf-A transcripts in cultured islets or beta cells. Cyclophilin A (cycloA) was used as a housekeeping gene to normalize Vegf-A values. Exposure to high glucose did not change the levels of Vegf-A transcript; exposure to low glucose did not change the levels of Vegf-A transcript within 1 h, but did so at 25 h. D, nuclear run-on assay was performed on cultured islets or beta cells and showed that nascent Vegf-A transcription did not change within the 1-h exposure to low glucose. E, total DNA content of the cells was used to normalize the quantity of VEGF-A released into culture medium. VEGF-A release by either islets or beta cells was significantly reduced in 2 mM glucose (p < 0.05) at both 1 and 25 h. However, 20 mM glucose did not significantly increase VEGF-A release. *, p < 0.05; **, p < 0.01; NS, no significance; INS, insulin; HO, Hoechst. Scale bars are 30 µm.

ELISA for VEGF-A-Cell culture media were analyzed using a VEGF ELISA kit (Raybio) according to the manufacturer's instructions; the kit detects both the 120 and 164 isoforms of mouse VEGF-A. Each sample was assayed in duplicate and the mean value was taken for statistical analysis.

Immunohistochemistry and Quantification-All pancreas samples were fixed for 4 h in 4% formaldehyde, then cryoprotected in 30% sucrose overnight before freezing. Primary antibodies for immunostaining were: guinea pig polyclonal insulin-specific (Dako); rat polyclonal Ki-67-specific (Dako) and CD31-specific (BD Biosciences); rabbit polyclonal synaptophysin-specific (Invitrogen) and caspase 3-specific (Cell Signaling). No antigen retrieval was necessary for these antigens, except for caspase 3, which needs microwave treatment, and for Ki-67, which needs pretreatment with protease for 5 min followed by a 45-min incubation with 2 N HCl (neutralized with Tris borate-EDTA buffer (Sigma)), as described (32,33). Secondary antibodies for indirect fluorescent staining were Cy2-, Cy3-, or Cy5-conjugated donkey anti-rabbit, anti-rat, anti-goat, and anti-guinea pig (Jackson ImmunoResearch Laboratories).

FIGURE 2. Reduction of beta cell mass during sustained hypoglycemia can be partially rescued by exogenous VEGF-A. A, experimental design. To evaluate whether hypoglycemia has an effect on beta cell mass, insulin pellets were implanted subcutaneously in mice to induce sustained hypoglycemia (INS, green). To check whether any effect of hypoglycemia on beta cell mass is due to reduced VEGF-A, insulin pellet-treated mice were rescued with an additionally implanted VEGF-A-releasing pump (INS + VEGF-A, purple). To exclude an independent effect of VEGF-A on beta cell mass, treatment with exogenous VEGF-A alone (VEGF-A, red) was included as a control. The mice from both the sham (Sham, blue) and INS groups also received a sham operation to control for the effect of surgery. B, non-fasting blood glucose showed sustained hypoglycemia in mice that received either insulin only (INS, green) or a combination of insulin and VEGF-A releasing pumps (INS + VEGF-A, purple). Mice that received VEGF-A pumps only (VEGF-A, red) were normoglycemic like sham-treated mice (Sham, blue). C, beta cell mass analysis showed a reduction (p < 0.01) in insulin-treated mice (INS, green) compared with sham-treated mice (Sham, blue). This reduction of beta cell mass was significantly attenuated (p < 0.05) in the insulin-treated mice that also received VEGF-A pumps (INS + VEGF-A, purple). The VEGF-A pump alone (VEGF-A, red) did not affect beta cell mass (no significance) in normoglycemic mouse controls. *, p < 0.05; **, p < 0.01.
Nuclear staining was performed with Hoechst (BD Biosciences). Imaging of cryosections was performed as described (32,33). The quantification of apoptotic islet endothelial cells was done by counting vessels (based on CD31 staining) that contained caspase 3+ cells. The quantification of apoptotic beta cells was done by counting the beta cells (based on insulin staining) that were caspase 3+. Endocrine vascular density was determined with ImageJ (NIH) software by measuring the percentage of CD31+ area relative to the total islet area (based on synaptophysin staining). Average islet size for each pancreas was determined based on 200 islets. For all of these quantifications, at least five pancreatic sections that were 100 µm apart from each other were analyzed, and five animals were used for each experimental group. Beta cell mass was quantified on the basis of 10 sections that were 100 µm apart from each other, as described before (32).

Data Analysis-All values are depicted as mean ± S.E. Each in vitro experimental condition comprised 5 replicates. Each in vivo experimental group used 5 mice. Significance was considered at p < 0.05. All data were statistically analyzed by two-tailed Student's t test.
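The two quantification steps just described can be mirrored in a minimal Python sketch: the CD31+/islet area ratio (performed in the paper with ImageJ; the code below only reproduces the arithmetic) and the two-tailed t test on per-animal means. The masks, thresholds, and numbers are hypothetical.

```python
import numpy as np
from scipy import stats

def vessel_density(cd31_mask, islet_mask):
    """Percent of islet area (synaptophysin+) occupied by CD31+ vessel area."""
    return 100.0 * np.logical_and(cd31_mask, islet_mask).sum() / islet_mask.sum()

# hypothetical binary masks for one section
rng = np.random.default_rng(0)
islet = rng.random((256, 256)) < 0.30
cd31 = np.logical_and(islet, rng.random((256, 256)) < 0.25)
print(f"vessel density: {vessel_density(cd31, islet):.1f}%")

# hypothetical per-animal densities, n = 5 per group as in the study
sham = np.array([8.1, 7.9, 8.4, 8.0, 8.2])
ins = np.array([8.0, 8.3, 7.8, 8.1, 8.2])
t, p = stats.ttest_ind(sham, ins)                    # two-tailed Student's t test
print(f"sham mean ± S.E.: {sham.mean():.2f} ± {stats.sem(sham):.2f}, p = {p:.2f}")
```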
RESULTS

VEGF-A Release, but Not Transcription, in Beta Cells Is Reduced in Low Glucose-We examined the expression of VEGF-A in normal mouse pancreas and found that VEGF-A immunoreactivity was extremely strong in beta cells. Because VEGF-A is a secreted peptide, this finding encouraged us to test whether VEGF-A secretion is independent of VEGF-A synthesis and, if so, how its secretion is regulated in beta cells. Because glucose is a key regulator of beta cell function and insulin release, we first tested the effect of altered glucose levels on the release and production of VEGF-A. We isolated islets from C57BL/6 mice and purified beta cells from MIP-GFP mice by FACS (Fig. 1, A and B). After overnight culture in 24-well plates, C57BL/6 mouse islets or re-aggregated MIP-GFP beta cells were fed with fresh serum-free medium supplemented with 2, 5, or 20 mM glucose, and analyzed at 0.5, 1, or 25 h (the latter was fed with fresh medium at 24 h). After the conditioned media were taken for VEGF-A ELISA, the remaining islets or cells were harvested for RNA extraction and DNA content measurement. Our data showed that neither low glucose nor high glucose affected the levels of Vegf-A transcripts within 1 h (Fig. 1C), but low glucose inhibited Vegf-A transcript levels after 24 h. A nuclear run-on assay further confirmed that this constant level of Vegf-A transcript after 1 h of low glucose culture was due to unchanged Vegf-A transcription, rather than altered mRNA degradation (Fig. 1D). Interestingly, on the other hand, VEGF-A release into the culture medium was significantly reduced at 0.5 and 1 h under low glucose, although not significantly increased under high glucose (Fig. 1E). These data suggest that VEGF-A release, rather than transcription, may be a control point for the early response of beta cells to low glucose.

Sustained Hypoglycemia Reduces Beta Cell Mass in Vivo-We next sought to determine the biological relevance of reduced VEGF-A in low glucose. Because VEGF-A production and release by beta cells are important for islet endothelial cell survival and proper maintenance of the islet vasculature, we hypothesized that reduced VEGF-A release from beta cells under low glucose may decrease survival signals to islet endothelial cells, a loss of which might in turn have a direct feedback effect on beta cell mass. To test this hypothesis, insulin pellets were implanted subcutaneously to induce sustained hypoglycemia in mice, which should mimic the in vitro low glucose culture environment. To test whether any effect on beta cell mass during sustained hypoglycemia may be due to a reduced release of VEGF-A by beta cells, we "rescued" some insulin-treated mice with exogenous VEGF-A delivered through continuous-release pumps. To exclude an independent beneficial effect of VEGF-A on beta cell growth under normal glucose, we also included a control with only exogenous VEGF-A pump treatment (no insulin pellet, Fig. 2A). Both untreated control mice and insulin-treated mice received sham operations. Non-fasting blood glucose was monitored and showed that all mice that received insulin pellet treatment, regardless of implantation of VEGF-A releasing pumps, developed sustained hypoglycemia during the 1-month experiment (Fig. 2B). Exogenous VEGF-A did not affect the glycemia of the mice. After 1 month the mice were sacrificed and the pancreases were analyzed and quantified for beta cell mass. Our data showed a 22% reduction (Fig. 2C; p < 0.01) in beta cell mass in insulin-treated mice (0.89 ± 0.03 mg) compared with sham-treated mice (1.14 ± 0.06 mg). The body weights of the two groups showed no significant difference. These data suggest that sustained hypoglycemia is sufficient to reduce beta cell mass in vivo.

Exogenous VEGF-A Partially Rescued the Loss of Beta Cell Mass during Sustained Hypoglycemia-Even though our in vitro data clearly showed that low glucose significantly reduces VEGF-A release by beta cells, and can further reduce Vegf-A transcription in beta cells when the low glucose exposure persists for longer than 24 h, we sought to confirm that VEGF-A synthesis by beta cells is indeed reduced in our in vivo hypoglycemia model. We used LCM (Fig. 3A), rather than FACS, to isolate beta cells from MIP-GFP mouse pancreas sections after the mice were treated for 1 week with insulin pellets. We used LCM because of the concern that the Vegf-A gene is highly hypoxia-sensitive, and its transcription may quickly change during the pancreas digestion process necessary for FACS. In contrast, LCM allows prompt fixation of the cells and may better reflect the actual Vegf-A mRNA levels in beta cells. Beta cells isolated by LCM were checked for transcription levels of Amylase, Ck19, Vimentin, CD31, Glucagon, Somatostatin, and Pancreatic polypeptide to confirm the absence of contamination, and for Insulin transcripts to confirm enrichment (Fig. 3B). Our data showed that Vegf-A transcript levels were significantly reduced in beta cells from insulin pellet-treated mice compared with controls (Fig. 3B), confirming that insulin-induced hypoglycemia indeed reduced VEGF-A synthesis in beta cells, similar to what we found in vitro. Next, we examined whether the reduction of beta cell mass in hypoglycemic mice is due to reduced release of VEGF-A. We found that the beta cell mass of insulin-treated mice that also received VEGF-A pumps (Fig. 2C, 0.99 ± 0.03 mg) was significantly greater (p < 0.05) than that of mice that only received insulin pellets (0.89 ± 0.03 mg), rescuing roughly half of the lost beta cell mass, but still lower (p < 0.05) than in sham-treated mice (1.14 ± 0.06 mg). Importantly, exogenous VEGF-A pumps alone did not affect beta cell mass in normoglycemic mouse controls (1.10 ± 0.05 mg, no significance).
This partial rescue of the reduction of beta cell mass by exogenous VEGF-A suggests that the reduction of endogenous VEGF-A release directly leads to loss of beta cell mass during sustained hypoglycemia. Partial rescue, rather than complete rescue, may reflect either an inadequate supply of VEGF-A, or that other factors also play a role in the beta cell mass adaptation during sustained hypoglycemia.

Beta Cell Proliferation Was Not Affected by Hypoglycemia, Regardless of Exogenous VEGF-A Supply-As adult beta cell mass is determined by a balance of beta cell proliferation and apoptosis, we first examined whether beta cell proliferation was altered in our experimental groups. The percentage of pancreatic beta cells that were positive for Ki-67 (32), a cellular proliferation marker, was quantified at 7, 14, and 30 days. Our data show that Ki-67+ beta cell percentages did not change across all groups during the experiment (Fig. 4, A and B). These data suggest that reduced beta cell mass under sustained hypoglycemia is not due to decreased beta cell proliferation.

FIGURE 5. Mice treated with insulin pellets (INS, green), insulin pellets plus VEGF-A pumps (INS + VEGF-A, purple), untreated sham controls (Sham, blue), and VEGF-A only controls (VEGF-A, red) were analyzed at 7, 14, and 30 days for apoptotic islet endothelial cells and beta cells. A, the percentage of islet vessels that contained caspase 3+ endothelial cells (CD31+) was quantified and showed an increase (p < 0.05) at days 7 and 14, but reverted to normal at day 30 after insulin pellet treatment. This apoptosis of endothelial cells could be completely prevented by exogenous VEGF-A. The percentage of caspase 3+ beta cells was also quantified and showed no change at day 7 after insulin pellet treatment, compared with controls.

Apoptosis of Islet Endothelial Cells Precedes Beta Cell Loss in Sustained Hypoglycemia-Because beta cell proliferation was not affected by hypoglycemia, the reduction of beta cell mass would likely be due to increased apoptosis of beta cells. Because the major target of VEGF-A released by beta cells is islet endothelial cells, the adaptation of beta cell mass under hypoglycemia may be secondary to the effect of reduced VEGF-A release on endothelial cells, e.g. reduced endothelial cell survival. Thus, we checked apoptosis of endothelial cells and beta cells under all experimental conditions at 7, 14, and 30 days. The percentage of islet vessels that contained apoptotic caspase 3+ endothelial cells and the percentage of apoptotic beta cells were quantified. Our data show that apoptotic islet endothelial cells increased significantly at day 7, persisted at day 14, but reverted to normal by day 30 (Fig. 5, A and B). In contrast, the increase in apoptotic beta cells could only be detected by day 14 (Fig. 5, B and C). This time window for the presence of apoptotic cells suggests a causative link between apoptosis of endothelial cells and apoptosis of beta cells, which is strengthened by the fact that exogenous VEGF-A prevented apoptosis of endothelial cells and also reduced apoptosis of beta cells (Fig. 5A).

Islet Vessel Density Was Not Affected by Hypoglycemia Regardless of Exogenous VEGF-A Supply-Next, we evaluated islet vessel density. If our hypothesis is correct that loss of endothelial cells affects the survival of beta cells, then the vessel density in islets should be relatively stable across all experimental groups.
If, however, islet vessel density decreases under hypoglycemia, it might suggest that beta cells can survive after the loss of nearby islet endothelial cells, and thus that beta cell mass is not strictly regulated by feedback from endothelial cells in this model. We found consistent islet vessel density among all four experimental conditions (Fig. 6, A and B), supporting our hypothesis that a reduction in VEGF-A release by beta cells leads to a loss of islet endothelial cells which, in turn, leads to a proportionate loss of beta cells.

Average Islet Size Was Reduced by Hypoglycemia and Preserved by Exogenous VEGF-A-Finally, we evaluated islet number and size. We did not find a change in islet number across all experimental conditions, but saw an approximately 22% reduction in average islet size with hypoglycemia, consistent with the measured reduction of beta cell mass. Importantly, this decrease in average islet size was prevented by exogenous VEGF-A (Fig. 6C). These data support our previous finding that loss of beta cell mass by hypoglycemia results from loss of some beta cells in the islets. Exogenous VEGF-A itself did not significantly change islet size (Fig. 6C) in normoglycemic mice, consistent with our previous data that VEGF-A does not significantly increase beta cell proliferation (Fig. 5).

FIGURE 6. Quantification of islet vessel density and average islet size. A and B, islet vessel densities were measured under all experimental conditions and evaluated as the ratio of islet CD31+ cell area to synaptophysin+ cell area. Synaptophysin is a pan-endocrine cell marker. A, representative immunofluorescent images of mouse pancreas at day 30 are shown: CD31 in green and synaptophysin in red. B, the islet vessel densities were consistent (no difference) across all four experimental conditions. C, average islet size was measured among all four experimental conditions, showing a significant decrease in insulin-treated mice, which was prevented with exogenous VEGF-A. *, p < 0.05; NS, no significance. Scale bars are 50 µm.

DISCUSSION

VEGF-A expression by beta cells has been shown to be important for proper pancreatic organogenesis and proper insulin secretion (1,2,6,19,20). However, most rodent studies on VEGF-A in beta cells were based on either gene deletion or on overexpression under a strong promoter in beta cells (7,19,23), while the physiological effects of VEGF-A are dosage-dependent over a narrow physiologic range (2,3,27,28). Moreover, inactivation of only one allele of Vegf-A results in embryonic lethality at mid-gestation (38,39), whereas a 2-fold increase in Vegf-A levels also leads to embryonic lethality (40). Therefore, it appears that the nonphysiological Vegf-A levels in these transgenic models may considerably diminish the power of these studies. Indeed, we found that the promoter activity of insulin is much stronger than that of Pdx1 in beta cells, and is more than 2000-fold stronger than the Vegf-A promoter in beta cells (not shown), which suggests that expression of Vegf-A under either the insulin promoter or the Pdx1 promoter may produce supraphysiologic, and potentially biologically irrelevant, levels of Vegf-A in beta cells, possibly explaining some of the discrepancies among these studies (18, 23-25). In addition, such deletion or extreme overexpression of Vegf-A may activate compensatory pathways or change the expression of other VEGF family members, with the occurrence of the latter having been clearly demonstrated (8).
In the current study, we chose to study only beta cells from wild-type or reporter mice without interfering genetic modifications, hopefully providing more physiologically relevant information for understanding VEGF-A regulation in beta cells. Our dosage of exogenous VEGF-A was close to normal physiological replacement values, and should not lead to significant side effects (2,3). We first found a reduction in VEGF-A release from cultured islets or beta cells specifically early on during exposure to low glucose culture conditions, and thus we hypothesized that reduced VEGF-A release from beta cells under hypoglycemia may affect beta cell mass, via feedback from an influence on the survival of islet endothelial cells. To test this hypothesis, we gave mice insulin pellets to induce sustained hypoglycemia. The environment of beta cells in this insulin-induced hypoglycemia model should mimic the low glucose culture conditions of beta cells we used in vitro. To further confirm our findings, we isolated beta cells by LCM and found that Vegf-A mRNA levels in beta cells were indeed decreased in hypoglycemia. Furthermore, we showed that beta cell mass during hypoglycemia was reduced by 22% at 1 month. This reduction may be particularly significant in that it did not involve any genetic modification, and beta cell mass is normally tightly regulated (41). Importantly, this reduction was largely due to reduced VEGF-A signaling, because the loss of beta cell mass was partially rescued by exogenous VEGF-A. Of note, apoptotic endothelial cells were seen well before apoptotic beta cells during hypoglycemia, and exogenous VEGF-A reduced the apoptosis of both endothelial cells and beta cells, suggesting that the loss of endothelial cells could result from reduced VEGF-A release from beta cells, and that the loss of endothelial cells may contribute to the subsequent reduction in beta cell mass. Moreover, no change in beta cell proliferation was detected under any experimental condition, suggesting that down-regulation of beta cell mass under sustained hypoglycemia is primarily due to increased beta cell apoptosis rather than decreased beta cell proliferation. A strong relationship between the amount of islet vasculature and beta cell mass was further manifested by the facts that vessel density remained constant and that the reduction in islet size during hypoglycemia was prevented by VEGF-A replacement.

Reduced beta cell mass as a result of a hypoglycemia-induced reduction in VEGF-A release may have implications for both beta cell development and beta cell biology. Beta cell mass is well known to change with body weight, insulin demand, and other physiological parameters (42), including blood glucose levels. The molecular basis for this adaptation is not completely understood. From our study it appears that the cross-talk between islet endothelial cells and beta cells is an important regulator of beta cell mass, with VEGF-A being a key component of this cross-talk. Our study thus suggests a novel pathway for beta cell mass adaptation to metabolic changes (Fig. 7). Future studies might focus on the control of VEGF-A release during islet development and under other physiological conditions when changes in beta cell mass occur, such as pregnancy. Our study improves the understanding of cross-talk between endothelial cells and beta cells, and thus provides new insights into the regulation of functional beta cell mass in diabetic patients.

FIGURE 7. Hypoglycemia regulates beta cell mass via modulated VEGF-A release. A model of how beta cell mass is regulated by hypoglycemia via VEGF-A is proposed. Oxygen tension is the major regulator of Vegf-A transcription in pancreatic beta cells. Hypoxia can greatly increase Vegf-A transcription. However, glucose can regulate VEGF-A release from beta cells before the adaptation of Vegf-A transcription occurs. Hypoglycemia can reduce VEGF-A release, resulting in a decrease in survival of neighboring islet endothelial cells, which subsequently leads to a secondary loss of beta cells.
Process-level improvements in CMIP5 models and their impact on tropical variability, the Southern Ocean, and monsoons

The performance of updated versions of the four earth system models (ESMs) CNRM, EC-Earth, HadGEM, and MPI-ESM is assessed in comparison to their predecessor versions used in Phase 5 of the Coupled Model Intercomparison Project. The Earth System Model Evaluation Tool (ESMValTool) is applied to evaluate selected climate phenomena in the models against observations. This is the first systematic application of the ESMValTool to assess and document the progress made during an extensive model development and improvement project. This study focuses on the South Asian monsoon (SAM) and the West African monsoon (WAM), the coupled equatorial climate, and Southern Ocean clouds and radiation, which are known to exhibit systematic biases in present-day ESMs. The analysis shows that the tropical precipitation in three out of four models is clearly improved. Two of three updated coupled models show an improved representation of tropical sea surface temperatures, with one coupled model not exhibiting a double Intertropical Convergence Zone (ITCZ). Simulated cloud amounts and cloud-radiation interactions are improved over the Southern Ocean. Improvements are also seen in the simulation of the SAM and WAM, although systematic biases remain in regional details and the timing of monsoon rainfall. Analysis of simulations with EC-Earth at different horizontal resolutions from T159 up to T1279 shows that the synoptic-scale variability in precipitation over the SAM and WAM regions improves with higher model resolution. The results suggest that the reasonably good agreement of modeled and observed mean WAM and SAM rainfall in lower-resolution models may be a result of unrealistic intensity distributions.

Introduction

Present-day ESMs still exhibit a number of systematic biases that can affect projections of future climate. Examples of such biases include the simulation of too thin Arctic sea ice (Shu et al., 2015), systematic problems in simulating monsoon rainfall (Turner and Annamalai, 2012; Turner et al., 2011), a dry soil moisture bias in mid-latitude continental regions, an excessively shallow equatorial ocean thermocline and a double Intertropical Convergence Zone (ITCZ) (e.g., Li and Xie, 2014), too thick clouds in mid-latitudes (Lauer and Hamilton, 2013), and excessive downwelling solar radiation over the Southern Ocean, accompanied by a warm bias in sea surface temperatures (SSTs) in many coupled models (Trenberth and Fasullo, 2010). This paper presents and documents the progress made in the European Commission's 7th Framework Programme (FP7) project "Earth system Model Bias Reduction and assessing Abrupt Climate change" (EMBRACE). EMBRACE specifically aimed at reducing a number of these systematic model biases by targeting improvement in the representation of selected key variables and processes in ESMs: (1) the representation of the coupled tropical climate: (i) a cold bias in equatorial SSTs coupled with an incorrect location of the ITCZ (Lin, 2007), (ii) a poor representation of coastal upwelling and associated Ekman dynamics in the tropical oceans (de Szoeke et al., 2010), and (iii) a poor representation of the location, intensity distribution, and seasonal/diurnal cycles of precipitation in monsoon regions (Kang et al., 2002).
(2) Southern Ocean processes, including (i) an underestimate of reflected solar radiation at the top of the atmosphere (TOA) and an overestimate of downwelling solar radiation at the ocean surface, (ii) systematically too shallow ocean mixed layers, particularly in austral summer, and (iii) warm SST biases across the Southern Ocean (Randall, 2007; Flato et al., 2013).

The community model evaluation and performance metrics tool ESMValTool (Earth System Model Evaluation Tool; Eyring et al., 2016b) is used to evaluate a range of variables and climate processes in the models that have been updated during EMBRACE ("EMBRACE models") against observations and against their CMIP5 (Coupled Model Intercomparison Project Phase 5; Taylor et al., 2012) predecessor versions ("CMIP5 models"). The study has a particular focus on evaluating processes relevant to clouds and precipitation and aims at assessing the progress that has been made by model improvements introduced during the development and preparation of the models for the 6th phase of CMIP (CMIP6; Eyring et al., 2016a). It should be noted that even a good agreement of model results with observations does not necessarily guarantee correct model behavior in future climate projections. This is one of the reasons why model-ensemble-based methods are used when projecting and interpreting future climate change. It is, however, typically regarded as a necessary condition for a model to be a useful tool for future climate projections that it is able to reproduce the observed features of the past climate reasonably well (Flato et al., 2013). This does not answer the much more difficult question of how good is good enough for a model to be used or useful for a specific application, as this strongly depends on the processes of interest including, for instance, geographical region, simulated quantity, natural variability, time scales, and time range or metric. Statements on the usability or usefulness of the model results are thus beyond the scope of this study.

This article is organized as follows: Sect. 2 gives an overview of the model updates and model simulations analyzed. The updated models are then evaluated against observations and compared to the original CMIP5 versions of the models in Sect. 3, where a number of the aforementioned systematic biases are investigated. A summary of the model improvements and outstanding biases is given in Sect. 4.

Model updates

In the following, a brief summary of the main updates of the CMIP5 models implemented during the EMBRACE project period is given. For descriptions of the individual models and details on the specific updates, the reader is directed to the references listed in Table 1 and further references within these model description papers. The updated model versions evaluated here are models that are in the process of being further developed for CMIP6. It should be noted that the EMBRACE models shown here are prototypes and not yet fully tuned, calibrated, or developed. The aim here is to document the long and sometimes difficult pathway of model development and the challenges of reducing large model biases.

CNRM

Major changes implemented into the atmosphere component of the CNRM-CM5.1 model, ARPEGE-Climat version 5 (Voldoire et al., 2013), include in particular updates of the turbulence, convection, and microphysics schemes. The new model CNRM-AM-PRE6 contains a prognostic turbulent kinetic energy (TKE) scheme (Cuxart et al., 2000) that improves the representation of the dry boundary layer, while a new unified dry-shallow-deep convection scheme allows for a better transition between convective regimes (Guérémy, 2011; Piriou et al., 2007). The convection scheme solves a prognostic equation for the updraft vertical velocity and uses a convective available potential energy (CAPE) closure. It also features detailed prognostic microphysics (Lopez, 2002), consistent with the microphysics used for large-scale condensation and precipitation. In addition, dust aerosol optical properties have been updated, as well as surface albedo, leading, for instance, to an improved radiation budget in the West African monsoon region (Martin et al., 2017). CNRM-AM-PRE6 features 91 vertical levels compared with 31 levels in the CMIP5 version.

EC-Earth

The atmosphere model of EC-Earth v2.3 (Hazeleger et al., 2013) has been upgraded from the Integrated Forecasting System (IFS) cy31r1 to IFS cy36r4, and the ocean model to the Nucleus for European Modelling of the Ocean (NEMO) 3.3.1. Major changes in the atmosphere are the new microphysics scheme with six hydrometeor classes including ice crystals and snow (Forbes et al., 2011) and the new Rapid Radiative Transfer Model (RRTM) (Jung et al., 2010). The resolution of the atmosphere model has been increased both horizontally and vertically from T159L62 to T255L91. The ocean component NEMO 3.3.1 is a major upgrade and features a moderate increase in the vertical resolution (from L42 to L46). The sea ice model was upgraded from the Louvain-la-Neuve Sea Ice Model 2 (LIM2) to LIM3, with an improved description of the sea ice rheology and physics. The option of LIM3 to take into account multiple sea ice categories was not used, as the Arctic sea ice was found to be unstable in a multi-category setup. Updates of the convection scheme developed by the European Centre for Medium-Range Weather Forecasts (ECMWF) were applied and resulted in a better representation of the diurnal cycle of convection (Bechtold et al., 2014).
The new model CNRM-AM-PRE6 contains a prognostic turbulent kinetic energy (TKE) scheme (Cuxart et al., 2000) that improves the representation of the dry boundary layer while a new unified dry-shallow-deep convection scheme allows for a better transition between convective regimes (Guérémy, 2011;Piriou et al., 2007). The convective scheme solves a prognostic equation for the updraft vertical velocity and uses a convective available potential energy (CAPE) closure. It also features 15 detailed prognostic microphysics (Lopez, 2002), which are consistent with the ones used for large-scale condensation and precipitation. Besides, dust aerosol optical properties have been updated, as well as surface albedo, leading, for instance, to an improved radiation budget in the West African monsoon region (Martin et al., 2017). CNRM-AM6-PRE6 features 91 vertical levels compared to 31 levels in the CMIP5 version. EC-Earth 20 The atmosphere model of EC-Earth v2.3 (Hazeleger et al., 2013) has been upgraded from the Integrated Forecasting System (IFS) cy31r1 to IFS cy36r4 and the ocean model to the Nucleus for European Modelling of the Ocean (NEMO) 3.3.1. Major changes in the atmosphere are the new microphysics scheme with six hydrometeor classes including ice crystals and snow (Forbes et al., 2011), and the new Rapid Radiation Transfer Model (RRTM) (Jung et al., 2010). The resolution of the atmosphere model has been increased both horizontally and vertically from T159L62 to T255L91. The ocean component 25 NEMO 3.3.1 is a major upgrade and features a moderate increase in the vertical resolution (from L42 to L46). The sea ice model was upgraded from the Louvain-la-Neuve Sea Ice Model 2 (LIM2) to LIM3 with an improved description of the sea ice rheology and physics. The option of LIM3 to take into account multiple sea ice categories was not used as the Arctic sea ice was found to be unstable in a multi-category setup. Updates of the convection scheme were applied that were developed by the European Centre for Medium-Range Weather Forecasts (ECMWF) and resulted in a better representation of the 30 diurnal cycle of convection (Bechtold et al., 2014). HadGEM Changes in the atmospheric component between HadGEM2 and HadGEM3 model families include the ENDGame dynamical core (Wood et al., 2014), the inclusion of a prognostic cloud and condensate scheme (PC2; Wilson et al. (2008)), increased convective entrainment/detrainment, a new orographic gravity wave drag (GWD) representation (Vosper et al., 2009), and numerous other changes (see Walters et al. (2011), Walters et al. (2014), Walters et al. (2017) for details). In 5 addition, the vertical resolution has been increased and the model lid extended from 40 km to 85 km. Both of these changes require the model physical schemes to be revisited and adjusted to remove level-dependencies and, in some cases, for additional parametrizations to be included, such as the non-orographic GWD scheme (Scaife et al., 2002) to represent momentum deposition by breaking of gravity waves in the upper stratosphere and mesosphere. The PC2 scheme is a distributed cloud parameterization that represents cloud cover and condensate changes occurring through changes to the 10 environmental temperature and humidity as a result of the other physical parameterizations. In particular, condensate detrained by the convection scheme is handled directly by PC2, rather than being evaporated, detrained and re-condensed as in HadGEM2. 
Many other changes to the clouds, microphysics and convection have also been made in order to achieve a reasonable global climatology and radiative balance. HadGEM3-GC2 (Williams et al., 2015) includes Global Atmosphere 6.0 (Walters et al., 2017), Global Ocean 5.0 (based on NEMO v3.4) with 75 vertical levels and Global Sea Ice 6.0 (see Table 1). In addition, HadGEM3-GC2 does not include earth system components such as an interactive carbon cycle, dynamic vegetation, tropospheric chemistry or ocean biogeochemistry that are present in the CMIP5 version HadGEM2-ES, but it does include interactive aerosols (with a different tuning for the dust scheme).

MPI-ESM

ECHAM6 and its land component JSBACH have undergone several further developments since the version used for CMIP5 (ECHAM6.1/JSBACH 2.0). Several bug fixes in the physical parameterizations of ECHAM6.3 ensure energy conservation in the total parameterized physics. A re-calibration of the cloud processes resulted in a climate sensitivity of about 3 K for the new model system, which is roughly in the middle of the range of climate sensitivities spanned by the CMIP5 models. JSBACH 3.0 comprises several bug fixes, a new soil carbon model (Goll et al., 2015) and a 5-layer soil hydrology scheme (Hagemann and Stacke, 2015) replacing the previous bucket scheme.

Model experiments

Two kinds of model simulations have been performed: Atmosphere Model Intercomparison Project (AMIP) type simulations, i.e. atmosphere-land only with prescribed SSTs, and coupled CO2 concentration-driven (historical) simulations. AMIP simulations were performed with all four updated models (EC-Earth3, HadGEM3-GA6 (denoted HadGEM3-A hereafter), CNRM-AM-PRE6, MPI-ESM), while the three models EC-Earth3, HadGEM3-GC2 and MPI-ESM 1.1 were used to perform coupled simulations. For both types of simulations the CMIP5 protocol was followed (Taylor et al., 2012). The model experiments analyzed are summarized in Table 2. The main focus of this study is on the coupled simulations, as these model configurations are particularly relevant to projecting future climate change.

Observational data

The observational and reanalysis data used for the model evaluation are summarized per data set in Table 3 and the variable definitions are given in Table 4.

Near-surface temperature and precipitation

Near-surface air temperature and precipitation are controlled by a large number of interacting processes, making it challenging to understand and improve model biases in these quantities as model errors can partly compensate each other. The two variables, however, are frequently analyzed in atmospheric models and can provide an overview and a starting point for further analysis.

Figure 1 shows the bias in annual mean near-surface temperature averaged over the 20-year period 1986-2005 from the CMIP5 and EMBRACE models compared with the observationally constrained reanalysis of the global atmosphere and surface conditions, ERA-Interim (Dee et al., 2011). All data have been interpolated to a common 1° x 1° grid using a bilinear interpolation scheme. The color scale has been adapted from the model evaluation chapter of the Fifth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC-AR5, Flato et al., 2013) to allow for an easy comparison with the CMIP5 multi-model mean bias (their Fig. 9.2b). The mean near-surface temperature from the individual models agrees with the ERA-Interim reanalysis mostly within ±3 °C.
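As an aside, the regridding and bias statistics described above can be reproduced with standard tools; the following is a minimal sketch using xarray, with hypothetical file and variable names (the actual processing in this study is performed with the ESMValTool):

```python
import numpy as np
import xarray as xr

# Hypothetical input files; "tas" is near-surface air temperature.
model = xr.open_dataset("model_tas.nc")["tas"]
ref = xr.open_dataset("era_interim_tas.nc")["tas"]

# 20-year climatologies (1986-2005).
model_clim = model.sel(time=slice("1986", "2005")).mean("time")
ref_clim = ref.sel(time=slice("1986", "2005")).mean("time")

# Bilinear interpolation to a common 1° x 1° grid.
grid = xr.Dataset(coords={"lat": np.arange(-89.5, 90.0, 1.0),
                          "lon": np.arange(0.5, 360.0, 1.0)})
model_clim = model_clim.interp_like(grid, method="linear")
ref_clim = ref_clim.interp_like(grid, method="linear")

bias = model_clim - ref_clim

# Area weighting with cos(latitude) for the global numbers quoted
# above the panels ("bias" and "rmsd").
weights = np.cos(np.deg2rad(bias.lat))
global_bias = bias.weighted(weights).mean(("lat", "lon"))
global_rmsd = np.sqrt((bias ** 2).weighted(weights).mean(("lat", "lon")))
print(float(global_bias), float(global_rmsd))
```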
Larger biases can be seen in regions with sharp gradients in temperature, for example in areas with high topography such as the Himalayas and at the sea ice edge in the Southern Ocean.

Near-surface air temperature

In the AMIP simulations (left two columns in Figure 1), the model MPI-ESM shows only very modest changes in the simulated mean near-surface temperature bias, whereas EC-Earth3 and CNRM-AM-PRE6 in particular show considerable improvements compared with their CMIP5 versions over North America. The cold biases over large parts of Antarctica found in CNRM-CM5 are also reduced in the EMBRACE version of the model, possibly related to updates in the turbulence scheme and the increased vertical resolution in the lower troposphere. In contrast, the warm bias over Central Africa in CNRM-AM-PRE6 and HadGEM3-A is worse than in their CMIP5 counterparts and might be partly related to reduced (convective) precipitation in this region (see also Figure 2) in the EMBRACE versions of the models. In the HadGEM3-A model, the increase in the near-surface temperature bias over India seems to be related to less summer monsoon rainfall (see also Sect. 3.1.2).

In the concentration-driven historical coupled simulations (right two columns in Figure 1), EC-Earth3 shows a bias reduction over many parts of the continents as well as over the tropical and subtropical oceans, in particular over the Southern Ocean, Central Africa and Northwest America. Despite these bias reductions, the globally averaged mean bias remains similar at about -1.1 °C. This is a consequence of reductions in the warm bias, e.g. in the Southern Ocean, leading to less error compensation of negative biases in other regions. While the CMIP5 version HadGEM2-ES shows a globally averaged negative bias of about -0.4 °C in near-surface temperature, HadGEM3-GC2-N96 has a positive global average bias (~0.7 °C). This is primarily caused by larger positive biases over most parts of the southern hemisphere ocean as well as over the tropical areas of Africa and South America in HadGEM3-GC2-N96. In these regions, the near-surface temperature biases in the EMBRACE version are up to 2 °C larger than in the predecessor version. Williams et al. (2015) comment that, while both models have a large downwards surface flux bias over the Southern Ocean, the larger coupled SST (and upper-ocean heat content) biases appear to be related to changes to both the lateral and vertical ocean heat transports associated with the change in ocean model and ocean resolution. The HadGEM3-GC2 errors also include a contribution associated with too shallow Southern Ocean summer mixed layers. Model biases in HadGEM3-GC2-N96 are, however, reduced compared with the CMIP5 version, particularly over the American Arctic with bias reductions of about 1 °C. The MPI-ESM model shows only rather small changes in the simulated annual mean surface temperature between the CMIP5 and EMBRACE versions. Similar to the HadGEM3-GC2-N96 model, the warm bias over the Southern Ocean is slightly worse in the EMBRACE simulation than in the CMIP5 simulation.

In the AMIP simulations, biases in the near-surface temperature climatology from the EMBRACE models are particularly reduced in mid-latitudes, such as over North America, but are increased in some models over many parts of the tropical continents. In most of the analyzed models, a warm bias over Central Africa and northern South America is still present in the EMBRACE simulations.
Particularly in these two tropical regions, however, the observational uncertainties are large, as can be seen from the comparison of ERA-Interim and the Climatic Research Unit (CRU) dataset showing differences of up to 2-3 °C. Only the temperature biases found in the simulations from the CNRM and HadGEM models when compared with ERA-Interim are larger than this estimate of the observational uncertainty in these regions. In the coupled simulations, large biases are still present in the Southern Ocean, in particular along the coast of Antarctica. The coupled EMBRACE models do not systematically outperform their CMIP5 counterparts in reproducing the ERA-Interim global near-surface temperature distribution (EC-Earth, MPI-ESM) and are in one case slightly worse in terms of RMSE (HadGEM). Here, it needs to be kept in mind that the EMBRACE models are still prototypes and not yet fully tuned, which is a particularly challenging and time-consuming task for coupled models.

[Figure 1 caption: differences between the 20-year climatology from ERA-Interim and, from left to right, (1) the AMIP simulations from the CMIP5 models, (2) the corresponding EMBRACE models, (3) the coupled historical simulations from the CMIP5 models, and (4) the corresponding EMBRACE models. As alternative reference data sets, data from the NCEP 1 reanalysis and the CRU dataset are shown in the two lowest rightmost panels. The globally averaged annual mean bias ("bias") and root mean square deviation ("rmsd") compared with ERA-Interim are given above the individual panels.]

Total precipitation

Biases commonly found in the simulated mean precipitation from CMIP5 models include too little precipitation along the equator in the western Pacific associated with ocean-atmosphere feedbacks (Collins et al., 2010) and too high precipitation amounts in the tropics south of the equator related to an unrealistic double ITCZ in many models, particularly in the Pacific (Oueslati and Bellon, 2015).

Figure 2 shows the biases in annual mean precipitation averaged over the 20-year period 1986-2005 from the CMIP5 and EMBRACE simulations compared with data from the Global Precipitation Climatology Project (GPCP, Adler et al., 2003). As in Figure 1, the color scale of Figure 2 has been matched to the one used in IPCC-AR5 to allow for an easy comparison with the CMIP5 multi-model mean bias (Flato et al., 2013, their Fig. 9.4b). The model data have been interpolated to the 2.5° x 2.5° grid of the GPCP observations using a bilinear interpolation scheme. The corresponding relative bias (%) in annual mean precipitation from the models compared with GPCP is shown in Fig. S1 in the supplementary material. While the AMIP simulations with the MPI-ESM model show no large changes in the amplitude and geographical distribution of the precipitation bias between the CMIP5 version (MPI-ESMnoembrace) and the EMBRACE version (MPI-ESMwithembrace), the EMBRACE models CNRM-AM-PRE6, EC-Earth3 and HadGEM3-A show considerable reductions in the precipitation biases compared with their CMIP5 versions. The CNRM-AM-PRE6 AMIP simulation shows a considerable reduction in the wet bias over large parts of the tropical ocean by up to 2 mm day-1, but a slightly worse dry bias in the tropical regions of South America and Africa compared with the CMIP5 simulation from CNRM-CM5.
EC-Earth3 also shows a reduction in the wet bias over most parts of the tropical oceans by about 1 mm day-1 and, in addition, a small reduction in the dry bias over the tropical parts of South America and Africa in comparison to EC-Earth. While the pattern of precipitation biases in HadGEM3-A is similar to that in HadGEM2-A, the magnitude of the bias is reduced in many regions, particularly over the tropical Indian Ocean and West Pacific.

In the coupled model simulations (rightmost two columns in Figure 2), EC-Earth3 shows a reduction, compared with EC-Earth, in the dry bias over northern South America and in the wet bias over the tropical Atlantic similar to that seen in the AMIP configuration. The differences between the CMIP5 and the EMBRACE simulation from EC-Earth in most other regions are comparatively small. It is noteworthy that the bias reduction in precipitation over tropical oceans with the EMBRACE models is smaller in the coupled experiments than in the atmosphere-only simulations. This is partly due to compensation between precipitation and SST biases in coupled models (e.g., Levine and Turner, 2012; Vanniere et al., 2014). Quantitative assessments are, however, not possible as the model setups of the coupled simulations analyzed here do not exactly match the ones used for the AMIP simulations.

The tropical precipitation in three out of four EMBRACE models analyzed is clearly improved, which can be partly attributed to improved convective precipitation as well as other updates in the atmospheric components of the models. For example, snow and rain are now prognostic variables in the EMBRACE version of EC-Earth. The wet biases in these regions in the CMIP5 simulations have been reduced by up to 1-2 mm day-1. This reduction in the wet bias of the models also holds when using CMAP data as reference for comparison (Fig. S2 in the supplementary material), even though the reduction in absolute bias is smaller as the CMAP data show less precipitation in the Tropics and are thus closer to the model results than GPCP.

In the following sections, more regional or process-specific climate phenomena known to exhibit systematic errors in present-day GCMs are evaluated. The following subsections cover: (i) the South Asian and West African monsoons, (ii) the coupled equatorial oceanic climate, and (iii) Southern Ocean clouds and radiation.

South Asian monsoon

The South Asian monsoon (referred to as the SAM hereafter) provides over 1 billion people with their primary source of water (Turner and Annamalai, 2012). Reliable estimates of possible future changes in the SAM are therefore crucial for long-term planning in the region (Menon et al., 2013).

The SAM has two distinct seasonal components. The winter monsoon is dominated by a planetary-scale circulation linked to the Siberian anticyclone and the Aleutian low centered over the North Pacific. These features induce northerly flow across South Asia from November to March with minimal amounts of precipitation (Chang et al., 2006). The summer monsoon starts in April, with the onset of rain over southern India and Myanmar generally occurring in early June and propagating northwest, reaching northern India by mid-July. The monsoon rainy season extends to the end of September before reverting to winter monsoon conditions by November (Chang et al., 2006).
Due to the importance of ocean-atmosphere interactions in the evolution of the SAM, and because we are primarily interested in evaluating model configurations that can be used for making future projections, here we analyze the ability of the coupled EMBRACE models to represent the main features of the summer SAM. Figure 3 shows seasonal mean (June to September, JJAS) precipitation from the coupled models and the differences relative to the satellite product TRMM 3B43 (Huffman et al., 2007). Figure 4 and Figure 5 show near-surface temperature and 850hPa zonal wind speed compared with data from the ERA-Interim reanalysis. Also shown are the alternative observation-based datasets GPCP-SG (precipitation) and the NCEP 1 reanalysis (Kalnay et al., 1996) (near-surface temperature and zonal wind speed).

[Figure 3 caption: JJAS mean precipitation from the coupled models (leftmost two columns) and differences relative to TRMM (rightmost two columns). Columns 1 and 3 show the original CMIP5 model versions, columns 2 and 4 the EMBRACE-updated models. The years used to calculate the averages are given above each panel ("yrs"). The domain-averaged annual mean ("mean") and linear pattern correlation ("corr") (leftmost two columns) and mean bias ("mean") and root mean square error ("rmse") (rightmost two columns) compared with TRMM are given above the individual panels.]

In the observations, a precipitation maximum is seen on the west coast of India, with a relative minimum on the lee side of the Western Ghats. Further maxima are seen along the coast of Myanmar and Laos (east coast of the Bay of Bengal) and along the foothills of the Himalayas. A broad region of precipitation is also evident in central and north-east India. EC-Earth and MPI-ESM capture these primary rainfall features with varying degrees of accuracy. EC-Earth overestimates rainfall over the ocean adjacent to the Western Ghats and over the Bay of Bengal. Farther east, over Myanmar and Laos, precipitation is underestimated. The positive precipitation bias over the ocean is clearly improved in EC-Earth3. Both MPI-ESM versions underestimate rainfall over the Indian subcontinent, with particular negative biases associated with the Western Ghats mountains and the foothills of the Himalayas, likely caused by the low resolution of MPI-ESM. There is little difference between the two MPI-ESM configurations. The major precipitation biases are also largely unchanged between the two HadGEM models, with an underestimate of precipitation across India and a secondary negative bias south of India along the equator. Irrigation, a process missing in current GCMs, might contribute to the dry bias over northern India. Saeed et al. (2009) found that, if irrigation is not considered, temperature biases caused by too strong differential heating between land and sea can lead to unrealistic simulations of the SAM circulation and associated rainfall in climate models.

The HadGEM models (particularly HadGEM3-GC2) show a large warm bias in 2m temperature over the Indian land mass (Figure 4). This error is linked to excess downwelling surface shortwave radiation (of up to 60 Wm-2 in the JJAS mean) due to a lack of optically thick clouds over India. The lack of simulated rainfall exacerbates this problem further, leading to a dry land-surface bias, reduced surface evaporative cooling and increased surface sensible heat flux. The converse is seen in both EC-Earth coupled simulations, with a cold bias of ~5 °C over India linked to an underestimate of downwelling solar radiation of ~40 Wm-2.
The domain-averaged cold bias in the EMBRACE simulation with EC-Earth is, however, considerably reduced from -2.1 °C in the CMIP5 version of the model to -1.3 °C in the EMBRACE version.

[Figure 4 caption: JJAS mean 2m temperature; columns 1 and 3 show the original CMIP5 model versions, columns 2 and 4 the EMBRACE-updated models. The domain-averaged annual mean ("mean") and linear pattern correlation ("corr") (leftmost two columns) and mean bias ("mean") and root mean square error ("rmse") (rightmost two columns) compared with ERA-Interim are given above the individual panels.]

All of the models represent the cross-equatorial low-level jet and the acceleration of the westerly monsoon flow across the Arabian Sea (Figure 5), though the strength of the jet core and the eastward extension of the westerlies towards the Philippines vary between models. Positive biases in 850hPa wind speed are reduced in the HadGEM3-GC2-N96 model and are replaced by a negative bias over the Arabian Sea. In contrast, the largely negative biases in EC-Earth are replaced by a positive bias in EC-Earth3. Both EC-Earth configurations, and to a lesser extent the MPI-ESM models and HadGEM2-ES, have a cold SST bias across the western Indian Ocean and Arabian Sea (as seen from the biases in 2m temperature in Figure 4). For a given low-level wind speed, a cold bias in Arabian Sea SSTs will act to decrease surface ocean evaporation (relative to the correct SST) and thus reduce the atmospheric moisture flux into India and consequently precipitation, while a cold bias in the equatorial Indian Ocean contributes to the meridional temperature gradient and thereby enhances the monsoon flow (Levine and Turner, 2012; Levine et al., 2013).

In particular, the performance in terms of averaged RMSE of EC-Earth3 for the variables precipitation, near-surface temperature and 850hPa zonal wind speed is improved compared with its CMIP5 predecessor (see EC-Earth panels in Figures 3-5). For HadGEM3, the domain-averaged RMSE for 850hPa zonal wind speed is reduced from 1.97 to 1.66 ms-1 compared with HadGEM2. In contrast, 2m temperature and precipitation patterns from HadGEM3 show improvements in some regions but degraded agreement with ERA-Interim in other regions such as India. This kind of regional performance degradation is probably at least partly related to these being prototype model configurations that are not yet fully tuned.

[Figure 5 caption: JJAS mean 850hPa zonal wind speed (leftmost two columns) and differences relative to ERA-Interim (rightmost two columns). Columns 1 and 3 show the original CMIP5 model versions, columns 2 and 4 the EMBRACE-updated models. The domain-averaged annual mean ("mean") and linear pattern correlation ("corr") (leftmost two columns) and mean bias ("mean") and root mean square error ("rmse") (rightmost two columns) compared with ERA-Interim are given above the individual panels.]

Figure 6 summarizes the annual cycle of the SAM, sampling both precipitation and dynamical measures. Figure 6a shows the mean annual cycle of precipitation spatially averaged over 5° to 30° N and 65° to 95° E. EC-Earth overestimates both the duration of the monsoon rainy season and the mean rainfall intensity during the peak monsoon. Both of these biases are improved in EC-Earth3. There is little difference between the two MPI-ESM models, which at this spatial scale exhibit an accurate simulation of monsoon rainfall. The two HadGEM configurations underestimate rainfall, with biases particularly large in the early monsoon period (May to July). Through bias compensation the multi-model mean provides the most accurate mean annual cycle.
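As an aside, the area-averaged annual cycle behind Figure 6a is straightforward to reproduce; the following is a minimal xarray sketch, assuming a CF-compliant precipitation file (hypothetical name) with ascending latitude/longitude coordinates in degrees north/east:

```python
import numpy as np
import xarray as xr

pr = xr.open_dataset("model_pr.nc")["pr"]  # precipitation, e.g. kg m-2 s-1

# Select the SAM box (5-30° N, 65-95° E); assumes lon runs 0-360 degrees east.
box = pr.sel(lat=slice(5, 30), lon=slice(65, 95))

# Area-weighted spatial mean, then the mean annual cycle (12 monthly values).
weights = np.cos(np.deg2rad(box.lat))
series = box.weighted(weights).mean(("lat", "lon"))
annual_cycle = series.groupby("time.month").mean()

# Convert from kg m-2 s-1 to mm day-1 for comparison with Figure 6a.
annual_cycle_mmday = annual_cycle * 86400.0
```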
Figure 6b shows the annual cycle of the Webster and Yang dynamical monsoon index (Webster and Yang, 1992; hereafter WY) and Figure 6c the Goswami index (Goswami et al., 1999; hereafter GM). The WY index is based on the vertical shear of the tropospheric zonal wind speed (u850hPa - u200hPa) averaged over 40°-110° E and 0°-20° N, while the GM index is a measure of the vertical shear in the meridional wind speed (v850hPa - v200hPa) averaged over 70°-110° E and 10°-30° N. Both capture the interplay between large-scale dynamics and atmospheric diabatic heating over the Indian region. The WY index is a measure of the large-scale south-westerly monsoon circulation, while the GM index is a measure of the Hadley circulation intensity and meridional propagation. All models simulate these dynamical measures, particularly the WY index, considerably more accurately than SAM precipitation, although part of this improved performance stems from error compensation between lower-tropospheric (850 hPa) and upper-tropospheric (200 hPa) wind speed biases (not shown).

All of the EMBRACE coupled models exhibit considerable biases with respect to monsoon precipitation. Only EC-Earth3 showed a measurable improvement over its CMIP5 predecessor. Most of the models appear to capture the large-scale dynamical evolution of the SAM, but fail to capture the associated evolution of precipitation, particularly the sub-continental distribution of rainfall, although at the scale of "all India" the MPI-ESM models do capture the annual cycle quite well. Models continue to have severe problems capturing the subtle interactions between deep convection, cloud-radiation processes, precipitation and surface evaporation, and the associated interplay between diabatic heating over land, the large-scale monsoon circulation and the associated oceanic evaporation.

West African monsoon

West Africa also experiences a summer monsoon from May to October (Nicholson and Grist, 2003), with rains starting in May at the Guinea coast and propagating northward to the Sahel region (~15° N) by mid-July (Sultan and Janicot, 2003). Failures in the West African monsoon (hereafter WAM), or a lack of northward propagation into the Sahel, can have devastating consequences for the population of this region, as evidenced by the extensive famines during the 1970s and 1980s linked to decadal variability in WAM rainfall (Held et al., 2005; Nicholson et al., 2000). As with the SAM, the WAM results from the seasonal development of a low-level thermal gradient between the tropical ocean and the Sahara (Caniaux et al., 2011; Lavaysse et al., 2009). This monsoon circulation and the associated low-level moisture flow interact with westward propagating, synoptic-scale African Easterly Waves (AEWs; Poan et al., 2013, 2015) that grow on the southern and northern flanks of the African Easterly Jet (AEJ; Thorncroft and Hoskins, 1994a, b). This interaction between AEWs and the monsoon moisture flux supports the development of organized mesoscale convective systems (MCS) embedded within the AEWs. These MCS deliver the majority of rainfall over West Africa (Fink and Reiner, 2003; Mathon et al., 2002).
Such interaction across scales (convective, meso-, synoptic and planetary scales) is at the heart of WAM dynamics and is a challenge for GCMs, preventing them from accurately simulating this system, including both its natural variability and a forced response to increased greenhouse gases driving precipitation changes (Biasutti, 2013; Roehrig et al., 2013; Ruti and Dell'Aquila, 2010).

Figures 7 and 8 show JJAS mean precipitation and 2m temperature over West Africa compared with observations from TRMM and ERA-Interim reanalysis data, respectively. Differences between TRMM and GPCP, for precipitation, and between ERA-Interim and Climatic Research Unit (CRU, Harris et al., 2014) data, for 2m temperature, give an estimate of the observational uncertainty in the region. The WAM is marked by a maximum in precipitation stretching from the Atlantic coast across to the Darfur mountains in Sudan over a latitude band of ~5° N to 15° N. Directly north of the precipitation maximum, near-surface temperatures increase rapidly over the Sahara. Surface warming induces a deep near-surface low-pressure system (the Saharan heat low, Lavaysse et al., 2009) that is one of the main drivers of the low-level south-westerly flow into West Africa.

[Figure 7 caption: JJAS mean precipitation over West Africa (leftmost two columns) and differences relative to TRMM (rightmost two columns). Columns 1 and 3 show the original CMIP5 model versions, columns 2 and 4 the EMBRACE-updated models. The years used to calculate the averages are given above each panel ("yrs"). The domain-averaged annual mean ("mean") and linear pattern correlation ("corr") (leftmost two columns) and mean bias ("mean") and root mean square error ("rmse") (rightmost two columns) compared with TRMM are given above the individual panels.]

All the coupled models exhibit a positive precipitation bias over the Gulf of Guinea. This error is reduced when the models are run with prescribed SSTs (not shown). Such precipitation errors are associated with a warm bias in all three models' SST fields off the coast of Namibia and Angola (evident in the 2m temperatures, Figure 8). The warm SST bias, in combination with the predominant southerly low-level atmospheric flow into West Africa (Figure 9), drives a large (and excessive) low-level moisture convergence into the Guinea coast region and is arguably the main cause of the precipitation bias. Positive SST biases in this region are common to coupled GCMs and are thought to arise from a combination of poorly resolved coastal ocean dynamics (Wahl et al., 2011; Xu et al., 2014), atmospheric wind forcing (Richter and Xie, 2008; Voldoire et al., 2014) and poor simulation of marine stratocumulus clouds (Huang et al., 2007). EC-Earth3 has somewhat improved SST biases in this region compared with its CMIP5 version, which may partly explain the reduced rainfall bias off the Guinea coast.

Similarly to the analysis of the SAM, EC-Earth3 shows improvements in the averaged RMSE values for precipitation, 2m temperature and 925hPa zonal wind speed in the WAM region compared with its CMIP5 version (see the numbers above the EC-Earth panels in Figures 7, 8 and 9). For the MPI-ESM and HadGEM models, some regional improvements in these variables are partly compensated by performance degradation in other regions, resulting in similar or slightly worse domain-averaged RMSE values.

[Figures 8 and 9 captions: as in Figure 7, columns 1 and 3 show the original CMIP5 model versions, columns 2 and 4 the EMBRACE-updated models; the domain-averaged annual mean ("mean") and linear pattern correlation ("corr") (leftmost two columns) and mean bias ("mean") and root mean square error ("rmse") (rightmost two columns) compared with ERA-Interim are given above the individual panels.]
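The domain statistics quoted above the panels of Figures 3-9 ("mean", "corr", "rmse") are standard area-weighted field comparisons; the following is a minimal sketch with a hypothetical helper function, assuming model and reference fields are already on the same latitude-longitude grid:

```python
import numpy as np
import xarray as xr

def domain_stats(model, ref):
    """Area-weighted mean bias, RMSE and linear pattern correlation
    between two fields on the same lat-lon grid (hypothetical helper)."""
    w = np.cos(np.deg2rad(model.lat))
    diff = model - ref
    bias = diff.weighted(w).mean(("lat", "lon"))
    rmse = np.sqrt((diff ** 2).weighted(w).mean(("lat", "lon")))
    # Centered (anomaly) pattern correlation with area weights.
    ma = model - model.weighted(w).mean(("lat", "lon"))
    ra = ref - ref.weighted(w).mean(("lat", "lon"))
    corr = (ma * ra).weighted(w).mean(("lat", "lon")) / np.sqrt(
        (ma ** 2).weighted(w).mean(("lat", "lon"))
        * (ra ** 2).weighted(w).mean(("lat", "lon")))
    return float(bias), float(rmse), float(corr)
```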
Latitudinal cross-sections of precipitation, 2m temperature and key radiation variables averaged from 10° W to 10° E for the JJAS season are shown in Figure 10. A maximum in 2m temperature (Figure 10a) coincides with a minimum in sea level pressure (Figure 10b) associated with the Saharan heat low (around 22° N). While there is some discrepancy between the simulated 2m temperature and the two observationally based data sets (ERA-Interim and CRU), all models capture the sharp increase in temperature around 15° N, although maximum temperatures over the Sahara can vary by 5 °C across models. Most models also capture the location and intensity of the Saharan heat low fairly well. Surprisingly, the warmest model over the Sahara (MPI-ESM) has the weakest low-pressure minimum, and HadGEM3-GC2, with one of the deepest low pressures, has relatively cool temperatures over the Sahara. Possibly more significant, the location of the low-pressure minimum in HadGEM3-GC2 is displaced ~500 km south of the observed minimum.

A key driver of high Saharan surface temperatures is surface absorption of solar radiation. Figure 10c shows downwelling surface solar radiation (SWD) with CERES-EBAF satellite-derived estimates as an observationally based reference (Loeb et al., 2009). The shortwave cloud radiative effect (SW CRE) is negative where clouds reduce the amount of SW radiation absorbed by the atmosphere-surface system relative to an equivalent clear-sky atmosphere (i.e. increased SW reflection). This is clearly visible around 10° N, where CERES indicates a reduction in absorbed SW of -90 Wm-2 due to clouds. Cloud effects drop to -10 Wm-2 over the Sahara. Both HadGEM models simulate SW CRE over the Sahara close to 0 Wm-2, indicative of zero cloud cover. This may partly explain the high bias in SWD seen in HadGEM. The longwave cloud radiative effect (LW CRE) is shown in Figure 10f, with positive values indicating that clouds reduce the amount of outgoing longwave radiation (OLR) relative to a clear-sky equivalent atmosphere (i.e. more terrestrially emitted radiation trapped in the atmosphere). The precipitation-cloud maximum at 10° N is delineated by a maximum in LW CRE of 40 Wm-2. The majority of models underestimate LW CRE compared to CERES, particularly in the latitude band 10°-20° N. In this band most models also underestimate the (negative) SW CRE, indicating that model clouds in this band are optically too thin, consistent with an underestimate of rainfall there in most models.

Zonally averaged precipitation between 10° W and 10° E from TRMM-3B43 and GPCP-1DD shows relatively good agreement (Figure 10g). The majority of models fail to represent the rapid increase in precipitation between 0° and 5° N close to the Guinea coast due to excessive precipitation over the ocean. Most models represent the second maximum in precipitation around 12° N, linked to AEWs on the southern flank of the African Easterly Jet. HadGEM, EC-Earth and MPI-ESM are all somewhat deficient in rainfall, particularly in the northern maximum region, consistent with the cloud-radiation errors discussed above. There is no clear improvement in precipitation between the CMIP5 models and the EMBRACE-updated models.
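The cloud radiative effects discussed above are defined relative to an equivalent clear-sky atmosphere and can be computed directly from standard model output; a minimal sketch assuming CMIP-style variable names (rsut/rsutcs for all-sky/clear-sky reflected SW at TOA, rlut/rlutcs for all-sky/clear-sky OLR) and a hypothetical file name:

```python
import xarray as xr

ds = xr.open_dataset("model_toa_fluxes.nc")  # hypothetical file

# SW CRE = clear-sky minus all-sky net SW at TOA; since incoming SW is
# identical in both, this reduces to rsutcs - rsut (negative where clouds
# increase the reflected SW).
sw_cre = ds["rsutcs"] - ds["rsut"]

# LW CRE = clear-sky minus all-sky OLR (positive where clouds trap OLR).
lw_cre = ds["rlutcs"] - ds["rlut"]
```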
In addition to simulating the seasonal mean statistics of the WAM and SAM, it is important that models also represent the underlying weather variability that makes up the seasonal mean precipitation. Any future changes in intra-seasonal precipitation variability will likely have as big an impact on societies in the two regions as changes in seasonal mean monsoon rainfall.

The 3-10 day band-pass filtered variance of precipitation (Figure 11) emphasizes the dominant timescale of precipitation variability over West Africa. This variability is associated with westward propagating AEWs and MCS embedded within these waves. Both TRMM and GPCP show large variance in precipitation at these timescales, stretching from the Darfur mountains west across the Sahel region, with maximum values westward from ~0° E to the Atlantic coast, coincident with the southern flank of the AEJ.

Despite the relatively similar time mean (climatological) precipitation from TRMM and GPCP-1DD, the precipitation variability differs strongly between the two data sets. As the base TRMM observational data are at 0.25° spatial and 3-hourly temporal resolution, whereas the GPCP-1DD data are at a spatial resolution of 1° with daily mean values as the highest temporal resolution, we would expect TRMM to sample the high temporal and spatial variability of convective precipitation in this region more accurately than GPCP-1DD. In this analysis, we therefore use TRMM as our main reference dataset. EC-Earth appears to capture the northern band of precipitation variability quite well, although this is degraded in EC-Earth3 west of the 0° meridian. Both the HadGEM and MPI-ESM models fail to capture sufficient precipitation variability at these timescales over land compared with TRMM, with significant variability only occurring over the tropical ocean regions. All three coupled EMBRACE models show less precipitation variability than their CMIP5 counterparts. Such findings emphasize the need for an improved representation of wave-precipitation interactions in all coupled models before they can provide robust estimates of changes in intra-seasonal rainfall over this region.

Higher model resolution is generally considered an important route for improving weather timescale variability in climate models (Jung et al., 2012; Roberts et al., 2015). In the following section we present an analysis of EC-Earth simulations run with prescribed SSTs (AMIP mode) sampling horizontal resolutions from T159 (125 km) to T1279 (16 km). In this analysis we focus on the potential benefit increased atmospheric model resolution brings to simulating synoptic (weather) timescale precipitation variability over both the WAM and SAM regions. Presently these findings are for one EMBRACE model only, but they are likely pertinent to model development priorities across modeling groups.

Representing synoptic time scale precipitation variability in monsoon systems: The role of increased model resolution

While an accurate representation of the mean monsoon climatology, in particular the annual cycle, is a fundamental requirement of GCMs, rainfall variability within the monsoon season is also of importance to the predominantly agrarian societies of West Africa and South Asia.
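The 3-10 day band-pass filtering used for Figure 11 above and throughout this section can be implemented with a standard Butterworth filter; a minimal scipy sketch, assuming daily data and hypothetical variable names:

```python
import numpy as np
from scipy import signal

def bandpass_variance(pr_daily, fs=1.0, low_period=10.0, high_period=3.0):
    """Variance of 3-10 day band-pass filtered precipitation.

    pr_daily: array of shape (time, lat, lon) with daily values.
    fs: sampling frequency in day^-1 (1.0 for daily data).
    """
    # Band edges in cycles per day: 1/10 to 1/3.
    sos = signal.butter(4, [1.0 / low_period, 1.0 / high_period],
                        btype="bandpass", fs=fs, output="sos")
    # Zero-phase filtering along the time axis, then the temporal variance.
    filtered = signal.sosfiltfilt(sos, pr_daily, axis=0)
    return filtered.var(axis=0)
```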
Over the Sahel, the majority of precipitation comes from intermittent mesoscale convective systems (MCS) embedded within westward propagating synoptic African Easterly Waves (Mathon et al., 2002), with a clear peak in precipitation variability on the 2-8 day time scale (Kiladis et al., 2006). Similarly, over the SAM region a significant amount of rainfall is associated with synoptic-scale monsoon depressions that develop over the Bay of Bengal before propagating northwestward across India and eventually dissipating over northwest India or Pakistan (Hunt et al., 2016). To assess the ability of GCMs to accurately simulate this synoptic rainfall, we follow the approach described in the previous section and apply a 3-10 day band-pass filter to model and observed precipitation to highlight variability at the time scales of interest.

It is becoming increasingly established that higher model resolution provides a more realistic representation of the underpinning processes controlling weather and precipitation variability (e.g. Dawson and Palmer, 2015; Demory et al., 2014; Jung et al., 2012). In order to assess the benefit higher model resolution brings to the simulation of sub-seasonal precipitation variability over the WAM and SAM, in this section we analyze one of the EMBRACE models (EC-Earth) run in AMIP mode for the period 1980-2009, sampling atmospheric model horizontal resolutions of T159 (125 km), T255 (80 km), T319 (64 km), T511 (40 km), T799 (25 km) and T1279 (16 km), with a common set of 91 vertical levels. Findings from this analysis may offer pointers to an optimal resolution for other models to aim at with respect to simulating sub-seasonal precipitation variability as well as seasonal mean rainfall.

Figure 12 shows 3-10 day band-pass filtered precipitation variance for JJAS over Africa from two observational data sets (TRMM 3B42 and GPCP-1DD) and the six EC-Earth resolutions. The two observational data sets differ markedly with respect to the absolute magnitude of variance on these time scales. This is partly expected, as they feature rather different horizontal resolutions (0.25° vs. 1°). They do, however, exhibit some agreement in the spatial distribution of maxima and minima in precipitation variability, with a broad region of high variability stretching from Sudan west across to the Atlantic coast. GPCP-1DD indicates a northerly maximum in variability over West Africa around 12° N, associated with AEWs growing on the northern flank of the AEJ. Both data sets indicate a maximum in variability at the Atlantic coast, around 10° N, and also relative maxima over the Ethiopian Highlands and Darfur mountains.

In EC-Earth, precipitation variability increases (and improves compared with the observations) as model resolution increases from T159 to T511. Beyond T511 there is little further increase in variability. In particular, as resolution increases from T159 to T511, higher variability appears eastwards back across the AEW wave track towards Ethiopia. There is also a clear increase in variability (wave activity/intensity) at the Atlantic coast. Perhaps surprisingly, this increase in precipitation variability is not seen in the 850hPa meridional wind variability (not shown), which is a typical measure of AEW activity. Meridional wind variability is well simulated at T159 resolution and changes little right up to T1279.
Hence, the increased model resolution seems to impact directly on the moist processes that lead to rainfall on the ground, while having only minimal impact on the dynamical structure of the AEWs. It is also worth noting that the seasonal mean (JJAS) precipitation changes very little across the EC-Earth resolutions (not shown), suggesting that at lower resolutions (below T319) the seasonal mean precipitation in EC-Earth, while relatively accurate, is made up of incorrect higher-frequency (weather) variability/intensities.

Figure 13 shows 3-10 day filtered JJAS precipitation variance over the extended SAM region; again, both TRMM and GPCP-1DD observations are plotted. As over the WAM region, variability is significantly higher in TRMM than in GPCP, particularly over the equatorial Indian Ocean. Also similar to the WAM, precipitation variability increases (and improves) in EC-Earth as model resolution increases from T159 to T511, with little change thereafter. This increase holds for variability associated with the SAM itself, but not for variability over the equatorial Indian Ocean, which in fact slightly decreases (and degrades) as resolution increases beyond T255. As with the WAM, there is only a slight change (an increase) in seasonal mean (JJAS) precipitation in EC-Earth with increasing model resolution (not shown), although in regions of steep topography (such as the foothills of the Himalayas) there is an increase (improvement) in seasonal mean precipitation as model resolution increases.

There is some suggestion of improvement in the representation of cloud-radiation interactions over the WAM region in moving from the CMIP5 models to the EMBRACE-updated models, with an impact on the large-scale dynamical structures over the region. Unfortunately, these bias reductions do not lead to clear improvements in regional rainfall (e.g. over the Sahel) or in rainfall variability. As with the SAM, a major improvement in the representation of moist convection and its forcing of, and interaction with, clouds, radiation and the surface energy budget appears to be the most important requirement for a major advance in the simulation quality of the WAM in present-day GCMs. Analysis of EC-Earth AMIP simulations at different horizontal resolutions indicates an improvement in synoptic timescale precipitation variability as resolution is increased up to T511. This improvement occurs over both the WAM and SAM regions and, in both cases, seasonal mean rainfall is largely unchanged, suggesting that mean WAM and SAM rainfall in lower resolution models (in the case of EC-Earth, lower than T511) may be correct but composed of an incorrect underlying variability/intensity distribution. Furthermore, this indicates that not all model deficiencies in representing synoptic precipitation variability in the monsoon regions can simply be solved by higher horizontal model resolution. Other factors, such as deficiencies in the cloud and precipitation parameterizations, are also expected to contribute.

Coupled tropical ocean climate

In the tropical Pacific the dominant easterly trade winds induce oceanic upwelling along the equator, resulting in a cold equatorial tongue of surface waters stretching from the coast of Central America to the date line. This cold tongue inhibits deep atmospheric convection, which becomes confined to west of ~170° E in the equatorial Pacific.
In combination with the easterly trade winds and the cold tongue, the mean equatorial ocean thermocline tilts from shallow depths in the East Pacific (mean 20 °C isotherm located at ~50 m depth) to deeper values (mean 20 °C isotherm at ~200 m) in the West Pacific. This coupled feedback, referred to as the Bjerknes feedback (Bjerknes, 1969; Neelin and Dijkstra, 1995), plays a key role in determining the mean state of the equatorial Pacific climate, as well as the main modes of variability around this mean state, such as the El Niño Southern Oscillation (ENSO) (Bellenger et al., 2014). Similar coupled interactions, smaller in magnitude, also play a role in shaping the mean state of the tropical Atlantic (Xie and Carton, 2004). Key ocean-atmosphere feedbacks are the thermocline, the zonal advective and the Ekman feedbacks. Another conceptual model for explaining the origin of ENSO is the stochastic theory of Penland and Sardeshmukh (1995), which uses a linear system forced by noise; for a comparison of different conceptual models, we refer to the literature.

Accurately representing the processes underpinning the mean state of the coupled tropical ocean is an important requirement for global climate models, necessary for confidence in their projections of future changes in both the mean state and ENSO variability, with changes in the latter being sensitive to small, systematic errors in the mean state (Bellenger et al., 2014; Guilyardi, 2006). An accurate coupled mean state may also be important for simulating longer-timescale variability in tropical ocean heat uptake (England et al., 2014; Meehl et al., 2011).

We implemented a number of performance metrics, developed by Li and Xie (2014), into the ESMValTool and used them to assess the ability of the EMBRACE AMIP and coupled models to simulate the coupled equatorial Pacific climate. Figure 14a shows latitude cross-sections of DJF zonal mean precipitation from the AMIP simulations. Zonal means are for all ocean grid cells between 120° E and 100° W. Observed SST is from HadISST (Rayner et al., 2003) and precipitation from CMAP (Xie and Arkin, 1997), GPCP (Adler et al., 2003; Huffman and Bolvin, 2012) and TRMM (Huffman et al., 2007). For the AMIP simulations all SST fields match the observations by design, except for HadGEM3-A, which deviates slightly due to using daily SST and sea ice fields from Reynolds et al. (2007).

Observed SSTs have a relative minimum on the equator, ~0.5 °C cooler than the SSTs at 7-8° S and ~1 °C cooler than the SSTs at 7-8° N. Maximum SSTs are north of the equator, ~0.5 °C warmer than at similar latitudes south of the equator. Observed precipitation shows a distinct maximum at ~8° N, with values of 7 mm day-1 (GPCP) to 8 mm day-1 (CMAP, TRMM). A second, weaker maximum (3-4 mm day-1 depending on the observational data set) is seen at 8° S. A precipitation minimum is located on the equator, coincident with the SST minimum. The AMIP models reproduce this structure of precipitation, with only small deviations from the observations.

A different picture emerges for the coupled models (Figure 14b; maps of the annual mean bias in SST and precipitation from the coupled models zoomed in over the Pacific are shown in Figures S3 and S4 in the supplementary material). All models, apart from HadGEM3-GC2, exhibit a widespread cold SST bias across the tropical Pacific, including a significant cold bias in the SST minimum at the equator (Figure S3).
This cold bias, however, is substantially improved in the coupled EMBRACE simulations with EC-Earth3 and HadGEM3-GC2-N96. HadGEM3-GC2 has accurate SSTs, both north of and along the equator, but it exhibits a slight warm bias south of the equator and therefore fails to reproduce the north-south asymmetry in SST across the equator. This impacts the precipitation distribution in HadGEM3-GC2, with two maxima of similar magnitude symmetric about the equator and coincident with the model SST maxima (Figure S4). In contrast, EC-Earth3, while having a distinct cold bias along and south of the equator, captures the south-north increase in SST across the equator. This meridional SST gradient appears crucial for capturing the observed asymmetry in precipitation, which EC-Earth3 successfully does. Both MPI models have a large SST cold bias in the tropics, particularly along the equator, and simulate an ITCZ on either side of the equator. Comparing EC-Earth3 with HadGEM3-GC2 with respect to capturing the south-to-north increase in precipitation across the equator, it seems more important that models capture the corresponding gradient in SST than the absolute magnitude of equatorial SSTs. Recent studies (e.g. Frierson et al., 2013; Marshall et al., 2014) suggest the overturning ocean circulation is responsible for a net transport of energy from the southern to the northern hemisphere, leading to the observed SST maximum being north of the equator. Kang et al. (2009) and Frierson and Hwang (2012) further argue that the location of the ITCZ, marking the low-level convergence of the northern and southern hemisphere Hadley cells, is a direct result of this ocean-induced asymmetry in hemispheric energy, with the southward-directed, cross-equatorial upper branch of the Hadley cell balancing the northward ocean energy transport.

Similar findings hold for the tropical Atlantic (not shown). Observed SSTs are maximum at ~4° N, although there is a less distinct minimum along the equator. Precipitation is also maximum north of the equator. All coupled models, apart from HadGEM3-GC2, again show a systematic cold SST bias throughout the near-equatorial Atlantic. As in the Pacific, HadGEM3-GC2 has relatively accurate absolute SSTs but a warm bias south of the equator, so it does not simulate the south-to-north increase in SST. This leads to two ITCZ precipitation maxima symmetric about the equator in this model. EC-Earth3 again has a general cold SST bias, but correctly simulates the south-to-north gradient in SST and, as a result, also correctly simulates a single ITCZ rainfall maximum north of the equator.

In Figure 15 we follow the approach of Li and Xie (2014) to analyze the longitudinal structure of the coupled climate simulated in the equatorial Pacific. Figure 15 shows zonal mean precipitation, SST, 1000hPa zonal wind speed and 925hPa wind divergence, all averaged between 2.5° N and 2.5° S, from 120° E to 80° W. As not all ocean data were saved in the EMBRACE simulations, the depth of the ocean 20 °C isotherm (as used in Li and Xie, 2014) cannot be plotted and is replaced by the 925hPa wind divergence. Most AMIP models (left column in Figure 15) reproduce the zonal structure of precipitation across the Pacific, with minimal values from 80° W to 150° W and then an increase to a maximum in the West Pacific warm pool region at ~145° E to 165° E.
All models, with the possible exception of HadGEM3-A and CNRM-AM-PRE6, simulate too strong easterly trade winds (too negative values in Figure 15), particularly west of 160° W. In the AMIP models, this wind bias cannot impact the prescribed SSTs. In the same cross-sections for the three coupled simulations (right column), only HadGEM3-GC2 has an accurate zonal structure of SST. All other models have a cold bias of 1 °C or more across the central Pacific (between 100° W and 170° E). Most models also underestimate precipitation in the equatorial band from 150° W to 160° E and feature a West Pacific rainfall maximum displaced 10-20° west of the observed maximum. Only HadGEM3-GC2, and to a lesser extent HadGEM2-ES, capture the correct zonal pattern of precipitation and the location of the West Pacific maximum, indicating the important role of SST for the zonal structure of precipitation. Both EC-Earth models show a considerable easterly wind speed bias across most of the Pacific, as does HadGEM2-ES east of 150° W. Excess 925hPa wind divergence is seen in all three coupled models. HadGEM3-GC2 has the most accurate simulation of equatorial zonal wind speeds and wind divergence. This suggests that in the two EC-Earth models and HadGEM2-ES, excessive easterly winds induce too strong Ekman divergence and ocean upwelling along the equatorial Pacific, leading to the cold SST bias. The two MPI-ESM coupled models have cold SST biases across the Pacific of about 2 °C even though the simulated zonal wind speeds are relatively accurate, contrasting significantly with the two MPI-ESM AMIP models, in which the largest (positive) easterly wind biases are seen. The cold SST bias in MPI-ESM is accompanied by a positive bias in 925hPa wind divergence (excessive low-level equatorial divergence), indicating too strong meridional (poleward directed) wind components near the equator in this model. The findings suggest excess surface momentum loss from the easterly trade winds, driving both a cold SST bias along the equator and excessive poleward directed winds just off the equator in the MPI coupled models.

Simulated moist convection over the tropical oceans is extremely sensitive to small errors (~0.5 °C) in both the absolute value and the spatial gradient of SSTs near the equator. HadGEM3-GC2 has the most accurate absolute values of tropical SSTs, but suffers from a double-ITCZ problem due to its meridional SST gradients across the equator being incorrect. In contrast, EC-Earth3 has a systematic cold SST bias in both the equatorial Pacific and Atlantic but captures the correct meridional gradient in SST between the two hemispheres. As a result, EC-Earth3 does not exhibit a double ITCZ, with a clear precipitation maximum north of the equator in both basins collocated with the SST maxima. Two of the three EMBRACE models (HadGEM3-GC2 and EC-Earth3) show improvement in simulated tropical SSTs compared with their CMIP5 versions. HadGEM3-GC2, in particular, has a very accurate zonal structure of SST across the equatorial Pacific, along with the associated atmospheric phenomena (precipitation, easterly trade winds). EC-Earth3 also shows some improvement over its CMIP5 version.

Southern Ocean clouds and radiation

The Southern Ocean plays a key role in the earth's climate, being one of the few extensive regions of the globe where the deep ocean is in regular contact with the surface, allowing significant atmosphere-ocean exchange of heat (Kuhlbrodt and Gregory, 2012) and CO2 (Frölicher et al., 2015).
Farther south, the formation of Antarctic deep water efficiently transports surface waters into the deep ocean. Both of these phenomena are key components of the global ocean overturning circulation (Marshall and Speer, 2012). Trenberth and Fasullo (2010) show that GCMs (CMIP3) have a persistent underestimate in reflected shortwave (SW) radiation at the top of the atmosphere (TOA) over the Southern Ocean, implying too much SW radiation reaching the ocean surface. Linked to this, many coupled GCMs also show a warm SST bias over extensive parts of the Southern Ocean. This bias increases the vertical stability of the upper ocean and can therefore impact the overturning ocean circulation. Trenberth and Fasullo (2010) and Sallée et al. (2013) suggest such biases compromise the reliability of climate change projections in the region.

To assess GCM-simulated surface energy budgets over the Southern Ocean, a number of metrics have been implemented into the ESMValTool. In this section we apply some of these metrics to assess the EMBRACE models' ability to capture the phenomena controlling the surface radiation budget of the Southern Ocean. We focus on austral summer, when incoming surface radiation is at a maximum and model errors are generally largest. We analyze total cloud amount, cloud liquid (LWP) and ice (IWP) water path, and surface and TOA solar radiation. The analysis is only performed for the AMIP simulations, as the main findings also apply to the coupled models.

Figure 16 shows cross-sections from 65° S to 30° S of simulated zonal mean DJF total cloud cover, LWP and IWP, and TOA outgoing (SWUP) and surface downwelling (SWD) shortwave radiation compared with satellite observations. For LWP and IWP, ERA-Interim reanalysis data are also included as a second observationally based estimate (Dee et al., 2011). Observed cloud cover increases from ~60% at 30° S to more than 90% around 60° S. Most models capture this poleward increase, although all except HadGEM3-A exhibit a systematic negative bias (of 5-15%) across the band ~45° S to 65° S. HadGEM3-A has the most accurate cloud cover and is a clear improvement over HadGEM2-A. CNRM-AM-PRE6 also shows improvement compared with its CMIP5 version. EC-Earth3 shows a small improvement, while the MPI-ESM model shows little change.

The impact of clouds on solar radiation can be summarized by the cloud optical depth, which is a function of the cloud water content and the effective radius of cloud liquid droplets/ice crystals, integrated over cloud depth (Slingo, 1989). Here, vertically integrated LWP values are compared with observed estimates (LWP and IWP are not available from HadGEM3-A or EC-Earth). The LWP observations are based on the University of Wisconsin (UWisc, O'Dell et al., 2008) satellite passive microwave data set; the IWP observations are MODIS collection 6 data (Platnick et al., 2003). Similar to Jiang et al. (2012), the MODIS IWP data, which represent in-cloud values, have been multiplied by the observed ice cloud fraction for comparison with the grid-box averages provided by the models. Due to the large differences across remotely sensed LWP/IWP data sets, values from UWisc and MODIS should be viewed as indicative at best. We include LWP/IWP estimates from ERA-Interim as a second constraint to provide some measure of this uncertainty. Our main motivation is to show the large range in both LWP and IWP across models, which may partly be due to the weak observational constraint.
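The conversion of MODIS in-cloud IWP values to grid-box means described above is a simple weighting by the ice cloud fraction; a short xarray sketch with hypothetical file and variable names:

```python
import xarray as xr

obs = xr.open_dataset("modis_c6.nc")  # hypothetical file

# MODIS IWP is an in-cloud value; multiplying by the observed ice cloud
# fraction (0-1) gives grid-box averages comparable to model output.
iwp_gridbox = obs["iwp_incloud"] * obs["ice_cloud_fraction"]
```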
South of ~40° S, LWP differs by a factor of ~2 across models, with IWP showing an even larger inter-model spread (up to a factor of ~3). Such large differences will clearly impact solar radiation fluxes. Before robust guidance on model cloud water biases can be provided for the Southern Ocean, further work is needed to quantify the uncertainty/accuracy of LWP/IWP observations. For now we stress (i) the wide range of LWP and IWP across models and (ii) the lack of a robust observational constraint on these two variables.

Observed SWUP also increases southwards, paralleling the increase in observed cloud cover (Figure 16a, c). The spread in both SWUP and SWD is reduced going from the CMIP5 to the updated models. This is likely primarily a result of the reduced spread (and reduced bias) in cloud cover in the updated models. Nevertheless, a negative bias in SWUP (too little SW reflection) of ~10-40 Wm-2 is still seen for all four updated EMBRACE models south of ~55° S. This translates into a positive bias in SWD of similar magnitude over the same region. The underestimate in SW reflection for most models is consistent with the (~5-10%) underestimate of cloud cover south of 55° S (only HadGEM3-A does not have a negative bias in cloud cover in this region). The SWUP negative bias is also consistent with an implied underestimate of LWP in EC-Earth3 and CNRM-AM-PRE6 if UWisc data are used as the observational constraint.

[Figure 16 caption fragment: panels (c) and (d) use CERES-EBAF (2001-2012) as reference.]

To gain more insight into the relationship between cloud cover and reflected SW, Figure 17 shows scatter plots of monthly mean TOA SWUP and surface SWD, each plotted against monthly mean cloud cover, for all available DJF months over the 20-year simulation period. Observations are SWUP/SWD from CERES-EBAF (2001-2012) and cloud cover from MODIS-L3 (from 2003). The figure is constructed as follows: for each ocean grid point in the band 30° S to 65° S, monthly cloud cover is binned into 5%-wide bins (from 0 to 100%), and for each cloud cover occurrence the corresponding SWUP/SWD is saved to that bin. This is carried out for all grid points and DJF months, resulting in a mean DJF SWUP/SWD value for each of the 20 cloud cover bins and scatter plots of SWUP/SWD as a function of cloud cover for the region 30° S to 65° S. The fractional occurrence of cloud cover amounts for each 5% bin was also recorded and plotted as a frequency distribution (Figure 17, middle row).

The observed cloud cover histogram shows that the bulk of months have cloud cover >80%. Most models capture this distribution, with clear improvements in the updated versions of the CNRM and HadGEM models. EC-Earth3 underestimates the occurrence of cloud cover >90%. In the SWUP-cloud cover scatter plots, most models underestimate SWUP (and, linked to this, overestimate SWD) for cloud cover <50%, although the fractional occurrence of cloud cover <50% is extremely low (middle row in Figure 17), so this bias may arise from poor sampling and will have minimal impact on the zonal mean SWD and SWUP biases in Figure 16. All models overestimate SWUP for cloud cover >60% (the most frequently occurring cloud amount). These biases range from ~25-30 Wm-2 (MPI-ESM) to ~5 Wm-2 (HadGEM3-A) and are generally coincident with an underestimate of SWD for the same cloud cover amounts. This finding is not consistent with the zonal mean SWUP and SWD biases seen in Figure 16, particularly south of ~50° S, where all the models underestimate TOA SWUP and overestimate surface SWD, with biases of ±10-30 Wm-2.
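The construction of these radiation-cloud histograms (reused for Figures 18 and 19 below) can be sketched as follows; a minimal numpy sketch, assuming collocated monthly mean cloud cover (in percent) and SWUP fields already restricted to ocean grid points and DJF months (hypothetical names):

```python
import numpy as np

def radiation_cloud_histogram(clt, swup, bin_width=5.0):
    """Mean SWUP per cloud cover bin plus the fractional occurrence of
    each bin, for flattened arrays of collocated monthly means."""
    clt = clt.ravel()
    swup = swup.ravel()
    edges = np.arange(0.0, 100.0 + bin_width, bin_width)  # 0, 5, ..., 100
    # Assign each sample to a 5%-wide bin; clip so clt == 100 falls
    # into the last bin.
    idx = np.clip(np.digitize(clt, edges) - 1, 0, len(edges) - 2)
    counts = np.bincount(idx, minlength=len(edges) - 1)
    sums = np.bincount(idx, weights=swup, minlength=len(edges) - 1)
    with np.errstate(invalid="ignore"):
        mean_swup = sums / counts  # NaN for empty bins
    frequency = counts / counts.sum()
    return mean_swup, frequency
```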
To understand this inconsistency, Figure 18 and Figure 19 repeat the radiation-cloud histograms separately for the latitude bands 30-45° S (referred to below as SOC-N) and 50-65° S (referred to below as SOC-S). For SOC-N the observed cloud histogram is shifted towards lower values, with a peak at 80% and a tail of occurrences down to 20%. HadGEM3-A captures this distribution as, to a lesser extent, does EC-Earth3. The other models all show too frequent cloud cover <60% and too little cloud occurrence >80%. The tendency for all models to have a positive bias in TOA SWUP for cloud amounts >50% is also seen in this region, although HadGEM3-A is quite accurate in this regard. Figure 16 indicates the updated EMBRACE models have relatively small zonal mean TOA SWUP and surface SWD errors in the band 30° S to 45° S. For the MPI-ESM and CNRM models this partly results from error cancellation, with an underestimate of cloud amount balanced by the most frequent cloud amounts (>50%) being too reflective. HadGEM3-A has an accurate simulation of zonal mean SWUP and SWD in this latitude band, resulting from both accurate cloud amounts and accurate SWUP/SWD-cloud cover relationships.
Over the SOC-S region, Figure 16 shows all updated models have a negative bias in zonal mean TOA SWUP and a positive bias in surface SWD. The SWUP/SWD-cloud cover scatter plots for this region show more mixed results (Figure 19). This may partly be due to a small sample size, although we believe the main findings are robust. The observed cloud histogram indicates monthly cloud cover >90% dominates at these latitudes. EC-Earth3 and, to a lesser extent, MPI-ESM underestimate the frequency of occurrence of such high cloud amounts, and for these two models this is the leading cause of the negative/positive bias in the SWUP/SWD zonal means. While there is scatter in the observed SWUP/SWD-cloud cover relationship over SOC-S, EC-Earth3 and MPI-ESM capture the relationship quite well, suggesting that clouds, when present in these two models in this latitude band, have an accurate representation of SW reflection and transmission. In contrast, CNRM-AM-PRE6 and HadGEM3-A do well simulating the cloud distribution, but have more mixed success capturing the observed SWUP/SWD-cloud relationship. Of the two, CNRM-AM-PRE6 reproduces this relationship better. HadGEM3-A reproduces the observed cloud cover histogram very well, but fails to reproduce the SWUP/SWD-cloud relationship, with an underestimate in TOA SWUP for cloud cover >95% of ~30-40 W m-2 and a similar error of opposite sign in SWD. This is the leading cause of the zonal mean SWUP/SWD biases in HadGEM3-A.
There is a clear improvement in cloud amounts simulated over the Southern Ocean in the majority of EMBRACE-updated models. This is particularly true for HadGEM3-A compared to HadGEM2-A, where a systematic 10% underestimate of cloud cover is reduced to close to zero. The CNRM model also shows an improvement in cloud amounts across the Southern Ocean. SWUP and SWD are also surprisingly well captured in most of the updated models, with only the MPI-ESM showing a systematic bias in SWUP (too much reflected SW radiation at TOA) and SWD (too little SWD at the surface) for cloud amounts >60%. Three models show a tendency for compensating biases (too few clouds balanced by clouds being too reflective), resulting in accurate SWUP and SWD over the 30° S to 45° S band. Only HadGEM3-A captures both the cloud occurrence distribution and the SWUP/SWD-cloud relationship over this region.
Farther south (50° S to 65° S) most models (apart from EC-Earth3) capture the shift in most frequent cloud occurrence to >90%. In this region models have a greater problem simulating the SWUP/SWD-cloud relationship. For example, both HadGEM3-A and CNRM-AM-PRE6 have significant positive biases in surface SWD for cloud amounts of 95-100%, likely related to an underestimate of cloud optical depth for these cloud types. For further improvement of cloud and radiation processes over the Southern Ocean, improved observational constraints, particularly with respect to in-cloud constituents (e.g. liquid and ice water amounts), are required.
Discussion and conclusions
Tropical precipitation in three out of four EMBRACE models analyzed is clearly improved, with wet biases reduced by up to 1-2 mm day-1 compared with the CMIP5 simulations. Precipitation, in particular in tropical regions, remains challenging to model, with large biases in the West Pacific and Indian Ocean as well as in the ITCZ and SPCZ. Two of the EMBRACE-updated coupled models exhibit considerable improvements in tropical SSTs, while only one model (EC-Earth3) does not show a double ITCZ in the Pacific. Biases in the near-surface temperature climatology are still present over many parts of the tropical continents. For example, in most of the analyzed models, a warm bias over Central Africa and northern South America is found. In the coupled simulations, large biases are also still present in the Southern Ocean along the coast of Antarctica. This bias is consistent with the solar radiation biases seen in the four EMBRACE models south of 50° S. The ESMs still have significant problems in accurately simulating all features of the two large-scale atmospheric circulation patterns, the South Asian and West African monsoons. Many of the problems can be traced to difficulties in accurately simulating moist convection over land and the interactions between moist convection and two related processes. (i) Convectively forced clouds and their impact on solar radiation and, subsequently, surface evaporation and soil water: initially small biases in moist convection (e.g. in geographical location, intensity, or temporal offsets within the diurnal cycle) can be amplified through these interactions, leading to systematic biases in seasonal mean values. (ii) Convective rainfall and its impact on surface soil water amounts and surface evaporation: again, initially small biases can be amplified through feedback processes. For example, rainfall occurring too early in the diurnal cycle (a common bias in many GCMs) will allow a larger fraction of rainfall to be locally evaporated back into the atmosphere instead of percolating into the deeper soil and increasing total soil moisture amounts. A gradual drying out of the surface soil layer will induce upward percolation of soil water and a deepening of the drying signal. The result will be a drying out of soils and a reduced ability to locally sustain moist convection and rainfall, again leading to an amplification of the original bias. Both feedback loops can be seen as local or regional processes. Once established, these biases can influence the large-scale (surface and mid-tropospheric) thermal gradients driving the monsoon circulation, pushing the simulated monsoon even further from the one observed.
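The soil-moisture feedback in point (ii) can be made concrete with a toy recursion (ours, not any model's actual land-surface scheme; every coefficient below is invented for illustration): if too large a fraction of rainfall re-evaporates before recharging the soil, the equilibrium soil moisture, and with it the locally sustainable rainfall, drops disproportionately.

```python
def soil_moisture_drift(evap_frac, days=120, soil=50.0, advected=2.0):
    """Toy soil-moisture budget: rainfall is imported (advected) moisture
    plus a locally recycled part that scales with soil moisture; only the
    non-evaporated fraction of rainfall recharges the soil."""
    for _ in range(days):
        rain = advected + 0.05 * soil          # convective rain (mm/day)
        recharge = (1.0 - evap_frac) * rain    # rain that reaches the soil
        loss = 0.06 * soil                     # evapotranspiration/drainage
        soil = max(soil + recharge - loss, 0.0)
    return soil

# Raining too early in the day raises the re-evaporated fraction; a shift
# from 0.5 to 0.6 dries the equilibrium soil state by roughly 30%:
print(soil_moisture_drift(0.5), soil_moisture_drift(0.6))
```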
The representation of moist convection and its interactions with solar radiation (through convectively forced clouds) and the land surface (through solar radiation and precipitation) are therefore the key parameterizations requiring improvement for significant progress in simulating both the South Asian and West African monsoons. Some improvements are seen in both the South Asian and West African monsoons in the EMBRACE models compared with their CMIP5 versions. However, significant biases remain, particularly with respect to regional rainfall patterns and the annual cycle of monsoon rainfall. Even more significant biases are seen for intra-seasonal rainfall variability, with little progress from the CMIP5 models. In the three coupled model SAM simulations, biases in precipitation and monsoon circulation (given by the 850 hPa wind field) are reduced compared to their CMIP5 counterparts. The primary reason for this is coupled feedbacks that enable the damping of an atmospheric error (e.g. in wind speed or atmospheric moisture content) through the introduction of a compensating bias in surface ocean temperatures (e.g. a cold SST bias).
The main model bias regarding West Africa relates to high-frequency precipitation variability on time scales associated with African Easterly Waves (AEWs). These systems, and the convective complexes embedded within them, deliver the majority of rainfall to the West African Sahel. A realistic simulation of AEWs seems an important prerequisite for increasing confidence in future rainfall projections over the Sahel. Most of the EMBRACE models and their CMIP5 versions have severe difficulty in simulating these waves, with little improvement from CMIP5 to the EMBRACE-updated models. The models show considerable spread in their ability to simulate near-surface temperatures over the Sahara, with JJAS mean differences of up to 5 °C across models. Given the importance of the Saharan heat low in the overall West African monsoon circulation, more emphasis on simulating the surface energy budget over the Sahara seems necessary. All coupled simulations over West Africa suffer from excess precipitation at the Guinea coast. This is a direct result of a warm SST bias in all models off the coast of Namibia-Angola. Reduction of this systematic bias, likely through updated ocean physics, resolution, and an improved simulation of marine stratocumulus clouds, will be a necessary step for improving coupled simulations of the West African monsoon.
Analysis of AMIP-type simulations performed with EC-Earth at different horizontal resolutions of up to T1279 shows an improvement (i.e. increase) in the variability of precipitation on the synoptic time scale with increasing horizontal resolution up to T511. The seasonal mean rainfall over the WAM and SAM regions, however, does not change significantly with horizontal resolution, suggesting that the reasonably good agreement of modeled and observed mean WAM and SAM rainfall in lower-resolution models may be based on an unrealistic variability/intensity distribution and/or error compensation. The levelling off of the increase in precipitation variability with increasing horizontal model resolution suggests that not all model deficiencies can be solved by going to higher resolutions. Either a resolution higher than T1279 is needed, or other, non-resolution factors are involved, such as overly simple cloud and precipitation formation parameterizations.
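The resolution analysis hinges on a measure of synoptic-time-scale precipitation variability. The specific diagnostic is not given in this excerpt; one simple proxy is the standard deviation of daily precipitation about a centered running mean (the window length and the method are our assumptions):

```python
import numpy as np

def synoptic_variability(precip_daily, window=11):
    """Standard deviation of daily precipitation about an 11-day centered
    running mean, a crude proxy for 2-10-day (synoptic) variability at one
    grid point; convolution edge effects are ignored for brevity."""
    background = np.convolve(precip_daily, np.ones(window) / window, mode="same")
    return float(np.std(precip_daily - background))
```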
Many models suffer from an excessive cold tongue of water along the Pacific equator, with this tongue being both too cold and extending too far into the West Pacific. Combined with this cold tongue, coupled models also typically show (i) too strong easterly trade winds along the equator, (ii) equatorial rainfall shifted too far west in the West Pacific, (iii) an equatorial thermocline that is too shallow in the East Pacific and too deep in the West Pacific, and (iv) a double ITCZ, often with excess rainfall south of the equator. Comparison of the three EMBRACE coupled models shows a general tendency for improved equatorial SSTs, both in the Pacific and Atlantic. HadGEM3-GC2 and EC-Earth3 show improvement in SST bias of as much as 1 °C in the zonal and DJF seasonal mean. HadGEM3-GC2, in particular, has a very accurate simulation of tropical SSTs and does not appear to suffer from an excessive equatorial Pacific cold tongue. This is a clear improvement over HadGEM2-ES and is an important reduction in a systematic bias. In combination with the SST improvement, HadGEM3-GC2 also shows a clear improvement in the strength of the easterly trade winds along the equator. This is likely the primary cause of the reduced SST bias (through reduced Ekman-driven upwelling along the equator). SSTs in EC-Earth3 are also improved relative to the EC-Earth version used in CMIP5. Although not as accurate in an absolute sense as HadGEM3-GC2, the meridional structure of SST around the equator is better in EC-Earth3. This improved spatial structure plays an important role in EC-Earth3 not exhibiting a double ITCZ, with an accurate northern hemisphere maximum in precipitation in both the Pacific and Atlantic. Along the equatorial Pacific, EC-Earth3 still suffers from a systematic cold bias (although improved relative to the CMIP5 version of EC-Earth), accompanied by too strong easterly trade winds.
Most of the EMBRACE-updated models show a clear improvement in monthly cloud cover over the Southern Ocean compared to their CMIP5 predecessors. These improvements feed through into reduced bias (and inter-model spread) of both TOA outgoing solar radiation and surface downwelling solar radiation. A reduction in inter-model spread is also seen for liquid water path, suggesting reduced model bias there as well, although observations of LWP over the Southern Ocean suffer from high uncertainties. All four EMBRACE-updated AMIP models have a negative bias in SWUP south of 50° S, increasing to -20 to -40 W m-2 in the 60° S to 65° S band. A positive bias of similar magnitude in SWD is seen in the same region. While the models show considerable improvement over their CMIP5 counterparts, the SWUP/SWD biases will drive a warm SST bias in the Southern Ocean, south of the Antarctic Circumpolar Current (ACC), with negative effects on vertical upwelling and, farther south, Antarctic deep water formation and sea-ice amounts. The main outstanding cloud-radiation biases appear to be in the southernmost region of the Southern Ocean (e.g. increasing with increasing southerly latitude from 50° S). Whether this highlights problems that are specific to certain cloud types (e.g.
mid-level clouds in the cold sector of mid-latitude weather systems (Bodas-Salcedo et al., 2012)), problems in correctly delineating between liquid, solid, and supercooled cloud water (Lawson and Gettelman, 2014), or problems simulating cloud formation in a relatively pristine (natural-aerosol-dominated) region (McCoy et al., 2015) requires further analysis and, in particular, more robust observational constraints.
Code and data availability
This analysis has been done with the ESMValTool, which is released under the Apache License, Version 2.0. The newly added ESMValTool namelist 'namelist_lauer18esd.xml' includes the diagnostics that can be used to reproduce the figures of this paper. This version will be available from the ESMValTool webpage at http://www.esmvaltool.org/ and from GitHub (https://github.com/ESMValTool-Core/ESMValTool). Users who apply the software in presentations or papers are kindly asked to cite the ESMValTool documentation paper (Eyring et al., 2016b) together with the software DOI (doi: 10.17874/ac8548f0315) and version number. The climate community is encouraged to contribute to this effort and to join the ESMValTool development team to contribute additional diagnostics for ESM evaluation. Data from the CMIP5 models are publicly available through the Earth System Grid Federation (ESGF); the EMBRACE model runs can be made available on request from the host modeling groups.
Hypoxia and proangiogenic proteins in human ameloblastoma
Ameloblastomas are epithelial odontogenic tumours that, although benign, are locally invasive and may exhibit aggressive behaviour. In the tumour microenvironment, the concentration of oxygen is reduced, which leads to intratumoral hypoxia. Under hypoxia, the crosstalk between the HIF-1α, MMP-2, VEGF, and VEGFR-2 proteins has been associated with hypoxia-induced angiogenesis, leading to tumour progression and increased invasiveness. This work uses 24 ameloblastoma cases, 10 calcifying odontogenic cysts, and 9 dental follicles to investigate the expression of these proteins by immunohistochemistry, with anti-HIF-1α, anti-MMP-2, anti-VEGF, and anti-VEGFR-2 primary antibodies. The results are expressed as the mean grey value after immunostaining in images acquired with a 40× objective. The ameloblastoma samples showed higher immunoexpression of HIF-1α, MMP-2, VEGF, and VEGFR-2 when compared to the dental follicles and calcifying odontogenic cysts. Ameloblastomas therefore show a higher degree of expression of proteins associated with intratumoral hypoxia and of proangiogenic proteins, which indicates a possible role of these proteins in the biological behaviour of this tumour.
Odontogenic cysts and tumours comprise a large part of oral and maxillofacial pathologies, and, therefore, there is a need to explore their biological behaviour, deepening knowledge about their development and progression. Ameloblastomas (AMEs) are among the most common odontogenic tumours. They are characterized by slow growth, high recurrence and morbidity rates, and local invasiveness [1]. Although surgery is the most accepted treatment modality so far, it produces severe aesthetic and functional sequelae that lead to a loss of quality of life [2]. A calcifying odontogenic cyst (COC) is an odontogenic cystic lesion with a less aggressive behaviour than an AME. The prognosis of a patient with COC is favourable, with few recurrences reported after simple enucleation [1,3].
During tumour progression, the concentration of oxygen in the microenvironment around the tumour cells is reduced, resulting in intratumoral hypoxia, characterized by reduced oxygen pressure in the cells, which triggers various biochemical responses and may result in a number of compensatory cellular mechanisms that allow neoplastic development to continue. A hypoxic microenvironment is characteristic of many solid tumours. Hypoxia is also associated with a more aggressive phenotype, which affects angiogenesis and cellular invasiveness [4,5].
The signalling pathway involving VEGF/VEGFA and VEGFR is a promising target for cancer treatment, as it has been identified as the main regulator of tumour angiogenesis [11-15]. Among VEGF receptors (VEGFRs), VEGFR-2 is overexpressed under hypoxia [15]. VEGFR-2 plays an important role in activating the components responsible for proliferation, including endothelial cell invasion, migration, differentiation, and angiogenesis [16]. Some of these proteins have already been studied in AME, COC, and DF. Three studies verified the expression of HIF-1α, found a higher expression of this protein in AME when compared to COC and DF, and suggested that it may be associated with its aggressive biological behaviour [17-19].
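The quantification described in the abstract, the mean grey value of immunostained fields captured with a 40× objective, is a straightforward image statistic. A minimal sketch, assuming standard 8-bit greyscale conversion of the RGB captures (file names and the per-case averaging are hypothetical details, not taken from the paper):

```python
import numpy as np
from PIL import Image

def mean_grey_value(image_path):
    """Mean grey value of one immunostained field: the RGB capture is
    converted to 8-bit greyscale and the pixel mean is taken, so stronger
    (darker) staining yields a lower value."""
    pixels = np.asarray(Image.open(image_path).convert("L"), dtype=np.float64)
    return float(pixels.mean())

# Per-case score: average the per-field values over all 40x fields.
# fields = ["case01_field1.png", "case01_field2.png"]   # hypothetical paths
# score = np.mean([mean_grey_value(f) for f in fields])
```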
Three studies have verified the expression of VEGF and found a higher expression of this protein in AME when compared to odontogenic keratocyst, dentigerous cyst, or tooth germ, suggesting that the up-regulation of this protein in AME might be associated with its aggressive behaviour.
Results
All AME samples expressed HIF-1α, MMP-2, VEGF/VEGFA, and VEGFR-2. There was a strong immunoexpression of HIF-1α, predominantly in the nucleus of tumour cells (Fig. 1A,B). Strong cytoplasmic immunostaining of MMP-2 was observed mostly in the central cells of the islands formed by the tumour epithelium (Fig. 2A,B) and in the peripheral cells of these islands (Fig. 2C,D). VEGF also showed strong immunoexpression in the central cells of the islands formed by the tumour epithelium, predominantly in the cytoplasm (Fig. 3A,B). VEGFR-2 showed immunostaining in the cell membrane of tumour cells (Fig. 4A,B). All proteins showed low immunoexpression in tumour stromal cells. There was a statistically significant difference in the immunoexpression of HIF-1α, MMP-2, VEGF/VEGFA, and VEGFR-2 between the tumour parenchyma and the tumour stroma (Fig. 5A-D), with the parenchyma cells of AME showing higher immunoexpression than the stromal cells. A significant difference was also observed in the immunoexpression of HIF-1α, VEGF/VEGFA, and VEGFR-2 between AME and COC samples and between AME and DF samples, with higher expression of these proteins in AME than in COC and DF (Fig. 6A,B,D), and there was a statistically significant difference in MMP-2 immunoexpression between AME and DF samples, with higher expression in AME (Fig. 6C).
Discussion
In our study, AME samples presented strong HIF-1α immunoexpression, predominantly in the nuclei of tumour parenchyma cells, where the transcription factor is active and triggers the transcription of several genes, initiating mechanisms such as angiogenesis, cell proliferation, and tumour invasion [4,5,34]. Solid tumours often contain regions with low oxygen concentrations around tumour cells, due to insufficient blood supply, resulting in intratumoral hypoxia [4,5]. Hypoxia-inducible factor 1 (HIF-1) is a transcriptional activator responsible for the regulation of elements responsive to this phenomenon. In a hypoxic condition, HIF-1α binds with the HIF-1β subunit, becoming active and migrating to the cell nucleus in order to regulate cell survival mechanisms [35]; this corroborates our findings, suggesting that the HIF-1α found was active. Its activation plays an essential role in the invasive process, being fundamental for tumour growth and aggressiveness, with overexpression observed in many human tumours [4,36-38]. Immunohistochemistry studies have observed the overexpression of HIF-1α in AME and have found that this is associated with its aggressive biological behaviour [17-19].
Our results show strong cytoplasmic immunostaining of MMP-2 in the central cells of the islands formed by the AME tumour epithelium, the region most likely to suffer intratumoral hypoxia. MMPs are secreted by stromal cells, endothelial cells, or by tumour cells themselves [10,39]. Pinheiro et al. [39] also observed, through immunohistochemistry, the cytoplasmic expression of MMP-2 in the tumour parenchyma of AMEs and linked the high expression of this protein to the aggressive infiltrative behaviour of this neoplasm.
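The statistical test behind the significance statements above is not named in this excerpt; purely for illustration, a nonparametric two-group comparison of mean-grey-value scores might look like the following (all numbers hypothetical, and the paper's actual test may differ):

```python
from scipy.stats import mannwhitneyu

# Hypothetical mean-grey-value scores per case for two lesion types.
ame = [95.2, 101.4, 88.7, 110.3, 97.6]
coc = [121.5, 133.2, 128.9, 140.1]
stat, p = mannwhitneyu(ame, coc, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.4f}")
```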
ECM degradation by these MMPs stimulates the release of angiogenic and growth factors, such as VEGF, which is considered to be the most important inducer of angiogenesis and vascular permeability [9,10]. Hypoxia may also induce macrophages to produce more VEGF and suppress the immune response [40]. Tumours induce neovascularization in order to acquire nutrients for continuous growth and metastatic spread. This "angiogenic switch" is induced by VEGF, which in turn can be produced by cancer cells and host stromal cells in a tumour [8,41], contributing to the disruption of the balance between angiogenic promoters and inhibitors [42]. Experimental and clinical reports have confirmed that VEGF plays a central role in regulating angiogenesis and vasculogenesis in solid tumours [43,44]. Its signalling may affect several significant tumour functions in addition to vascular permeability and neovascularization [45], such as tumour cell proliferation, migration, and autocrine invasion [46,47].
Kumamoto et al. [20] and Dineshkumar et al. [21] demonstrated a strong cytoplasmic expression of VEGF in AME; however, the immunoexpression was concentrated in the peripheral cells of the tumour epithelium, indicating the proangiogenic role of this protein. In our study, VEGF showed strong cytoplasmic immunoexpression in the central cells of the islands formed by the tumour parenchyma, the region farthest from the stroma (the supporting tissue), indicating the possible activity of this protein in the intratumoral hypoxia area and suggesting an alternative role to its angiogenic function. Tong et al. [45] suggested that VEGF expression in a head and neck carcinoma cell line may play two different roles in tumorigenesis: (1) through its paracrine function, essential for tumour-associated angiogenesis, and (2) through its autocrine function, where VEGF plays an important role by directly enhancing mitogenesis and invasiveness by maintaining proliferation, enhancing survival, and increasing the invasion of carcinomas. Therefore, VEGF may serve a proangiogenic and protumorigenic role in the pathogenesis of neoplasms. In addition, Tong stated that VEGF-targeted therapy has the potential to fulfil both anti-angiogenic and anti-tumorigenic functions. Higher VEGF expression in tumour parenchyma, compared to tumour stroma, suggests a pro-tumorigenic rather than a pro-angiogenic role, under which one would instead expect higher expression of VEGF in the stromal microenvironment. This hypothesis has been reinforced by several authors [8,45-47]. Cystic lesions showed an increased concentration of VEGF in the cystic fluid produced by parenchymal cells, which can induce proliferation in cyst lining epithelial cells [6,27,28]. Altered Wnt pathway signalling has been identified in COCs [35] as well as ameloblastomas [36], and may be one of the factors involved in the expression of VEGF in these lesions [22]. In the study by Dineshkumar et al. [21], there was no statistically significant difference in VEGF expression between AME and COC. In our study, VEGF had a higher expression in AME than in COC; however, there was no statistical difference. This finding reinforces the fact that COC, despite being a cystic lesion of odontogenic origin with a milder clinical behaviour than AME, still has aggressive characteristics [1]. Among VEGF receptors (VEGFRs), VEGFR-2 is overexpressed under hypoxia [15].
In our study, VEGFR-2 immunoexpression was observed in the cell membrane of tumour cells. VEGFR-2, localized on endothelial cell surfaces, appears to regulate VEGF-mediated endothelial cell permeability and proliferation [11,48]. VEGF receptors were originally believed to be expressed only in endothelial cells; however, it was later shown that they can also be expressed in tumour cells, including head and neck cancers [45,49]. This is the first study to evaluate the expression of this receptor in AME and COC.
Our results show higher immunoexpression of HIF-1α, MMP-2, VEGF, and VEGFR-2 in ameloblastomas when compared to calcifying odontogenic cysts and dental follicles, suggesting that the relationship between these proteins may contribute to the behaviour of this neoplasm. The centre of the tumour islands, showing higher expression of HIF-1α, curiously also shows higher VEGF expression in the same region, as well as higher expression of VEGFR-2 in tumour cells. From this, we can extrapolate that VEGF in the central cells of islands formed by tumour parenchyma may signal peripheral cells for greater proliferation and invasion, which is likely to be mediated by MMP-2 expression. The methods used in this work, although indicating the expression of the studied proteins, limit the depth of our knowledge regarding the role of HIF-1α, MMP-2, VEGF, and VEGFR-2, as the expression of proangiogenic proteins in the centre of tumour epithelial islands suggests a secondary role of these proteins in the proliferation, survival, and invasion of ameloblastoma. In this sense, mechanistic studies must be done to answer these questions.
Methods
Ethical approval.
Immunohistochemistry. Immunostaining was performed using an immunoperoxidase assay and the EnVision technique as previously described by Costa et al. [35]. Formalin-fixed, paraffin-embedded tissues were studied by immunohistochemistry. Sections of 5-µm thickness were obtained and mounted on 3-aminopropyltriethoxysilane-coated slides (SIGMA-ALDRICH). The sections were deparaffinised in xylol and hydrated in graded ethanol solutions. The sections were immersed in 20% H2O2 and methanol in a 1:1 ratio for 20 min to inhibit endogenous peroxidase activity. A citrate buffer (pH 6.0) was used for antigen retrieval in a Pascal chamber (DAKO) after immersion for 30 s. Subsequently, non-specific binding sites were blocked with 1% bovine serum albumin.
Effect of potassium application on the morphophysiology of two soybean varieties under drought stress
Domestic soybean production is getting weaker due to drought stress, which affects all aspects of plant growth and metabolism; besides using adaptive plant varieties, improving plant nutrition, especially potassium, also increases plant tolerance to drought. The research aimed to examine the effect of potassium application on the morphophysiology of two soybean varieties under drought stress. The research was conducted in a plastic house, Deli Serdang Regency, North Sumatra, using a randomized block design with three factors. The first factor was soil water content, which consisted of 80% and 40% of field capacity. The second factor was variety, consisting of Grobogan and Dering-1. The third factor was potassium application, consisting of no potassium and 100, 200, and 300 kg KCl ha-1. Results showed that drought stress decreased potassium uptake, the relative water content (RWC) of leaves, and plant height. The Grobogan variety was more adaptive to drought stress, with the highest plant height (71.49 cm), K content (1.77%), K uptake (43.85 mg/plant), and leaf RWC (37.91%). Application of K at a dose of 200 kg ha-1 was the most appropriate to encourage the growth of soybean plants under drought stress, giving the highest total K (1.76%) and leaf RWC (44.40%).
Introduction
Soybeans are one of the strategic leading commodities; the need of the domestic food industry for this commodity is quite high, currently averaging 2.3 million tons of dry beans/year. Meanwhile, the average domestic production in the last five years has been 982.47 thousand tons of dry beans, or 43% of total needs [1]. Domestic soybean production is getting weaker due to the problem of drought stress, which affects all aspects of plant growth and metabolism, including osmotic balance [2]. Potassium has a very important physiological function and a role in plant water relations. Potassium (K) is an essential nutrient that affects most of the biochemical and physiological processes that influence plant growth and metabolism. It also contributes to the survival of plants exposed to various biotic and abiotic stresses [3]. Besides potassium fertilization, genetic factors (varieties) are also a concern in overcoming drought stress. The adjustments made by plants to drought are highly dependent on the level of stress experienced, the length of stress, the plant growth phase when experiencing stress, and the type or variety of plant. Adjustments made by plants to the effects of drought are a form of adaptation to survive under drought-stressed conditions. The adaptations of plants under drought stress will have a major impact on the yield of these plants [4].
Planting, fertilization and drought treatment
Soybeans were planted at two seeds per polybag in a plastic house, using 10 kg of sieved topsoil per polybag amended with 2.5 g dolomite/polybag (500 kg ha-1) and incubated for 3 weeks. Planting was done simultaneously with the application of potassium and SP-36 fertilizers; the drought treatment was carried out in the V2-R5 phase using the gravimetric method. Fertilization was done by placing fertilizer around the planting hole at a distance of 7-10 cm. N was applied as 25 kg ha-1 of urea 2 weeks after planting, and P at a dose of 150 kg ha-1 of SP-36.
Plant maintenance
Maintenance included staking, weeding at 2 weeks after planting, and fertilizing with urea and phosphorus simultaneously with planting.
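The per-hectare doses map onto per-polybag amounts through the soil mass. The paper's own numbers (2.5 g dolomite per polybag for 500 kg ha-1, and the 1 g/plant for 200 kg KCl ha-1 quoted later in the leaf RWC discussion) are both consistent with the conventional assumption of about 2 × 10^6 kg of plough-layer soil per hectare, as this check shows (the conversion factor itself is our assumption; the paper does not state which factor it used):

```python
def dose_per_polybag_g(dose_kg_per_ha, soil_kg=10.0, soil_mass_per_ha_kg=2e6):
    """Convert a field dose (kg/ha) to grams per 10-kg polybag, scaling by
    the assumed plough-layer soil mass per hectare."""
    return dose_kg_per_ha * soil_kg / soil_mass_per_ha_kg * 1000.0

print(dose_per_polybag_g(500))  # dolomite: 2.5 g/polybag, matching the text
print(dose_per_polybag_g(200))  # KCl: 1.0 g/polybag (= 1 g/plant quoted later)
```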
Pest and disease control was adjusted to the intensity of pests and diseases.
Observation of morphological characters
Morphological characters such as plant height were observed at 5 weeks after planting. Physiological characters at the end of the vegetative phase included K uptake, determined using the Atomic Absorption Spectrophotometry (AAS) method, and leaf relative water content (RWC), determined using the gravimetric method with the standard relation RWC (%) = ((FW - DW) / (TW - DW)) × 100, where FW, TW, and DW are the fresh, turgid, and dry leaf weights [5].
Data analysis
Data were processed using the SPSS statistical program (ver. 20). The data obtained were analysed using analysis of variance at the level of α = 5%. If there was a significant effect of the test treatments, the analysis continued with Duncan's Multiple Range Test (DMRT) [6].
Plant height
Plant height was affected by the degree of drought and by variety. The vegetative phase is the phase of active development and division of cells, so plants are very vulnerable to water deficiency in this phase. According to [7], drought stress conditions in the vegetative phase can reduce plant height. Sharifah and Muriefah [8] state that drought stress inhibits plant growth, causing plants to become stunted. Soybean plant height decreases with increasing drought stress. The inhibition of plant growth is caused by the disruption of the photosynthesis process due to lack of water.
Potassium content and uptake
Application of potassium fertilizer had no significant effect on K uptake, but there was a significant effect on K content. Potassium application at a dose of 200 kg ha-1 (K2) resulted in the highest K content, 1.76%, which was significantly different from no potassium application (K0), at 1.49%; K content decreased when the K fertilizer dose was higher or lower. This suggests that a dose of 200 kg ha-1 is the best dose to increase K content (Table 2). This is in line with a previous study [9], which found that the best K dose to increase K content was 25.53 g/plant, producing 1.87% in plants, with lower K contents at doses of 34.03 g/plant (1.63%), 8.5 g/plant (1.26%), and without K (0.91%). Drought stress had no significant effect on K content but had a significant effect on K uptake. The 80% FC treatment resulted in plant K uptake of 46.54 mg/plant, significantly different from 40% FC (36.61 mg/plant). Decreasing soil water content inhibits the transport of potassium to plant tissues, since transport of potassium through both the mass flow and diffusion mechanisms is closely related to soil water content. Kirkham [10] states that drought stress affects plant physiological processes, changing water potential, osmotic potential, and cell turgor potential, which can affect stomatal behaviour, mineral nutrient absorption and translocation, transpiration, and photosynthesis as well as photosynthate translocation. Greenland [11] emphasized that there are at least three important things that must be considered to manage the availability of potassium in the soil: groundwater content, tortuosity of the diffusion pathway (the irregularity of the paths through which cations move in the soil), and the concentration of the ions that will diffuse in the soil solution. The results of [12] stated that a soil water content of 100% FC produced significantly higher K uptake, 2.95% above that at a soil water content of 50% FC. The use of varieties had no significant effect on K uptake, but had a significant effect on plant K content.
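The analysis pipeline (ANOVA at α = 5%, then DMRT for significant effects) was run in SPSS; a rough Python equivalent of the first step is sketched below with hypothetical replicate data (Duncan's test itself is not available in SciPy and is omitted):

```python
from scipy.stats import f_oneway

# Hypothetical plant-height replicates (cm) for three K doses.
k0 = [58.1, 60.4, 57.2, 61.0]
k2 = [70.3, 72.8, 69.9, 71.5]
k3 = [66.0, 64.7, 67.9, 65.2]
f_stat, p = f_oneway(k0, k2, k3)
if p < 0.05:
    print(f"Significant at alpha = 0.05 (F = {f_stat:.2f}, p = {p:.4f}); "
          "proceed to a post hoc test such as DMRT.")
```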
Grobogan (V1) produced a K content of 1.77%, significantly different from Dering-1 (V2), which was only 1.49%. Plants are essentially a product of their genetic and environmental constitution, while plant growth and production are influenced by the photosynthetic process in plants, as stated by [13]. Differences in response between varieties can occur due to genetic differences between soybean varieties [14]. The response of plants to the environment differs depending on the type and cultivar of the plant. Plants can respond positively or negatively to changes in the growing environment. These various responses lead to interactions between the environment and the genotype, and this phenomenon is often encountered in multilocation testing. The response can be seen from physical changes in plants, in the form of changes in growth and in plant phenotypes. Plant responses can also be seen from changes in physiological processes such as the speed of photosynthesis and photosynthate translocation [15].
Leaf RWC
There is a significant interaction effect of drought, variety, and potassium on leaf RWC (Table 3). The application of 200 kg ha-1 potassium in the Grobogan variety increased leaf RWC by 70% compared with no potassium application at 80% FC, and in the Dering-1 variety at 40% FC it yielded leaf RWC 118.87% higher than the control. The application of potassium, in general, was proven to increase leaf RWC both at 80% FC and at 40% FC in the Grobogan and Dering-1 varieties. The potassium dose of 200 kg ha-1 gave the highest leaf RWC (53.33%), with a decrease at both lower and higher doses. This shows that the best fertilization dose to increase leaf RWC was the K2 treatment (200 kg ha-1). Based on the analysis results (Table 2), the K content of Grobogan (1.77%) exceeded that of Dering-1 (1.49%); this is evidence that Grobogan outgrows Dering-1 because Grobogan is very good at accumulating K, which in turn allows Grobogan to survive better during drought stress. Subandi [16] states that the P nutrient plays an important role in increasing plant resistance to abiotic stress (lack of water and Fe poisoning). Plants that have sufficient K can retain water content in their tissues, because they can absorb moisture from the soil and bind water, so that the plants are resistant to drought stress [3]. The results of the analysis (Table 3) show that the application of K at 1 g/plant (200 kg KCl ha-1) helped plants maintain leaf relative water content (RWC) significantly, at 44.40%, whereas it was lower without K application. The response of plants to drought depends on the variety and the phase at which the drought occurs. Varieties that are said to be drought-tolerant may not be tolerant in all phases of their life, and vice versa for susceptible varieties. The research results of [17] show that under drought treatment the Dering-1 variety (tolerant) performed less well than the Grobogan variety (susceptible) in the early reproductive phase (R1), with Dering-1 plant height at 25.1 cm and Grobogan at 41.0 cm, but in the final reproductive phase (R6) Dering-1 (68.4 cm) was better than Grobogan (63.6 cm) at drought stress of 40% field capacity. The method of adaptation of plants to drought varies depending on the type of plant and the stage of plant development [2].
In accordance with Hasanah et al. (2020), results showed that Kieserite treatment under dryland conditions gives different responses across soybean varieties. The difference in response of each variety can occur due to genetic differences [14].
Conclusions
Potassium application at a dose of 200 kg KCl ha-1 produced plants with the highest K content and leaf RWC. The Grobogan variety was more adaptive to drought stress, with the highest plant height, K content, and leaf RWC. Drought stress of 40% FC reduced plant K uptake, leaf RWC, and plant height. The interaction of 200 kg ha-1 potassium and the Dering-1 variety can maintain leaf RWC under severe drought stress conditions (40% FC).
Health and Economic Impact of Different Long-Term Oxygen Therapeutic Strategies in Patients with Chronic Respiratory Failure: A French Nationwide Health Claims Database (SNDS) Study
Introduction: Long-term oxygen therapy (LTOT) is reported to improve survival in patients with chronic respiratory failure. We aimed to describe the effectiveness, burden, and cost of illness of patients treated with portable oxygen concentrators (POC) compared to other LTOT options. Methods: This retrospective comparative analysis included adult patients with chronic respiratory insufficiency and failure (CRF) upon a first delivery of LTOT between 2014 and 2019, followed until December 2020, based on the French national healthcare database SNDS. Patients using POC, alone or in combination, were compared with patients using stationary concentrators alone (aSC), compressed tanks (CTC), or liquid oxygen (LO2), matched on the basis of age, gender, comorbidities, and stationary concentrator use. Results: Among the 244,719 LTOT patients (mean age 75 ± 12 years, 48% women) included, 38% used aSC and 46% used mobile oxygen, in the form of LO2 (29%) and POC (18%), whereas 9% used CTC. The risk of death over the 72-month follow-up was estimated to be 13%, 15%, and 12% lower for patients in the POC group compared to aSC, CTC, and LO2, respectively. In the POC group, yearly mean total costs per patient were 5% higher and 4% lower compared to the aSC and CTC groups, respectively, and comparable to the LO2 group. The incremental cost-effectiveness ratio (ICER) of POC was €8895, €6288, and €13,152 per year of life gained compared to aSC, CTC, and LO2, respectively. Conclusion: Within the POC group, we detected an association between higher mobility (POC autonomy higher than 5 h), improved survival, and lower costs, with an ICER of -€6,238 compared to lower-mobility POC users.
Supplementary Information: The online version contains supplementary material available at 10.1007/s41030-024-00259-x.
INTRODUCTION
Long-term oxygen therapy (LTOT) is the recommended standard of care for patients with chronic respiratory insufficiency and failure (CRF) due to severe chronic obstructive pulmonary disease (COPD) or other causes [1,2]. Besides obesity or pulmonary hypertension, other comorbid conditions, such as heart failure and its various etiologies, or lung cancer, can aggravate CRF [3,4]. Although supplemental oxygen is an established therapy for patients with CRF, studies reporting the effect of LTOT devices on clinical outcomes, such as hospitalizations and mortality, and on the costs related to healthcare resource use, are scarce and have limited sample sizes [5-7].
Oxygen delivery devices allow mobility as needed, depending on the model. On the basis of reimbursement, logistics, and performance, patients often have a combination of devices for home oxygen delivery: a non-portable, fixed device with continuous-flow oxygen dispensation for in-home use, a portable device for ambulation, and a backup system. In real-life settings it is practically impossible to estimate which oxygen delivery device brings the most relevant clinical benefits to a patient with CRF, as in LTOT standard of care fixed and portable devices are often used in combination. The majority of the LTOT clinical trials focusing on the portable oxygen concentrator (POC) device, which operates by concentrating oxygen from the ambient air, have compared it with other portable solutions, such as liquid oxygen devices or continuous-flow oxygen cylinders [8-12]. These trials have demonstrated no difference between the various modalities of portable oxygen devices (activities of daily living, oxygen saturation, Borg score, or 6-min walking test [6MWT]), underlining the importance of patients' mobility, backed by current guidelines [1,13,14]. Appropriate LTOT use reduces hospitalizations and overall mortality in severely hypoxemic patients with COPD, improving cognitive function and emotional status, and potentially slowing the progressive impairments in independent living, which also affect young subjects [15-18].
Different types of POCs are available on the market. When selecting a POC, certain measurements and specifications of the device should be considered in relation to the disease requirements and the patient's lifestyle. These measurements are weight, oxygen flow rate, battery life, oxygen purity (typically around 90% or higher), noise level, durability, and additional features such as pulse dose delivery, continuous flow mode, travel-friendly accessories, and user-friendly interfaces. When the oxygen flow is intermittent, the pulse dose delivery can be adjusted manually or automatically (oxygen sensing technology), depending on the degree of physical activity and needs [19]. The weight ranges from the smaller models (2-5 lb, approximately 0.9-2.3 kg) to the larger, more feature-rich models (10-20 lb, approximately 4.5-9 kg). Patients' preference for portable options is recognized, since they support the autonomy, independent living, and well-being of patients [20-22]. Clinicians are becoming more attentive to patients' perspectives and are aware that guidelines recommend keeping LTOT patients active and able to work [14,23].
The economic and social burden of chronic respiratory diseases is substantial across countries. Lost productivity and low mobility due to COPD are particularly worrisome in patients with comorbidities and are expected to increase in the coming years [24,25]. The American Thoracic Society raised awareness of the increase in respiratory diseases in association with multimorbidity, especially in the context of worldwide aging trends [26,27]. Studies assessing the clinical and economic burden of LTOT on patients in France, and throughout Europe, are limited. As such, bringing new evidence on the current use of domiciliary LTOT in France could help improve care management for severe respiratory insufficiency. Also considering the country-specific reimbursement constraints, cost-effectiveness studies could elevate the quality of home oxygen services available to patients [14,28].
The primary objective of this study was to evaluate the health impact of LTOT according to the different oxygen delivery strategies in patients treated for CRF due to COPD or other causes, using the French national healthcare system database, SNDS. We examined clinical outcomes, such as survival, hospitalizations, and specialist visits, to estimate the cost-effectiveness of POC, used alone or in combination, compared with other oxygen delivery devices: stationary concentrators used alone (aSC), compressed tanks, alone or in combination (CTC), and liquid oxygen, alone or in combination (LO2). The secondary objective was to estimate and compare the cost-effectiveness of POCs based on their different levels of autonomy.
Data Source
The study consisted of a secondary data analysis based on claims data from the Système National des Données de Santé (SNDS) [national healthcare system database], which includes health information (hospital care and primary care) for 66 million people, more than 99% of the French population, with over 10 years of follow-up, owned by the Caisse Nationale d'Assurance Maladie (CNAM) [National Health Insurance] [29,30]. The SNDS provides pseudonymous, comprehensive, and individualized data through a unique personal identification number. Details on the available data have been previously described [31].
In accordance with French regulations, the study protocol was approved by the ethics and scientific committee for health research, studies, and evaluations (CESREES) and by the French data privacy committee (CNIL; Decision DR-2021-228). Data access was delivered by CNAM after agreement. In this study, personal data processing is intended for a research project not involving human subjects [32].
Study Design
This is a retrospective comparative observational study based on the SNDS medico-administrative dataset. The study report was completed following the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) guidelines [33].
Study Period
The study was conducted from 1 January 2013 to 31 December 2020. Long-term oxygen treatment (LTOT) was defined by at least 3 months of continuous oxygen supplementation. The LTOT index date was defined as the date of the first prescription of a LTOT device during the inclusion period, hereafter referred to as the index date. The inclusion period was defined as 1 January 2014 to 31 December 2019. The historical period corresponded to the 12 months prior to the LTOT index date and was specified to enable the description of patient baseline characteristics. Patients had an individual follow-up period up to the date of death or up to 31 December 2020 (for a maximum of 72 months). To maximize patient inclusion, a minimum time of follow-up was not specified.
Study Population
The study population encompassed all French adults (aged ≥ 18 years) with a recorded diagnosis of COPD or CRF, due to COPD or other causes, during the study period, and a reimbursement for a first-time prescription of a LTOT device occurring from 1 January 2014 to 31 December 2019 (incident and prevalent cases). The list of ICD-10 codes used to identify the diagnosis of COPD or CRF is provided in the Supplementary Material (Table S1). The list of the codes used to identify medical devices associated with LTOT [stationary concentrator (SC); compressed tanks (CTC); liquid oxygen (LO2); portable oxygen concentrator (POC)] is provided according to the French nomenclature list of codes, Liste des produits et prestations (LPP) [34] (Table S2).
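LTOT is defined above as at least 3 months of continuous oxygen supplementation, with the index date being the first qualifying delivery. One way such a rule could be applied to a patient's sorted delivery dates is sketched below; the true SNDS algorithm, including the maximum gap allowed between deliveries, is not detailed in the text, so the 35-day gap used here is purely an assumption:

```python
from datetime import date, timedelta

def ltot_index_date(deliveries, min_days=90, max_gap_days=35):
    """Return the first delivery date starting a run of deliveries with no
    gap above max_gap_days and a total span of at least min_days, or None.
    `deliveries` must be a list of dates sorted ascending."""
    start = 0
    for i in range(1, len(deliveries) + 1):
        run_ends = i == len(deliveries) or (
            (deliveries[i] - deliveries[i - 1]).days > max_gap_days)
        if run_ends:
            if (deliveries[i - 1] - deliveries[start]).days >= min_days:
                return deliveries[start]
            start = i
    return None

dates = [date(2015, 3, 1) + timedelta(days=28 * k) for k in range(5)]
print(ltot_index_date(dates))  # 2015-03-01
```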
Patients treated with LTOT are often equipped with multiple oxygen delivery devices, according to the French guidelines for LTOT standard of care [35]. For the analysis, we identified different groups determined by the device used most frequently, alone or in combination, during the follow-up period (Table 1). The groups were mutually exclusive: patients were considered only once, in a specific group, and remained in the assigned group until the end of the follow-up.
Table 1. Study groups based on oxygen delivery devices (alone or in combination). Note: one patient might receive deliveries of multiple medical devices during the follow-up period, e.g., one or multiple deliveries of SC followed by one or multiple deliveries of POC followed by one or multiple deliveries of LO2. When the use of POC, alone or in combination, was recorded during the follow-up period, the patient was included in the POC group.
Only in the SC group were patients exclusive users of one device, identified as SC used alone (aSC). In the other groups analyzed, the main device, used alone or in combination with other devices following standard of care, was used to identify the patients: CTC, LO2, and POC.
In addition, the POC group was further examined. A secondary analysis was performed in two subpopulations of patients with potentially higher mobility (HM) (POC autonomy higher than 5 h, Inogen-only devices) and lower mobility (LM) (POC autonomy lower than 5 h, all non-Inogen devices). The oxygen delivery devices included in the analysis reflect their availability on the French market and their level of autonomy between 2013 and 2020. Of note, in the higher mobility group only Inogen devices were detected in the SNDS dataset, as they were the ones approved for reimbursement within France during the prescription period considered in the study.
Study Endpoints
Patient characteristics, including gender and age, were described for all patients at the index date. Comorbidities known to be significantly associated with CRF severity were selected and identified during the historical period according to algorithms used in the mapping published by the CNAM [36].
Overall survival, healthcare resource use (HCRU), and the associated costs were assessed. HCRU included all-cause private and public hospitalizations, respiratory-related hospitalizations and rehospitalizations (30-day gap between two respiratory hospitalizations), emergency room (ER) visits, outpatient general practitioner (GP) and specialist visits, and dispensing of observable LTOT medical devices.
Patients' level of mobility, level of frailty during hospital stays, and access to healthcare were estimated. The patient's level of mobility was estimated using the French nomenclature list of LPP codes related to mobility assistance equipment (Table S3). The hospital frailty risk score (HFRS) was calculated using diagnoses recorded during hospital stays in the 2-year historical period (except for the patients included in 2014, who only had 1 year of history before the index date, based on the data extraction provided by CNAM) [37]. Potential territorial healthcare access disparities were estimated considering rural/urban area distribution and the social deprivation index [38,39].
Descriptive Analysis
Descriptive analyses were conducted depending on the nature of the variable. Qualitative variables were described by the number and frequency for each modality, and quantitative variables by the mean, median, min/max values, quartiles, and standard deviation (SD).
The healthcare use results were reported as the rate per patient per year.
Comparative Analysis
Propensity Score Matching
A propensity score was generated for matching subjects of the "treated" and "control" groups as described. The studied outcomes were analyzed between the two comparable groups; a 1:1 matching without replacement between patients was performed using the propensity score method. Matching variables, including the comorbidities associated with the diagnosis of CRF due to COPD or other causes, are listed in Table 2. The probability of using a POC (alone or in combination) device was modeled using a multivariate logistic regression including the following covariates: (a) the sociodemographic characteristics at the time of the LTOT index date (age, gender); (b) comorbidity type and number per patient; (c) the use of a stationary concentrator (at least one recorded delivery during the follow-up period); (d) index date LTOT delivery before or after 2018, considering how the SNDS listing changed in terms of reimbursement for this prescription [40]. Of note, matching was not performed on respiratory disease severity and LTOT adherence/utilization, as related data are not available in the SNDS database. Three different matchings were implemented. Conditional on the propensity score, the distribution of observed baseline covariates will be similar between the "treated" group (POC-equipped patients) and the "control" group (aSC-, CTC-, or LO2-equipped patients). Standardized mean differences were used to analyze the balance in measured baseline variables before and after matching. For this analysis, the standardized difference (SD) was expressed as a percentage, and the result was reported as "excellent balance" (SD < 5%) [41]. The treated and control groups were compared pairwise using Student's t test (quantitative variables) and the chi-square test (qualitative variables). All tests performed were two-sided and considered significant at p = 0.05.
Survival Analysis
Overall survival (OS) was estimated using the Kaplan-Meier method, considering all-cause death as the event. Patients were censored at the end of the follow-up period or at loss to follow-up. For censored patients, the number of days/months between the index date and the date of last observed care (up to 31 December 2020) was considered, whereas for time to death the number of days/months between the index date and the date of death was considered.
The comparison between the two groups of interest was performed with the log-rank test and 95% confidence interval. The hazard ratio (HR) was estimated with the Cox method.
Economic Evaluation
All-cause HCRU was assessed for the following categories: private and public hospitalizations; medical visits defined by the ACE (actes et consultations externes, i.e., outpatient procedures and consultations), such as outpatient physician and paramedic visits and technical medical, imaging, or biological procedures; laboratory tests; dispensing of observable drugs and medical devices; financial sickness benefits; and invalidity pensions.
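A minimal sketch of the matching step described above: propensity scores from a logistic regression on the listed covariates, followed by greedy 1:1 nearest-neighbour matching without replacement (the study's exact estimator and any caliper are not reported here, so this is only a plausible reading):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def match_one_to_one(X, treated):
    """Greedy 1:1 nearest-neighbour matching without replacement on the
    propensity score. X: covariate matrix (e.g. age, gender, comorbidities,
    SC use, pre/post-2018 index); treated: boolean array (POC group)."""
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    t_idx = np.where(treated)[0]
    c_idx = list(np.where(~treated)[0])
    pairs = []
    for t in t_idx:
        if not c_idx:
            break
        j = int(np.argmin(np.abs(ps[c_idx] - ps[t])))  # closest remaining control
        pairs.append((t, c_idx.pop(j)))
    return pairs, ps

# Synthetic demo (illustration only):
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
treated = rng.random(200) < 0.3
pairs, ps = match_one_to_one(X, treated)
```

Balance would then be checked by computing standardized mean differences of each covariate across the matched pairs, as the text describes.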
Table 2 (matching variables, excerpt): sociodemographic characteristics (age, gender) and comorbidities [36], including acute and chronic heart failure, among others.
For each oxygen delivery device, total direct costs associated with the HCRU parameters during the follow-up period were annualized to 2020 prices using the Consumer Price Index and reported as the mean annualized cost per patient per year, quoted in EUR. The economic evaluation was conducted from the payer perspective, considering the actual expenditures reimbursed by the French national health insurance as reported in the SNDS. The cost-effectiveness of POC compared to other oxygen delivery devices (aSC, CTC, LO2) was assessed by the incremental cost-effectiveness ratio (ICER), expressed as cost per life-year gained [42]. The ICER was calculated as the ratio of the difference in total care costs between the POC (treated) and control groups to the difference in overall survival (expressed by the number of life-years gained, LYG) over the study period. The total LYG in the POC group compared to the control groups was therefore estimated by the difference in the number of life-years lost between the two groups during the study.
All analyses were performed in R Studio Version 4.3.1 (R Studio Inc., Boston, MA).
RESULTS
Study Population
The study population included 244,719 adult patients receiving LTOT between January 2014 and December 2019 (Fig. 1). Among them, five groups were identified according to the main oxygen delivery device used (either alone or in combination) during the inclusion period. The following four groups were described further: compressed tank, alone or in combination (CTC); liquid oxygen, alone or in combination (LO2); portable oxygen concentrator, alone or in combination (POC); stationary concentrator, alone (aSC).
Overall, the study population comprised elderly patients, with a mean age of 75 (SD 12) years and a similar gender proportion (48% women). Only 20% were younger than 65 years old. Sociodemographic and clinical characteristics of the study population are reported in Table S4.
According to group assignment, before matching most of the patients were in the stationary concentrator group (aSC, 38%) and in the mobile oxygen therapy group (46%), of which 29% were equipped with liquid oxygen (LO2) and 18% with a portable oxygen concentrator (POC). Of note, among POC users, the majority used Inogen concentrators (49%) and/or other devices [Philips (31%), Caire Eclipse (14%), Invacare (12%), GCE Zen-O (2%), and Resmed (1%)]. Unmatched patients in the mobile oxygen therapy groups were younger (73-74 years old), with a lower proportion of female patients (approximately 45%), whereas patients equipped with aSC were older (78 years old), as a stationary concentrator is often prescribed at the end of life, with 53% female patients (Tables 3, 5). Six percent of the patients were included in the CTC group. The remaining 4%, representing patients with other combinations of LTOT, were not analyzed in the study as a result of the high heterogeneity of LTOT combinations.
At baseline, 88% of the LTOT patients presented at least one comorbidity, with the majority having more than one concomitantly (Table S4, Fig. 2).
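The ICER defined in the Economic Evaluation subsection above reduces to a ratio of differences between matched groups; a direct transcription with hypothetical numbers:

```python
def icer(cost_treated, cost_control, lyg_treated, lyg_control):
    """Incremental cost-effectiveness ratio: extra cost per life-year
    gained of the treated (POC) group versus a matched control group."""
    return (cost_treated - cost_control) / (lyg_treated - lyg_control)

# Hypothetical per-patient totals over follow-up (EUR, life-years):
print(round(icer(61_000, 55_000, 3.6, 2.9)))  # ~8571 EUR per LYG
```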
The most prevalent comorbidities were chronic respiratory diseases, excluding cystic fibrosis (86%), and cardiovascular disease (70%). For other main chronic conditions, such as chronic heart failure, cancer, and vascular risk treatments (without pathologies), the prevalence was 38%, 32%, and 30%, respectively. Obesity was recorded in only 5% of patients; however, this is likely an underestimation, since diagnoses are only captured by recorded ICD-10 codes used to describe the underlying reason for hospital stays, whereas weight and BMI are not available in the SNDS. A total of 13% of patients had concomitant chronic respiratory disease and cardiovascular disease. More specifically, patients presented chronic respiratory disease overlapping with cardiovascular diseases and chronic heart failure (13%) or vascular risk treatments (5%). Chronic respiratory diseases also overlapped with cancer, chronic heart failure, and cardiovascular disease (6%). Remarkably, only 5% of the patients had chronic respiratory disease as a single disorder, whereas dyspnea, which is one of the symptoms of patients with CRF, was reported for 3% of patients.

[Fig. 2 Venn diagram showing the overlap between the main comorbid conditions known to be significantly associated with chronic respiratory insufficiency and failure severity. The analysis includes patients with at least one comorbidity. Color coding corresponds to the number of patients in each category of comorbid pathology, and the percentage of patients is reported within the Venn diagram.]

Besides sociodemographic and clinical characteristics, baseline differences between the POC group, alone or in combination, and the aSC, CTC, and LO2 groups were evaluated by assessing the proportion of patients with stationary concentrator equipment, in addition to the estimation of patients' mobility level, the hospital frailty risk score, and potential healthcare access disparities, considering the rural/urban distribution and the French social deprivation index. In the study population most patients were equipped with a stationary concentrator device, independently of group assignment (Table S5). In the total LTOT population, 24% received assistive mobility devices, with a higher proportion of patients in the aSC group compared to the POC group (26% vs. 19%, respectively). The four groups had comparable mobility needs, with the walking cane being the most frequently prescribed device compared to medical walkers and wheelchairs. According to the hospital frailty risk score, 59% of the total LTOT patients were not vulnerable, with the aSC group presenting a higher proportion of patients with low and intermediate frailty scores (41% and 4%, respectively; Table S6). Regarding potential healthcare access disparities, no major differences were observed in terms of rural/urban residence (Table S7) and social deprivation index (Table S8) between groups. Most patients lived in urban areas, and a high proportion belonged to the more disadvantaged categories (approximately 70-75% in the three lowest levels). In summary, patients' distribution among groups was similar in terms of residence and deprivation index; only minor differences were observed regarding the use of assistive mobility devices and the estimated hospital frailty risk. Once considered in the matching, these variables did not impact the results. For this reason, only the use of SC equipment was considered in the propensity score matching.
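To make the matching step concrete, the following is a minimal Python sketch of 1:1 propensity-score matching with a standardized-mean-difference balance check. It uses simulated data and illustrative covariate names; the study's actual analyses were run in R on SNDS variables, so this is not the authors' pipeline.

```python
# 1:1 propensity-score matching with a standardized-mean-difference check.
# Simulated data and illustrative covariates only; not the authors' pipeline.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 600
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),          # 1 = POC, 0 = control group
    "age": rng.normal(75, 12, n).round(),
    "female": rng.integers(0, 2, n),
    "n_comorbidities": rng.poisson(2, n),
    "uses_sc": rng.integers(0, 2, n),          # stationary concentrator use
})
covariates = ["age", "female", "n_comorbidities", "uses_sc"]

# Propensity score: modeled probability of being in the treated group.
model = LogisticRegression(max_iter=1000).fit(df[covariates], df["treated"])
df["ps"] = model.predict_proba(df[covariates])[:, 1]

# Greedy 1:1 nearest-neighbour matching on the score, without replacement.
controls = df[df["treated"] == 0].copy()
pairs = []
for idx, row in df[df["treated"] == 1].iterrows():
    if controls.empty:
        break
    j = (controls["ps"] - row["ps"]).abs().idxmin()
    pairs.append((idx, j))
    controls = controls.drop(j)

matched = df.loc[[i for pair in pairs for i in pair]]

def smd_percent(col: str) -> float:
    """Standardized mean difference (%) between matched treated and control."""
    a = matched.loc[matched["treated"] == 1, col]
    b = matched.loc[matched["treated"] == 0, col]
    pooled_sd = np.sqrt((a.var() + b.var()) / 2)
    return 100 * abs(a.mean() - b.mean()) / pooled_sd

for col in covariates:
    print(f"{col}: SMD = {smd_percent(col):.1f}% (<5% = 'excellent balance')")
```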
Following the descriptive analyses, patients in the POC group, using POC alone or in combination, were matched to aSC-, CTC-, and LO2-treated patients, respectively, on the basis of age, gender, comorbidities, use of stationary concentrator equipment, and LTOT index date before or after 2018. The excellent balance between the treated (POC) and control groups (aSC, CTC, LO2) achieved by the propensity score matching (Figs. S1-S3) allowed us to assess the cost-effectiveness of this LTOT device. The overall survival, HCRU, and the associated costs were compared between the matched groups, starting from the oxygen delivery solution most distant from POC in terms of mobility and flexibility: the stationary concentrators.

Baseline Characteristics
Before matching, patients in the POC group were younger and more often male compared to patients in the aSC group (Table 3). Patients in the POC group had a higher prevalence of chronic respiratory diseases (POC 93% vs. aSC 83%), cancers (POC 37% vs. aSC 29%), and vascular risk treatments (POC 36% vs. aSC 29%), whereas aSC-equipped patients more often presented neurological or degenerative diseases (aSC 22% vs. POC 13%) and slightly more cardiovascular diseases (aSC 73% vs. POC 71%), in line with their older age. Propensity score matching between the POC (treated) and aSC (control) groups yielded 18,295 matched pairs, which represented 20% of the aSC and 43% of the POC populations before matching. The two groups were comparable after matching (Fig. S1).

Overall Survival
The Kaplan-Meier curves demonstrated that patients with POC, alone or in combination, had a significantly better median overall survival (mOS) than those matched with aSC (48.8 months vs. 41.4 months, respectively) over the 72-month follow-up period (Fig. 3). The mortality risk was significantly reduced by 13% (HR 0.87 [95% CI 0.84-0.89], p < 0.0001) in patients in the POC group. The survival rate was estimated to be favorable for patients in the POC group at 12 months, with 86% of patients alive versus 80% in the aSC group.

Baseline Characteristics
Before matching, patients in the POC and CTC groups had similar age and gender distributions. Differences were observed only in terms of comorbidities (Table 4). Patients in the POC group had a higher prevalence of chronic respiratory diseases (POC 93% vs. CTC 86%), vascular risk treatments (POC 36% vs. CTC 31%), and slightly more psychiatric illness (POC 24% vs. CTC 20%). Propensity score matching between the POC (treated) and CTC (control) groups yielded 21,552 matched pairs, which represented 98% of the CTC and 50% of the POC populations before matching. The two groups were comparable after matching (Fig. S2).

Overall Survival
Patients in the POC group had a significantly better median overall survival than the CTC group (41.2 vs. 33.3 months) (Fig. 4). The estimated mortality risk was significantly reduced by 15% (HR 0.85 [95% CI 0.82-0.87], p < 0.0001) in POC compared to patients in the CTC group. The survival rate was estimated to be favorable for patients in the POC group, alone or in combination, at 12 months, with 78% of patients alive versus 73% in the CTC group.
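The survival comparisons above (Kaplan-Meier medians, log-rank test, Cox hazard ratio) can be sketched in Python with the lifelines package. The data below are simulated and the effect size is illustrative only; the study's own analyses were performed in R.

```python
# Kaplan-Meier medians, log-rank test, and Cox hazard ratio with lifelines.
# Simulated follow-up data; group labels and effect sizes are illustrative.
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
n = 5000
poc = rng.integers(0, 2, n)                       # 1 = POC, 0 = control
t = rng.exponential(np.where(poc == 1, 60, 52))   # latent survival (months)
event = (t <= 72).astype(int)                     # death observed in window
duration = np.minimum(t, 72)                      # censored at 72 months

df = pd.DataFrame({"duration": duration, "event": event, "poc": poc})

# Median overall survival per group from the Kaplan-Meier estimator.
for g, label in [(1, "POC"), (0, "control")]:
    kmf = KaplanMeierFitter()
    sub = df[df["poc"] == g]
    kmf.fit(sub["duration"], sub["event"], label=label)
    print(label, "median OS (months):", round(kmf.median_survival_time_, 1))

# Log-rank test between the two groups.
a, b = df[df["poc"] == 1], df[df["poc"] == 0]
res = logrank_test(a["duration"], b["duration"], a["event"], b["event"])
print("log-rank p-value:", res.p_value)

# Hazard ratio from a Cox proportional-hazards model (HR < 1 favours POC).
cph = CoxPHFitter().fit(df, duration_col="duration", event_col="event")
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```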
Baseline Characteristics
The subpopulations included in the mobile oxygen therapy options, liquid oxygen and portable oxygen concentrator, were compared (Table 5). Before matching, patients in the POC and LO2 groups had similar age and gender distributions. Regarding comorbidities, patients in the POC group had a higher prevalence of chronic respiratory diseases. Propensity score matching between the POC (treated) and LO2 (control) groups yielded 35,294 matched pairs, which represented 50% of the LO2 and 82% of the POC populations before matching. The two groups were comparable after matching (Fig. S3).

Overall Survival
In patients using POC, alone or in combination, the estimated median overall survival was 46.7 (45.9-47.6) months compared to 39.6 (38.9-40.4) months for patients in the LO2 group (Fig. 5). Although this comparison involved two mobile oxygen therapy devices, the reduced estimated mortality risk previously observed for POC compared to the aSC and CTC groups remained evident. Patients in the POC group had a 12% (HR 0.88 [95% CI 0.86-0.90], p < 0.0001) lower risk of death compared to LO2 users. The survival rate was estimated to be more favorable for patients in the POC group at 24 months of follow-up, with 68% of patients alive versus 66% in the LO2 group. In summary, the patients equipped with portable oxygen concentrators, alone or in combination, showed consistently improved survival compared to the three other oxygen delivery solutions analyzed: stationary concentrators, compressed tanks, and liquid oxygen.

Cost-effectiveness Analysis
To further evaluate the effectiveness of POC, alone or in combination, compared to the different therapeutic options, the HCRU and the associated costs were analyzed over the 72 months of follow-up. The HCRU and costs are presented in Table S9 (POC compared to aSC), Table S10 (POC compared to CTC), and Table S11 (POC compared to LO2). The respective costs, analyzed from the index date up to the end of the 72-month follow-up, are expressed per patient per year.

HCRU and Costs of Matched POC Versus aSC Groups
Patients equipped with POC, alone or in combination, had a higher all-cause hospitalization rate and relative risk compared to patients in the aSC group, mainly represented by respiratory-related hospitalizations, but reduced readmissions and ER visits (Table S9). Medical visits (primary/community care or hospital visits) were comparable between groups, except for a higher relative risk of consulting a pulmonologist, a geriatrician, or a hematologist in the POC group. Total healthcare costs were evaluated per category in the POC, alone or in combination, and aSC groups (Fig. 6). Yearly mean total healthcare costs per patient were 5% (€579) higher in the POC group (€10,861) compared to aSC (€10,282). All-cause hospitalizations were the main cost driver, followed by LTOT-related expenses, which were higher in POC patients. The survival benefit over the 72 months of follow-up was estimated through the cost-effectiveness analysis using the ICER. The cost-effectiveness analysis showed that POC, alone or in combination, was cost-effective. Incremental cost and efficacy were in favor of POC (Table 6). The ICER was equal to €8895 per life-year gained for patients in the POC group compared to the aSC group.

HCRU and Costs of Matched POC Versus CTC Groups
Patients equipped with POC and CTC, alone or in combination, had comparable all-cause hospitalization rates, but patients in the POC group had fewer ER visits (Table S10). Patients in the POC group had a lower consultation rate with GPs, but a higher rate with pulmonologists, compared to the CTC group.
According to the cost-effectiveness analysis, the incremental cost was higher for POC with a positive incremental efficacy (Table 7). POC presented an ICER of €6288 per life-year gained compared to the CTC group.

HCRU and Costs of Matched POC Versus LO2 Groups
Patients in the POC, alone or in combination, group had slightly higher respiratory-related hospitalization and readmission rates; however, the respective relative risks were comparable with the LO2, alone or in combination, group (Table S11). Yearly mean total healthcare costs per patient were comparable, with a €59 difference between the POC (€12,094) and LO2 (€12,035) groups (Fig. 8). All-cause hospitalizations were the main cost driver, with no differences between groups, whereas patients in the LO2 group had higher LTOT-related expenses. According to the cost-effectiveness analysis, the incremental cost was higher for POC with a positive incremental efficacy (Table 8). POC presented an ICER of €13,152 per life-year gained compared to patients in the LO2 group.

Differences in Mobility Between Two Subpopulations Within the POC Group
In a secondary analysis, the cost-effectiveness of the device was assessed within the POC group in relation to the autonomy level, and consequently the patients' mobility, of the portable oxygen concentrators recorded between 2013 and 2020 in the SNDS database. Two subpopulations, distinguished by higher mobility (HM; POCs with an autonomy higher than 5 h, Inogen-only devices) and lower mobility (LM; POCs with an autonomy lower than 5 h, all non-Inogen devices), were described and compared for baseline characteristics, overall survival, HCRU, and costs. Among the 40,617 patients with COPD and CRF equipped with POCs, 18,630 and 21,987 were identified in the HM and LM groups, respectively. Before matching, the HM (treated) group had a higher proportion of men compared to the LM (control) group (57% vs. 52%) and an average age of 72 (SD 11) years for HM versus 75 (SD 12) for LM. In addition to POCs, some patients used an SC (40% HM vs. 45% LM). After matching, 17,099 patients were included in each group and the subpopulations were comparable: mean age 72 ± 11 years, 56% male patients, 42% with SC equipment (Fig. S4). The Kaplan-Meier curves showed that POC HM patients had a higher mOS compared to the LM group (52.7 vs. 43.4 months in HM and LM, respectively) (Fig. 9). The estimated mortality risk was 19% lower (HR 0.81 [95% CI 0.78-0.83], p < 0.0001) in the HM group. The survival rate was estimated to be more favorable for HM patients at 12 months of follow-up, with 84% of patients alive versus 78% in the LM group. The POC HM and POC LM groups had similar risks of all-cause and respiratory hospitalizations and ER visits. However, despite the matching for age and comorbidities, POC LM patients had a higher rate of geriatric care visits (Table S12). The calculation of the ICER showed that POCs with HM were cost-saving (−€6238 per life-year gained) compared to LM (Table 9). Incremental cost and incremental efficacy were in favor of the POC HM subpopulation.
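To make the ICER arithmetic used in these comparisons concrete, here is a minimal sketch with made-up numbers (not the study's results); the function simply implements the ratio of incremental cost to life-years gained described in the Methods.

```python
# Minimal sketch of the ICER computation: incremental cost divided by
# life-years gained. The figures below are hypothetical, not study data.
def icer(cost_treated: float, cost_control: float,
         ly_treated: float, ly_control: float) -> float:
    """Incremental cost-effectiveness ratio in EUR per life-year gained."""
    delta_cost = cost_treated - cost_control  # incremental total care cost
    lyg = ly_treated - ly_control             # incremental life-years gained
    return delta_cost / lyg

# Hypothetical totals over follow-up. A negative ratio with positive
# incremental efficacy (more effective and less costly) corresponds to the
# "cost-saving" result reported for the HM subpopulation.
print(f"{icer(62_000, 58_000, 4.1, 3.6):,.0f} EUR per life-year gained")
```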
DISCUSSION
In the current study we addressed a lack of evidence on the use of domiciliary LTOT, with a focus on portable oxygen concentrators compared to other LTOT therapeutic strategies, using a French nationwide database that included all adult patients with CRF, due to COPD or other causes, with a prescription of LTOT between 2014 and 2019. The study population comprised elderly patients, 90% of whom had at least one comorbidity affecting the disease severity. COPD has been shown to be the strongest predictor of a higher number of comorbidities, followed by the cumulative number of other factors, such as smoking, male gender, and older age, in association with mortality [44,45]. Moreover, the prevalence of COPD is predicted to increase to 2.8 million by 2025 in France, especially among women and subjects aged ≥ 75 years [46]. In our analysis the average age of matched patients was 75 years, and patients often presented coexisting comorbidities, such as chronic respiratory diseases concomitant with cardiovascular diseases and cancer. In patients with COPD, comorbidities can significantly change the disease severity, affecting life expectancy and contributing to incremental healthcare costs in France [47]. It is becoming evident that the current "individual disease-centered" management of patients with chronic respiratory diseases is costly and inappropriate, especially in older multimorbid patients [26]. In the analysis patients were matched for comorbidities; however, the current study did not examine the impact of specific comorbidities or CRF-associated disease types on the cost-effectiveness analysis of LTOT devices. The use of different modalities of oxygen therapy depends on the level of daily activity, exercise capacity, and the patient's specific needs, including comorbidities. Among the four groups identified by the main medical device used, either alone or in combination, the POC group was associated with more favorable survival rates compared to the other oxygen delivery solutions: stationary concentrators, compressed tanks, and liquid oxygen (aSC, CTC, LO2). Although this therapeutic intervention seems to be more effective for the selected clinical outcome, this should be interpreted with caution, considering the limitations intrinsic to the real-world setting. As this is a retrospective study based on a medico-administrative database, details on the level of mobility/autonomy of the patient at the time of LTOT prescription are unavailable. We could hypothesize that patients included in the POC group had a more favorable phenotype at the time of LTOT prescription, with higher baseline mobility compared to the other groups, considering that POC is prescribed to allow a more active life. The survival benefit observed in patients using POC, alone or in combination, may be the result of several factors not available in the SNDS, including socio-psychological implications related to the level of mobility and freedom offered by POC itself. Stakeholders need to be informed about the healthcare costs associated with LTOT patients depending on the different therapeutic strategies, and the potential benefits arising from higher autonomy of the device, to facilitate proper resource allocation and inform future policies. To the best of our knowledge this is the first study evaluating the cost-effectiveness of portable oxygen concentrators, used either alone or in combination, compared to other oxygen delivery solutions.
The clinical performance of POCs compared to traditional portable systems such as compressed oxygen cylinders has been demonstrated to be equivalent during 6-min walk tests in patients with COPD and interstitial lung disease (ILD) [48]. More recently, POCs have been shown to improve muscle oxygenation during walking in patients with ILD [49]. Guidelines, previous trials, and ongoing center-based and home-based programs, including digitally enabled interventions, reflect the importance of mobility in pulmonary rehabilitation in terms of improvements in exercise capacity, health status, and quality of life for LTOT patients, with healthcare cost reduction as a result [50,51]. Compared with other devices, portable systems are recognized to provide numerous advantages favoring compliance with LTOT, as recommended [1,2]. The weight of portable oxygen cylinders can limit their use. When patients need to collect their cylinders from the hospital themselves or to follow ambulatory oxygen therapy, usage and treatment compliance are likely to be affected by the device weight [13]. Today's technology has made portable oxygen concentrators smaller, lighter, and better performing. Considerable attention is paid to active patients, but also to those with a lower degree of physical ability. This analysis suggests that relevant clinical benefits might be associated with POC use, alone or in combination, compared to more burdensome solutions such as SC and CTC. Patients in the POC group had superior overall survival compared with all the groups analyzed. In addition, within the POC group, higher mobility (POC autonomy higher than 5 h) was associated with an even more favorable estimation, with approximately 9 months longer survival (52.7 vs. 43.4 months in the low-mobility subpopulation). This secondary analysis highlighted the key role of mobility in LTOT patients' life expectancy, as reduced mobility might increase the risk of chronic respiratory disease progression and severity. Consistent with other recent studies, this analysis confirmed that COPD (and chronic respiratory insufficiency) generates substantial costs for the health system in France [52]. Among the 66 publications reporting data on the healthcare resource use associated with moderate to very severe COPD, this study is among those with the longest follow-up period. In this study the economic burden of LTOT adult patients with COPD and CRF was assessed over 72 months. The yearly mean total costs per patient were estimated at between €10,000 and €13,000 in the four LTOT groups. Among the healthcare resource use considered, inpatient hospitalizations were the main cost driver, consistent with the steady increase in hospitalization rates and costs since 2000 in this area [53,54]. In this analysis, hospitalization costs ranged between €9000 and €11,000, representing 78.8% of total HCRU costs in the LO2 group and 84.5% in the aSC and CTC groups, followed by LTOT-related costs, ranging between 10% and 16% of the total HCRU costs. This distribution is consistent with an economic analysis comparing COPD in North America and Europe (France, Italy, the Netherlands, Spain, the UK), in which the majority (52-84%) of direct costs associated with COPD were due to inpatient hospitalizations [55]. In our study the POC group did not show a significant reduction in all-cause hospitalizations during the 72-month follow-up period compared to CTC and LO2. However, the length of hospital stay could differ between groups, and it was not analyzed. Moreover, it is important to note that the
number of visits for geriatric care was lower in the POC group compared to the LO2 group, similar to what was observed when patients equipped with higher-autonomy POCs were considered. Regarding the other specialties, the number of visits remained comparable between groups. In patients equipped with POCs, the higher autonomy may allow more regular follow-up visits to pulmonologists and physical and rehabilitation specialists, not necessarily as a direct effect of POC usage per se. The current economic analysis demonstrated that patients equipped with POC, used alone or in combination, presented higher average annual costs compared to patients in the aSC group (€579, 5% higher), lower costs compared to the CTC group (−€502, 4% lower), and comparable costs to the LO2 group. According to the ICER, considering the standard "per life-year gained" used to evaluate high-value therapeutic interventions, at a willingness-to-pay higher than €8895, €6288, and €13,152 per life-year gained, POC would remain cost-effective compared to aSC, CTC, and LO2, respectively. Moreover, when the two subpopulations within the POC group based on higher and lower mobility levels were considered, POCs with an autonomy higher than 5 h were cost-saving (−€6238 per life-year gained) compared to LM. In summary, POC, used alone or in combination, might contribute to promoting longer survival. On the basis of this observation and of the HCRU evaluation, portable concentrators could be a cost-effective alternative to CTC and LO2 at comparable costs.

Strengths and Limitations
The use of the French healthcare database SNDS, which is a large, unbiased, and potentially the most comprehensive healthcare database in Europe, is the key strength of the current analysis. The size of the population analyzed and the long follow-up are advantages derived from using this database. For the analysis of the HCRU, the costs represent actual accrued costs available in the database, providing an accurate estimate of routine clinical care for LTOT patients in both inpatient and outpatient settings in France. However, the SNDS also has intrinsic limitations, as its health data are reported for health insurance reimbursement purposes. The database has limited clinical information and biological results for patients. Data to evaluate the degree of respiratory symptoms, including dyspnea, exercise capacity (e.g., 6MWT), the severity of respiratory diseases (e.g., spirometry parameters), or parameters related to quality of life are not available in the SNDS. As a result of this limitation, the effect of LTOT on the severity of respiratory diseases could not be evaluated beyond the number of hospitalizations and mortality. Moreover, there is the possibility that some patients may have been miscoded. Data on patients' BMI are not available in the database; hence, the prevalence of obesity reported in the included population is likely an underestimation, as only cases admitted to the hospital with obesity are reported. The patient's perspective is not included in the SNDS, since data on patients' adherence, such as how the patients use their oxygen device (e.g., hours of oxygen use per day), or the reasons for their preference, are not available in the database.
Regarding the study design, the results should be interpreted with caution because of the main limitation of the study: the inability to dissociate the treatment effect of a particular oxygen delivery device from the patient's mobility level at the time of the initial LTOT prescription, since one might be tempted to infer the clinical status of the patient from the autonomy of the chosen device. In addition, the analysis of the potential impact of the autonomy of oxygen delivery devices was limited to devices prescribed between 2013 and 2020, recognizing that devices currently on the market have evolved. In relation to cost estimation, the analysis was conducted from a third-party payer's perspective; therefore indirect costs, such as those associated with absence from work and reduced productivity due to disease severity, were not included. Moreover, cost estimation was limited to a 6-year horizon. Additional data from clinical studies would be necessary to precisely estimate the cost-effectiveness of POC and LTOT in general beyond this time frame, controlling for severity, number of prior hospitalizations, and other potential confounders.

CONCLUSION
Despite the limitations of the study, these results provide up-to-date evidence on the improved overall survival rates and cost-effectiveness associated with the use of POCs, either alone or in combination with other LTOT devices, in adult patients with CRF, informing stakeholders about healthcare costs for the different LTOT solutions. Future comparative and controlled interventional studies with adequate sample sizes are required to fully understand the value of POC use regarding clinical outcomes, mobility, and health-related quality of life.

Nicoleta Petrica, Stanislav Glezer and Abhijith Pg critically reviewed the manuscript. All authors approved the final version to be submitted and agreed to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

Ethical Approval. In accordance with French regulations, the study protocol was approved by the ethics and scientific committee for health research, studies, and evaluations (CESREES) and by the French data privacy committee (CNIL; Decision DR-2021-228). Data access was delivered by CNAM after agreement. In the study, personal data processing is intended for a research project not involving human subjects.

Open Access. This article is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, which permits any non-commercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc/4.0/.
[Table 1 (excerpt): LTOT groups by main oxygen delivery device]
Stationary concentrator therapy: stationary concentrator used alone (aSC): SC
Compressed tanks therapy: compressed tanks, alone or in combination (CTC): CTC / SC-CTC / CTC-SC
Mobile oxygen therapy: liquid oxygen, alone or in combination (LO2): LO2 / SC-LO2 / LO2-SC / SC-LO2-SC / LO2-SC-LO2
Mobile oxygen therapy: portable oxygen concentrator, alone or in combination (POC): POC / SC-POC / POC-SC / LO2-POC / POC-LO2 / SC-POC-SC / SC-POC-LO2
Other LTOT combinations therapy: other LTOT combinations not included in the previous groups: not defined

[Table 2 (excerpt, continued): matching variables — vascular risk treatments (without pathologies); patient oxygen delivery equipment (patients equipped with stationary concentrator(s)); patients with a LTOT delivery index date before or after 2018, considering how the SNDS listing changed for this prescription [40]]

Fig. 1 Study flowchart of the population before matching. CTC compressed tanks, alone or in combination; LO2 liquid oxygen, alone or in combination; LTOT long-term oxygen therapy; POC portable oxygen concentrator, alone or in combination
Fig. 5 Kaplan-Meier survival curves of overall survival analysis in patients with POC versus LO2
Fig. 6 Mean costs per patient per year and distribution by cost type in POC and aSC groups. OS overall survival, CI confidence interval
Fig. 7 Mean costs per patient per year and distribution by cost type in POC and CTC groups
Fig. 8 Mean costs per patient per year and distribution by cost type in POC and LO2 groups
Fig. 9 Kaplan-Meier survival curves of overall survival analysis in patients with HM POC versus LM POC. OS overall survival, CI confidence interval
Fig. 10 Mean costs per patient per year and distribution by cost type in POC HM and LM subpopulations

Funding. This work and the journal's Rapid Service Fee were supported by Inogen Inc.

Data Availability. The datasets generated for this study can be found in the SNDS database upon request to the regulatory authorities.

Declarations

Conflict of Interest. The study was sponsored and funded by Inogen Inc., which has neither participated in the conduct of the study nor in the analysis of the data. At the time of study conduct and manuscript submission, Dr. Stanislav Glezer was an employee at Inogen Inc. and a shareholder of Inogen Inc. He worked as the Executive Vice President, R&D and Chief Medical Officer. Dr. Abhijith Pg is an employee at Inogen Inc. and a shareholder of Inogen Inc. He works as Director Medical Affairs. Dr. Gregoire Mercier served as French medical expert. He works as the Head of Data Science at the Montpellier University Hospital and the Desbrest Institute of Epidemiology and Public Health (IDESP), Montpellier, France. Dr. Jean-Marc Coursier served as French medical expert. He works as a pneumologist at the Antony private hospital, Antony, France. Nicoleta Petrica served as consultant data scientist, Alira Health, Paris, France. Maria Pini served as medical writer, Alira Health, Paris, France.
Table 3 Unmatched and matched patients' sociodemographic and clinical characteristics of POC and aSC groups. A patient may have one or more comorbidities [43]. SD standard deviation; POC portable oxygen concentrator, alone or in combination; aSC stationary concentrator, alone
Fig. 3 Kaplan-Meier survival curves of overall survival analysis in patients with POC versus aSC. OS overall survival, CI confidence interval
Table 4 Unmatched and matched patients' sociodemographic and clinical characteristics of POC and CTC groups. A patient may have one or more comorbidities [43]. SD standard deviation; POC portable oxygen concentrator, alone or in combination; CTC compressed tanks, alone or in combination
Table 5 Unmatched and matched patients' sociodemographic and clinical characteristics of POC and LO2 groups
Table 6 Results of the cost-effectiveness analysis, POC versus aSC groups
Table 7 Results of the cost-effectiveness analysis, POC versus CTC groups
Table 8 Results of the cost-effectiveness analysis, POC versus LO2 groups
Table 9 Results of the cost-effectiveness analysis, POC HM versus POC LM subpopulations
2024-06-03T06:16:54.420Z
2024-06-01T00:00:00.000
{ "year": 2024, "sha1": "67ab0610c87a12b56fbc5017d4d75ba25d7ae189", "oa_license": "CCBYNC", "oa_url": "https://link.springer.com/content/pdf/10.1007/s41030-024-00259-x.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "09b5f376230b8c0984e224264dacf111007287d2", "s2fieldsofstudy": [ "Medicine", "Economics" ], "extfieldsofstudy": [ "Medicine" ] }
211193864
pes2o/s2orc
v3-fos-license
Digital assessment of a retentive full crown preparation—An evaluation of prepCheck in an undergraduate pre‐clinical teaching environment Abstract Introduction Acquiring practical skills is essential for dental students. These practical skills are assessed throughout their training, both formatively and summatively. However, by means of visual inspection alone, assessment cannot always be performed objectively. A computerised evaluation system may serve as an objective tool to assist the assessor. Aim The aim of the study was to evaluate prepCheck as a tool to assess students' practical skills and as a means to provide feedback in dental education. Methods As part of a previously scheduled practical examination, students made a preparation for a retentive crown on the maxillary right central incisor—tooth 11. Assessments were made four times by two independent assessors in two different ways: (a) conventionally and (b) assisted by prepCheck. By means of Cohen's kappa coefficient, agreement between conventional and digitally assisted assessments was compared. Questionnaires were used to assess how students experienced working with prepCheck. Results Without the use of prepCheck, ratings given by teachers differed considerably (mean κ = 0.19), whereas the differences with prepCheck assistance were very small (mean κ = 0.96). Students found prepCheck a helpful tool for teachers to assess practical skills. Extra feedback given by prepCheck was considered useful and effective. However, some students complained about too few scanners and too little time for practice, and some believed that prepCheck is too strict. Conclusion prepCheck can be used to assist assessors in order to obtain a more objective outcome. Results showed that practicing with feedback from both prepCheck and the teacher contributes to an effective learning process. Most students appreciated prepCheck for learning practical skills, but introducing prepCheck requires enough equipment and preparation time.

| INTRODUCTION
Acquiring practical skills is an important element of the dentistry degree programme. All dental students are frequently assessed on their manual skills. In many dental schools, students prepare for these practical examinations by practicing on artificial teeth. Instructors provide feedback during practical classes, subsequently followed by summative assessments. Assessment of these examinations and the subsequent feedback must be as objective and consistent as possible. Unfortunately, despite assessor calibration, conventional assessments by means of visual inspection have been shown to result in subjectivity and inconsistency. [1][2][3] Confronted with diversity of assessment and inconsistent feedback, students lose confidence in the feedback. 4 Results from surveys show that students feel that inconsistent feedback impacts the learning process negatively. 4,5 Computer-aided assessment systems can provide objective and consistent feedback. [6][7][8][9][10] In several studies, the conventional assessment method was compared to the digital assessment method. All studies show that the digital method was more objective and consistent than the conventional method. 3,8,11 Taylor et al compared traditional ratings of regular undergraduate crown preparations on a typodont with ratings provided by a software program. They state that the sole use of a digital assessment system cannot mark the students' work in a valid manner, mainly due to shortcomings of the system (Prepassistant) used.
12 Nevertheless, researchers are positive about the opportunities offered by the integration of digital equipment into the teaching of practical dental skills. 10,11,[13][14][15][16][17] It is expected that adding digital information to the conventional feedback will help students understand the elements of the feedback better and thus achieve the desired results. This deeper understanding enhances the mastering of practical skills. One of the available digital preparation assessment tools is prepCheck (Dentsply Sirona, Bensheim, Germany). The system was introduced at the University of Groningen, the Netherlands, in order to improve the precision of the pre-clinical preparation assessment. The software is able to compare a preparation with a master preparation or to make use of a geometric analysis function. For the retentive crown restoration, the geometric analysis includes a number of aspects: undercut, preparation taper, occlusal reduction, axial reduction, preparation type, margin and surface quality. At the University of Groningen, the axial reduction is not taken into consideration for the assessment of the student's preparation, because axial reduction is already obviated by a correct taper. Furthermore, not all aspects are weighted equally (Appendices 1 and 2). The aim of the present study was to compare the inter-rater concordance of the conventional assessment method and the assessments made by instructors with the help of prepCheck. We also aim to evaluate how students feel about practicing with prepCheck and having their examinations assessed with the aid of prepCheck.

| MATERIAL AND METHODOLOGY
In the academic year 2016/2017, 41 Bachelor students participated in the course "Chamfer preparation for placement of a full crown practical" for the first time. The dentistry degree programme of the University of Groningen made use of a digital 3D scanner (CEREC Omnicam; Dentsply Sirona, Bensheim, Germany) for assessing retentive crown preparations (maxillary right central incisor-tooth 11). Preparations were scanned by the students themselves and then independently assessed by two experienced instructors on various criteria, both digitally in the prepCheck software program (prepCheck, version 2.1 PRO; Dentsply Sirona, Bensheim, Germany) and in the conventional way, through visual inspection. After the examination, students were asked to give feedback about the procedure by means of a questionnaire. To prepare the students for the examination, three practical sessions of four hours each were organised, during which they could practice preparations and scanning and receive feedback. Students could additionally practice preparations and scanning and familiarise themselves with the digital assessment in their own time. The fourth session comprised the summative examination. Four scanners were available to the students during the practical sessions and the examination. Ethical approval was obtained from the Netherlands Association for Medical Education (NVMO) Ethical Review Board (NERB file number #832).

| Inclusion and exclusion of students and instructors
More detailed information about the assessment settings of the prepCheck software, which are based on the criteria in the assessment form, can be found in Appendix 3, which lists the settings for each criterion.

| Assessment procedure
In the examination, students were given 120 minutes to prepare the maxillary right central incisor-tooth 11 (KaVo EWL model teeth, numbered "roots"; KaVo, Biberach/Riß, Germany) for subsequent placement of a retentive crown.
In order to assess the preparations anonymously, each student was randomly assigned an examination number, which was printed on the assessment form (Appendix 1). The students had to engrave their examination number in the apical part of the maxillary right central and lateral incisors-teeth 11 and 12. To ensure that the examination number is also visible on the scan, the students engraved their examination number in the buccal plane of the maxillary left central incisor-tooth 21 (Figure 1). When the students finished their preparation, they had to draw the occlusal line with a red pencil on the preparation (Figure 4). This was checked, amongst other criteria, by the instructor (Appendix 1). Subsequently, they (partially) scanned the jaw with the CEREC Omnicam. To prevent fraud, the students were only allowed to remove the preparation (examination element) from the KaVo jaw (Basic study models, KaVo, Biberach/Riß, Germany) after finishing this first scan. The element was then replaced by an unprepared maxillary right central incisor-tooth 11 (referred to as the "biocopy") and scanned as well, to allow the prepCheck software to calculate the amount of tissue removed (see Figure 2).

[Table 2 Assessment criteria for the chamfer preparation]

The students also had to enter the insertion axis, the outline and the occlusal line (representing the beginning of the non-retentive surfaces of the preparation) digitally in the prepCheck software. The prepCheck assessment method involved the instructor assessing the scan on the criteria set out on the assessment form using information provided by prepCheck. prepCheck calculated the acceptable margins for a preparation. Based on these calculations, the instructors awarded points for each criterion. The instructors were allowed to examine the physical tooth as well, in case there was concern about the scan (item No. 9, Appendix 1). All examination work was therefore assessed four times: by each of the two instructors using the conventional method, and by each of the two instructors with the aid of prepCheck.

| Variables and methodology
All calculations were performed in SPSS. The significance level used was P < .05.

| Inter-rater agreement
Inter-rater agreement was determined by calculating Cohen's kappa (κ) for each assessment criterion (Table 3). For each criterion, the kappa was calculated for assessment with and without the aid of prepCheck. Kappa can vary between −1 and +1 and is a measure of the agreement between the assessments of instructors 1 and 2. Since a binary value (0 or 1) was given for each criterion, kappa values could be calculated for all criteria except "taper." Because 0, 1, 2, 3 or 4 points could be awarded for this criterion, a weighted kappa was calculated for "taper" by using a linearly weighted 5-point scale. As the taper criterion was not binary, the weighted kappa was not taken into account when calculating the mean kappa values for each assessment method and the differences between the kappa values.

| Student perception of prepCheck as a learning aid and assessment tool
All students completed the questionnaire (Appendix 4) anonymously, before the official examination results were published. The questionnaire has ten items. For the first seven items, the students indicated on a visual analogue scale (VAS) to what extent they agreed with the statements given. The range of the VAS was 0 to 100, with 0 representing "totally disagree" and 100 "totally agree." The seven statements were followed by two multiple-choice questions, allowing students to state how they preferred to receive feedback and what type of assessment they preferred.
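For illustration, the plain and linearly weighted Cohen's kappa described above can be computed with scikit-learn; the ratings below are hypothetical (the study's own calculations were performed in SPSS).

```python
# Plain and linearly weighted Cohen's kappa for two raters.
# Hypothetical ratings; the real study compared two instructors over
# 41 students, per criterion, with and without prepCheck.
from sklearn.metrics import cohen_kappa_score

# Binary scores (0/1) from two instructors for one criterion.
instructor1 = [1, 0, 1, 1, 0, 1, 1, 1, 0, 1]
instructor2 = [1, 1, 1, 0, 0, 1, 1, 1, 1, 1]
print("kappa:", round(cohen_kappa_score(instructor1, instructor2), 3))

# "Taper" was scored 0-4, so a linearly weighted kappa credits near-misses.
taper1 = [4, 3, 4, 2, 4, 1, 3, 4, 2, 4]
taper2 = [4, 4, 3, 2, 4, 0, 3, 4, 3, 4]
print("weighted kappa:",
      round(cohen_kappa_score(taper1, taper2, weights="linear"), 3))
```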
| Inter-rater agreement and total points
There was very poor agreement between the instructors for the conventional assessment method (n = 41; Cohen's kappa values for the conventional method ranged between −0.007 and 0.378, see Tables 3 and 4). The number of total points for all items was 278 (instructor 1) vs. 289 (instructor 2). For the prepCheck assessment method (n = 41), the agreement between the instructors was moderate to perfect, with Cohen's kappa values between 0.769 and 1. The prepCheck assessment resulted in perfect agreement on "undercut," "palatal reduction" and "chamfer." The number of total points for all items was 259 (instructor 1) vs. 255 (instructor 2). The average kappa value for the prepCheck assessment was 0.776 higher than the average kappa value for the conventional assessment method. The weighted kappa value for the taper was also almost twice as high for the prepCheck assessment as the value for the conventional method.

| Student experiences with prepCheck
Thirty-seven of the 41 students (90%) completed the questionnaire. Of these, thirty (81%) answered the open-ended question. prepCheck provided the students with a good understanding of the quality of their preparations, enabling them to specifically train certain aspects (mean VAS score 84.6). In addition, they felt that the feedback given by prepCheck clearly helped them prepare for the examination during the practice sessions (score 78.3). In general, they believed that prepCheck is a positive addition to the assessment procedure and made the examination results more objective (score 77.3). The students indicated that prepCheck gave them a better understanding of their progress during the practice sessions (score 77.1). Overall, they believed that the instructors had made an honest assessment of their work (score 72.8). Instructor feedback had been coaching in nature rather than judging ever since prepCheck had been providing feedback in the practical sessions (score 63.1). Interestingly, the only statement on which students agreed far less was that instructors (without using prepCheck) were consistent in their feedback during the practice sessions leading up to the examination.

| Open-ended questions
Feedback by prepCheck was felt to be consistent, objective, specific and accurate. When explicitly asked about aspects to improve, 43% of the students who filled in the open-ended question (n = 13) would have liked to have access to more scanning units. Due to the long waiting time, the assessment was seen as more hectic and chaotic than necessary. Additionally, learning the scanning procedure took a long time: 37% of the students (n = 11) required longer preparation time before the assessment in order to practice scanning. Another point of criticism was that the scanner settings were too strict, and making a scan without scanning clutter often proved difficult. Finally, prepCheck rejected elements of the preparation that could not be seen with the naked eye or felt with the probe, which 27% of the students (n = 8) felt was unjustifiable. Ten percent of the students (n = 3) said that prepCheck could be a good instrument for formative evaluation during the examination: students should then be allowed to modify their chamfer preparation if desired. Another advantage of this approach is that the procedure would be more in line with clinical practice. Due to the nature of the open-ended questions, multiple nominations were possible. All comments are listed unchanged in Appendix 5.
| Key findings
The instructors' conventional assessments were markedly different although they used the same criteria. When they used prepCheck, their assessments were much more in agreement. Most of the students feel that prepCheck is a good additional teaching tool when learning practical skills. They prefer the combination of instructor and prepCheck for both feedback and assessment. Learning the scanning process and interpreting the scans took a lot of time, however, which makes the students feel that there was not enough time to prepare for the examination.

[Table 4 Overall results of the different assessment methods]

The total number of points awarded hardly depended on which assessment method was used, but the distribution within the conventional method seems to be quite arbitrary. There was hardly any agreement between the instructors when they used the conventional assessment method. The "palatal reduction" criterion even has a negative Cohen's kappa value, which means that there was no agreement at all between the assessors. By contrast, moderate, strong and perfect agreements were found for the prepCheck assessment method. The large differences between the kappa values for the two assessment methods show that the use of prepCheck clearly improves the precision of the assessment. Because of the high agreement between the prepCheck assessments, it appears that using prepCheck leads to more objective, consistent and calibrated instructor feedback and assessment. Similar studies involving digital assessment systems show that such a system can increase inter-rater agreement for anatomical wax-up examinations 11 and that preparations can be assessed in a consistent and reliable manner. 3,10 These studies are in line with our findings and show digital assessment systems to be more precise than conventional methods. 3 Several studies show that dentistry departments have difficulties calibrating their instructors and objectifying assessments. 1,3,8,11 Since a digital assessment can lead to fewer differences of opinion amongst instructors, both assessment and feedback become more consistent.

| Student experiences with prepCheck
The students' ideas about prepCheck and how they feel about working with prepCheck were investigated with the questionnaire. The analysis of the students' responses shows that, on average, students agree with most of the statements in the questionnaire (Appendix 4). All statement means are higher than 50 on the VAS scale ranging from 0 to 100. A possible explanation for the small percentage of students who preferred prepCheck as the only source of feedback and assessment might be that students feel that prepCheck assesses their work more strictly than the instructor. Nearly one-quarter of the students still appreciate the instructor being the only assessor, perhaps because they do not sufficiently trust the new technology yet for the reasons outlined above. Learning to operate the scanner and the scanning process itself is experienced as clinically relevant but time-consuming, and some students indicate that they prefer to spend their time on practicing their preparation skills instead. Some students feel that they had too little time during the practical to prepare for the examination. This is in line with the findings of another study. 19

[Table: questionnaire statements with mean VAS scores (scale from 0 to 100) and SD]

A study by Gratton et al compares two digital assessment systems (one of which is prepCheck), showing that the two systems are equally effective and that there are no significant differences between them.
This study includes a student-based questionnaire about their perception of and satisfaction with the use of scanners in the curriculum. The majority of students feel that digital techniques should be integrated into the teaching process. 20 The students also appreciate that they were given the opportunity to learn to work with new digital devices and technologies. Those findings are in line with our results. Also, the results of the study by Callan et al show that students find it difficult to produce a scan of their preparations and that they like that the digital assessment eliminates the subjective element. Another advantage is that they do not have to look for an instructor for an assessment. However, it also appears to be difficult and time-consuming to navigate the software with the assessment criteria and to produce an accurate scan. Again, some students stated that they prefer to spend the time they now have to take to evaluate their preparations on actual practice. Nevertheless, the majority of students are generally positive about the option to modify their preparations after their flaws have been mapped. 19

| Outline, occlusal line and insertion axis
In addition to determining the insertion axis, students also had to mark the outline and the occlusal plane on the scan. Based on the lines they draw, prepCheck calculates an assessment of the scanned element based on the criteria. The student determines where these lines must be drawn, which means they can vary. The closer the line for the occlusal plane is to the gingiva, the lower the number of degrees for the "taper" criterion will be. The closer this line is to the tooth's incisal edge, the higher the number of degrees for the "taper" criterion will be, and thus the greater the risk that "taper" will be higher than allowed (see Figures 8 and 9). Where the outline is drawn determines the shape of the chamfer and thus the "preparation type," but it may also have an impact on "surface quality." If there is a rough area or sharp edge inside or above the outline whilst the outline is drawn above this flaw, it will be ignored in the assessment of the "preparation type," "margin quality" and "surface quality."

[FIGURE 7 Responses to the multiple-choice question "Which method for giving feedback in preparation for the examination are you most comfortable with?" represented as a pie chart (N = 37)]
[FIGURE 8 Because the line for the occlusal plane has been drawn closer to the gingiva, prepCheck concludes that 77% of the taper is within the margins allowed for this criterion]

| Examination of the process versus examination of the result
Taylor et al support the notion that the use of a digital assessment system only is insufficient to validly assess student work. They examined the Prepassistant digital assessment system (KaVo, Biberach, Germany). The main limitation of a scanner as a digital assessment system, they found, is that it can only assess differences in measurements. 12 prepCheck also measures differences, calculates whether the results are within the margins set for the criteria and shows this as percentages. prepCheck does not mark the scanned work either. In the present study, some criteria were still traditionally assessed by the instructors: the student's preparation for the examination, observing the basics of ergonomics, correct use of instruments and mastery of the problem are more related to the process.
No unintended realignment and a logical insertion axis, no damage to adjacent teeth, transitions between retentive and non-retentive surfaces correctly drawn at the correct height, and an outline 0.5 mm above the KaVo gingiva are items that are related to the result ("the preparation"). Assessing these criteria and calculating the total score for all criteria on the assessment form still requires an instructor, who also gives the final mark.

| Relevance of the findings
Teaching dental skills can be improved by adding prepCheck to the assessment procedure, since the instructors are considerably more in agreement when they use prepCheck in their assessments. The results of the present study suggest that practicing with prepCheck is an effective aid for learning practical dental skills. Moreover, learning to work with modern digital technology is important for dentistry students because such techniques will become increasingly incorporated into the dental practice. Based on these experiences, prepCheck was further implemented into the Bachelor curriculum of the dentistry degree programme at the University of Groningen.

| CONCLUSIONS
Despite using the same criteria, instructors differ considerably in their assessments of preparations with the conventional assessment method. prepCheck increases agreement between instructor assessments. This calibration may be used to achieve objective assessment. Feedback given by prepCheck was seen as consistent, objective and accurate, and allowed students to practice preparations effectively. The students preferred receiving a combination of feedback from instructors as well as prepCheck. They felt that the examination should also be assessed by both the instructor and prepCheck. However, the scanning process took a lot of time, which meant there was insufficient time to practice in preparation for the examination. Students see prepCheck as an objective source of feedback and a valuable addition to the teaching of practical dental skills, but introducing prepCheck requires enough equipment and preparation time.

[FIGURE 9 Because the line for the occlusal plane has been drawn closer to the incisal edge, prepCheck concludes that only 52% of the taper is within the margins allowed for this criterion]

CONFLICT OF INTEREST
The authors declare no conflicts of interest.

AUTHOR CONTRIBUTIONS
US built the design of the study, collected the data and secured funding.

Assessment form
* The instructor is called before the jaw is removed and then assesses items 5-8.
** With the help of the prepCheck software and the assessment matrix.

[TABLE A2 X and Y values in …]

The purpose of this questionnaire is to learn your opinions about and experiences with the practical work and the assessment procedure used in the Chamfer preparation for a full crown practical examination. You are not obliged to complete this questionnaire. By placing a vertical dash on the scale printed under each of the statements, you can indicate to what extent you agree with the statement. To round off, there are two multiple-choice questions. Please circle your answer. Completing this questionnaire takes about a minute.
1. Examination assessment by instructors is an honest procedure.
2. prepCheck is an improvement to the assessment procedure in order to achieve objective test results.
3. I believe that the instructors (without the use of prepCheck) are consistent in their feedback during the practice sessions leading up to the examination.
4. The feedback given by prepCheck helps me during the practice sessions leading up to the examination.
5. prepCheck helps me to evaluate my preparations, so that I can concentrate on certain elements of the preparation.
6. prepCheck helps me to monitor my progress during the practice sessions.
7. Since prepCheck has been used to give feedback during the practice sessions, instructors give feedback like a coach rather than an assessor.

End of the questionnaire
Thank you for completing this questionnaire. Your answers will be processed anonymously. Once our study is completed, the questionnaires will be destroyed.

APPENDIX 5
Critical answers given to the open-ended question in the questionnaire
1. Suggestion: The process could be improved by being allowed to use the scanner for formative evaluation during the examination and subsequently improving the preparation if necessary.
2. The scanner is a good additional element in the practical but there was not quite enough time for learning tooth preparation and learning to work with the scanner.
3. The scanner is a good additional element in preparing for the examination. I would prefer it if the scanner were not used as the sole assessor, since the prepCheck settings are too strict because the scanner sees things that cannot be observed with the naked eye.
4. There was not enough time to practise. It is difficult to obtain a high-quality scan. The instructors are not in agreement. I felt like a guinea pig.
5. Assessment by the scanner is fine but the prepCheck settings are unreasonable. It would be better to use a margin of 1% instead of 0% for the criteria.
6. The examination is hectic because of the unavailability (and waiting for) the scanners.
7. The scanner is a very handy tool during the practice sessions. It is too strict for the assessment, because whilst working on the preparation, you cannot assess it as the scanner does. The combination of scanner and instructor is better.
8. There was not enough time to practise because using the scanner took a lot of time during the practice sessions.
9. The scanner is a good additional element whilst practicing but less suitable for examination assessment. The scanner is too strict, and when you are preparing the chamfer, you cannot see/assess the criteria as the scanner does.
10. The scanner is a good aid when practicing. The scanner allows practicing your preparation and having it assessed without the need for an instructor. The scanner is not suitable for the examination assessment because it is too strict. Plus, you cannot pass the examination if there are computer malfunctions in the scanner. If you need to remove more tissue at a later stage, this is possible in the clinical situation but not (yet) during the examination, which means you will fail it.
11. Not enough time.
12. The assessment process was chaotic. We had to wait for a scanner to become available. There was not enough time. Learning the scanning process took a lot of time and energy on the part of the students.
13. There was not enough time to practise, and we need more practice to use the scanner properly. It is difficult to obtain a scan without grey blocks. Disadvantage: making a scan during the examination raised the work pressure. Waiting for a scanner and the scanning itself took too much time during the examination.
14. It would have been more comfortable if there had been more supervision whilst we were practicing with the scanner. Making a good scan is difficult.
The examination was chaotic because we had to wait for the scanner and had to make a temporary preparation in the meantime.
15. During practising, the scanner has a major advantage: it helps one to improve (one's preparation) on very specific points. The disadvantage is that the scanner is very strict and assesses details at one hundredth of a millimetre (e.g. 0.03 mm) which cannot be seen with the naked eye.
16. Too little time to practise.
17. The examination was chaotic. The scanner does add value to the learning process.
18. The examination is not anonymous if you have to send an e-mail with your self-assessment, although this is not such an issue with the scan results as the scanner makes anonymous assessments.
19. It sometimes took a very long time before you could make a scan.
20. Working with the scanner is insightful, which makes practising easier. The problem is that there are not enough scanners, which meant we had to wait a long time for one to become available.
21. More time should be allocated to practising.
22. Having more scanners would be nice, but this is not necessary. Not all instructors worked equally well with the scanner. It would be better if all instructors had a thorough understanding of the scanner (better supervision whilst learning to work with the scanner). Even if you scan your work correctly, the scan still contains noise.
23. It is nice that the scanner measures more objectively than an instructor but I'm afraid the scanner is too strict. The scanner says that I have not removed sufficient tissue but such a small amount cannot be seen with the naked eye. The same applies to the taper criterion. I now have the impression that students will not pass the examination if it is (only) assessed by the scanner instead of by an instructor.
24. Because there is too little time to practise and there are not a lot of scanners (and scanning takes a while), I have only been able to make a few scans as feedback.
25. Could make only a few scans during the practice sessions.
26. Waiting for a scanner when another student is scanning takes a long time. The examination procedure is less clear than it usually is.
27. I have not been able to practise sufficiently with the scanner. We often had to wait for a scanner; this took time off the practical work. The 0% norm is too strict, because the scanner sees things that you cannot see with your own eyes or feel with a probe. It would therefore be better if students were allowed to improve their preparations during the examination after making a scan.
28. Had to wait too long for a scanner. As a suggestion for subsequent practicals: if possible, make more scanners available and agree on a maximum time to use the scanner.
29. Had to wait a long time for a scanner during the examination.
30. prepCheck allows for a consistent and highly accurate assessment of the preparation. Instructors sometimes say something is "just about right"; prepCheck is much clearer in this respect.
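The conclusions above report that prepCheck increases agreement between instructor assessments, without restating here which agreement statistic was used. As a hedged illustration only, the sketch below computes Cohen's kappa, one standard chance-corrected agreement measure, for two instructors' pass/fail judgements of the same preparations; the judgement data are invented for the example and are not from the study.

```python
import numpy as np

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items."""
    a, b = np.asarray(rater_a), np.asarray(rater_b)
    categories = np.union1d(a, b)
    po = np.mean(a == b)                           # observed agreement
    pe = sum(np.mean(a == c) * np.mean(b == c)     # agreement expected by chance
             for c in categories)
    return (po - pe) / (1.0 - pe)

# Illustrative pass/fail judgements (1 = sufficient) for ten preparations:
print(cohens_kappa([1, 1, 0, 1, 0, 1, 1, 0, 1, 1],
                   [1, 0, 0, 1, 0, 1, 1, 1, 1, 1]))  # ~0.52
```

A kappa near 1 would indicate near-perfect calibration between instructors; values well below 1 correspond to the disagreement the conventional method showed.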
Different Effect of Sliding Mode Controller on DC-DC Boost Converter with Output Voltage Variation Analysis

Solar, wind and PV sources are now widely used to meet growing power demand. The power obtained from such sources must be converted using AC-DC or DC-DC converters, and a boost converter is used to step this voltage up. Because these sources are variable by nature, it is difficult to maintain a constant output under all conditions. The boost converter converts a low input voltage into a higher output voltage. By applying sliding mode control to the converter, the problem of obtaining a constant output under variable conditions can be solved. Many schemes exist for implementing this control, but difficulties arise when the capacitance and inductance parameters change as the load varies. This work addresses that problem through a comparative analysis of a sliding mode controller against a PID controller in terms of transient characteristics under load and other parameter variations; both controllers are modelled and evaluated through MATLAB simulations. The input voltage is taken as a DC source under variable conditions, and the modelling is done in MATLAB/Simulink.

I. INTRODUCTION
DC-DC switch-mode converters are power electronic circuits used to convert a voltage from one level to another. They are widely used in electronic systems such as DC drive systems, smart grids and distributed power supplies. DC-DC power converters are now commonly used in energy generation from solar and wind systems. The power drawn from these sources is converted with AC-DC and DC-DC converters, but it is variable in nature, so controlled converters are needed for continuous production and a regulated output voltage. DC-DC switching converters are among the most widely used power electronic circuits: they convert an unregulated voltage at one level to a regulated voltage at another level by switching, and the resulting unfiltered-output issue can be addressed through sliding mode control (SMC). There are many ways to implement such control with the converter, but difficulties arise when the capacitance changes and the load varies. The benefit of these converters in an electrical system is that they regulate the output despite variations in the parameters and the load; the output voltage of the converter is controlled by adjusting the duty cycle of the semi-controlled switching device. The idea of this work is to control the duty cycle in response to changes in the several parameters of the system by using SMC. A method has been developed to control the duty cycle of the converter and to increase system effectiveness over all parameter ranges. This paper presents the performance of a sliding mode controlled DC-DC boost converter compared with a PID controller on the basis of load and other parameter variations. Both controllers are modelled and evaluated by MATLAB simulation.

A. Boost Converter
A boost converter is a power converter that converts a low input voltage into a higher output voltage. It is a switched-mode power supply containing at least two semiconductor switches (a diode and a transistor) and at least one energy-storage element. The inductor current must be the same at the start and end of each commutation cycle, so the net change in current over one cycle is zero:

ΔI_L,on + ΔI_L,off = 0 .... (5)
Substituting the on-state and off-state inductor current changes from (2) and (4) into (5) gives

V_o / V_i = 1 / (1 - D) .... (6)

This shows the relation between the output voltage and the duty cycle: the output voltage is controlled through the duty cycle D, where D varies between 0 and 1.

II. CONTROLLER DESIGN METHODOLOGY
The main aim of control in a DC-DC boost converter is to hold the output voltage V_0 at a given reference value V_ref. The central requirement is to regulate the output in the presence of input voltage deviations and load disturbances. A sliding mode controller is proposed to regulate the output voltage V_0 of the converter. The sliding mode voltage controller is attractive because of its high performance and easy implementation. The concept of sliding mode control involves designing a sliding surface within the state space and a control law that directs the system trajectory onto that surface; the sliding surface is a predefined surface in the state-space region. To design a sliding mode controller (SMC) for the converter, take the output voltage as the controlled variable; the state variables of the full-order sliding mode controller can be expressed by the vector

X = [x1, x2, x3]^T = [V_ref - βV_0, d(V_ref - βV_0)/dt, ∫(V_ref - βV_0) dt]^T .... (7)

where x1, x2 and x3 are the voltage error, the rate of change of the voltage error, and the integral of the voltage error; V_ref is the voltage reference, V_0 is the sensed output voltage, β is the gain (proportion) of the sensed output voltage, and MOSFET M1 is the control switch. In this system, a conventional sliding mode control rule uses the switching function

u = 1/2 (1 + sign(S)) .... (8)

where S is the instantaneous state-variable trajectory, represented as

S = a1 x1 + a2 x2 + a3 x3 .... (9)

Here a1, a2 and a3 are control parameters known as sliding coefficients. For the SMVC boost converter, the derivations are as illustrated. Equating dS/dt to zero gives the equivalent control function, which is mapped into the duty-ratio control, given the relationships between the ramp and the control signal in (11). The control signal in (11) can be computed using simple summing and gain circuits, and the circuit parameters can be calculated from the known values by proper selection of a1, a2 and a3.

V. SUMMARY
This section presents the simulation analysis, using data analysis and output waveforms, of the boost converter model implementing the sliding mode controller and the PID controller. The conclusion of this section is that the SM controller is better suited than the PID controller to obtaining the desired values, and that a desired output can be obtained by changing the control parameters. Further, the transient settling time is around 3.5 ms and is independent of the magnitude and direction of step load changes and of the operating input voltage. This demonstrates the strength of SMC in terms of robustness of dynamic behaviour at different operating conditions.

VI. CONCLUSION
This research presents an analysis of two controllers: a PID (Proportional Integral Derivative) controller and an SMC (Sliding Mode Controller). The designed controllers feed a resistive load, and under the different conditions analysed the integrated converter and controllers attained zero steady-state error. The work was carried out under two different conditions: first with the converter operating from 200 V to 400 V, and second operating from 350 V to 400 V. The system was designed in MATLAB, and the rise time, settling time and overshoot were calculated from the graphical voltage response obtained in MATLAB, together with the comparative simulation output waveforms of the two controllers.
The results show that the system works more effectively at a higher voltage rating with a reduced output current. Comparing the results of the PID and sliding mode controllers, the sliding mode controller is more accurate in terms of stability and its settling time is shorter than that of the PID controller. The data analysis table shows all the specific results for the sliding mode controller compared with the PID controller.
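To make the duty-ratio relation in (6) and the sliding function of (8)-(9) concrete, the following minimal Python sketch is offered as an illustrative stand-in for the paper's MATLAB/Simulink model. All component values are chosen arbitrarily for the example rather than taken from the paper, and the sliding-function gains are shown only as a standalone helper, since closed-loop gain tuning is beyond this sketch.

```python
import numpy as np

# Illustrative component values; not taken from the paper.
Vin, D = 200.0, 0.5            # input voltage [V], fixed duty ratio
L, C, R = 1e-3, 470e-6, 20.0   # inductor [H], capacitor [F], load [ohm]
fs = 50e3                      # switching frequency [Hz]
dt = 1.0 / fs / 200            # 200 integration steps per switching period

iL, vO = 0.0, Vin              # at t=0 the diode precharges the output to ~Vin
for k in range(int(0.2 * fs) * 200):     # simulate 0.2 s of operation
    on = (k % 200) < D * 200             # PWM: switch closed for D of each period
    if on:    # switch closed: inductor charges, load runs off the capacitor
        diL, dvO = Vin / L, -vO / (R * C)
    else:     # switch open: inductor current feeds the output stage
        diL, dvO = (Vin - vO) / L, iL / C - vO / (R * C)
    iL = max(iL + diL * dt, 0.0)         # diode blocks negative inductor current
    vO += dvO * dt

print(f"simulated Vo = {vO:.1f} V, ideal Vo = Vin/(1-D) = {Vin / (1 - D):.1f} V")

def sliding_function(x1, x2, x3, a=(1.0, 1e-4, 50.0)):
    """S = a1*x1 + a2*x2 + a3*x3 over the voltage error, its rate of change and
    its integral, as in (7) and (9); the switch command is u = 1/2 (1 + sign(S)),
    as in (8). The gains here are arbitrary placeholders."""
    return a[0] * x1 + a[1] * x2 + a[2] * x3
```

Running the sketch confirms (6): with D = 0.5 the simulated output settles near twice the input voltage, and in a closed-loop implementation the sign of S would drive the switch instead of the fixed PWM pattern.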
ALMA Memo #587
Inference of Coefficients for Use in Phase Correction I

We present a Bayesian approach to calculating the coefficients that convert the outputs of ALMA 183 GHz water-vapour radiometers into estimates of path fluctuations which can then be used to correct the observed interferometric visibilities. The key features of the approach are a simple, thin-layer, three-parameter model of the atmosphere; using the absolute measurements from the radiometers to constrain the model; priors to incorporate physical constraints and ancillary information; and a Markov Chain Monte Carlo characterisation of the posterior distribution, including full distributions for the phase correction coefficients. The outcomes of the procedure are therefore estimates of the coefficients and their confidence intervals. We illustrate the technique with simulations, showing some degeneracies that can arise and the importance of priors in tackling them. We then apply the technique to an hour-long test observation at the Sub-Millimetre Array and find that the technique is stable and that, in this case, its performance is close to optimal. The modelling is described in detail in the appendices and all of the implementation source code is made publicly available under the GPL.

INTRODUCTION
The performance of millimetre and sub-millimetre wave interferometers is often limited by the fluctuation of the properties of the Earth's troposphere along the lines of sight of each of the elements of the interferometer. If not corrected, these fluctuations lead to a fluctuating delay to each element and a subsequent loss of correlation (and therefore sensitivity) and a limit on the maximum usable baseline length. Some recent simulations of the effect of these fluctuations on ALMA science have been presented by us (Nikolic et al. 2008) and other authors (e.g., Asaki et al. 2005). ALMA plans to correct for these fluctuations by a combination of two techniques:
(i) Fast-switching, that is, interleaving science observations with observations of nearby phase calibrators that allow antenna phase errors to be solved for
(ii) Water-vapour radiometry, that is, observing the emission of atmospheric water vapour along the line of sight of each element of the array, and inferring and correcting from these observations the fluctuating path to each element
The current plan for ALMA is that fast-switching will be used with a cycle period of between around 10 and 200 seconds, while fluctuations on timescales from 1 second up to the fast-switching timescale will be corrected by the water vapour radiometry technique. The actual fast-switching time that will be used will depend on the weather conditions and the scientific requirements of each project, as well as on the achieved accuracy of phase correction with the water-vapour radiometers. Furthermore, we expect to be able to use the WVRs to correct for the phase transfer errors, that is, the errors that arise due to the different lines of sight to the phase calibration and science sources. One of the key requirements for radiometric phase correction is a prescription for converting the observed sky brightnesses in the neighbourhood of the 183.3 GHz water vapour line, as measured by the WVRs, into a path delay that can be used to correct the observed visibilities.
Any such prescription is complicated by the significant pressure-broadening and the high cross section of this line (it can be close to saturated even in the dry conditions of the ALMA site), which means that the optimum conversion strategy is quite a sensitive function of the prevalent atmospheric conditions. In this paper we assume that over the short timescales of interest (i.e., less than approximately 200 seconds) the differential path fluctuation between two telescopes can be predicted by a constant times the differential fluctuation of the sky brightness between the same telescopes. The constant of proportionality is the phase correction "coefficient", and we give in this paper an initial prescription for calculating these coefficients (Section 2). We analyse this prescription with simulations (Section 3) and then apply it to a test observation at the Sub-Millimetre Array (Section 4).

METHOD
Our aim is to find estimates of, and confidence intervals on, the coefficients to be used for phase correction. We denote the coefficients as dL/dT_B,i, where L is the excess path to the telescope and T_B,i are the sky brightnesses as measured by the four channels of the WVRs. We use this notation in general, although this differential only makes mathematical sense when there is a known and invertible function which connects the cause of fluctuations in L with T_B,i. In this paper we assume that the fluctuations in path are caused by fluctuations of the total water vapour column only, so that the fluctuating water vapour column is also the sole cause of fluctuations in T_B,i. The approach we take in this paper is to construct a physical model of the atmosphere with a number of unknown parameters and to use some observables and basic physical considerations to constrain the possible values of the parameters. Once the distribution of possible values of the model parameters is known, we can use the same model to compute the distribution of phase correction coefficients. The physical model employed in these initial studies is extremely simple:
• We assume water vapour is the only cause of opacity and path fluctuation
• We assume all the water is concentrated in a thin layer at a single pressure and temperature
• We make the plane-parallel approximation for computing effects due to changes in elevation of the antenna
This means that the model has only three unknown parameters, namely the total water vapour column (c), temperature (T) and pressure (P). The reason for using such a simple model is that it allows very extensive computational analysis while containing the basic ingredients which we know must influence phase correction. As we gain experience with simulations and observations at the ALMA site, we intend to extend this model in directions which show an actual improvement in the inference of the phase correction coefficients. For the time being we will assume that the only true observables we have available to constrain the model are the four absolute sky brightness temperatures measured by the WVRs. In addition to the direct observables, we have some constraints on the possible values the model parameters can take; these are called priors. In principle the priors are a joint probability function of all of the model parameters, but in the present paper we simplify by taking them to be separable into a product of functions of one parameter only, i.e., p(c, T, P) = p(c) p(T) p(P). The method of solution of this problem that we describe here is the standard Bayesian Markov Chain Monte Carlo approach.
As usual, the inference problem is described by the Bayes equation (see, for example, Jaynes 2003; Sivia & Skilling 2006):

p(θ|D) = p(D|θ) p(θ) / p(D),

where the symbols have the following meaning:
θ is the vector of model parameters (in this case {c, T, P})
D is the observed data (in this case the sky brightness temperatures observed by the WVR)
p(θ) is the prior information (in this case constraints on model parameters as mentioned above)
p(D|θ) is the likelihood, i.e., the probability of observing the data we have given some model parameters θ
p(D) is the so-called Bayesian evidence, that is, a measure of how good our model is at describing the data
p(θ|D) is the posterior distribution of model parameters

Likelihood
The computation of the likelihood of an observation given model parameters can be factored into three stages:
(i) Calculation of the sky brightness given the model parameters
(ii) Calculation of the temperatures recorded by the WVRs given a sky temperature. Here we model the frequency response of the units and the coupling to the sky.
(iii) Calculation of the probability of the observed data given the model temperatures obtained in the previous step
The sky brightness is computed using a simplified model, in which we assume that the only relevant contributors to the atmospheric opacity in this band are the water vapour line at 183 GHz and the water-vapour pseudo-continuum. We assume the water vapour line has a Gross line shape with parameters derived from the HITRAN database entry (Rothman et al. 2005) and suitable corrections for pressure and temperature, as detailed in Appendix A. The parametrisation that we use for the pseudo-continuum follows closely that in the program am by Paine (2004) and is also given in Appendix A. With the opacity calculated, the sky brightness can be computed using simple radiative transfer. The resulting sky brightness temperature is shown for a range of conditions in Figure 1, illustrating the change in the shape of the water vapour line with pressure, temperature and total water vapour column. The three model parameters are the total water vapour column (c), temperature (T) and pressure (P). The water vapour column parameter is taken to refer to the zenith column; for the cases when we investigate non-zenith measurements, we assume that this parameter scales according to the plane-parallel approximation while the other parameters remain unchanged with a change in elevation of the telescope. With the sky brightness known, the temperature seen in each of the WVR filters can be calculated. In the present study we assume the WVRs are double-sideband (as the production units on ALMA will be) and that their filter set corresponds to the prototype filter set. The reason for using the prototype filter set definition is that later we will analyse a sample test data set collected with the prototype radiometers at the SMA. The prototype filter set is shown in Figure 2, which also shows for contrast the filter set of the production ALMA WVRs. In this study we assume the filters are perfectly sharp and at their design centre frequencies and bandwidths. The average sky brightness across each filter sideband is calculated using five-point Gauss-Legendre quadrature (e.g., Abramowitz & Stegun 1964), which provides reasonable (but, since it is not an adaptive algorithm, non-uniform) accuracy and is extremely efficient, since the sky brightness needs to be calculated at only five frequencies per filter sideband.
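As an illustration of this quadrature step, the short sketch below (hypothetical code, not the memo's implementation; the band edges and the stand-in brightness spectrum are invented for the example) averages a spectrum over one sideband with the five-point Gauss-Legendre rule from numpy:

```python
import numpy as np

def band_average(f, nu_lo, nu_hi, npoints=5):
    """Average f(nu) over [nu_lo, nu_hi] by Gauss-Legendre quadrature,
    so f need be evaluated at only `npoints` frequencies per sideband."""
    x, w = np.polynomial.legendre.leggauss(npoints)   # nodes/weights on [-1, 1]
    nu = 0.5 * (nu_hi - nu_lo) * x + 0.5 * (nu_hi + nu_lo)
    # Integral over the band divided by its width: the 0.5*(nu_hi - nu_lo)
    # Jacobian cancels against the 1/(nu_hi - nu_lo) of the average.
    return 0.5 * np.dot(w, f(nu))

# Smooth stand-in for the sky brightness spectrum near the 183.3 GHz line:
Tb = lambda nu: 290.0 * (1.0 - np.exp(-1.0 / (1.0 + ((nu - 183.3) / 2.0) ** 2)))
print(band_average(Tb, 184.0, 185.0))   # mean brightness over a 1 GHz sideband
```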
The second effect which needs to be taken into account at this stage is the non-perfect coupling of the radiometer to the sky. Our initial analyses of sky-dip measurements during this test campaign (to be published subsequently) suggest that the coupling was about 0.91 and that the termination temperature of the parts of the beam that did not reach the sky was about 265 K. We therefore use these values in the analysis of the data from the SMA later (Section 4); for the simulations shown in Section 3, however, we assume perfect coupling for simplicity. The results at this stage of the computation are the four temperatures, T_B,i, seen by the WVR as functions of θ ≡ {c, T, P}. Given these parameters θ, what is the probability of observing a set of temperatures T*_B,i? This is the probability that we denote by the likelihood function L, and it is governed by the instrumental effects within the radiometer. For the present analysis, the dominant source of error is the uncertainty in the absolute calibration of the radiometers. The production radiometer specification requires this error to be smaller than 2 K at all times; if the underlying errors were normally distributed (which may be a reasonable approximation), that would correspond to an underlying σ of less than 1 K. We do not, however, know at this time precisely what the final distribution will be, and furthermore it is likely that the calibration errors will be correlated between the filter outputs. Nevertheless, for simplicity we presently take the errors to be normally and independently distributed, so that the likelihood function takes the well-known form

L = p(D|θ) ∝ ∏_i exp[ -(T_B,i(θ) - T*_B,i)² / (2 σ_T,i²) ],

where we take σ_T,i = 1 K. When we better understand the calibration uncertainties of the WVR units, it will be important to incorporate this information into the above equation. (By comparison, the thermal noise in one second of integration time will be below 0.1 K.)

Priors
Besides the sky brightness observed by the WVRs, we have (and will have) some constraints on the model parameters that are the result of physical considerations or are derived from independent past observations. For the purposes of this paper we consider three different priors, shown in Table 1, with the aim of illustrating the effects they have on the final results. The priors are specific to the Mauna Kea site rather than the ALMA site, since they will also be used for the analysis of test data collected at Mauna Kea; the conclusions derived from them are expected, however, to transfer directly to the ALMA site.

[Table 1, row 3 (pressure constraint): p(c) = 1 for 0 mm < c < 5 mm, 0 otherwise; p(T) = 1 for 260 K < T < 280 K, 0 otherwise; p(P) = 1 for 570 mBar < P < 590 mBar, 0 otherwise.]

The first prior we consider (number 1 in Table 1) is an extremely relaxed prior which puts very loose constraints on the pressure and temperature of the water vapour layer:
(i) Uniform probability that the pressure is between 100 and 650 mBar. Since the mean pressure at the peak of Mauna Kea is 605 mBar, this prior only assumes that the water vapour is not extremely high in the atmosphere.
(ii) Uniform probability that the temperature is between 200 and 320 K. Clearly this is a much wider range than typical tropospheric temperatures at altitudes where there is significant water vapour.
(iii) Uniform probability that the zenith water vapour column is between 0 and 5 mm. This uninformative constraint is used for all of the other priors too.
Since this prior is less informative than the constraints we will have even without any ancillary measurements at the site, it is used to illustrate the degeneracies present if no priors at all are applied.
The second prior we consider (number 2 in Table 1) is designed to be representative of the information on water vapour we might have with a basic understanding of the site but without any sophisticated ancillary measurements. In this prior we assume we know the temperature of the water vapour layer to within 20 K and the pressure of the layer to within 80 mBar. This prior is used to analyse the test data from Mauna Kea. Finally, the third prior we consider (number 3 in Table 1) is used as an illustration of the improvement in accuracy that can be obtained with a tight prior on one of the parameters. In this case we still assume that we know the temperature to within 20 K, but that we know the pressure of the water vapour layer to within 20 mBar instead of the 80 mBar assumed in prior 2. Such a constraint on the pressure of the water vapour layer might be derived from, for example, a determination of its height using one of the techniques described below.

Markov Chain Monte Carlo
With the expressions for the likelihood and the priors described above, we have the necessary information to compute the Bayes equation (Eq. 1). In general this is computationally expensive (see, for example, MacKay 2003) because of the large volume of parameter space that must be characterised in order to determine the maximum of p(D|θ)p(θ) and the numerical value

p(D) = ∫ p(D|θ) p(θ) dθ.

The approach we take is standard Markov Chain Monte Carlo (MCMC) using the Metropolis et al. (1953) algorithm (for a tutorial review, see also Neal 1993). In this approach a chain of points in the parameter space is calculated such that the next point in the chain is found by proposing a new point by random displacement from the current point and calculating the relative likelihood of the two points. If the new point is more likely, it is accepted onto the chain; if it is less likely, it is accepted with a probability determined by the ratio of the likelihoods of the current and new points. The proposal distributions we use in this paper are Gaussian with σ_c = 0.001 mm, σ_T = 0.2 K and σ_P = 0.5 mBar. We use the implementation of the MCMC algorithm in the BNMin1 library (Nikolic 2009). By construction of the Metropolis algorithm, the density of points of the chain in a volume of parameter space is an estimate of the posterior probability p(θ|D). This straightforward approach does not, however, allow estimation of p(D) on its own, and so it is not possible to analyse the relative benefits of different models. There are, however, techniques available for estimation of p(D), and we intend to implement these in the future to allow proper model comparison.
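To make the sampling step concrete, the sketch below implements the Metropolis algorithm for the three-parameter model with box priors and the Gaussian likelihood above, using the quoted proposal widths. It is a hedged illustration rather than the memo's BNMin1 implementation: in particular, the forward model mapping (c, T, P) to the four channel temperatures is replaced by an invented linear stand-in, whereas the memo uses the full radiative-transfer calculation, and the prior bounds are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_post(theta, T_obs, forward, lo, hi, sigma=1.0):
    """Log posterior: box priors times the Gaussian likelihood (sigma = 1 K)."""
    if np.any(theta < lo) or np.any(theta > hi):
        return -np.inf
    r = forward(theta) - T_obs            # residuals of the four channel temps
    return -0.5 * np.sum((r / sigma) ** 2)

def metropolis(T_obs, forward, theta0, lo, hi, nsamp=50000):
    step = np.array([0.001, 0.2, 0.5])    # proposal widths: mm, K, mBar
    theta = np.asarray(theta0, float)
    lp = log_post(theta, T_obs, forward, lo, hi)
    chain = np.empty((nsamp, 3))
    for i in range(nsamp):
        prop = theta + step * rng.standard_normal(3)
        lp_prop = log_post(prop, T_obs, forward, lo, hi)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept rule
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

# Invented linear stand-in for the radiative-transfer forward model:
A = np.array([[180.0, 0.3, 0.02], [130.0, 0.25, 0.02],
              [80.0, 0.2, 0.015], [40.0, 0.1, 0.01]])
forward = lambda th: A @ (np.asarray(th) / np.array([1.0, 270.0, 580.0]))
lo, hi = np.array([0.0, 260.0, 570.0]), np.array([5.0, 280.0, 590.0])
truth = np.array([1.0, 270.0, 580.0])
chain = metropolis(forward(truth), forward, [1.2, 272.0, 575.0], lo, hi)
# Marginalised posterior of the water column, as in the memo's figures:
hist, edges = np.histogram(chain[:, 0], bins=50)
```

Marginalising a parameter is, as the text notes, just a matter of ignoring the other columns of the chain before histogramming.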
Marginalisation and calculating the dL/dT_B,i
We use the information contained in the Markov chains in two ways: we marginalise and histogram the points to make estimates of the model parameters; and, for each point in the chain, we calculate the phase correction coefficients dL/dT_B,i. The marginalisation of Markov chains in directions parallel to parameter axes is trivially accomplished by simply ignoring those parameters; subsequent computation of histograms is also easily done (we use the numpy.histogram routine for this). Computation of the coefficients dL/dT_B,i is more involved, as new physics must be introduced to compute the delay introduced by the water vapour layer. In the interest of simplicity, we split this calculation into two parts: calculation of the non-dispersive and of the dispersive path delay.

We compute the non-dispersive path delay using the Smith-Weintraub equation as described in Appendix B1, taking into account the temperature and the pressure of the water vapour. As we are interested here in the effects of water vapour only, we do not consider the effect that a change in temperature would have on the refractive index of the dry air. The dispersive delay calculation is more complicated, as well as dependent on the observing frequency (unlike the rest of the discussion presented in this memo). However, at the frequencies of relevance to the SMA test data presented later, the dispersive effect is relatively small, i.e., around 5%. We therefore only calculate an adjustment using the ATM model of Pardo et al. (2001), which is then used to scale the non-dispersive path. The details of this calculation are presented in Appendix B2; the conclusion is that we scale up the non-dispersive phase coefficients by a factor of 1.05 to take into account the dispersion at 230 GHz. With the path delay calculated, we compute the coefficients dL/dT_B,i by making a small perturbation to the parameter c, i.e., the quantity of water vapour, and computing the differential as a finite difference.

SIMULATION
In this section we present the analysis of a simulated single observation of the sky brightness with a WVR. The goals of this section are to illustrate the outputs of the technique described above, to show the effects and the importance of the priors, and to illustrate the approximate accuracy with which it will be possible to predict the correction coefficients dL/dT_B,i from inputs consisting of the sky brightness only. The atmosphere from which we simulate our data point has 1 mm of water vapour toward the zenith at a temperature of 270 K and a pressure of 580 mBar. We assume the observation is toward the zenith. As described above, we simulate the sky brightness measured by the filter set of the prototype WVRs. We find in this case that the simulated temperatures are 194.8, 142.6, 90.7 and 47.5 K for the inner to outer channels respectively. These simulated sky temperatures are then used for the subsequent inference as the observed temperatures (T*_B,i). The first inference we present uses the weak priors from row 1 of Table 1. Recall that in this case we assume weak constraints on the model parameters, including allowing the pressure to be higher than the ground-level pressure; hence this case illustrates the inference when essentially no prior information is supplied. The posterior distribution of the model parameters for this case is shown in Figure 3. As in the other figures of the model parameter posterior distribution, we present these results as the marginalised posterior distributions of each of c, T and P, and as the joint distributions of c vs P, c vs T and T vs P with the remaining (third) parameter marginalised. The marginalised probabilities in the top row of Figure 3 show that in this case we can place only a relatively weak constraint on the amount of water vapour (to about 5%) and very poor constraints on both the pressure and the temperature of the water vapour. It can also be clearly noted that the posterior distributions are not well approximated by a Gaussian distribution; for example, the posterior distribution p(c) shows a long tail toward higher water columns. The reason for the poor inference can be understood from the second row of Figure 3, which shows the joint distributions of each combination of the model parameters.
Considering first the plot of the joint probability p(T, P) in the right panel of the lower row of Figure 3, we can see that the retrievals of the pressure and temperature are almost exactly degenerate. In other words, if a certain combination of pressure and temperature explains the observed data well, then a higher temperature with a proportionally higher pressure also describes the observation sufficiently well. This degeneracy between pressure and temperature also affects the accuracy of the retrieval of the water vapour column, as shown in the left and middle panels of the lower row of Figure 3. There we see that the extreme values of pressure and temperature that are permissible due to the degeneracy give rise to a tail of likelihood toward higher values of the retrieved water vapour column. The next result we describe combines the same simulated data point with the priors in the second row of Table 1. These are the priors that we might have without significant ancillary information, i.e., knowing the temperature of the water vapour layer to within 20 K and its pressure to within 80 mBar. The posterior distribution from this inference is shown in Figure 4. It can be seen from the top row of this figure that these results are qualitatively different from the inference with very non-informative priors. In this case the inference of the water vapour is well approximated by a Gaussian with a full-width-half-maximum of about 0.012 mm, and the entire distribution is within 0.02 mm of the model value. The inferences of the temperature and pressure are still poor, however; in fact, it can be seen that their distributions fill almost entirely the space allowed by their priors, indicating that the priors in this case are providing important information. The lower row of Figure 4 again provides an explanation for the marginalised distributions of the model parameters. The pressure-temperature joint distribution again shows the degeneracy, which explains the poor retrieval of each of those model parameters individually. The water column-pressure and water column-temperature distributions still show spreads, but with two important differences:
(i) The priors mean that the range over which pressure and temperature can vary is much smaller, therefore leading to smaller errors in the retrieved water column
(ii) The joint distributions are highly elongated along the vertical axes, which means a relatively large change in temperature or pressure is required to cause an error in the retrieved water column
Condition (ii) is somewhat specific to the simulated point in parameter space, and will not hold as well for a general combination of filter centres/widths and observed sky brightnesses. Since the inference shown in Figure 4 is constrained to a reasonable volume of the parameter space, we can compute the dL/dT_B,i to find how well we can predict the phase correction coefficients. The results are shown in Figure 5 as the marginalised distribution of each of the coefficients (upper part of the figure) and as the joint distribution of each pair of coefficients (lower part). It can be seen in the upper part of this figure that the posterior distributions are non-Gaussian, in fact almost square, and of width of about 10%. This large spread is probably dominated by the uncertainty in the retrieved temperature of the water vapour, which influences its refractive index (see Equation B3).
The joint distributions plotted in the lower part of Figure 5 show that in this case the errors on the inference of the coefficients are very highly correlated, with a similar correlation for each pair of the channels. This means that it is unlikely that making use of all of the radiometer channels simultaneously would reduce the error in phase correction due to the uncertainty in the inference of the coefficients. Under different conditions, and perhaps with additional observational data, this may however be possible. The last result that we show in this section is an inference with a tight prior on the pressure (or, equivalently, the height) of the water vapour layer, i.e., 570 mBar < P < 590 mBar, but with the prior on temperature as before. The posterior distribution of the model parameters for this case is shown in Figure 6. We again find that the results of the inference are qualitatively different, primarily in that the posterior distribution of the temperature of the water vapour is now approximately Gaussian with a full-width-half-maximum of about 8 K, significantly less than its prior range of 20 K. The posterior distribution for the pressure, however, is, as expected, completely dominated by the prior and simply flat over its prior range. The implication is that a tighter a-priori range on the pressure allows a much better inference of the temperature of the water vapour layer. The posterior distributions of the dL/dT_B,i coefficients for this posterior distribution are shown in Figure 7. The qualitatively different inference of the temperature has an important impact on the quality of the inference of these coefficients too, as can be seen from the approximately Gaussian shape of the coefficients for channels 1 to 3 and from their significantly tighter ranges. For example, the distribution of the coefficient for channel 2 for the inference with prior 2 (Figure 5) is essentially flat with a range of about 0.008 mm/K. With a tight prior on the pressure, however, the distribution of this coefficient is close to Gaussian with a full-width-half-maximum of 0.003 mm/K, clearly substantially better.

[Figure caption (priors as in Table 1): The top row shows the marginalised distributions of each of the model parameters, while the bottom row shows the joint distribution of combinations of two parameters with the third marginalised.]

Discussion
One of the main aims of this section was to illustrate the outputs of an analysis based on the methods described in Section 2, i.e., viewing the problem of estimating the phase correction coefficients as a Bayesian inference problem. The main outcome of such an analysis is the posterior distributions of the phase correction coefficients, such as those presented in Figures 5 and 7. The posterior distributions allow us both to pick a specific coefficient to use for each channel of the radiometer and to obtain confidence intervals for the accuracy of those coefficients. Obtaining such confidence intervals is important since the combination of dynamic scheduling, the wide range of ALMA configurations and the large number of projects will mean that each science observation is likely to be in conditions which are just marginal for its requirements. Therefore, if phase correction does not work as well as expected, it is likely to seriously impact the science aims. We should note, however, that the confidence intervals are calculated using the model we have specified and therefore do not properly capture the possibility that the model itself is simply not very good.
Within the Bayesian framework this can be done through the evidence, or p(D), value, and we intend to implement this functionality in the near future. Besides the distributions of the phase correction coefficients, the outcome of the analysis is also the full joint distribution of the model parameters, which in this case are the water vapour column and its pressure and temperature. It is the availability of this full posterior distribution that makes reliable estimates of the phase coefficients possible. This becomes particularly important when more parameters are added to the problem, as will no doubt be necessary in our case. With more parameters it becomes less and less feasible to pick a single representative point in the parameter space at which to calculate the phase correction coefficients, and making use of the full distribution becomes increasingly important. The model we have been using in this paper is fairly simplistic in that it assumes that the only observables we have are the four absolute sky brightness temperatures observed by the radiometers. What we find is that if we have no further constraints at all, we can estimate the water vapour column to an accuracy of about 5%. We cannot, however, constrain the temperature of the water vapour at all, because it is almost exactly degenerate with the pressure, and therefore we cannot make any estimate of the phase correction coefficients (since they depend on the temperature; see Equation B3). This shortcoming can be improved on by adding even a fairly loose constraint on the pressure and temperature of the water, such as can be derived from historical distributions of water vapour in the atmosphere and its temperature. Such loose constraints should be able to provide estimates of the phase correction coefficients in the 10% range. Even tighter a-priori constraints can provide further improvements, as shown in Figures 6 and 7. With a model as simple as the one we have presented here and no data from the site of ALMA itself, it would be somewhat premature to discuss in detail the improvements in accuracy that particular constraints can make. We note, however, that we will have a substantial amount of ancillary information, such as:
• Ground-level air temperature, pressure and relative humidity at a number of points at the site
• Inference of the vertical temperature profile of the atmosphere at the centre of the array from a commercial O2 line sounder
• A library of vertical profiles of atmospheric parameters from past radiosonde launches
• Meso-scale meteorological forecasts
• Inference of atmospheric parameters from specialised telescope observations such as sky-dips and crossed-beam observations
Inferences from these measurements can be used as a-priori probabilities on the model parameters or, indeed, in some cases as further observables which are analysed simultaneously with the sky brightness measurement. It should be noted, however, that further and better prior information on the temperature and pressure parameters will leave two other sources of uncertainty: errors due to inaccuracies in the model that we use, and limitations in estimating the water vapour column which arise from the calibration accuracy of the WVRs. The best way of tackling these uncertainties may turn out to be to use the observed correlation between changes in the excess path to the telescopes (δL) and the fluctuations of the temperature observed by the radiometers (δT_B,i) as an additional observable that can constrain the models.
We will discuss this approach in the next memo in this series. Finally, we now consider ways in which the model presented above may be improved. Firstly, the model may fairly easily be re-parametrised in terms of the height of the water vapour layer, the temperature lapse rate of the atmosphere and the ground-level pressure and temperature, instead of the current parametrisation in terms of the pressure and temperature of the layer itself. This would allow us to easily take into account the measured ground-level temperature and pressure and the further information that we will have on the temperature lapse rate (through historical radiosonde measurements and oxygen line profiling) and the water vapour height (through historical radiosonde data and specialised telescope scans). The second improvement is to consider the effect of a thick layer of water vapour, perhaps with an exponential fall-off in water vapour content as a function of height. Assessing the benefits of such improvements to the model will require computation of the evidence value and, ideally, test data from the ALMA site.

APPLICATION TO TEST DATA COLLECTED AT SMA
In this section we analyse an observation taken with the two prototype ALMA water vapour radiometers (Hills et al. 2001) at the Submillimetre Array (SMA) on Mauna Kea. The observation was made on 17 February 2006 and consisted of an approximately hour-long track of a bright quasar. The interferometric visibility between the two SMA antennas with the water-vapour radiometers was recorded together with the sky brightnesses seen by the radiometers. This observation was part of the testing campaign of the prototype radiometers at the SMA, which will be described in detail in a separate paper. The effective LO frequency of the observation was 235 GHz, and both the upper and the lower sideband were recorded; in the present analysis we use only the upper sideband, with an on-sky frequency of 240 GHz. The water-vapour radiometers were read out with an integration time of 1 second, while the interferometer was read out at 2.5 seconds. In order to match the two data sets, the radiometer data were re-sampled to 2.5-second time resolution, and a small adjustment to the time-stamps was made to correct for a known timing drift problem. The radiometer data were recorded in already-calibrated form and required no further processing. The interferometric data were converted to a text format by M. Reid of the SMA; subsequently the variation in the observed visibility was transformed to the path fluctuation between the two antennas. The total sky brightness temperatures observed by the four channels of the two radiometers during this observation are shown in Figure 8. It can be seen that the observed brightness increases during the observation, which is a consequence of the decreasing elevation of the source and therefore the increasing airmass as the observation progressed. It can also be seen that the innermost two channels (channels 1 and 2, top row of Figure 8) were almost saturated, indicating significant water vapour along the line of sight. The overall levels of the blue and red curves in this plot allow us to make inferences about the atmospheric conditions at the time of the observation, while the relative fluctuations between the two curves, once multiplied by the phase correction coefficients, should correlate closely with the path fluctuation measured by the interferometric observation.
A retrieval of atmospheric parameters can be made from every one-second integration of each of the two radiometers without significant loss in accuracy since, as mentioned before, the thermal noise in one second is much smaller than the expected calibration error of the radiometers. In practice, however, we expect retrievals will be made rather less often, since variations in atmospheric conditions are likely to be fairly slow at the ALMA site. In this study we present retrievals at three different time resolutions:

[Figure 8 caption: The sky brightness temperatures observed by the two prototype WVRs during tests at the SMA, which was tracking a quasar at the time. Blue is one of the radiometers and red is the other; the four panels represent the four radiometer channels.]

(i) Calculation of the marginalised zenith water vapour at 300 time points during the observation
(ii) A detailed analysis of a single data point in the middle of the observation
(iii) Calculation of the marginalised phase correction coefficients at three points of the observation
We first present a sequence of 300 retrievals from sky brightness temperatures measured at intervals spaced by 25 seconds during the observation. As with the other retrievals in this section, these were made with the priors shown in row 2 of Table 1. We plot these retrievals in Figure 9 by marginalising them to obtain the posterior distribution of the zenith water vapour column and plotting these histograms in colour scale as a function of time. In this plot, therefore, time runs in the horizontal direction, the zenith water vapour column parameter along the vertical, and the colour represents the posterior probability. The first point to note is that because we retrieve the zenith water vapour column, we do not expect to see a large increase in the parameter c as the airmass increases during the observation. The overall range of the water vapour measured in the retrievals is about 5%, indicating that the plane-parallel approximation at least approximately holds and that referencing to the zenith column is a reasonable approach. Secondly, it can be noted that although fluctuations in the water vapour column can be seen, adjacent retrievals (which are separated by 25 seconds in time) generally show similar values, indicating a high degree of stability in the retrieved posterior distribution. The magnitude of the fluctuations seen in the retrieved water column is about 50 µm of water vapour, which corresponds approximately to 300 µm of path fluctuation (using the rough conversion 1 mm water ≈ 6 mm path). As we show later, e.g., in Figure 13, this roughly corresponds to the fluctuations seen in the path by the interferometric measurements. Therefore, over the hour of the observation and a large change in airmass, there is no obvious evidence of instability in the water vapour column retrieval. A more detailed analysis of the retrieval from one set of sky brightness measurements in the middle of the observation is presented in Figure 10, in the same format as the previous plots of the posterior distributions of model parameters. As the line-of-sight water vapour during this observation is significantly higher than in the simulations shown in Section 3, some qualitatively different results are seen.
The first noticeable feature is that in this retrieval the temperature of the water vapour layer is in fact quite well constrained, to a range of around 4 K, in contrast to the results at a 1 mm line-of-sight water vapour column, where the posterior distribution filled the entire prior range of 20 K. This is a direct consequence of the almost complete saturation of the innermost channel of the radiometer, which means that it is in effect measuring the temperature of the water vapour rather than the line-of-sight column. The saturation of the inner channel, however, now also causes a degeneracy between the pressure and the water vapour column parameter, leading to a non-Gaussian posterior distribution for c. The reason for this is that at lower pressure the line opacity becomes highly peaked, but this cannot be detected because of the saturation. The outcome is therefore that at this point in parameter space the water vapour column is somewhat less well constrained while the temperature is better constrained. It should again be noted that the inference errors we present are derived from the model itself and therefore do not account for inaccuracies of the model. We next consider the posterior distribution of the phase correction coefficients for this detailed retrieval, as shown in Figure 11. Because of the high line-of-sight water vapour column, it can be seen that the coefficient for channel 1 is very high, i.e., around 1.1 mm/K. This means that thermal noise of 0.1 K within the radiometer would be sufficient to produce a large random path fluctuation of 110 µm. Furthermore, other effects which are not modelled are likely to cause large errors in the path fluctuations calculated from this channel. Therefore, in this case we do not expect this channel to provide useful data for the phase correction itself. It can also be seen in Figure 11 that although the posterior distributions of the phase correction coefficients are reasonably centrally peaked, they all have a tail toward higher values. This tail is a result of the non-Gaussian inference of the total water vapour column, as seen in the top-left panel of Figure 10. The errors on these coefficients are not dominated by the error on the inferred water vapour temperature because, as discussed above, the saturated inner channel allows this error to be minimised in this case. Finally, it should be noted that the confidence interval of the inferred phase correction coefficients is about 10 to 15% in this case. The final set of inferences we made was to repeat the prediction of the phase correction coefficients at the beginning and at the end of the observation, so that we can examine how they change during the course of the observation. The results of this analysis are shown in Figure 12, with each retrieval on a separate row. It can be seen that the values of all of the coefficients increase as the observation progresses; this is of course due to the increase in airmass. The greatest increase is seen in channel 1, which starts at a median value of about 0.45 and finishes at a median value of about 2.25.

Phase correction
We next examine how well we would have been able to do phase correction at the SMA during this observation if we had used the phase correction coefficient retrievals shown previously. As mentioned previously, the SMA interferometer made it possible to infer the fluctuation of the path between the two antennas of the array on which the WVRs were mounted.
We can also difference the signals between the two radiometers to calculate the differential fluctuation of the sky brightness as seen from the two antennas. These two signals are compared (for each of the four channels of the WVRs) in Figure 13. We hope, of course, that when multiplied by the phase correction coefficient, the sky brightness fluctuations will look very similar to the path fluctuation. The comparison shown in Figure 13 immediately confirms that channel 1 is unlikely to yield a useful phase correction, while channel 2 shows some correlation with the path, and channels 3 and 4 show very high correlation. We next quantify the performance that the phase correction technique would have produced based on the phase correction coefficients inferred in this section. For this we consider both the observation as a whole and three subsets of the observation, namely the first, the middle and the last thirds. The reason for considering the subsets is to investigate whether the change in the phase correction coefficients inferred during the observation (Figure 12) is reflected in improved correction performance. Before processing the data further, we remove from both the interferometric path and the sky temperature measurements a three-minute running mean. This trend removal should approximately model the effect of the offset calibration scheme that ALMA will use in practice. Although we do not present them here, the data without the running-mean removal lead to generally the same conclusions as below. We next assess the potential performance of the phase correction. For these tests we consider phase correction using the fluctuations observed in only one channel at a time. In principle, using a combination of channels rather than a single one should give better performance because of the reduced effect of thermal noise and possibly the averaging out of errors in the inferred phase correction coefficients. We do not pursue this multiple-channel approach further in this paper, but we expect it will be used in practice with ALMA. We calculated the corrected path fluctuation using the inferred phase correction coefficients, and we also find the 'best-fit' phase coefficient, i.e., the coefficient which would have produced the smallest corrected path fluctuation given the data that we have. The results are presented in Table 2 with the following columns:
(i) The root-mean-square of the uncorrected path fluctuation as seen by the interferometer, σ_raw
(ii) The approximate median inferences of the phase correction coefficients for each channel and for each observation subset, dL/dT_B,i, as shown in Figure 12
(iii) The root-mean-square path fluctuation after correction using radiometric data from each channel, σ_r,i
(iv) The phase correction coefficient that would have given the optimum reduction of path fluctuation, Opt(dL/dT_B,i)
(v) The resultant root-mean-square path fluctuation that would have been obtained if the optimum correction coefficient were used, σ_opt,i
As can be seen from the table, channel 1 cannot be used for phase correction in these conditions, as expected from the results already presented. This is evident in the phase correction with the 'optimum', i.e., best-fit, phase correction coefficient, which makes a negligible improvement to the path fluctuation, reducing it from 157 µm to 156 µm. If the phase correction were done with the inferred coefficient of 1.25 mm/K (calculated for the middle of the observation), the path fluctuation would have been drastically increased.
This is due to the large effect of thermal noise in this channel when the line is essentially saturated, together with other effects such as temperature variations of the water vapour. The results for the other three channels show that significant improvements in corrected versus uncorrected path fluctuations can be made based on the radiometer data. Furthermore, it can be seen that the path corrected with our inferred coefficients dL/dT_B,i is close to the path corrected with the best-fit coefficient, i.e., is fairly close to the limit that can be achieved with the present data. The difference between using the inferred and best-fit coefficients results in changes from as little as 1 µm rms in path for some of the data sections up to 7 µm rms in path for the worst combination of section and filter. Some trends can, however, be noticed in the data presented in the table. Firstly, excluding channel 1, which is saturated, the inferred phase correction coefficients systematically underestimate the best-fit coefficients; the only exception is channel 2 in the last third of the observation, which may, however, also be affected by saturation. Secondly, there is evidence that at lower elevations the gap between correction using the inferred and the best-fit coefficients widens, suggesting that refinements to the model may be necessary.

Discussion
The analysis presented in Table 2 shows that the Bayesian inference approach that we described in Section 2 can produce very useful phase correction coefficients even with a very simple atmospheric model. We have shown that the inferred coefficients give a corrected path fluctuation within about five percent of what the optimal coefficient would have given. There are, however, a number of interesting points that these data (and the simulations) raise. Firstly, the confidence intervals for the inference from our model analysis (Figure 12) show a range of about 10%, and furthermore the optimal coefficients found empirically are about 15% higher than the median inferred values. This needs to be compared against the specification for ALMA phase correction, which requires the corrected path fluctuation to stay below the sum of a fixed term and a term proportional to the raw fluctuation. If we interpret the first term on the right-hand side as the budget for the thermal noise contribution to corrected phase fluctuations, then the second term, which is proportional to the raw phase fluctuation, among other effects (e.g., Nikolic et al. 2007), must account for the imperfect phase correction due to errors on the inferred phase correction coefficients. Therefore we need to be able to infer phase correction coefficients to better than 2% accuracy, but the present analysis of the model and the data suggests that we will do significantly worse than that.

[Table 2 caption: Results of the analysis of SMA data from February 17th. Part I of the table shows results for channels 1 and 2; the second part for channels 3 and 4. The UT range column indicates the time range of the data analysed: the first row corresponds to the whole set and the other three rows to respective thirds of the data. The column σ_raw is the RMS of phase fluctuations without correction. The column dL/dT_B,i is the best estimate from the retrieval of the correction coefficient for the i-th channel and for that section in time (see also Figure 12). The column σ_r,i is the RMS phase fluctuation after correction using the estimated coefficient. The column Opt(dL/dT_B,i) is the coefficient which would give the minimum RMS given the radiometer and interferometer data. The column σ_opt,i is that optimum RMS.]
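For concreteness, the residual statistics reported in Table 2 can be reproduced from the time series along the following lines. This is a hedged sketch with invented variable names and an invented example coefficient, not the memo's actual analysis code; it uses the fact that the best-fit coefficient minimising the corrected RMS is the least-squares solution Opt(dL/dT_B) = Σ δL δT / Σ δT².

```python
import numpy as np

def detrend(x, width):
    """Remove a running mean of `width` samples (three minutes in the memo)."""
    kernel = np.ones(width) / width
    return x - np.convolve(x, kernel, mode="same")

def correction_stats(dL, dT, coeff):
    """RMS path fluctuation before/after correction with a given coefficient,
    plus the best-fit coefficient and the RMS it would have achieved."""
    opt = np.dot(dL, dT) / np.dot(dT, dT)              # least-squares Opt(dL/dT)
    return (np.sqrt(np.mean(dL ** 2)),                 # sigma_raw
            np.sqrt(np.mean((dL - coeff * dT) ** 2)),  # sigma_r
            opt,
            np.sqrt(np.mean((dL - opt * dT) ** 2)))    # sigma_opt

# dL: interferometric path fluctuation; dT: differenced WVR channel signal.
# Both are sampled at 2.5 s, so a three-minute running mean is 72 samples:
# sigma_raw, sigma_r, opt, sigma_opt = correction_stats(
#     detrend(dL, 72), detrend(dT, 72), coeff=0.06)   # coeff is illustrative
```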
There are a number of improvements that we are planning that will allow better inference of the phase correction coefficients. One of these is the use of the observed correlation between the change in path and sky temperature, which is likely to allow a significant improvement in the inference accuracy and will be presented in one of the next papers in this series. Secondly, it should be noted that in this case one of the limiting factors on the inference is the absolute calibration of the radiometers. Although we do not think that the a-priori calibration of the radiometers when mounted on the ALMA telescopes will be better than what we assumed here, it will probably be the case that the major sources of calibration inaccuracy will be fixed in time. Therefore we may be able to improve the accuracy of these devices over time by empirical corrections to their calibration.

We have not discussed in detail in this paper the dispersive path delay, in particular how it depends on parameters other than the water vapour column. This can be investigated by more fully featured atmospheric models along the lines of the simple calculations performed in Appendix B2. We have also not considered the 'dry' path fluctuations, that is, path fluctuations due to changes in the refractive index of dry air. It is not presently clear how significant these will be in practice, but data to be collected during the commissioning of ALMA should provide constraints on this.

CONCLUSIONS

We have described an approach for calculation of the phase correction coefficients based on:
• A very simple, essentially minimal, model of the atmosphere with three parameters: water vapour column, pressure and temperature
• Observables which are the four absolute sky brightness temperatures measured by the water vapour radiometers
• Priors on the model parameters based either on basic considerations or ancillary measurements
• Bayesian inference

The approach produces the full probability distributions of the phase correction coefficients, making it easier to assess in a timely way the accuracy of phase correction that can be expected. The simulations we presented in Section 3 show that although there are inherent degeneracies in the retrieval of atmospheric parameters from sky brightness only, with the appropriate use of prior information fairly good retrievals may be obtained.

We have then applied this approach to one test observation taken during the testing of the ALMA prototype radiometers at the SMA. We found that the inferred water vapour column during this observation is stable and its fluctuations are not greater than expected given the observed path fluctuations (Figure 9). In tests of applying the phase correction, we found that good phase corrections can be made by taking representative inferences of the phase correction coefficients and applying them to correct the observed path fluctuation: in fact the typical corrected path fluctuation is within about 5% of the best correction that could be achieved with these data, assuming only one channel is used at a time.

In order for it to be possible to meet the very demanding ALMA specifications, it is clear that we will need to make use of further data and possibly more complicated models. The approach we are currently working on is to incorporate information from the occasional phase calibration observations of the observed correlation between path and sky brightness fluctuations into the inference process.
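As a concrete illustration of the inference scheme summarized above, the following minimal sketch grids the three-parameter model and propagates the posterior to the correction coefficients. It is our own illustration, not the released software; in particular, forward_model is an assumed stand-in for a real atmospheric radiative transfer code, and all names and values are hypothetical:

```python
import numpy as np

def infer_coefficients(t_b_obs, sigma_tb, forward_model,
                       c_grid, T_grid, P_grid, T0, sT, P0, sP):
    """Grid-based Bayesian retrieval: flat prior on the water vapour
    column c, Gaussian priors on temperature T and pressure P, and a
    Gaussian likelihood for the four observed brightness temperatures.
    forward_model(c, T, P) must return (T_B for the four channels, wet
    path). Returns the posterior mean of dL/dT_B per channel (the
    paper quotes medians of the full distributions)."""
    dc = 0.01                                # column perturbation [mm]
    coeffs, logw = [], []
    for c in c_grid:
        for T in T_grid:
            for P in P_grid:
                tb, path = forward_model(c, T, P)
                tb2, path2 = forward_model(c + dc, T, P)
                # Sensitivity of path to each channel's brightness,
                # attributing the fluctuation to the column only.
                coeffs.append((path2 - path) / (tb2 - tb))
                loglike = -0.5 * np.sum(((t_b_obs - tb) / sigma_tb) ** 2)
                logprior = -0.5 * (((T - T0) / sT) ** 2
                                   + ((P - P0) / sP) ** 2)
                logw.append(loglike + logprior)
    w = np.exp(np.array(logw) - max(logw))   # subtract max to avoid underflow
    w /= w.sum()
    return (w[:, None] * np.array(coeffs)).sum(axis=0)
```

Keeping the full set of weighted samples, rather than only the mean, gives the posterior distribution from which confidence intervals such as those in Figure 12 can be read off.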
The source code for all of the algorithms presented in this paper is available for public download under the GNU Public License at http://www.mrao.cam.ac.uk/~bn204/alma/memo-infer.html. The observational data from the SMA are also available at the same internet address.
DPC4 Expression in the Small Intestinal Adenocarcinomas

Background: Small intestinal adenocarcinomas (SACs) are rare malignancies of the alimentary tract with uncertain carcinogenesis. Methods: We investigated the expression of deleted in pancreatic cancer 4 (DPC4) in 188 cases of surgically resected SACs, using tissue microarray technology. Results: Twenty-four of the 188 tumors showed complete loss of Smad4/DPC4 expression in the cytoplasm (score 0; 12.8%). Eighty-four and 31 cases were moderately and strongly positive, respectively (scores 2 and 3; 44.7% and 16.5%, respectively), and 49 cases were focally or weakly stained (score 1; 29.1%). Immunohistochemistry analysis showed that the expression of Smad4/DPC4 was related to an increased risk of lymphatic invasion but not to other clinicopathological features of the tumors (tumor location, differentiation, growth pattern, T stage, direct invasion, vascular invasion, and nodal metastasis). There was no significant association between Smad4/DPC4 expression and patient survival. Conclusions: The present research is the first study to evaluate Smad4/DPC4 expression in a large sample of SACs with clinicopathologic correlation. Future studies should focus on the immunohistochemical and molecular characteristics of SACs to clarify their tumorigenesis.

Specimen selection

Carcinomas arising from the small intestinal mucosa, including the duodenum, jejunum, and ileum, were selected for this study. Carcinomas that extended into the small intestine from neighboring organs of the digestive system, such as those of the stomach, cecum, appendix, ampulla of Vater, or pancreas, were excluded. Tumors located in the serosa or the subserosa of the intestinal wall with no mucosal involvement were considered secondary carcinomas that had metastasized to the small intestine, and were also excluded. A tumor with mucosal involvement, regardless of serosal extension, was characterized as a primary small intestinal lesion. In total, 195 specimens of surgically resected SACs were gathered from the surgical pathology departments of 22 hospitals. The histologic features of all specimens were reviewed by two pathologists (S.-M. Hong and G.S. Yoon). The patients' biological data and personal information (sex, age, diagnoses of previous or present malignancies, additional previous or present modalities of treatment such as radiation or chemotherapy, latest date of follow-up, and survival status) were collected through review of the medical records. Histologic data were obtained from pathologic reports and microscopic review. The tumor location, size, growth pattern, and date of operation were collected from the patients' pathologic reports. Microscopic features, including differentiation of tumors, invasion depth, peritoneal seeding, invasion status of the pancreas or other intestinal loops, lymph node metastasis, and the invasion status of nerve fibers, blood vessels, or lymphatic channels, were obtained from microscopic review of hematoxylin and eosin (H&E)-stained slides. This study was approved by the Institutional Review Board at Kyungpook National University Medical Center.

Tissue microarray (TMA)

Areas of invasive adenocarcinoma were selected on corresponding H&E slides.
Core biopsies, 1.0 mm in diameter, were obtained from each donor block and arrayed without flipping into recipient paraffin blocks on 1.2 mm centers with 3.0 mm edges; the array had a maximum of 27 rows, with four cores from each case, resulting in four histological spots on the corresponding slides: two invasive carcinomas, one metastatic lymph node, and one normal small intestinal mucosa. If there was no lymph node metastasis, three invasive carcinomas and one normal small intestinal mucosa were used. The positive controls were normal liver, kidney, spleen, placenta, and normal small intestinal mucosa.

Immunohistochemistry

Immunohistochemical staining using the Benchmark XT slide stainer (Ventana Medical Systems, Inc., Tucson, AZ, USA) was performed in accordance with the manufacturer's instructions. Smad4 (1:100, clone B-8, Santa Cruz Biotechnology, Santa Cruz, CA, USA) was applied to the TMA slides. The stained sections were reviewed without any knowledge of the clinical data of the patient cohort. Cytoplasmic staining in less than 10% of tumor cells was given a score of 0, focal or weak staining (10-50% staining) was scored as 1, and diffuse moderate and diffuse strong cytoplasmic staining (more than 50%) were scored as 2 and 3, respectively. Moderate staining is similar in intensity to that of internal controls, such as fibroblasts or endothelial cells; weak staining is paler, and strong staining is darker. Negative staining in the internal controls was regarded as false negative staining. 9,16,17

Statistical analysis

Statistical analyses were calculated using SAS ver. 9.2 (SAS Inc., Chicago, IL, USA). The relationship between the clinicopathological features and decreased expression of Smad4/DPC4 in immunohistochemical staining was estimated using the χ2 test and Fisher's exact test. A p-value of less than 0.05 was regarded as statistically significant. Using a multivariate logistic regression model, we evaluated the relationship of clinicopathologic features to Smad4/DPC4 expression in immunohistochemical staining. Overall patient survival was defined as the time from surgical resection of SACs to death or the last follow-up of the patient. Survival rates were analyzed by the Kaplan-Meier method. A comparison of survival rates with regard to the expression of Smad4/DPC4 was investigated using the log-rank test and the Breslow test. The regression models were adjusted for age, sex, histological type, and the pT stage as characterized by the tumor-node-metastasis staging system. We then calculated significance using the Cox proportional hazards model.

Association between Smad4/DPC4 expression and clinicopathological features

As reported in detail in Table 1, there was no significant association between the expression of Smad4/DPC4 as evaluated through immunohistochemistry and the clinicopathological features of the tumors (tumor location, differentiation, growth pattern, T stage, direct invasion, vascular invasion, and nodal metastasis), with the exception of lymphatic invasion (p=0.037). The odds ratio from the adjusted logistic regression analysis revealed that the intensity and positivity of Smad4/DPC4 expression was associated with an increased risk of lymphatic invasion (95% confidence interval) (Table 2).
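The survival comparisons that follow can be reproduced with standard tools. The sketch below is our own illustration and not the authors' SAS code: the lifelines library is a substitute for the analyses named in the Methods, and the toy per-patient table is invented for the example.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Toy per-patient table: follow-up in months, death indicator, and
# dichotomized Smad4/DPC4 staining (0 = negative, 1 = positive).
df = pd.DataFrame({
    "months":    [12, 34, 7, 58, 23, 41, 16, 60, 29, 48],
    "death":     [1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
    "smad4_pos": [0, 1, 0, 1, 1, 1, 0, 1, 0, 1],
})

# Kaplan-Meier estimates per expression group.
kmf = KaplanMeierFitter()
for label, grp in df.groupby("smad4_pos"):
    kmf.fit(grp["months"], grp["death"], label=f"Smad4_pos={label}")
    print(label, kmf.median_survival_time_)

# Log-rank comparison of the two groups.
neg, pos = df[df.smad4_pos == 0], df[df.smad4_pos == 1]
res = logrank_test(neg["months"], pos["months"], neg["death"], pos["death"])
print("log-rank p =", res.p_value)

# Cox proportional hazards model; exp(coef) is the hazard ratio.
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="death")
print(cph.summary[["exp(coef)", "p"]])
```

With a real cohort, additional covariates (age, sex, histological type, pT stage) would simply be further columns of the data frame passed to the Cox model, matching the adjustment described in the Methods.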
Association between Smad4/DPC4 expression and patient survival

The univariate analysis showed no significant difference in survival based on the intensity of Smad4/DPC4 expression (Fig. 2). Negative Smad4/DPC4 expression was associated with a mild survival benefit, although the results were not statistically significant (p=0.2661 in the log-rank test and p=0.3603 in the Breslow test) (Fig. 3). Using the Cox proportional hazards model, the hazard ratio for the mortality rate based on positive Smad4/DPC4 expression was 1.80, although this was not statistically significant (p=0.065) (Table 3).

Difference in Smad4/DPC4 expression between primary tumor lesions and metastatic lesions of lymph nodes

We performed Smad4/DPC4 staining in 38 of the 86 cases showing lymph node metastasis. Compared to the primary tumor lesions, expression in the metastatic lymph nodes was increased in 4 cases and decreased in 15 cases; in 19 cases, there were no expression differences. There were no significant correlations with other clinicopathological features (Table 4).

DISCUSSION

We evaluated the clinical information and histological characteristics of 197 cases of surgically resected SACs. Our key findings include the following: 1) SACs are usually diagnosed at an advanced stage; 2) SACs with sporadic adenomas or peritumoral dysplasia have better anticipated survival; and 3) distal location (jejunum and/or ileum) and lymph node metastasis of SACs are the most important prognostic factors. 1

A few studies have attempted to define the tumorigenesis of SACs, including studies of Smad4/DPC4 expression. Blaker et al. 5 studied the molecular features of 17 SAC cases using comparative genomic hybridization, microsatellite analysis, and SMAD4 mutational analysis. They found an 18q loss in 8 cases (47%) and a loss of heterozygosity (LOH) of 18q in 13 cases (76%). SMAD4 sequence alterations were found in five cases (24%); three of these cases had missense point mutations with loss of the wild-type allele and one case had a 7-bp deletion with retention of the wild-type allele. The other alteration was a silent polymorphism. Svrcek et al. 3 conducted a TMA study of 27 SAC samples using several immunohistochemical stains to evaluate the expression of Smad4/DPC4, p53, beta-catenin, and DNA mismatch repair (MMR) genes such as hMLH1, hMSH2, and hMSH6. Five cases showed an absence of Smad4/DPC4 expression and 14 cases showed p53 overexpression. Beta-catenin nuclear translocation was observed in two cases. Loss of hMLH1 was found in two cases, but no depletion of hMSH2 or hMSH6 was detected. Wheeler et al. 4 studied the immunohistochemical features of 21 SACs, including the expression of beta-catenin, E-cadherin, p53, adenomatous polyposis coli (APC), and MMR genes (MLH1 and MSH2). They reported increased nuclear translocation of beta-catenin in 48% of cases and overexpression of p53 in 24%, similar to Svrcek et al. 3 Wheeler et al. 4 observed decreased membranous expression of E-cadherin in 38% of cases. There was no APC gene mutation and no loss of MLH1 or MSH2 expression. Zhang et al. 6 published an immunohistochemical investigation of SACs compared to colorectal carcinomas (CRACs). They reported that a complete loss of APC immunoreactivity occurred in 8 of 26 (31%) SACs and 36 of 51 (71%) CRACs. Nuclear translocation of beta-catenin occurred in 5 (19%) SACs and 36 (71%) CRACs.
In contrast to other studies, they found a total loss of nuclear staining for one or more of the MMR enzymes at a similarly low frequency in both SACs (2 of 25 cases, 8%) and CRACs (10 of 47, 21%). The frequencies of aberrant p53 and retinoblastoma expression were also similar between SACs and CRACs.

To the best of our knowledge, the present research is the first study to evaluate Smad4/DPC4 expression in a large number of SACs with clinicopathologic correlation. Our study included 24 Smad4/DPC4-negative cases (12.8%). This is a slightly lower rate compared to previous research by Blaker et al. 5 (24%) and Svrcek et al. 3 (18.5%), and may be influenced by the criteria used to classify negative staining. The study by Svrcek et al. 3 classified specimens into two groups, positive and negative, with only diffuse strong staining regarded as positive. 3 In this study, however, we categorized the positive groups based on the intensity and extent of staining. If the criteria of Svrcek et al. 3 are used, the "negative" rate increases to about 38.8% (73/188 cases).

There was no significant correlation between Smad4/DPC4 expression and clinicopathological characteristics, with the exception of lymphatic invasion. According to the odds ratio, the intensity and positivity of Smad4/DPC4 expression was related to an increased risk of lymphatic invasion (Table 2). There was no significant association between Smad4/DPC4 expression and nodal metastasis, however, so the interpretation of this result may be controversial. The Smad4/DPC4 expression of the metastatic lymph node lesions was the same as in the primary tumor in half of the cases. Fifteen cases had decreased expression in the lymph nodes and four cases showed increased expression. No clinicopathologic features were significantly related to expression. This result may be correlated with the association between Smad4/DPC4 expression and lymph node metastasis, which was not statistically significant.

This research is the first to investigate the relationship between Smad4/DPC4 expression and patient survival in SACs, although there was no significant association between them. A mild survival benefit was observed with negative Smad4/DPC4 expression, but it was not significant. These negative results have a few possible explanations. First, the loss of Smad4/DPC4 expression may occur too early in carcinogenesis to affect the prognosis of the disease. In addition, the loss of Smad4/DPC4 expression may not influence the invasion or metastasis of SACs. Finally, because most of our cases were at an advanced stage (pT3 and pT4, 89.9%), we could not determine the step at which the loss of Smad4/DPC4 expression occurs in carcinogenesis.

In conclusion, the present study provides a small foothold in the effort to establish the tumorigenesis of SACs. To clarify this process, future studies should evaluate the immunohistochemical and molecular characteristics of these tumors.

Conflicts of Interest

No potential conflict of interest relevant to this article was reported.
Futures Work as a Mode of Academic Engagement: Assembling Smart Energy Futures for Finland

Strategic research indicates a problem- and future-oriented, collaborative process of knowledge creation. Analyzing a Finnish research project, Smart Energy Transition, and a related Delphi survey, we conceptualize strategic research as visioneering and as translations of technologies, time frames and narratives into a relational actor network. We ask 1) How does strategic research condition and contribute to academic practices of visioneering, 2) What are the available means to problematize futures and create interessement in a Delphi survey, and 3) How do academics carrying out strategic research align themselves as part of actor networks? We find that strategic research brings forward and operationalizes new practices at the boundaries between science, business and policy. In our case, the notion of disruption was used to problematize futures. Moreover, plural time frames of short-term changes in actor networks and long-term speculative visions supported interessement. Alignment of academic actors in the project hinged on several issues, including research methodology, specific academic backgrounds and expertise, public energy discourses, and national and industry interests.

Introduction

'Smart grids' and 'smart energy' have become prominent labels for an ongoing technological change in energy sources, distribution systems, business logics, and demand (Ferrari and Lösch, 2017). Visions of a smarter energy production system include ideas on how to tackle global environmental problems while at the same time creating pathways for new cleantech industries, new jobs, and sustainable energy production (Leipprand et al., 2017). Yet, key technologies, their diffusion and integration with existing systems and the local contexts, contain significant uncertainties. Visions of smart energy are thus representations of anticipated and desired, yet highly uncertain and debated, futures (Ferrari and Lösch, 2017; Ballo, 2015; Engels and Münch, 2015; Butler et al., 2015).

While futures research and scenarios have been identified as particular forms of creating expectations and demand for new technologies and of securing resources for further development of the technology (Bell, 2011; Geels and Smit, 2000; Borup et al., 2006), the active work of making visions and the unfolding of their impacts has received less attention. In this paper we follow a track identified by Ferrari and Lösch (2017) and focus on visioneering and 'visions as socio-epistemic practice'. We conceptualize such practices with the help of actor network theory and the notion of translation (Callon, 1986a, 1986b; Latour, 1993). At their core, actor networks are composed of human and non-human actors, scientific facts, engineering achievements, and social arrangements, each of which have identities and properties that have been adjusted to fit each other. Actor worlds come together via translations of existing entities into specific networks by the selective and purposeful interpretation of their key properties (Callon, 1986a, 1986b).
Visions as socio-epistemic practice strongly implicate a political and practical involvement of academics and a blurring of the boundaries between science and society. Addressing such conditions, science and technology studies have highlighted the multiple ways in which academics and the institutions of science are intertwined with the surrounding society (e.g., Jasanoff, 2009, 2015; Nowotny et al., 2001). The thesis of entrepreneurial science (Etzkowitz, 2011) emphasizes the interaction of academics and the private sector in commodifying knowledge. Yet, the increased emphasis on impactful science also calls for further societal contributions. A broad range of academic work related to, for example, energy futures can be conceptualized as scientific policy advice (SPA), which is characterized by field-specific expert knowledge (Kropp and Wagner, 2010) and transdisciplinary pragmatic approaches to problem solutions (Leipprand et al., 2017). To spur such work, national research policy agencies have introduced specific funding schemes and criteria that reflect a research paradigm of future-oriented, challenge-driven strategic research (Rip, 2002, 2004; Aarrevaara and Dobson, 2016), which is also the institutional context of our study.

Science and technology studies have furthermore called for attention to modes of engagement and ongoing boundary work between science and the users of scientific knowledge (Lam, 2010; Möllers, 2017). Researchers should conform to a T-shaped identity of being both generalists and specialists (Rip, 2004). They also need to become the double servants of politics: first, they are expected to contribute to political processes by providing insights into the challenges ahead and visionary ideas about them, and second, to help decision makers to better address such challenges. Overall, the tenets of strategic research call for ongoing boundary work between science and politics or business (Lam, 2010; Möllers, 2017). Academics in strategic research not only tailor their knowledge to particular social concerns and thereby bridge between the conceptual domains of basic and applied research (Calvert, 2006; Möllers, 2017), but also actively construct demand for their knowledge and make themselves useful in the given political and practical contexts (Latour, 1993; Calvert, 2006; Hoppe, 2015).
We contribute to the discussions on visions as practice and on strategic research by drawing attention to the ongoing tailoring and adjustment of research activities vis-à-vis social expectations. More specifically, we take a Delphi survey as a particular research operation and trace how the survey questions reflect the processes of tailoring and the pragmatic interests around the survey. The empirical material stems from a large research project called Smart Energy Transition (SET). The project was funded by a strategic research program of the Academy of Finland, premised on producing useful knowledge for societal purposes, and designed to use futures study methods in a constructive manner. Drawing on data including the funding application, project-internal position papers, participant observation, presentations, and interview data, we provide a close-range account of attempts to problematize energy systems, interest actors, and create a politicized space of possibilities in relation to smart energy technology. Specifically, we ask 1) How does strategic research condition and contribute to academic practices of visioneering? 2) What are the available means to problematize futures and create interessement in a Delphi survey? and 3) How do academics carrying out strategic research align themselves as part of actor networks?

We also aim at a pragmatic contribution. By following up how time scales and uncertainties were constructed and negotiated in the empirical case, we want to highlight the questions of closure and convergence in visioneering. Strategic research is premised on grand social challenges that call for concerted action. Yet the involvement of researchers in policy processes should thrive on transparency and openness regarding the means, paths, and potential actors (Leipprand et al., 2015). We address these issues with respect to existing concerns relating to the Delphi technique (Riikonen and Tapio, 2009), as well as addressing them on the broader level of strategic research.
The paper is organized as follows. We begin by elaborating on the concept of strategic research and how it encourages a close interaction among academics, politicians, and other societal stakeholders. We then briefly introduce the Delphi survey as a futures research method and the data we draw on, and proceed to focus on the SET research proposal, the ways to meet the request for politicized co-creation of research, the resulting actor network and the problematization of energy futures. Thereafter we follow more closely the technical elements of the network and the process of drafting the Delphi survey questions and the technology portfolio. In the discussion section we return to the notion of strategic research and argue that it can be understood as an active way of constructing possible futures.

Rip (2002, 2004) dates the rise of strategic research to the 1970's and claims that such research blends aspects of 'basic' and 'applied' research into a new concept which reflects a practice of scientific inquiry combined with social engagement. At least since then, and voluminously through institutions such as the EU Framework programs, problem- and solution-oriented research has proliferated. A brief look at our case study also highlights the logic. In the case of Finland, the Strategic Research Council (SRC) at the Academy of Finland was founded in 2014. The SRC aims to provide the scientific community with an opportunity to produce scientific information for government policy and decision-making. More specifically, the goal is to engage the end-users of research knowledge as early as possible and through this early engagement have the research needs of the end-users considered by the research teams. The logic of the funding instrument rests on co-creation or co-design on the one hand and the shared goals and practices of interaction on the other (Aarrevaara, 2015; Aarrevaara and Dobson, 2016).

Strategic research as translation

The practice of social engagement and co-creation can be understood in different ways. Studying the scientific policy advice related to the German Energiewende, Leipprand et al. (2017) claim that academics engage with advocacy coalitions and with the narratives they use in order to promote political goals. Supplementing politics, scientific work and the facts derived from it are used to pinpoint problems, potential actors, means-ends chains, and potential policy pathways. Controversies and gaps between opposing advocacy coalitions can be (and have been in the German case) mediated by providing knowledge that is normative but transparent. Being located close to policy making, researchers may become the "cartographers of policy pathways" (Edenhofer and Kowarsch, 2015; Leipprand et al., 2017). Yet, we suggest that the framework of scientific policy advice delivers a rather linear view on academic futures creation which does not fully take into consideration how researchers are embedded in the broader society that provides them with resources and commissions them to attempt translations and carry out practices of visioneering.
Visions as practice can alternatively be understood as attempts to translate existing entities into a network with the joint effect of constructing viable socio-technical arrangements. Ferrari and Lösch (2017, 79) suggest that socio-epistemic practices of visioneering can: "produce and designate spaces of possibility," "normatively translate the use of the spaces of possibility into an urgent need for the current society," and ultimately also "result in practical changes in the socio-technical arrangements and constellations they address." Visioneering hence contributes particularly to the aspects of problematization and interessement that Callon (1986b) identifies as the early moments of translations.

Sociology of translation underscores the active and open-ended nature of futures making. Callon (1986a, 1986b) and Latour (1993) provide the vocabulary for this (Freeman, 2017). With this vocabulary, it becomes apparent that each element, be it an organization, a social actor, or a technical component, may have an interest in future energy solutions and a need to be represented in the actor network. Strategic research, as an engaged form of collaborating with end users of research, can hence be viewed as attempts to assemble actor networks, represent entities and speak for aligned interests.

Interests are suggested and represented through simplifications that contain the essential role of each actor for a particular actor world. The castings that are suggested and formulated in a responsive manner by spokespersons may however be challenged. Callon highlights that simplifications, which are needed to assemble the actor world, contain the seeds of controversy as they are but partial images of actors, as if they only existed in order for the project to unfold. Indeed, Latour (1993, 65) insists that translations are by definition misunderstandings that serve to align the diverging interests of the parties involved. It follows that not all translations succeed and dissidence will follow (Callon, 1986b). Moreover, if the work of translating actors and assembling interesting futures is premised on productive misunderstandings, the request for transparency around scientific policy advice becomes conceptually difficult: each of the viewpoints of actors is partial, science actors are no different, and ultimately the viewpoints and workings of actors cannot be transparent to others but merely translated.

For these very reasons, the notion of translation can also be used to conceptualize the interface between science and society. Freeman (2017) suggests that research projects at the same time realize translations and are realized by them. This is to argue that the work of researchers may be organized by the same principles (of administration and governance) that they are to study (Freeman, 2017) and that researchers look for demand for their research and move horizontally between the laboratory and the social context of the produced knowledge (Latour, 1993). In our empirical case, it is to argue that insofar as the researchers are successful in participating in and speaking for an entity such as smart energy transition (for which no shared understanding exists), they also constitute (a need to study) smart energy transition. It is this dynamic that we seek to capture with our first research question: How does strategic research condition and contribute to academic practices of visioneering?
The notion of tailoring (Calvert, 2006; Möllers, 2017) highlights the problematic aspects of visioneering and the boundary work that is performed between science actors and the users of knowledge. It denotes, firstly, efforts by researchers to tailor forward, i.e., to point out how their results can be applied and what the relevance of their work is. In our empirical study, we have operationalized the question of forward tailoring by asking how the background of the researchers affected the SET project proposal and the Delphi survey questions. On the other hand, reverse tailoring, Möllers (2017) suggests, involves attempts to redefine the social problems as formulated by funders to better fit the researchers. Turning this into an empirical question, we report on how the SRC and the specificities of the call, and the contemporary political power balance of Finland, affected the research proposal and the Delphi survey as a particular operation.

A priori, we do not think that strategic research necessarily produces an excessive need to tailor or particularly problematic identities for researchers. It calls for extending roles or switching them towards entrepreneurial scientists, but as Lam (2010) reports, such roles are increasingly common. Indeed, existing research on the SRC also indicates interesting results regarding the changing role of researchers (Aarrevaara, 2015; Aarrevaara and Dobson, 2016). Their central aim in the projects is to pursue high-quality research, but alongside this, a picture emerges of researchers actively functioning as a type of facilitator within the project. Such activities are clearly linked to their will to influence societal matters and processes. Rather than focusing only on the scientific work, these researchers put time and effort into building cooperation systems, not only between researchers and stakeholder partners, but also between the different stakeholder partners. Such triangle-like cooperation building is seen to benefit the issue to a degree that makes such actions worth the effort. This finding is particularly interesting as it goes beyond the idea that researchers need external mediators between the scientific world and the rest of society in order to get their message across.
Delphi survey as a tool for scenario-building, problematization and interessement

Before moving to our empirical analysis of the translation efforts around smart energy technology, we briefly introduce the Delphi survey as a technique and a key ingredient of these efforts. The Delphi survey as a technique was developed in the 1960's to conduct anonymized and iterative polling of expert opinion (Linstone and Turoff, 2010; Gordon, 2000). Diverting from the aim of producing reliable predictions, Turoff (1970; see also Hasson et al., 2000) has developed a 'policy Delphi' and suggested that Delphi processes can be geared to explore the underlying assumptions leading to different judgments and to educate respondents on a topic. Delphi surveys are frequently used to support scenario work (Nowack et al., 2011), and useful basic distinctions between Delphi methods can be derived by considering differences in scenario types. An established way to classify scenarios is to distinguish between scenarios of probable, possible, and preferable futures (Börjeson et al., 2006; Masini, 1994). Scenarios of possible and preferable futures imply an increasing scope of action as futures are not viewed as being determined but as being actively made. Indeed, Börjeson and colleagues (2006) suggest that the purpose of scenario building might be used as a basis for a typology (Figure 1).

Predictive scenarios spotlight particular technologies (Geels and Smit, 2000) and may seek to address the conditions of their further development in the form of a what-if analysis. In the field of energy studies, the 'grid parity of photovoltaics' exemplifies predictive deterministic scenarios. Normative scenarios are more outspoken in terms of political goals: they are built on a desired end-state and look for the means to achieve this state. Backcasting as a particular method can be viewed as a transformative scenario that is built on a problematic view of current trends and a need to change the parameters and structures of the system in which futures unfold (Robinson, 1982). An example of this type of scenario setting would be processes that fix and aim at, for example, a given share of renewable energy production. Exploratory scenarios, according to Börjeson and colleagues (2006), seek to answer the question 'What can happen?', referring to either external factors or strategic actions in particular futures. Specifically, in comparison to the what-if type of scenarios, they state that "explorative scenarios resemble what-if scenarios, but the explorative scenarios are elaborated with a long time horizon to explicitly allow for structural, and hence more profound, changes" (Börjeson et al., 2006, 728). Exploratory scenarios of, for example, smart energy technology may thus play with long enough time periods in order to evoke uncertainty and complexity and yet leave the desired end-state or outcome unarticulated. In general, Delphi studies need to strike a balance between time scales that either allow or limit the exploration of new emerging solutions (Börjeson et al., 2006; see also Ferrari and Lösch, 2017).
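To make the mechanics of the technique concrete, the following minimal sketch shows the aggregation-and-feedback loop of an iterative Delphi round. It is our own illustration, not the SET project's instrument; the question and all numbers are invented:

```python
import statistics

def delphi_feedback(estimates):
    """Summary of one anonymized polling round: the median and the
    interquartile range are fed back to panelists before the next
    round, where convergence (or stable disagreement) becomes visible."""
    q1, median, q3 = statistics.quantiles(estimates, n=4)
    return {"median": median, "iqr": (q1, q3)}

# Invented two-round panel: expected share of wind + solar power in 2030 (%).
round_1 = [8, 12, 15, 20, 25, 30, 35]
print("round 1:", delphi_feedback(round_1))
round_2 = [12, 14, 15, 18, 20, 22, 25]       # revised after seeing feedback
print("round 2:", delphi_feedback(round_2))  # narrower IQR = convergence
```

In a policy Delphi of the kind discussed above, the interesting output is often not the converged median but the reasoning panelists attach to outlying estimates, which is collected alongside the numbers.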
Data and methods

The case we use consists of three layers: the call by the SRC, the research proposal by the SET consortium and the Delphi survey planned by researchers in the project. In terms of the level of the SRC, we rely on previously published work (Aarrevaara, 2016; Aarrevaara and Dobson, 2016). Our analysis begins from the initial drafting of the project plan, "Smart Energy Transition: Realizing its potential for sustainable growth for Finland's second century".

The empirical material that we draw on mainly consists of internal project documents of the case project. As all the authors have themselves worked in the project, participated in several meetings, and exchanged emails with other project partners, this offers a more thorough background understanding of the case. Table 1 lists the key steps in the analyzed project, the documentation that has served as the empirical material, and the key insights or role each document has in the analysis.

Most of the above documents are management documents containing abbreviated text describing discussions in meetings or presenting plans and lists of items for upcoming work. They have mainly been written for project participants rather than for an external audience, with the major exception of the funding application. We have approached the text as a factual description of the choices made in the project, placing emphasis on how actors and technologies were brought into the realm of smart energy. The outcome of such work is a changing list of relevant elements. In addition, and in particular relating to the moments of problematization, we have analyzed the discursive strategies of the SET proposal text and the metaphors of disruption that were placed in the proposal and the Delphi survey questions.

In addition to drawing on the documents created and the interviews made during the planning of the survey, we interviewed the key actors of the project in spring 2017 to verify our results. These interviews were conducted with the principal investigator of the SET project, the key academic content provider (who drafted the first version of the proposal), and the policy liaison officer of the project (who has a key role in facilitating the interaction between researchers, companies and the policy makers in the SET project).

The consortium and grant application: Smart energy transition as a research proposal for strategic research

We have divided our analysis of the SET project into two parts. In this section, we focus on our first research question about the way strategic research configures visioneering. We account for the drafting of the SET proposal and for the way in which the content was tailored to fit both the involved researchers and the social context of the project. The next section dwells on the second research question and on the work that took place after the positive funding decision, highlighting the different views that existed inside the consortium, the adjustment of the work program and the Delphi survey as an element of visioneering.

The analysis of the empirical material is also informed by the notion of visions as practice. We hence analyze both the making of the proposal and the establishment of smart energy transition as a shared vision, and the operationalization of such a vision and the interessement of an actor network through a Delphi survey with particular informants and questions posed to them.
The forming of the SET consortium in response to the SRC funding instrument

Strategic research implies multidisciplinary and multi-sectoral work (Rip, 2002 and 2004). The SRC followed this principle by requiring that the consortium consist of at least three research teams representing at least two different organizations (e.g., universities, research institutes, civil society organizations, or private companies). Moreover, the researchers needed to represent at least three different disciplines. Additionally, the candidates were informed that it was expected that at least two, preferably three, government ministries would be involved in the projects. This was in addition to stakeholders from the private sector and/or the civil society sector.

The SET consortium had little leeway or need to challenge these predications of strategic research. While the consortium drew on the established joint research efforts of the business school partner, the political science partner, and the environmental policy research partner, such a consortium was not regarded as competitive in the call. Rather, the initiators from these three units reached out both for expertise in energy and building technology and economics, and for an organization that represents users of knowledge. The consortium members anticipated and inquired into other competing applications and sought to combine forces with other research institutions with an established position in energy-related research. These attempts at mergers between consortia were, however, not successful: according to the consortium leader, the consortium turned out to be an "innovative but not obvious" collection of partners and sought a viable niche by rephrasing emerging energy technology as a major societal disruption.

The drafting of the funding application and the problematization of energy futures

In the first call of the SRC in 2015, the three main themes for strategic funding were (1) the utilization of disruptive technology and changing institutions, (2) a climate-neutral and resource-scarce society, and (3) equality and its promotion (Aarrevaara and Dobson, 2016). The SET project application was written for theme 1 on disruptive technology and changing institutions. The consortium certainly stated rather boldly that it had insight into the forces that are going to affect Finnish actors in the future in a significant way and even cause disruptions in the energy systems. The notion of disruption, used by both the SRC call and the SET proposal, thus serves to evoke uncertainty and problematize energy futures. The text also enlists other fields of technology and actors, such as consumers, into the network. However, playing with the notion of disruption effectively undermines any direct predictions. Moreover, being uncertain about which areas and for which actors the ramifications of smart energy disruption might be most significant, the application serves as an explorative starting point. Finally, by inserting the notion of transition and by seeking to find effective ways for Finnish actors to cope with this disruption and even benefit from it, the plan takes a transformative view of the future, seeks to interest policy actors, and questions how to effectively steer social development towards a low-carbon energy system.
In the subsequent project meetings, the research group further crystallized the key logic to be placed in the application. The proposal claimed that international technology development will both push towards a change in the Finnish energy system and create business opportunities for Finnish companies in international markets; the process will create both winners and losers as existing resources and competences become redundant. The sheer force of international technology development is suggested to undermine any conservative strategies. Moreover, the consortium agreed to claim that, with proper policy tools, disruptive technologies can be taken into use and acted upon in a more concerted way, as the project name 'Smart Energy Transition' suggests.

The notion of disruption runs through the three levels of our empirical examination: the call, the proposal and the Delphi survey. Disruption was regarded to imply a particular time frame. Whereas Leipprand et al. (2017) suggest that longer time frames contribute to more proactive and change-oriented energy discourses, the SET proposal endorsed a short-term view. Quite explicitly, the development of the new Finnish actors in the energy field was regarded as interesting within a time scale of five years, whereas long-term predictions were regarded as difficult to make and uninteresting from this point of view. A retrospective interview with the principal investigator of the project revealed the logic for this short-termism: insofar as disruption can be viewed as a rearrangement of existing actors and their interests, one can study and contribute to such change in the short term.

The plan included a dedicated work package (WP1) that was to study "the rate, direction and impacts of the technological transition" as well as "the possible directions, triggering factors, rates and impacts of ongoing disruption in smart energy technologies." Our participant observations indicate that such a 'techy' work package fit the engineering members of the consortium and was seen both to strike a balance with the other work packages driven by social science and to raise the credibility of the proposal. The work package was further split into the subtasks of conducting a Delphi survey to establish the rate and direction of technological change within a five-year time span, and a separate task projecting the anticipated developments in digitalization, cleantech, and bioeconomy. In other words, the problematization occurred by suggesting that energy futures can be acted upon, instead of a view of global developments to which Finnish actors simply need to adjust. Hence, the project plan aimed to organize processes in which multiple, distributed actors could fill in details about how the likely changes in the Finnish energy system could potentially unfold. Yet, by initiating a set of core technologies, the academics working in WP1 nevertheless acted as spokespersons for a particular network.
Tailoring as boundary work took place with respect to selecting a theme within the SRC call. Our ex-post interviews reveal that making disruption the mainstay of the proposal was regarded as a very risky strategy. Yet, the consortium stuck with theme 1 and the notion of disruption, as this was broadly viewed to fit the credentials of the consortium better than 'climate neutrality and resource scarcity', the alternative theme in the call. Tailoring took place also as the proposal was tuned politically. The writers of the application regarded the upcoming parliamentary elections and the pending success of an agrarian party as an added reason to put emphasis on aspects of biofuels. Hence, the tailoring of the proposal and research interest was far more than lip service (cf. Calvert, 2006) but rather included a substantial realignment of the work program.

The technology focus of the application and notions such as smart grid and intermittent power production reflect a productivist technology discourse but also forward tailoring, i.e., the expertise areas of the consortium. It is obvious that the application was premised upon (and also created future demand for) such expertise (cf. Latour, 1993). However, while the consortium had extensive technical and business knowledge, particularly in the area of solar energy, the decision was to put the focus on a broader set of technologies related to renewable energy. This was to signal that the potential impacts of disruption, the actors implied, and the work of the SET project were to span existing industries and several sites in which energy is used: in addition to energy production technologies, the application included work on buildings and vehicles as sites in which energy can be produced, stored, and used in a distributed manner. Parallel to this, there was a more fundamental shift from the narrow areas of expertise of the consortium researchers towards studying the broader impacts of the disruption on less familiar terrains.

The SRC and the notion of strategic research pushed the SET application not only toward interdisciplinary work but to include non-academic actors. The initiators of the project hence enlisted practitioners and interest groups as carriers of interests by asking for Letters of Commitment. Such letters were particular devices for tailoring as they demonstrated the potential applicability and short-term relevance of the results. Interessement thus proceeded already at the point of drafting the proposal and prior to any 'strategic research'.
SET in motion: Crafting an energy disruption into a Delphi survey

The planning of the Delphi study had already started when drafting the application. Key technologies, such as new forms of intermittent power production from solar and wind sources, were mentioned in the application. However, much of the content of the survey remained open at the time of submitting the application. After a positive funding decision, the partners thus needed to reassemble visions of smart energy and relevant research foci. After establishing the first ideas about the content, the planning of the Delphi survey followed guidelines given in previous research (e.g., Gordon, 2000; Riikonen and Tapio, 2009). Accordingly, organizers need to select a few knowledgeable and willing respondents and create a background understanding of the issues through interviews. Thus, it was the SET project partners and the few interviewed external actors who had the opportunity to draw technologies, trends, observations, or emerging knowledge pools into the energy vision created for the survey. Both more need and more leeway for reinterpretation of the execution of the survey appeared within the consortium. In particular, the time frame and the technology mix (the technologies suggested to cause the disruption and amplify its effects) needed to be redefined.

Turning from predictive to strategic Delphi

While discussions during the phase of writing the SET proposal listed five-year, 15-year and 30-year spans, the final plan did not specify time spans other than a five-year technology outlook that was to be based on predictive technology forecasts. Reconsidering time scales from the point of view of strategic research, it however became evident that a longer study frame was also desired. The position papers from November 2015 suggested a study of the potential impacts running up to 2025, whereas a later project meeting (07.01.2016) suggested the following time horizons: 2020 for a technology outlook, 2030 for a policy-level futures study, and 2045 for a scientific outlook. In the Delphi interviews and the demo version of the survey, the project group responsible for the survey indeed trialed different time scales for different questions. However, as this appeared to create confusion, the time frame was fixed to run to 2030.

Fixing a technology portfolio

The technology portfolio of the survey was another subject that was modified after the funding decision. We account for the changes in Tables 2 and 3. In the first phase, the consortium leader requested a focus proposal from each participating research institution detailing the key energy production and storage technologies that should be studied and the other relevant technology areas. This process is documented in position papers by six participating research institutes (see Table 2 for a summary). These position papers exhibited a wide range of issues, potential impacts, and areas, branches, and industries that seemed to be challenged by smart energy technology. Compared with the application document, they added weight to the dynamics of industrial restructuring and put less emphasis on digitalization and on the Internet of Things. Another change in orientation was the stronger presence of bioenergy, which came through in the mentioning of alternative biofuels for cars, the availability and competing uses of forest biomass, and the challenges associated with all energy production that is based on burning organic matter.
Soon after the position papers were written, WP1 assembled to plan the Delphi survey. Some technologies were considered to be too radical. For example, fusion energy was discussed as a possible item on the list of technologies, but group members expressed anxiety about this issue; it would follow that other novelties such as biomass from algae production would need to be included. The time span and uncertainty about developments were not the only difficult aspects of scoping the technology portfolio: the content oscillated between the significance of the technologies in Finland and for domestic operations, and their significance in the export markets of Finnish companies. As no existing or emerging actors and interests in these to-be-excluded technologies were identified, translation did not occur, and they were considered empty promises that might create uncertainty but could not be effectively used to arrange actor networks. In a later phase, carbon capture and storage, and a novel concept of a 'power-to-food' energy chain, were also excluded as no existing actors or sites of relevant development could be identified. On the other hand, the portfolio came to include technologies such as large-scale solar heat and wave power since they had local technology actors in Finland (although their apparent potential in Finland is less obvious).

The resulting iteration of the selection of technologies was presented in the Delphi interview guide, which was used to engage experts in the content of the survey. The interviews included six project partners, some of whom had been involved in writing the position papers, and four external practitioners in business and policy. The interviews affected the survey design in several ways. Energy demand and technologies of demand reduction gained prominence: this applied to the energy efficiency of buildings, but comfort expectations were also mentioned. The tendency of futures studies to focus on energy production technologies (Zehner, 2014), which was clear in the scoping papers and the initial work plan of WP1, was thus partly resolved by the interview round conducted amongst diverse project partners, in which both members of academia and practitioners raised concern about the overly production-oriented focus of the intended study. Moreover, as the interviewees had criticized Finland for a tendency to stick to forest biomass as the mainstay of new energy systems, they also politicized the survey by adding a question about the future of biomass in the case where burning was ruled out. Finally, the interviews also caused the above-mentioned shift in timescales. Instead of working with the five-year frame, the final technology portfolio was connected to the year 2030 (Table 4).
Compared with the project plan, and in line with the position papers written by partners, the final version reflects an increasing need to account for storage technologies and other facilitating solutions for an increasing share of intermittent power production. It also builds on an actor perspective: wave energy and geothermal energy were added in line with ongoing technology development, and automated demand response was added in response to heightened interest amongst policy makers. On the other hand, biomass refers to the old established actors and interests that were refashioned into the new configurations of Finnish energy systems. These changes are partly effects of the SET researchers having been increasingly exposed to the topic in the early phase of the project. Hence the development of the survey reflects the basic premises of strategic research, in which multiple stakeholders co-construct futures.

Using a Delphi survey to create interests and coordinate actors

The choice to conduct a strategic Delphi resonated with Turoff's (1970) ideas on a policy Delphi: the survey was viewed as an opportunity to draw actors in, make translations, and suggest particular roles in new actor networks. This decision had strong impacts on the Delphi study. Rather than focusing on international technology development, it turned to focus on the ramifications of smart technologies in Finland. It also followed that the Delphi panel would be held in Finnish, consist of Finnish experts, and also include policy makers. Even the notion of expertise was changed. Instead of trying to poll the rate and direction of technological development amongst technology experts and speak to policy in the name of such expertise, the survey sought to consider the interests of potentially impacted Finnish actors.

Interessement did not, however, only take the form of invitations to partake in the survey, but also shaped the way that the questions were formulated. The survey was also broadened towards further implications of the diffusion of novel energy technologies. It came to include questions on who is likely to suffer from this change. Potential "crises" or highly ambiguous futures were constructed against major CO2-emitting processes by asking whether they will perish or remain as "necessary evils." Thereby actors such as coal-power producers, peat producers, and waste incinerators were also enlisted as relevant entities.

Disruptive narratives and prompts in the survey

The SET project and the planned Delphi survey were premised on an image of disruptive global technology forces affecting the Finnish energy system and its actors in a fundamental but unpredictable way. Listing energy technologies such as carbon capture and utilization created increasing uncertainty. Yet, in order to politicize the disruption, the survey was aimed at creating visions of potential strategic action. While the selected technology portfolio, the time frame, and the list of potential interest groups already suggest a particular actor network, future visions also depend on a narrative of problems, opportunities, and threats (Paschen and Ison, 2014; Leipprand et al., 2017). As the final part of our analysis, we thus briefly turn to aspects of narrating energy disruption in Finland.
The planners of the survey had used the notion of a 'second wave of electrification', which referred to "the electrification of energy systems as many renewable energy technologies relate to power production and many energy-efficiency technologies, including heat pumps and electric vehicles, require electric power as an energy form." In addition to this, the Delphi interviews brought about new narrative structures of energy disruption. New representations of the key outcomes of the disruption derived from the interviews and included a 'post-fire era', a 'capacity market (as in telecom)', the 'decentralization of energy systems', the 'active role of prosumers', and a 'window of opportunity for system integration'. These open formulations were used in the Delphi questions in order to sensitize respondents to the magnitude and type of potential changes and the potential roles actors might assume.

Discussion

The role of expectations and visions for technology development and socio-technical changes has been subject to wide academic interest (Borup et al., 2006). Following STS scholars such as Callon (1986a, 1986b) and Ferrari and Lösch (2017), we have suggested studying the acts and practices of visioneering. The analyses sought to shed light on how minuscule elements of visioneering, such as Delphi survey questions, reflect broader structures such as funding instruments.

Our first research question, concerning how strategic research conditions and contributes to academic practices of visioneering, appears to hinge on the notion of disruption. Disruption served to establish an explorative and constructive agenda for visioneering. The notion of disruption that the SRC used in the call, and that the SET project used in the proposal, effectively dispersed interest across academic silos. While disruption in the SET project was perceived to have a technical core, namely increased PV and wind power production, potential ramifications were proposed to be scattered across different technologies, industry sectors, and social actors. Moreover, the notion introduced uncertainties in who might be impacted upon and who should be concerned and aim to develop strategic responses to new energy technology. To follow such a path of visioneering, practices may be aimed at translating existing, emerging and even missing entities into actor networks. Such bridging is clearly different from either predictive or transformative Delphi approaches. For STS scholars the implication is that strategic research may neither be traditional in the sense of predicting likely developments nor thoroughly political as providing means for predetermined ends, but rather speculative and explorative.
Our second question concerned the use of a Delphi survey as a tool for problematization and interessement. The Delphi planning began with a view of the major technological disruption brought about by intermittent power production and the need to store and use power in new applications during peak production. However, the heterogeneity of the consortium allowed for plural views of future development. The Delphi interviews proved critical in altering the content of the survey from its technology focus to the broader aims. The analysis of the SET project resonates with Zehner's (2014) claim that energy futures are often based on production technologies rather than addressing radically lowered energy demand. In this sense, the notion of disruption was not in itself enough to divert the path of the survey planning, but the interviews with the consortium members and external partners provided a reflexive space for thinking through the potential impacts of smart energy technology.

Can expert panels and Delphi methods be expected to deliver radically new or innovative futures? To begin with, translations need to build on existing entities and seek to bring them into new relations. Destabilizing prompts, such as a post-fire era, were used in the SET project to suggest impact mechanisms and outcomes that could interest and even mobilize actors. Key challenges relate to balancing between radical, disruptive notions of futures and capturing the interests of practitioners and making disruptions actionable. The notion of translation, and actor network theory in general, provide some hints. The enrolments of existing entities and the translation that occurs between networks imply that futures are made of existing elements, altered relations and interest-generating misunderstandings (Latour, 1993). Moreover, our results highlight that time scales are important aspects of problematization and interessement. Whilst Leipprand et al. (2017) view longer time scales as important for putting forward strategic analysis, Ferrari and Lösch (2017) suggest time scales need to be plural: they need to include the established "old" elements, the emerging elements, and the missing elements. While the missing elements do not exist, they can be represented by laboratories and scientific formulae (Callon, 1986a), as well as field experiments (Ferrari and Lösch, 2017). Yet, based on our findings, multiple time frames are difficult to manage in a Delphi environment.
Our third question concerns the alignment of researchers as part of actor networks. We contend that the proliferation of strategic research as an academic identity and occupation requires better understanding of such alignment. One interpretation is that strategic research is being made to order for political purposes. Insofar as such research is transparent and the contributors are plural, such work may contribute to conducive policy processes (Leipprand et al., 2017). Another interpretation is that academic actors retain autonomy and use their existing knowledge resources, skills, and backgrounds to continue research efforts in their selected paths, engage in tailoring, and push knowledge into the hands of users (Calvert, 2006). A third, more novel idea about the relationship between science and policy is to think along the lines of strategic research, the facilitation of knowledge making by heterogeneous actors, and in terms of actor networks and translation. In this case, the roles of spokespersons and acts of translation constitute a new academic practice. This might be a creative practice, but it may also hide the politics of academic work. In the case of the SET project, staying rather firmly in the area of strategic Delphi research helped researchers to dodge normative questions about the desired end results and also the question of opting out from particular opportunities (cf. Felt, 2015). Hence, competing discourses, such as bioenergy and increased electrification, were present in the survey.

We have also claimed that SET researchers engaged in a different type of tailoring. This was evident in the planning of the project as well as in the execution of the work. The research proposal was drafted based on the resources and existing knowledge of the consortium, but also in anticipation of evaluators, the pending political climate, and other competing proposals, as well as on forming new alliances with other social actors. These results suggest that SRC funding has been able to create room for (or forced) researchers to create new combinations of knowledge and expand their activity towards participating in social change. For us, the gradual evolution of the research agenda represents a safeguard against academics being subordinated by political needs, even when they themselves are framing and then being faced with questions such as "How can Finland best benefit from smart energy disruption?"

The question of alignment between researchers and pragmatic interests can be viewed as a layered phenomenon. On the most abstract level, strategic research calls for impacts such as contributions to the future success of a nation and accordingly expects researchers to pick and engage with grand societal challenges. On another level, the project consortium negotiated a fit between the resources, abilities, and academic histories in the consortium and the recognized challenges. Finally, the research methods indicate different forms of societal engagement and lead to more or less inclusive and responsive work processes. Hence, in our case the Delphi survey questions, as the one outcome of the project, were ordered and structured through these different levels: the SRC, SET and the Delphi survey as a futures study technique. In the current case, the middle level and the academic community of the SET project have proven particularly relevant.
Conclusions

Energy futures are profoundly open, whilst being rooted in current technology development and social structures. The notions of translations, visioneering, strategic scenarios and strategic Delphi thrive from this position: futures are actively made by combining existing elements and emerging elements into visions that are able to capture, create interest in, and even mobilize implicated elements and participants.

In this paper, we have suggested that academics addressing issues such as smart energy engage in visioneering. This notion highlights the active practices of translating existing entities into new networks. Such work is increasingly prominent as funding organizations push academics to engage in policy making and business, and to make contributions to solving grand social challenges under the rubric of strategic research. Our interests initially lay in the way policy and business actors influence academics, and the way that academics strive for sovereignty. However, the case also witnessed the notion of strategic research as a process of co-alignment through which new futures, new identities and new research settings are being crafted. Critical questions, however, also arise. The previous knowledge base, forms of expertise, and social networks certainly influence the perceived space of possibilities.

On a pragmatic level, the overall objective of this paper has been to try to better understand closure and convergence in visioneering. Strategic visions derive power from convergence: they amplify particular possibilities and exclude others. The notion of disruption, which is in frequent use in strategic research, proved to open up a space for possibilities. Yet closure, convergence, and alignment with existing interests are parts of an evident and needed process, and they also concern academics. Whilst such processes can be seen to take place on different levels, our results highlight the importance of both the collaboration inside the multidisciplinary consortium and the methodological choices (such as Delphi surveys and expert interviews). In our case they affected both the time frame and technology options of perceived smart energy futures.

Beyond dealing with issues of managing closure and convergence, this paper has also attempted to contribute to the academic practice of strategic research. Insofar as academics are explicitly called upon to engage in futures making and in the quest for recipes for success, both self-reflection and critical examination of researchers' agendas appear to us to be fundamental elements of strategic research.

Table 1. Key phases of the analyzed futures work (documents include a consortium memorandum of 19.03.2015, 5 pages; a consortium memorandum of 01.04.2015, 7 pages; and a list of the intended members of the technology panel, 22.04.2015, Excel sheet, covering early ideas on what smart energy is, who could be in the consortium and the adjunct technology panel, and how technological disruption should be conceptualized).

Table 2. Summary of the technology portfolios in the position papers written by individual academic partner organizations (6).

Table 3. Questions about disruptive energy technologies in the SET Delphi survey. Italics in the list refer to added technologies. "What is the role of the following technologies for the Finnish energy system in 2030?" [options: not significant; a promising alternative; a commercialized solution; a solution which has replaced key parts of the existing system]
Accelerated Endothelialization of Nanofibrous Scaffolds for Biomimetic Cardiovascular Implants

Nanofiber nonwovens are highly promising to serve as biomimetic scaffolds for pioneering cardiac implants such as drug-eluting stent systems or heart valve prosthetics. For successful implant integration, rapid and homogeneous endothelialization is of utmost importance as it forms a hemocompatible surface. This study aims at the physicochemical and biological evaluation of various electrospun polymer scaffolds made of FDA-approved medical-grade plastics. Human endothelial cells (EA.hy926) were examined for cell attachment, morphology, viability, as well as actin and PECAM-1 expression. The appraisal of the untreated poly-L-lactide (PLLA L210), poly-ε-caprolactone (PCL) and polyamide-6 (PA-6) nonwovens shows that the hydrophilicity (water contact angle > 80°) and surface free energy (<60 mN/m) are mostly insufficient for rapid cell colonization. Therefore, modification of the surface tension of nonpolar polymer scaffolds by plasma energy was initiated, leading to more than 60% increased wettability and improved colonization. Additionally, NH3-plasma surface functionalization resulted in a more physiological localization of cell-cell contact markers, promoting endothelialization on all polymeric surfaces, while fiber diameter remained unaltered. Our data indicate that hydrophobic nonwovens are often insufficient to mimic the native extracellular matrix, but also that they can be easily adapted by targeted post-processing steps such as plasma treatment. The results achieved increase the understanding of cell-implant interactions of nanostructured polymer-based biomaterial surfaces in blood contact, while also advocating for plasma technology to increase the surface energy of nonpolar biostable as well as biodegradable polymer scaffolds. Thus, we highlight the potential of plasma-activated electrospun polymer scaffolds for the development of advanced cardiac implants.

Introduction

Structural and valvular heart diseases are of increasing incidence and a leading cause of mortality worldwide [1-3]. Minimally invasive surgery for coronary revascularization by percutaneous coronary intervention (PCI) based on stent implantation has become the standard procedure of care to relieve the stenosis and to restore the dysfunctional stenotic vessel. However, stent surgery is accompanied by local vessel injury, comprising disruption of the intimal smooth muscle cell layer and the luminal endothelial cell layer. The most important clinical complications following PCI are stent restenosis and stent thrombosis, associated with hyperplasia, delayed endothelialization, and acute and chronic inflammation events [4]. Since stent restenosis is mainly related to bare metal stents, drug-eluting stents (DES) have been shown to effectively reduce hyperplasia by comprising a polymer coating with incorporated antiproliferative drugs that inhibit smooth muscle cell proliferation.

In the present study, nanofibrous polymer scaffolds were fabricated and characterized to evaluate their potential as innovative biomaterials (see Figure 2). These results are a decisive step in the development of novel, advanced scaffolds for cardiac regeneration and a better understanding of cell-biomaterial interactions, not least because endothelialization is a prerequisite for the successful integration of cardiovascular devices, as it forms a natural hemocompatible surface that prevents inflammation and thrombosis events, thus ensuring implant integrity.
Fibrous nonwovens were fabricated from these different polymer solutions by free-surface, needleless electrospinning via the Nanospider Lab 200 (ELMARCO, Liberec, Czech Republic) using a rotating wire emitter in a high-volume spinning tube and a static collector. Emitter-to-collector distances of 18 cm at 16 rpm, 17 cm at 12 rpm or 16.5 cm at 13 rpm were used accordingly for PLLA L210, PCL and PA-6, each resulting in nonwoven samples with randomized fibers. The applied high voltages were 49 to 58 kV, 60 to 80 kV and 72 to 76 kV for PLLA L210, PCL and PA-6, respectively, each under ambient conditions of 23 °C and a humidity of 35%. The generated polymeric nonwoven mats were dried for 12 h at 40 °C using the vacuum oven VO 200 (Memmert GmbH and Co., Schwabach, Germany, 40 mbar).

Nonwoven Characterization by Scanning Electron Microscopy

Fiber morphology of the PLLA L210, PCL and PA-6 nonwovens was examined by scanning electron microscopy (SEM) using a QUANTA FEG 250 (FEI Company, Dreieich, Germany) with an Everhart-Thornley secondary electron detector (ETD) at an acceleration voltage of 10 kV, a working distance of around 10 mm, a high vacuum of 3.5 × 10⁻⁶ mbar and a spot size of 3.0. The samples were fixed onto aluminum trays with conductive tape and sputter-coated with gold by an Agar Sputter Coater (Agar Scientific Ltd., Essex, UK). Therefore, the samples were put under a vacuum of 0.2 mbar and exposed to gold flow two times for 120 s. SEM images were taken at magnifications of 500×, 1000×, 5000× and 8000×. For quality assurance of the produced polymeric nonwovens and for determining the average fiber diameter, SEM analysis was performed at different areas of the nonwovens. Fiber diameters were calculated from SEM images by using EDAX Genesis software, measuring 50 random fibers from five micrographs for each nonwoven at high magnification.
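The averaging step itself is simple; as a minimal sketch (not the EDAX Genesis workflow used above), the mean and standard deviation of such a set of manual fiber measurements could be computed as follows, where the diameter values are invented placeholders:

```python
import statistics

# Hypothetical fiber diameters in nm; in the study, 50 values per nonwoven
# were measured from five SEM micrographs (the list is shortened here).
diameters_nm = [412, 455, 390, 501, 468, 430, 476, 389, 522, 441]

mean_d = statistics.mean(diameters_nm)   # average fiber diameter
sd_d = statistics.stdev(diameters_nm)    # sample standard deviation

print(f"fiber diameter: {mean_d:.0f} ± {sd_d:.0f} nm")
```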
Surface Plasma Modification of Nonwovens

Plasma-chemical surface modification was conducted by plasma etching (PE) of nonwoven samples in an ammonia (NH3) plasma, generating radical species and amino groups. The short plasma activation process was performed for 1 min at 60% generator output in an ammonia radio frequency (RF) plasma generator (frequency 13.56 MHz, power 100 W, Diener electronic GmbH and Co. KG, Ebhausen, Germany) at a low pressure of 0.3 mbar, based on previous studies [42,49,51]. The screening data are not presented because the energy density of the plasma in the chamber depends on a wide range of factors such as chamber material, design, sample positioning and mounting, electrode spacing, size, shape and material, and the method of excitation. Generalization or the transfer of parameters from one system to another is not possible, so screening of suitable parameters depending on the objective must always be performed.

Water Contact Angle and Surface Free Energy

Water contact angle measurements were performed by the sessile drop method (water) on the nonwoven polymer surface using a goniometer (OCA 20, Dataphysics Instruments GmbH, Filderstadt, Germany) equipped with SPSS software 15.0. Nonwovens of PLLA L210 were washed three times for 10 min each with pure water to remove Triton X-100. Nonwovens were attached to glass slides, and water contact angles were determined by the sessile drop method with water droplets of 5 µL. A time-resolved measurement over 60 s was performed, whereby the smallest standard deviation was obtained after 10 s. Mean values and standard deviations were calculated from five independent samples with n = 4 measurements per sample. To calculate the surface free energy (SFE) of untreated nonwovens according to Owens-Wendt-Rabel-Kaelble (OWRK) [52], further measurements were performed with a mobile surface analyzer (MSA, KRÜSS GmbH, Hamburg, Germany) with ADVANCE 1.9.2 software. The initial contact angles of two liquids, water and diiodomethane, were determined against air, whereby drops with a volume of 1 or 2 µL were deposited and measured within a few seconds (n = 3).
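The OWRK evaluation reduces to solving two linear equations, one per test liquid, for the square roots of the dispersive and polar parts of the solid's surface free energy, based on γ_l(1 + cos θ) = 2(√(γ_s^d·γ_l^d) + √(γ_s^p·γ_l^p)). The sketch below illustrates this calculation under stated assumptions: the contact angles are invented placeholders, and the liquid reference values are commonly tabulated literature figures, not data from this study:

```python
import math

# Tabulated surface tension components of the test liquids in mN/m
# (total, dispersive, polar); common literature values, not study data.
LIQUIDS = {
    "water":         (72.8, 21.8, 51.0),
    "diiodomethane": (50.8, 50.8, 0.0),
}

def owrk_sfe(theta_water_deg, theta_diiodo_deg):
    """Solve the two OWRK equations for sqrt(SFE_dispersive), sqrt(SFE_polar)."""
    rows = []
    for liquid, theta in (("water", theta_water_deg),
                          ("diiodomethane", theta_diiodo_deg)):
        g, gd, gp = LIQUIDS[liquid]
        # g * (1 + cos(theta)) / 2 = x * sqrt(gd) + y * sqrt(gp)
        rows.append((math.sqrt(gd), math.sqrt(gp),
                     g * (1 + math.cos(math.radians(theta))) / 2))
    (a1, b1, c1), (a2, b2, c2) = rows
    det = a1 * b2 - a2 * b1                 # solve the 2x2 linear system
    x = (c1 * b2 - c2 * b1) / det           # sqrt of dispersive part
    y = (a1 * c2 - a2 * c1) / det           # sqrt of polar part
    return x**2, y**2                       # dispersive, polar (mN/m)

# Hypothetical contact angles for an untreated nonwoven:
disp, polar = owrk_sfe(theta_water_deg=85.0, theta_diiodo_deg=45.0)
print(f"SFE = {disp + polar:.1f} mN/m (dispersive {disp:.1f}, polar {polar:.1f})")
```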
Cell Viability Assay

Cell viability of human endothelial EA.hy926 cells was determined using the CellQuanti-Blue™ assay (BioAssaySystems, Hayward, CA, USA) according to the manufacturer's instructions. Briefly, cell viability was assessed via quantification of cellular metabolic activity by the reduction of the substrate resazurin to resorufin by cellular reductases. Therefore, cells were incubated with CellQuanti-Blue™ as 10% of the culture medium volume for 2 h after a 46-h cell cultivation period on the polymeric surfaces. The resulting fluorescence of resorufin was measured at an emission wavelength of 590 nm with an excitation wavelength of 544 nm using a microplate reader (FLUOstar OPTIMA, BMG Labtech, Offenburg, Germany). For each polymer, six independent biological replicates were measured. Data were normalized to the viability of EA.hy926 cells grown on planar non-cytotoxic tissue culture polystyrene (TCPS) as the control surface (NC) [40,42].

Cell Morphology Analysis by Scanning Electron Microscopy

Cell morphology of human endothelial EA.hy926 cells grown for 48 h on either polymeric nonwovens or the tissue culture polystyrene control surface (NC) was observed by scanning electron microscopy. After the incubation on the polymeric surfaces, cells were fixed with 2.5% glutaraldehyde and 0.2 M sodium cacodylate in PBS for 30 min. Samples were then washed with sodium phosphate buffer, dehydrated in a graded series of ethanol (50%, 75%, 90% and 100%) and dried with CO2 in a critical point dryer (CPD 7501, Quorum Technologies Ltd., Laughton, Lewes, East Sussex, UK). Samples were sputter-coated with gold by an Agar Sputter Coater (Canemco Inc., QC, Canada), and image acquisition was performed with the scanning electron microscope Quanta FEG 250 (FEI Company, Hillsboro, OR, USA) at 10 kV under high vacuum conditions by using the Everhart-Thornley secondary electron detector (ETD).

Endothelialization Analysis by Cell Spreading and Cell Shape Index

The endothelialization potential of polymeric nonwovens was assessed by quantification of cellular spreading of EA.hy926 endothelial cells and analysis of phenotype maintenance, evaluated by the cell shape index after a 48-h cultivation period on all surfaces. This quantification was done based on SEM images of fixed cells. For determination of cell spreading, the cell areas of 40 cells per specimen of three independent replicates were measured using the area measurement function of ImageJ software. Cell shape index (CSI) analysis was performed with ImageJ software by using the formula CSI = 4π × area/(perimeter)² [53]. The calculated CSI defines the cellular morphological shape ranging from 0 to 1, corresponding to a circular shape (CSI = 1) or a straight line, i.e., a maximally elongated shape (CSI = 0).
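Since the CSI formula above fully specifies the computation, a minimal sketch is easy to give; the area and perimeter values below are invented for illustration and are not measured data:

```python
import math

def cell_shape_index(area_um2: float, perimeter_um: float) -> float:
    """CSI = 4*pi*area / perimeter^2; 1 = perfect circle, -> 0 = elongated."""
    return 4 * math.pi * area_um2 / perimeter_um**2

# A perfect circle of radius 10 µm gives CSI = 1 by construction:
r = 10.0
print(cell_shape_index(math.pi * r**2, 2 * math.pi * r))  # 1.0

# Hypothetical elongated endothelial cell:
print(round(cell_shape_index(area_um2=400.0, perimeter_um=110.0), 2))  # ~0.42
```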
Immunofluorescence of PECAM-1 (CD31) and Actin Cytoskeleton

For immunostaining, endothelial EA.hy926 cells were examined after incubation for 48 h on the polymeric nonwovens as well as on the control surface (NC) at 37 °C and 5% CO2 under a humidified atmosphere. Cells were fixed in 4% paraformaldehyde (PFA) for 30 min at room temperature (RT), rinsed in PBS (pH = 7.4) and permeabilized with Triton X-100 (Sigma-Aldrich, Darmstadt, Germany) for 30 min at RT. Cells were then incubated with a primary murine monoclonal anti-human PECAM-1 (CD31) antibody (1:20, DAKO, Agilent, Santa Clara, CA, USA) overnight. Afterwards, cells were rinsed in PBS and incubated with a secondary donkey anti-mouse antibody conjugated with Alexa Fluor 488 (Life Technologies GmbH, Darmstadt, Germany) for 1 h at RT. For actin staining, TRITC-conjugated phalloidin (500 µg/mL, Sigma-Aldrich, Taufkirchen, Germany) was used by incubating the cells in the staining solution for 1 h at RT. Cell nuclei were stained with Hoechst 33342 (1:500, Sigma-Aldrich, Taufkirchen, Germany) for 1 h at RT. Cells were mounted in VectaShield mounting medium (Vector Laboratories, Burlingame, CA, USA) and examined by confocal laser scanning microscopy (FluoView FV1000, Olympus, Hamburg, Germany). Quantification of the mean fluorescence intensity of the respective markers in cell images was evaluated by CellProfiler software.

Statistical Analysis

Data were reported as mean value with standard deviation and analyzed by one-way ANOVA carried out with GraphPad Prism 5 software (La Jolla, CA, USA). Statistical significance was defined as ns = not significant, *** p < 0.001.
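As a rough illustration of this analysis step (using scipy rather than the GraphPad Prism workflow named above, and with invented replicate values), a one-way ANOVA across viability groups could look like this:

```python
from scipy import stats

# Hypothetical normalized viability values (% of TCPS control),
# six biological replicates per group as in the assay described above.
untreated_pcl = [48.2, 50.1, 49.9, 51.0, 47.8, 50.6]
plasma_pcl    = [74.9, 76.3, 75.0, 77.1, 74.2, 76.1]
untreated_pa6 = [53.5, 54.8, 55.0, 52.9, 54.3, 54.1]

f_stat, p_value = stats.f_oneway(untreated_pcl, plasma_pcl, untreated_pa6)
print(f"F = {f_stat:.2f}, p = {p_value:.3g}")  # p < 0.001 maps to '***'
```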
Morphology of Polymeric Nonwovens

Different polymeric nonwovens were produced via electrospinning and then compared in terms of fiber diameter. Representative images of the three polymer classes are shown in Figure 3 (additional magnifications of the SEM images are presented in Figure S5). Morphological differences between the individual polymer classes are evident, whereas NH3-plasma treatment does not visibly alter the fibers. The PLLA L210 fibers are about twice as thick as the PCL fibers and four times thicker than the PA-6 fibers.

The fiber diameters of the different polymeric nonwovens were determined from 50 measurements each and are shown in absolute terms in Figure 4. Whereas the diversity of the fibers varies the most within the PLLA nonwovens, the fiber diameters of untreated and plasma-functionalized PLLA L210, PCL and PA-6 nonwovens differ only marginally. This implies that the fiber structure is not changed by the plasma treatment. Additionally, the frequency of the respective fiber diameters for each investigated polymer is presented in Figure S6.

Wettability Analysis and Surface Free Energy of Polymeric Nonwovens

First, the wettability of the nonwovens was determined in a time-resolved manner (Figure 5A). A second measurement, performed instantaneously after only a few seconds, provides a direct comparison of the initial wettability of untreated and plasma-activated nonwovens (Figure 5B). It can be seen that the water contact angle (WCA) of PA-6 nonwovens does not remain constant and decreases rapidly over 60 s (Figure 5A). In contrast, the WCA of the PLLA L210 and PCL nonwovens is very high at 130 to 140° as expected, which is due to the special surface morphology of nonwoven structures. As opposed to film or foil materials, nanofibrous nonwovens have pores that are filled with air. The wetting properties of nonwovens are influenced by their weight, layer thickness or (more precisely) their pore structure. Such a surface, where air is trapped between the liquid and the solid and thus can only be wetted incompletely, is called a composite interface [54]. The initial WCA before and after plasma treatment is shown in Figure 5B. It can be clearly seen that the WCA for all investigated nonwovens can be greatly reduced by a one-minute NH3-plasma treatment, indicating that plasma treatment is very efficient at improving the wettability of nonwoven fibers.

Further results on surface free energy and its division into polar and disperse fractions are summarized in Table 1. For their calculation by means of OWRK, the diiodomethane contact angle was additionally determined. The surface free energies of untreated PLLA L210 and PCL nonwovens differ only marginally. The SFE of the PA-6 nonwoven is slightly lower, but in this case the measurement is affected by a high standard deviation. After plasma activation, the SFE is noticeably improved for PCL and PA-6, whereby the disperse fraction is reduced and the polar fraction strongly increased for all polymers.

Biocompatibility of Polymeric Nonwovens Assessed by Cell Viability

Cell viability of human endothelial EA.hy926 cells grown for 48 h on untreated and NH3-plasma modified polymeric nanofiber nonwovens of PLLA L210, PCL and PA-6 is shown in Figure 6. Values were normalized to values from the control surface (NC, TCPS), which was set to 100%. For the untreated nonwovens, relative cell viability was highest for PLLA L210 (57.1%), followed by nonwovens of PA-6 (54.1%) and PCL (49.6%); however, these differences were not statistically significant. In contrast, NH3-plasma modification was indeed shown to increase the relative cell viability of endothelial EA.hy926 cells on all polymeric nonwovens. This effect was most prominent for the PCL nonwoven, where NH3-plasma modification yielded the highest and most statistically significant (p < 0.001) increase in EA.hy926 cell viability of 75.6%, corresponding to an enhancement of 26.0% compared to untreated PCL nonwovens. Additionally, for PLLA L210 and PA-6 nonwovens, a positive effect of NH3-plasma treatment on cell viability was observed, although it was not significant and slightly lower than what was detected for PCL. NH3-surface treatment of PLLA L210 and PA-6 resulted in an increased viability of endothelial EA.hy926 cells of up to 61.3% for PLLA L210 and 63.0% for PA-6.

Influence of Nanofiber Scaffolds on Cell Attachment and Morphology

Cell morphology analysis using scanning electron microscopy revealed phenotypic differences of human endothelial EA.hy926 cells grown on untreated nonwovens of PLLA L210, PCL and PA-6 compared to those grown on NH3-plasma-modified polymeric nonwovens (Figure 7). Only moderate cell attachment and spreading of EA.hy926 cells could be observed on the untreated nanofiber nonwovens of PLLA L210, PCL and PA-6. In particular, cells that were grown on electrospun PLLA L210 and PCL scaffolds were more likely to exhibit spherical phenotypes than cells that were grown on PA-6 nonwovens, where they appeared more flattened. Especially on PLLA L210 nonwovens, EA.hy926 cells were shown to grow around the scaffold nanofibers to some degree. After NH3-plasma treatment, all three types of polymeric nonwovens were shown to facilitate better cell attachment of human endothelial cells and showed increased cell spreading of EA.hy926 cells when compared to the corresponding untreated polymer surfaces. In general, endothelial EA.hy926 cells grown on all NH3-plasma treated surfaces were larger and exhibited a flattened and more elongated phenotype with more filopods than on the unmodified nonwovens.
Furthermore, this altered effect on cell spreading and the phenotype change was most obvious for cells that were grown on NH3-plasma treated PLLA L210 nonwovens. It was also apparent that on all nanofibrous scaffolds, human endothelial cells were able to spread between individual fibers of the mats, but none of the cells were observed to have grown deeper into the pores of the analyzed nonwovens. Regarding phenotype maintenance, the morphology of human EA.hy926 cells grown on the NH3-plasma modified polymeric nonwovens was comparable to that on the control surface (NC), although on the control surface, cells generally exhibited a much larger cell area compared to those on all of the polymeric nonwovens.

Quantification of Endothelialization

To evaluate the impact of NH3-plasma treatment on the endothelialization potential as a substantial biological requisite for successful cardiac scaffolds, cell spreading of endothelial EA.hy926 cells was quantified by measuring cell areas (Figure 8). Measurements were therefore conducted on cells grown on either NH3-modified or untreated polymeric nonwovens of PLLA L210, PCL and PA-6. The mean cell area of human endothelial EA.hy926 cells on untreated nonwovens ranged from 144.9 µm² to 203.2 µm² for PLLA L210 and PA-6, respectively. Modification with NH3-plasma led to a remarkable increase of the cell area of human endothelial cells for all three types of investigated polymeric nonwovens. The effect was most obvious for PLLA L210 and PA-6 nonwovens, where the cell area was significantly increased and even doubled after NH3-plasma treatment, ranging from 404.5 µm² to 406.0 µm², respectively. The cell area also tended to increase on PCL nonwovens after NH3-plasma modification, although the difference was not statistically significant compared to untreated PCL mats. However, the cell area of human EA.hy926 cells was highest on the control surface (NC), ranging from 1565.0 µm² for NC to 1873.0 µm² for the NH3-plasma treated NC.
In order to judge the quality of endothelialization based on maintenance of the endothelial phenotype, the cell shape of human endothelial cells was analyzed by calculating cell circularity via the cell shape index (CSI). The index, ranging from 0 to 1, corresponds either to a circular shape (1) or to a straight line (0 for a maximally elongated shape). The CSI of human endothelial cells on untreated polymeric nonwovens ranged between 0.64 for the PCL nonwoven and 0.73 for the PLLA nonwoven, compared to 0.61 for the control surface (NC) (Figure 9). After NH3-plasma modification, the CSI was diminished on all polymeric surfaces, with lower circularity measurements averaging 0.67, 0.65 and 0.58 for PLLA L210, PCL and PA-6 nonwovens, respectively, thus representing a less circular shape and a more elongated phenotype of human EA.hy926 cells on the NH3-plasma modified nonwovens. In particular, the CSI was lowest on the plasma-modified PA-6 nonwoven and therefore expressed the smallest difference compared to the control surface with a respective CSI of 0.54. Nonetheless, cells were not detected to be aligned in one specific direction.

Endothelial Actin Cytoskeleton Formation on Polymeric Nonwovens

The formation of the actin cytoskeleton of human endothelial EA.hy926 cells grown on the specific polymer nonwovens is shown in Figure 10 (see also Figure S7) with regard to NH3-activation status and is compared to the respective control surface (NC). On all of the unmodified polymeric nonwovens comprising PLLA L210, PCL and PA-6, EA.hy926 cells were shown to exhibit a much reduced actin formation. This reduction in actin formation was evident from the merely faint background staining of F-actin, with a concentrated F-actin ring at the outer cell membrane and a lack of any visible intracellular actin fibers. In contrast, intracellular actin fibers could be observed spanning the entire cell body on the control surface. NH3-plasma treatment also showed a striking positive effect on the intracellular cytoskeleton formation regardless of the observed polymeric nonwovens. In particular, intracellular F-actin expression was enhanced on PLLA L210, PCL and PA-6 nonwovens when compared to the respective untreated counterparts. However, distinct stress fiber formation, as seen in EA.hy926 cells grown on the control surface, still remained relatively poor on all of the plasma-activated polymer nonwovens.

Expression of Endothelial Cell-Specific PECAM-1 Marker

Immunofluorescent staining was examined to determine the expression of the endothelial cell-specific cell-cell contact marker PECAM-1 (CD31) in human EA.hy926 endothelial cells growing on polymeric nonwovens of PLLA L210, PCL and PA-6, with or without NH3-plasma surface modification, and on the control surface (NC). Analysis of CD31 expression was used as a positive indicator of the successful formation of cell-cell junctions in human endothelial EA.hy926 cells. The results indicate that on all untreated polymeric nonwovens, only a sparse CD31 expression in EA.hy926 cells was observed compared to the control surface (NC) (Figures 11 and S8). On the contrary, EA.hy926 cells that were grown on NH3-plasma-modified polymeric nonwovens demonstrated an increased CD31 expression that was concentrated at the cell-to-cell contact sites. Thus, CD31 expression patterns on NH3-plasma-modified nonwovens were more similar to the general CD31 expression on the respective control surface. When comparing the different NH3-plasma-treated nonwovens, CD31 expression was highest and most homogeneous for EA.hy926 cells grown on NH3-plasma treated nonwovens of PA-6. Here, it showed a faint intracellular constitutive background expression with intense cell-cell contact sites and thus was most comparable to the control (NC).

Discussion

Advancements in micro- and nanotechnology provide the technical platform to fabricate innovative biomaterials for tissue engineering in order to restore, maintain or even improve biological function. Electrospun polymeric scaffolds possess similar structural and morphological properties to the extracellular matrix, which might make them excellent biomimetic scaffolds for several cardiac interventions [18,26,55-58]. This includes drug-eluting systems, artificial heart valve prosthetics, or occluder systems.
Although contemporary efforts have been made in stent technology regarding efficacy and safety, polymeric scaffolds are still facing challenges such as inflammation or late thrombosis events [9]. Delayed endothelialization is believed to play a key role in the occurrence of late stent thrombosis, and therefore research has attempted to improve stent designs to accelerate endothelialization. Since the endothelium represents an inherently antithrombogenic surface, fast formation of an intact endothelium at the implant site can prevent thrombus formation or inflammation events. The present study is focused on the fabrication of nanofibrous polymer matrices as biomimetic scaffolds and the characterization of their physicochemical properties and biological performance in order to evaluate their suitability for the development of innovative artificial grafts for structural and valvular heart diseases [11,15,57]. Additionally, the effect of plasma functionalization of polymeric nonwovens was evaluated in an attempt to improve endothelialization and thus the biocompatibility of polymeric nanofibrous matrices [42,48,59]. Critical cellular parameters of human endothelial cells were examined to determine distinct growth patterns on polymeric nonwovens and to investigate whether plasma functionalization with NH3 affects biocompatibility and the maintenance of the endothelial cell phenotype.

Surface Characterization and Chemical Modification by Plasma Treatment

This study demonstrated that nanofibrous polymer scaffolds of biodegradable PLLA L210 and PCL as well as of biostable PA-6 were successfully fabricated by needleless electrospinning. The polymeric nonwovens exhibit uniform meshes with randomly distributed fibers, with mean fiber diameters of 450 ± 250 nm for PLLA L210, 200 ± 50 nm for PCL and 100 ± 50 nm for PA-6, while forming interconnected pores. Additionally, the fiber morphology was shown to lack the formation of beads and junctions, which indicates sufficient evaporation of the organic solvent mixtures during the electrospinning process. The physicochemical surface analysis demonstrated clear differences in wettability and surface free energy among the fabricated polymeric nonwovens (see Table 1). Water contact angles of unmodified polymeric nonwovens ranged from hydrophobic to highly hydrophobic, which likely indicates that, despite the chemical composition, the hydrophobicity is a consequence of the nanostructured surface topography itself [60-63]. Contact angle measurements confirmed that surface treatment with NH3-plasma increases the hydrophilicity of all types of investigated polymeric nonwovens. Moreover, the NH3-plasma led to a higher polar fraction of the surface free energy for all examined polymeric nonwovens. Thus, NH3-plasma functionalization was observed to produce a significant decrease of water contact angles (less than 40°) after NH3-plasma deposition for all polymeric nonwovens, indicating a more hydrophilic surface. Among the tested polymer scaffolds, the very high contact angles of untreated PLLA L210 and PCL nonwovens, compared to the nearly zero apparent contact angles after NH3-plasma treatment, might be induced by capillary effects and the highly porous fibrous structure. That effect was observed to be highest for PLLA L210 nonwovens, while PCL and PA-6 nonwovens exhibited less than half of the mean fiber diameter of PLLA L210.
Evaluation of Cell Physiology and Cell Phenotype Maintenance on Nanofibrous Polymer Scaffolds

Generally, cells are known to be able to sense aspects of their environment, including distinct surface topographical and chemical features, and to adapt their cellular physiology in response to those physicochemical properties of the biomaterial [28]. Thus, the results of the present study show that the nanofibrous topography of the fabricated electrospun polymeric matrices alone influences cell physiology and phenotype maintenance of human endothelial EA.hy926 cells, while distinct chemical characteristics, introduced by NH3-plasma treatment, have a significant influence on these properties. In this study, human endothelial EA.hy926 cells were successfully proven to grow on the fabricated nanofibrous nonwovens of PLLA L210, PCL and PA-6. However, for all of those untreated nonwovens, only a moderate biocompatibility, assessed by cell viability analysis (ranging from 49.6 to 57.1%), could be observed. Correspondingly, compared to the planar control surface, cells grown on the polymeric nonwovens also demonstrated less cell adhesion, as well as reduced cell viability, nearly independent of the type of polymer. This could be a consequence of the nanofibrous nature of the polymeric scaffolds, causing an initially altered cell physiology affecting cell attachment, cell morphology and cell-functional parameters. Beyond the effect of the nanofibrous surface in general, the nonwoven scaffolds also provide a different growing environment, since the nonwovens are a more three-dimensional scaffold compared to the planar negative control (NC), which is simply two-dimensional with an additionally cell-culture-treated surface [40]. In consequence, the different dimensionality of the nonwovens and the planar control surface might be the reason for the observed differences in cell attachment and vitality of human endothelial EA.hy926 cells. In this context, Del Gaudio et al. [56] also reported lower cell vitality for primary human endothelial HUVEC cells seeded on PCL nonwovens in comparison to a planar TCPS surface, which is in agreement with the results on cell viability and attachment in the present and prior studies [42]. Regarding the evaluation of distinct cell-nanofiber interactions, Ahmed et al. [55] also described alterations in cell shape and function of primary human endothelial HUVEC cells grown on nano- and microfibrous poly(lactic-co-glycolic acid) (PLGA) meshes in a fiber-diameter-dependent manner. In particular, HUVECs became more spherical with smaller fiber diameters in the nanometer range (up to 500 nm), indicated by a higher cell shape index, when compared to cells grown on micrometer fibers. Similarly, the results in the present study also demonstrate high CSI values of human endothelial cells on the tested nanofiber polymeric nonwovens. In addition to the nanofibrous surface topography that might be responsible for distinct growth patterns of endothelial cells on polymeric nonwovens, the observed differences in wettability and surface free energy of the polymeric meshes could also contribute to changes in cell morphology. Limited cell attachment and viability can be attributed to the high hydrophobicity of all nanofibrous polymer surfaces, which is known to negatively influence cell growth patterns.
According to other studies, material surfaces that possess moderate hydrophilicity, i.e., water contact angles ranging between 40° and 80°, are favored by several cell types regarding cell attachment and vitality [64,65]. Therefore, the results of this study suggest that the enhanced cell adhesion and viability of human endothelial EA.hy926 cells grown on PLLA L210 and PA-6 nonwovens might be attributed to their higher hydrophilicity compared with PCL nonwovens. However, it is worth noting that the hydrophilic surface of PLLA L210 (less than 20° before washing, results not shown) was artificially induced by the addition of a surfactant and could not be maintained after the introduction of washing processes. Consequently, after release of Triton X-100, the contact angle of PLLA L210 was highly hydrophobic and similar to that of the PCL nonwoven (see Table 1). As a general conclusion, it is essential to consider all additives used during production to simplify the electrospinning process in order to compare different final polymer fiber surfaces correctly. While the investigated untreated polymeric nonwovens exhibited distinct fiber geometries and surface wettability, the differences in cell viability and cell shape were only slight and not significant. This might also stem from the small differences in fiber diameter among the three polymer scaffolds, which ranged only within 100 to 450 nm for the PCL, PA-6 and PLLA L210 nonwovens. Consequently, nanofiber-dependent differences in cell morphology and physiology might become more obvious when comparing a wider range of fiber diameters. This is supported by other studies that reported altered cell physiology according to differences in the fiber diameters of polymeric nonwovens. Here, Ko et al. [58] showed increased proliferation and growth of endothelial cells with increased fiber diameters of PLGA matrices by comparing nonwovens produced with nanofiber diameters of 200 nm and 600 nm. Moreover, since no significant differences regarding cell viability and growth patterns among the investigated polymer nonwovens could be demonstrated, surface chemistry seems to be less influential than surface topography, because endothelial EA.hy926 cells exhibited similar growth patterns on all polymeric nonwovens regardless of their distinct chemical composition, i.e., PLLA L210 vs. PCL vs. PA-6. Regarding this differentiation between the relative influence of topographical and chemical effects, Cousin et al. [66] showed that changes in the morphology of fibroblasts could be predominantly attributed to the surface topography of nanoparticulated coatings, irrespective of the surface chemistry, which is consistent with the results of the present study.

Plasma Functionalization of Nanofibrous Polymer Scaffolds towards Improved Biocompatibility and Cellular Growth Patterns

Surface functionalization with NH3-plasma was applied in order to improve the biocompatibility and endothelialization of the polymeric nonwovens and thus their biological functionality and integrity regarding their use as potential artificial cardiac or cardiovascular grafts. Indeed, NH3-plasma functionalization resulted in a remarkable elevation of biocompatibility and phenotype maintenance of human endothelial cells on all of the investigated nanofibrous polymer scaffolds. In particular, NH3-functionalization increased cell viability by up to 26% above untreated nanofibrous nonwovens, as observed for PCL scaffolds.
Additionally, phenotypic traits of endothelial cells were improved after NH3-treatment, seen in enhanced cell spreading along with more elongated cell shapes (represented by a lower CSI) as well as improved formation of the actin cytoskeleton and more physiological CD31 expression. Consequently, surface functionalization with NH3-plasma was proven to exert positive effects on endothelialization, which could potentially improve the biocompatibility and hemocompatibility of PLLA L210, PCL and PA-6 nanofiber matrices. These results are in accordance with other studies that also showed a positive effect of NH3-plasma treatment of polymer films on cell viability and the expression of endothelial markers such as CD31 in human endothelial HUVEC and HCAEC cells [42]. The positive effects of NH3-plasma treatment of the polymeric nonwovens of PLLA L210, PCL and PA-6 seen in this study could be attributed to the incorporation of functional NH2-groups, which increased the hydrophilicity of the polymeric nonwovens and subsequently led to improved cell viability, consistent with previous studies [67]. Because the surface morphology characterization did not reveal any significant effect of plasma treatment on the fiber diameters, it seems plausible that the enhancement of cell spreading and viability on the plasma-treated surfaces was induced solely by the increased hydrophilicity and the incorporation of amino functionalities, i.e., NH2-groups. In particular, NH2-group surface coupling might facilitate the adsorption of serum proteins to the surface, which elevates cell attachment and subsequently leads to improved spreading and more physiological patterns of human endothelial cells. Additionally, surface-adsorbed serum proteins might adopt favorable configurations that are advantageous for cell attachment, spreading and growth, as previous studies have already reported [68]. Previous studies that demonstrated increased cell spreading of endothelial cells on NH3-treated fibrous PLLA scaffolds postulate that the long-term beneficial effect of NH3-plasma treatment of polymer surfaces is a result of NH2-group coupling to the surfaces [42,45].

Limitations and Potential of Plasma-Activated Nonwovens for Cardiovascular Applications

Within this study, we were able to gain basic knowledge about the biological interaction at cell-nonwoven surfaces under the influence of NH3-plasma activation. However, so far we have only been able to make limited statements about the colonization of the fiber networks by human endothelial cells [58,69,70]. Both untreated and plasma-treated nonwovens were only studied in in vitro cell culture for short periods of time (48 h). In the literature, a doubling time of approximately 25 h is described for the endothelial cells used [71,72]. Therefore, longer cell culture experiments could yield more information in the future. As already mentioned, no general conclusions about polymer- or nonwoven-specific plasma treatment can be drawn. In general, parameters such as polymer type, electrospinning processing and plasma treatment time need to be adjusted to the desired applications. In addition, insights into biological responses such as cell colonization and inflammatory behavior are limited in in vitro experiments.
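To put the 48 h culture window into perspective against the reported ~25 h doubling time, a minimal estimate under idealized exponential growth (an assumption that ignores lag phase, seeding losses and contact inhibition):

def population_after(hours, doubling_time_h=25.0, n0=1.0):
    """Relative cell number after `hours` of idealized exponential growth
    with the reported ~25 h doubling time of EA.hy926 cells."""
    return n0 * 2.0 ** (hours / doubling_time_h)

# Within the 48 h window used here, cells complete fewer than two doublings:
print(population_after(48.0))  # ~3.8-fold expansion at most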
In preparation for in vivo experiments, new methods such as the analysis of cell behavior under flow conditions have to be established to quantify the effects of shear stress on a confluent endothelial cell layer and to examine more complex situations such as biofilm formation or thrombotic events. According to the literature [11,69,73], the following important current questions cannot yet be conclusively or adequately addressed: (i) how long does complete endothelialization of nonwoven materials take in vivo (with blood circulation)? (ii) do specific cells penetrate through the fiber network, and does biodegradation influence the functionality? (iii) which polymer type should be preferred for which implant region: biodegradable vs. biostable? In addition, Table 2 provides a brief summary of further considerations regarding plasma surface treatment of electrospun polymer scaffolds.

Conclusions

In this study, nanofibrous scaffolds of PLLA L210, PCL and PA-6 were successfully fabricated by electrospinning to enable their use as biomimetic matrices for endothelialization. The results showed surface-topography-dependent alterations in the cell physiology and endothelialization potential of human EA.hy926 endothelial cells. Untreated polymer meshes showed moderate biocompatibility and low cell spreading, which might complicate their use as implants. In order to improve the biocompatibility and biological integrity of the nanofibrous polymeric scaffolds, surface modification by NH3-plasma functionalization was performed, as it is described as a promising method to optimize biomaterials towards better cell compatibility and implant integrity. Chemical surface modification by NH3-plasma treatment was demonstrated to alter the hydrophilicity of all investigated polymer nonwovens while not affecting fiber morphology or structural integrity. Short functionalization with NH3-plasma was demonstrated to effectively promote cell attachment and cell growth of human endothelial cells on polymer nanofiber scaffolds. The positive effect was proven by enhanced cell attachment and spreading as well as increased cell viability. Thus, surface functionalization by NH3-plasma could further promote the endothelialization and hemocompatibility of polymeric nonwovens of PLLA L210, PCL and PA-6, making them suitable as advanced biomaterials for several cardiac and vascular interventions. Further studies should more closely examine the influence of fiber thickness and pore size of nanofiber nonwovens using one specific polymer type. We know that, for instance, flat human endothelial cells forming a uniform monolayer, with cell clusters of 200-400 µm, are probably too large for the pores of our nanofiber meshes [74]. However, complex biodegradable PLLA or PCL matrices, where fiber degradation and cavity generation over time play a role, need to be considered in vitro. To study degradation or ingrowth behavior under in vivo conditions for longer time periods, fast-degrading medically approved polymers could also become interesting as model fiber systems, provided the influence of the polymer type on cell colonization is incidental. In addition, growth-promoting substances can be released via fiber degradation [74]. Controlling the fabrication parameters of the electrospinning process to optimize fiber diameter, pore structure, and mesh density and thickness is an important task to ensure adequate cell functionality and biological integrity.
Together with the finely tunable fabrication properties of the electrospinning technique, plasma-treated polymer nonwovens are promising biomaterials for specifically directing cellular responses. Thereby, systematic modulation of a biomaterial's physicochemical properties will support the elucidation of key cell-biomaterial interactions to further improve implant technology.

Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/ma15062014/s1. Figure S1: The annual number of publications for the last 20 years with topic "engineer* cardi* scaffold*", provided by the search engines of Scopus, PubMed and Web of Science before 15 February 2022; Figure S2: The annual number of publications for the last 20 years with topic "engineer* cardi* scaffold*" associated with the supplementary search terms "polymer*", "hydrogel*", "micropattern*", "nanofib*", "bioprint*" or "decellular*", provided by the search engine of Web of Science before 15 February 2022; Figure S3: the same, provided by the search engine of PubMed before 15 February 2022; Figure S4: the same, provided by the search engine of Scopus before 15 February 2022; Figure S5: Representative SEM images of PLLA L210, PCL and PA-6 nonwovens at magnifications of 1000× and 5000×.

Funding: Partial financial support by the European Regional Development Fund (ERDF) and the European Social Fund (ESF) within the excellence research program Card-ii-Omics and the collaborative research between economy and science of the state Mecklenburg-Vorpommern, and by the Federal Ministry of Education and Research (BMBF) within RESPONSE "Partnership for Innovation in Implant Technology", is gratefully acknowledged.

Data Availability Statement: The data presented in this study are available upon request from the corresponding author.
Target complementarity in cnidarians supports a common origin for animal and plant microRNAs

microRNAs (miRNAs) are important post-transcriptional regulators that activate silencing mechanisms by annealing to mRNA transcripts. While plant miRNAs match their targets with nearly-full complementarity, leading to mRNA cleavage, miRNAs in most animals require only a short sequence called the 'seed' to inhibit target translation. Recent findings showed that miRNAs in cnidarians, early-branching metazoans, act similarly to plant miRNAs by exhibiting full complementarity and target cleavage; however, it remained unknown whether seed-based regulation was possible in cnidarians. Here, we investigate the miRNA-target complementarity requirements for miRNA activity in the cnidarian Nematostella vectensis. We show that bilaterian-like complementarity of seed-only or seed and supplementary 3' matches is insufficient for miRNA-mediated knockdown. Furthermore, miRNA-target mismatches in the cleavage site decrease knockdown efficiency. Finally, miRNA silencing of a target with three seed binding sites in the 3' untranslated region, which mimics typical miRNA targeting, was achieved in zebrafish but not in Nematostella or Hydractinia symbiolongicarpus. Altogether, these results unravel striking similarities between plant and cnidarian miRNAs, consolidating the evidence for a common evolutionary origin of miRNAs in plants and animals.

Introduction

miRNAs are endogenous post-transcriptional regulators that are abundant in diverse eukaryotic lineages (Ameres & Zamore, 2013; Bartel, 2004, 2018; Moran et al, 2017). They have important roles in various biological processes and are essential for the proper development of animals and plants (Bartel, 2018; Jones-Rhoades et al, 2006; Voinnet, 2009; Dexheimer & Cochella, 2020). miRNAs are transcribed by RNA polymerase II into long primary transcripts that are processed into hairpin-structured precursor-miRNAs (pre-miRNAs), which are later cleaved into short 20-22 nucleotide duplexes. The duplexes are loaded into Argonaute (AGO) proteins that are part of the RNA-Induced Silencing Complex (RISC), where only one strand is selected to remain loaded and the other is discarded (Kim et al, 2009). The loaded strand leads the RISC complex to a matching target and mediates its repression by inducing cleavage, translation inhibition or degradation by deadenylation of the mRNA (Jones-Rhoades et al, 2006; Jonas & Izaurralde, 2015).
The miRNA system is present in both the plant and animal kingdoms, although a few major differences exist between them in the miRNA biogenesis pathway, mode of action and target recognition (Axtell et al, 2011; Moran et al, 2017). The biogenesis pathway in animals starts within the nucleus with the processing of the primary miRNA (pri-miRNA) by the microprocessor complex, composed of the RNase type III Drosha and its partner protein Pasha (known as DGCR8 in vertebrates) (Han et al, 2004a). The resulting pre-miRNA is transported by Exportin 5 into the cytoplasm, where it is cleaved into the mature miRNA by the RNase type III Dicer with the help of partner double-stranded RNA binding proteins such as Loqs and TRBP (Förstemann et al, 2005; Jouravleva et al, 2022; Fareh et al, 2016; Redfern et al, 2013; Wilson et al, 2015). In plants, both pri-miRNA and pre-miRNA are processed within the nucleus by DICER-LIKE1 (DCL1) assisted by its partner protein Hyponastic Leaves1 (HYL1) (Voinnet, 2009; Han et al, 2004b). Another difference between plant and animal miRNAs resides in their target recognition mode. In bilaterian animals, which include most known animal groups such as arthropods, nematodes, and vertebrates, miRNAs bind their targets with a short 5' sequence called the 'seed' that includes only seven nucleotides, at positions 2-8 of the miRNA (Brennecke et al, 2005). Supplemental complementarity at the 3' end, mostly at positions 13-16, occurs in some cases, but it is less frequent and considered less crucial for target recognition (Bartel, 2009; Grimson et al, 2007). The contribution of the supplemental complementarity to target binding seems to change considerably between cases (Becker et al, 2019; Bertolet et al, 2019). Target recognition restricted to the seed, or mediated via a seed match and supplemental complementarity with mismatches at positions 10 and 11, often leads to translational inhibition and deadenylation of the mRNA, mediated by the metazoan-specific GW182 protein family (called TNRC6 in vertebrates) (Hutvagner & Simard, 2008; Bartel, 2009; Iwakawa & Tomari, 2015). Contrastingly, plant miRNA target recognition and activity require nearly-full complementarity that frequently results in AGO-mediated target cleavage between positions 10-11 of the miRNA, known as the cleavage site. Translational inhibition can also occur, but it still requires nearly-full complementarity (Iwakawa & Tomari, 2013).

The above-mentioned differences led to the notion that the miRNA system evolved independently in plants and animals; however, recent studies have shown that the miRNA system in the model sea anemone Nematostella vectensis, as well as in other cnidarian species, is more similar to that of plants than previously thought (Moran et al, 2014; Modepalli et al, 2018; Tripathi et al, 2022; Baumgarten et al, 2018; Li & Hui). Cnidaria, the sister group to Bilateria, diverged over 600 million years ago from the vast majority of animal clades, and is composed of sea anemones, corals, jellyfish, and hydroids (Erwin et al, 2011; Kayal et al, 2018). Cnidarians possess a miRNA system (Grimson et al, 2008), and share some highly conserved miRNAs, some of which are known to be crucial for cnidarian fitness and development (Praher et al, 2021; Modepalli et al, 2018; Fridrich et al, 2020, 2023).
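To make the bilaterian seed rule just described concrete, the following sketch scans a 3' UTR for matches to a miRNA seed (miRNA positions 2-8). The sequences are hypothetical stand-ins, not the actual mimiR or mCherry sequences used later in this study:

COMPLEMENT = str.maketrans("AUGC", "UACG")

def seed_sites(mirna, utr):
    """Return 0-based start positions in `utr` (RNA, 5'->3') matching the
    reverse complement of the miRNA seed (miRNA positions 2-8)."""
    seed = mirna[1:8]                        # positions 2-8 (0-based slice)
    site = seed.translate(COMPLEMENT)[::-1]  # seed-match site on the target
    return [i for i in range(len(utr) - len(site) + 1)
            if utr[i:i + len(site)] == site]

mir = "UGAGGUAGUAGGUUGUAUAGUU"    # let-7-like guide strand, 5'->3'
utr = "AAACUACCUCAAAACUACCUCAAA"  # two embedded seed-match sites
print(seed_sites(mir, utr))       # -> [3, 14]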
Interestingly, cnidarian miRNAs operate by binding their targets with nearly-full complementarity that leads to mRNA cleavage, in a similar manner to plant miRNAs (Moran et al, 2014). Furthermore, miRNAs in Nematostella and plants are methylated at the 3' end, which is essential to prevent miRNA degradation (Modepalli et al, 2018). Importantly, it was shown recently that Nematostella Hyl1-Like a, a homolog of the plant-specific HYL1, also takes part in miRNA biogenesis, which suggests that it likely took part in miRNA biogenesis before the separation of plants and animals (Tripathi et al, 2022). Yet, despite these striking similarities to the miRNA pathway of plants, the cnidarian miRNA system also exhibits clear metazoan-specific features such as Drosha and Pasha homologs and a GW182 homolog (Mauri et al, 2017; Moran et al, 2013). The similarities to the bilaterian miRNA pathway raise the question of whether cnidarian miRNAs might be able to target mRNAs via more restricted interactions focused on the seed region, like their bilaterian counterparts.

To answer this question, we characterized the complementarity requirements between miRNAs and their targets in Nematostella. Using a reporter, we tested the efficiency of different complementarity patterns in promoting gene knockdown, such as a bilaterian-like seed match and a cleavage-site-mutated sequence. We utilized TBP::mCherry transgenic sea anemones that ubiquitously express the mCherry fluorescent protein (Admoni et al, 2020) and injected different miRNA mimics into their zygotes to compare their effect on the expression of the fluorescent reporter.

Bilaterian-like matches fail to repress gene expression in Nematostella

It was previously shown that injection of short hairpin RNAs (shRNAs) into Nematostella zygotes leads to efficient knockdown of chosen targets (He et al, 2018). For our study, we designed a miRNA mimic (mimiR) based on an endogenous miRNA template to resemble native Nematostella miRNA precursors and to better predict their processing by Dicer and the strand selection by AGOs. We used the pre-miRNA sequence of Nve-miR-2022, a highly conserved miRNA among cnidarians (Moran et al, 2014; Praher et al, 2021), and changed the mature miRNA sequence to nearly fully match the mCherry transcript. The target site is located in the 3' UTR, since the majority of canonical miRNA sites are found in this region (Bartel, 2018). To be able to test the effect of different complementarity patterns on knockdown efficiency, we first generated a mimiR with nearly-full complementarity to the mCherry transcript (except for position 19, see materials and methods), which was later altered to resemble bilaterian miRNA binding sites that are based on a seed match or mismatched in the cleavage site (Figure 1A).

To test the impact of a bilaterian-like miRNA-target complementarity pattern on transcript and protein levels, only the seed region at positions 2-8 of the miRNA was left matching the mCherry-encoding transcript while the rest of the sequence was changed. In addition, we generated a mimiR with supplemental matches to the seed at positions 13-16 (Figure 1A).
The mimiRs were injected into TBP::mCherry zygotes that were observed after three days, and mCherry transcript and protein levels were measured. Interestingly, we observed that bilaterian-like mimiRs with 'canonical' sites, i.e., matching their target only via the seed region, which is the most common class of miRNA target in Bilateria (Bartel, 2018), cause no measurable knockdown of mCherry. The fluorescence of the embryos was similar to the negative control group injected with a short hairpin RNA (shRNA) with no matches to Nematostella transcripts (Figure 1B), and mCherry mRNA and protein levels showed no difference compared to the negative control, being significantly higher than in the positive control groups (Figure 1G and H, and Table 1). Adding supplementary binding bases at the 3' end of the miRNA, to resemble another type of common bilaterian target, resulted in similar measurements, with both transcript and protein levels of mCherry remaining unaffected by the presence of the mimiR when compared to the control group (Figure 1C, G and H, and Table 1).

These results imply that the previously described nearly-perfect plant-like matches between cnidarian miRNAs and their targets (Moran et al, 2014) are the major mode of interaction, and that bilaterian-like matches between miRNAs and targets are probably not functional in Cnidaria. In fact, it was shown that bilaterian-like matches have a very weak effect or none at all in plants (Iwakawa & Tomari, 2013; Liu et al, 2014). Our experimental validation supports the evolutionary scenario that miRNA targeting based on seed match is a bilaterian innovation, suggested to contribute to the expansion of regulatory networks by allowing a single miRNA to bind hundreds of targets (Moran et al, 2017). It is noteworthy that in plants, despite the full complementarity to the targets, the seed region is still crucial for target recognition, and mismatches at positions 1-8 lead to a decrease in silencing efficiency (Liu et al, 2014), which could potentially also be the case for Nematostella miRNAs.

Cleavage site mismatches interfere with miRNA activity

Next, we assessed the necessity of miRNA binding at the site of target cleavage. We mismatched positions 10-11 of the mimiR and compared mCherry levels to the nearly-full complementarity control mimiR injection. Similarly to seed-restricted mimiRs, inhibition of cleavage site binding resulted in impaired miRNA activity, as mCherry fluorescence as well as transcript and protein levels showed no difference from the negative control (Figure 1D, G, and H, and Table 1).

These results are in accordance with plant miRNAs, which also fail to induce target cleavage when central mismatches are introduced (Iwakawa & Tomari, 2013). Moreover, this experiment further validates the notion that Nematostella miRNAs promote target cleavage as their main mode of action (Moran et al, 2014). Cleavage is known to be the main mechanism of miRNA activity in plants as well, suggesting that it is the ancestral miRNA mode of action. This has been discussed in relation to the ancient RNA interference (RNAi) system for defense against invasive nucleic acids, such as transposons and viruses, which operates by binding of short interfering RNAs (siRNAs) to foreign RNA targets and eliminating their expression by cleaving them. It is a probable evolutionary scenario that the miRNA system evolved from the RNAi defense system (Cerutti & Casas-Mollano, 2006).
In addition to target cleavage, translational inhibition, which is the common miRNA mechanism in bilaterians, has also been observed in plants (Liu et al, 2014; Li et al, 2013; Brodersen et al, 2008). It was shown in Arabidopsis thaliana that central mismatches abolish target cleavage but still allow translational inhibition when the target site is in the 5' UTR (Iwakawa & Tomari, 2013). The extent to which translational inhibition occurs in Nematostella is still unknown, although it was shown that the Nematostella GW182 homolog can promote mRNA decay and translational repression when expressed in human cells (Mauri et al, 2017). Based on our experiments, it seems that translational inhibition or mRNA decay did not occur when central mismatches prevented target cleavage; however, we cannot exclude the scenario that this mechanism is active in Nematostella, due to the conservation of GW182, but requires a different complementarity pattern or a different number of sites.

A partial mismatch in the cleavage site results in weaker repression in Nematostella

After testing positions 10-11, we wished to test how a single mismatch in the cleavage site affects knockdown efficiency. For this we mismatched either position 10 or position 11 separately and injected both variants into transgenic zygotes. Both mimiRs resulted in visibly lower mCherry fluorescence. On the molecular level, mCherry transcript levels were significantly lower than in the negative control, hence knockdown still occurred, but it was significantly weaker than the repression achieved by the positive control (Figure 1E-F and G, and Table 1). On the protein level, despite a noticeable trend of higher protein levels compared to the positive control, both in the ELISA measurement and in the fluorescence of the transgenic animals, there was no statistically significant difference in protein concentration between a nearly-full match and a single-position mismatch (Figure 1E-F and H, and Table 1). This raises the intriguing possibility that translational inhibition contributes to the silencing effect beyond the effect at the RNA level. Yet, such a claim requires further biochemical proof.

The reduced knockdown efficiency due to a mismatch of one nucleotide at the cleavage site could be due to a different conformation of the AGO-miRNA complex that changes the cleavage efficiency (Sheu-Gruttadauria & MacRae, 2017). Some Nematostella miRNAs naturally exhibit a mismatch to their target at position 10 or 11: position 10 was found mismatched in 67 miRNA-target pairs, while position 11 was mismatched in 41 pairs among degradome-verified targets (Moran et al, 2014). It is possible that the natural mismatches are selected for weaker knockdown of their targets, as weaker repression might be beneficial for some regulatory roles.
Multiple seed match sites in the 3' UTR are insufficient for miRNA activity in Cnidaria

Single miRNA binding sites in bilaterians are capable of modulating target protein levels in a significant manner, as demonstrated experimentally in Drosophila and zebrafish (Brennecke et al, 2005; Choi et al, 2007). However, many bilaterian miRNAs exhibit more than one site for each target they regulate (Grimson et al, 2007). Since multiple binding sites on the same target transcript can together provide synergistic rather than additive repression effects (Briskin et al, 2020), we designed an mCherry-encoding mRNA that harbors three seed match sites in its 3' UTR (Figure 2A). The mRNA was injected into wild-type Nematostella zygotes along with the previously used seed match mimiR. A new nearly fully matching positive control mimiR was designed, since the original sequence could potentially partially bind the three seed sites.

Twenty-four hours after injection, mCherry fluorescence was weaker in the positive control group compared to the seed match group (Figure 2B). In accordance, protein concentrations were similar between the seed match mimiR and the negative control, both significantly higher than in the positive control (Figure 2C and Table 1). This result shows that increasing the number of target sites in the 3' UTR does not improve the efficiency of seed match miRNAs and further validates that bilaterian-like matches are ineffective in Nematostella.

Next, mRNA silencing by mimiRs was tested in Hydractinia symbiolongicarpus, a colonial cnidarian and member of Medusozoa, which separated from Anthozoa, the group that includes Nematostella, about 560 million years ago (Khalturin et al, 2019). The shRNA silencing tool was shown to be effective in Hydractinia (DuBuc et al, 2020), making it possible to test this experimental design in another cnidarian. As described before, mCherry protein levels were quantified 24 hours after injection into Hydractinia zygotes. Similar to Nematostella, the protein levels following injection of the nearly-full complementarity mimiR were significantly lower than in both the seed match and the negative control groups (Figure 2D and Table 1). In contrast, although there seems to be a slight difference between the negative control and the seed match groups, it is not significant. These results further confirm that the miRNA mechanism in cnidarians is based on nearly-full complementarity between miRNAs and their targets, while a seed match alone is insufficient to mediate target knockdown.

To validate the ability of the small RNAs to bind the sites and promote silencing in a bilaterian system, the target mRNA was injected with shRNA/mimiR into zebrafish embryos.
The injection was performed in the same manner as for Nematostella, with the addition of an mRNA encoding sfGFP without miRNA binding sites, to account for the variability of expression efficiency in zebrafish embryos. Ten hours after injection, the embryos showed a difference in mCherry fluorescence between the groups, with the seed match group exhibiting the weakest fluorescence (Figure 3A). Both on the protein and the fluorescence level, a significant difference was found between the negative control and the seed match groups, hence validating the efficiency of the seed match mimiR in binding and repressing its target in a bilaterian animal (Figure 3B and C and Table 1). Moreover, no significant difference was found between the full match mimiR group and the other treatments, indicating that the target was not efficiently cleaved despite the extensive complementarity. This result is in accordance with the fact that zebrafish lack efficient slicing activity by AGO2 due to two point substitutions specific to teleost fishes (Chen et al, 2017). In conclusion, the results of this experiment validate the experimental approach of co-injecting a seed match or nearly-full complementarity mimiR together with the mRNA target, since the results coincide with the literature on the zebrafish miRNA pathway. Further, together with our previous results, it suggests that seed-restricted matching between miRNAs and their targets is a derived bilaterian innovation (Figure 4).

In this study, we show through in-vivo assays that cnidarian miRNAs act similarly to those of plants in terms of the complementarity requirements to their targets needed to induce efficient gene repression. We show that bilaterian matches that rely either solely on seed matches or on seed matches with supplementary 3' matches fail to produce measurable gene repression in Nematostella. In addition, multiple seed matches are insufficient for promoting target silencing in Hydractinia and Nematostella. Furthermore, a cleavage site is crucial for miRNA activity in Cnidaria, and a single mismatch in the cleavage site measurably reduces repression efficiency.

Overall, the results of this study reveal important similarities between plants and cnidarians in the complementarity requirements between miRNAs and their targets and, together with previous findings, provide multiple lines of evidence for a common origin of miRNA regulation before the separation of plants and animals (Figure 4).
Nematostella culture and microinjection

Nematostella polyp culturing, spawning and fertilization were conducted as previously described (Genikhovich & Technau, 2009) with minor modifications. Cultured anemones were maintained at 18 °C under dark conditions and fed freshly hatched Artemia salina nauplii three times a week. Anemones were induced to release gametes at 25 °C for 8 hours, followed by fertilization of WT eggs with either WT or heterozygous TBP::mCherry sperm. The gelatinous sack surrounding the eggs was removed by incubation in 3% L-Cysteine (Merck Millipore, USA) while rotated by hand for 15 minutes. Microinjection into zygotes was performed with an Eclipse Ti-S Inverted Research Microscope (Nikon, Japan) connected to an Intensilight fiber fluorescent illumination system (Nikon) for visualization of the fluorescent injection mixture. The system is mounted with an NT88-V3 Micromanipulator System (Narishige, Japan). Every replicate included injection of three groups of 400-700 zygotes each: a negative control shRNA group, a positive control mimiR group and an altered mimiR group. TBP::mCherry heterozygotes were injected with shRNA/mimiRs at 31.7 µM. WT zygotes were injected with mCherry mRNA at 0.167 µM along with shRNA/mimiR at 1 µM, and 100 mM KCl. All injection mixes included dextran Alexa Fluor (Thermo Fisher Scientific, USA) for tracing of the injection mix. The injected animals were kept in an incubator at 22 °C, counted and transferred to fresh Nematostella medium (16 ‰ artificial sea water made from dry Red Sea salt) every day. The animals were visualized before being flash-frozen in liquid nitrogen, with ~150 animals in each sample. The frozen samples were kept at -80 °C until RNA or protein extraction.

Hydractinia culture and microinjection

Adult Hydractinia symbiolongicarpus colonies were maintained as previously described (Frank et al, 2020). The colonies were grown in artificial seawater at 19-22 °C on glass slides, separated into males and females. The animals were fed Artemia nauplii four times per week, and once a week with ground oysters. The animals were kept in a constant 14:10 light:dark cycle, in which females and males spawn 1.5 hours after exposure to light. Zygotes were injected within two hours of fertilization as previously described (Millane et al, 2011; Salinas-Saavedra et al, 2023), with mCherry mRNA at 0.167 µM and shRNA/mimiRs (negative control shRNA, positive control mimiR or seed match mimiR) at 15.85 µM. Injected embryos were flash-frozen in liquid nitrogen after 24 hours. The frozen samples were kept at -80 °C until protein extraction.
Zebrafish embryo culture and microinjection

Wild-type zebrafish (AB/TL) were maintained according to standard procedures. Fertilized eggs were collected at 28 °C and kept in culture medium (5 mM NaCl, 0.17 mM KCl, 0.33 mM CaCl2, 0.33 mM MgSO4, 0.25 mM HEPES, 0.1% Methylene blue). A total of ~150 embryos per group were microinjected at the one-cell stage with 1 nl of solution containing 0.297 µM mCherry mRNA, 0.3 µM sfGFP mRNA and shRNA/mimiR at 1.782 µM (negative control shRNA, positive control mimiR or seed match mimiR). All injection mixes included 10% phenol red (New England Biolabs) for tracing of the injection mix. Zebrafish embryos were visualized under a fluorescent stereomicroscope before being dechorionated and frozen at 10 hours post injection. For removal of the chorion, embryos were incubated for 5 min with 1 mg/ml Pronase (Merck), then washed with culture medium and flash-frozen in liquid nitrogen. The frozen samples were kept at -80 °C until protein extraction. All protocols and procedures involving zebrafish were approved by the Institutional Committee on Animal Care and Use (IACUC, Protocol #NS-15859), The Alexander Silverman Institute of Life Sciences, The Hebrew University of Jerusalem.

RNA extraction

Total RNA was extracted from ~150 injected animals (3-day-old planulae) with the aid of Tri-Reagent® (Merck Millipore) according to the manufacturer's protocol, with a few minor changes. At the RNA isolation phase, samples were centrifuged at 21,130 × g. Removal of residual genomic DNA from the extracted RNA was conducted by treatment with Turbo DNase (Thermo Fisher Scientific) twice for 30 minutes at 37 °C and by repeating the RNA purification procedure a second time. Final RNA pellets were re-suspended in 23-25 µl of RNase-free water (Biological Industries, Israel). Final concentration was measured with the Qubit™ RNA BR Assay Kit (Thermo Fisher Scientific). RNA integrity was assessed by gel electrophoresis with 1:1 formamide (Merck Millipore) and 1 µl of loading dye on a 1.5% agarose gel. RNA samples were stored at -80 °C until used.

shRNA/mimiR design

The shRNA sequence serving as negative control, with no matches in the Nematostella genome, was taken from an existing protocol (Karabulut et al, 2019). mimiRs targeting the mCherry transcript were designed based on Nematostella miR-2022, an endogenous miRNA stem-loop that was used as a template to allow better prediction of the cleavage sites by Dicer and to ensure selection of the desired strand and loading onto AGO1 (Fridrich et al, 2020; Moran et al, 2014). The targeted sequence was selected to have U as the 5' terminal nucleotide, in accordance with Nematostella guide strand characteristics, and mismatches were introduced into the predicted star strand at positions 1, 8, 9 and 17 (Fridrich et al, 2020). mimiRs were designed against the 3' UTR region of the mCherry transcript. The base at position 19 was always cytosine, due to in-vitro transcription requirements. mimiR sequence alterations included mismatches at positions 10-11, at position 10 or 11 alone, only positions 1-8 base-pairing with the mCherry sequence (seed match), and positions 1-8 and 13-16 matching (seed + supplemental matches).
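The design logic of this section (start from the reverse complement of the target site, then break base-pairing at chosen guide positions) can be summarized in a short sketch. The 22-nt target site below is a hypothetical placeholder, and the additional constraints from the text (5'-terminal U, star-strand mismatches at positions 1, 8, 9 and 17, cytosine at position 19) are noted but not enforced here:

COMPLEMENT = str.maketrans("AUGC", "UACG")

def reverse_complement(rna):
    return rna.translate(COMPLEMENT)[::-1]

def design_guide(target_site, mismatch_positions=()):
    """Guide (mature mimiR) strand for a target site: full reverse complement,
    with the base flipped at each 1-based guide position in
    `mismatch_positions` so that neither Watson-Crick nor wobble pairing
    survives there (e.g. (10, 11) for a cleavage-site mismatch; positions
    outside 2-8 for a seed-only variant)."""
    flip = {"A": "C", "U": "C", "G": "A", "C": "A"}  # non-pairing substitutions
    guide = list(reverse_complement(target_site))
    for pos in mismatch_positions:
        guide[pos - 1] = flip[guide[pos - 1]]
    return "".join(guide)

site = "AGCUAGCUAGCAAGCUUGCUAG"   # hypothetical 22-nt 3' UTR target site
print(design_guide(site))            # nearly-full complementarity control
print(design_guide(site, (10, 11)))  # cleavage-site mismatch variant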
In-vitro transcription

The shRNA and mimiRs were transcribed according to the manufacturer's instructions using the AmpliScribe™ T7-Flash™ Transcription kit protocol (Lucigen, USA) with a few changes. The shRNA/mimiR DNA templates were ordered from Integrated DNA Technologies, Inc. (Integrated DNA Technologies, USA) as reverse complements of the T7 promoter sequence followed by the shRNA/mimiR precursor. Templates were annealed with a T7 promoter primer prior to the in-vitro transcription reaction, which was carried out for 15 hours, followed by addition of 1 µl of DNase, incubation at 37 °C for 15 min, and product clean-up with the Quick-RNA MiniPrep Kit (Zymo Research, USA). The concentration of transcripts was measured with an Epoch Microplate Spectrophotometer (BioTek Instruments, Cole-Parmer, USA) and product integrity was validated by gel electrophoresis. The ready-to-use hairpins were kept at -80 °C until used.

cDNA synthesis

cDNA synthesis was conducted with iScript™ (Bio-Rad, USA) according to the manufacturer's protocol. 100 ng of RNA (extracted from ~150 treated 3-day-old planulae) per sample were used as template, resulting in a final concentration of 5 ng/µl. cDNA was stored at -20 °C.

Reverse transcription-quantitative PCR

Primers to amplify the mCherry transcript for RT-qPCR were designed via Primer3 version 0.4.0 (Untergasser et al, 2012) and calibrated at concentrations of 25, 5, 1, 0.2 and 0.04 ng/µl to generate standard curves with the StepOnePlus Real-Time PCR System v2.2 (ABI, Thermo Fisher Scientific). Primer quality was 125% efficiency, -2.83 slope and R2 > 0.99. The specificity of the amplified products was determined by the presence of a single peak in the melting curve. RT-qPCR was performed using the StepOnePlus Real-Time PCR System v2.2 (ABI, Thermo Fisher Scientific) and cDNA amplification was quantitatively assessed using Fast SYBR Green Master Mix (Thermo Fisher Scientific). Each sample was quantified in triplicate for the mCherry transcript and for housekeeping gene 4 (HKG4) as an internal control (Columbus-Shenkar et al, 2018). 1 µl of cDNA template was used for all replicates. For the negative control, cDNA was replaced with RNase-free water. The reaction thermal profile was 95 °C for 20 sec, then 40 amplification cycles of 95 °C for 3 sec and 60 °C for 30 sec, a dissociation cycle of 95 °C for 15 sec and 60 °C for 1 min, and then back to 95 °C for 15 sec (+0.6 °C steps). mCherry fold change was analyzed using the comparative Ct method (2^-ΔΔCt) (Schmittgen & Livak, 2008). Thresholds for HKG and mCherry detection were equalized between individual experiments. Each experiment comprised at least three biological replicates.

Protein extraction

Total protein extraction was performed by adding 200 µl of the following lysis buffer: 50 mM Tris-HCl (pH 7.4), 150 mM KCl, 10% glycerol, 0.5% NP-40, 5 mM EDTA (all chemicals purchased from Merck Millipore) and Halt™ Protease Inhibitor cocktail (Thermo Fisher Scientific). Then, samples were homogenized with a pestle mixer (Argos Technologies, cat. no. A0001) and incubated at 4 °C for two hours in a rotating mixer (Intelli Mixer™ RM-2, ELMI, function 1, 7 rpm). Samples were then centrifuged at 4 °C, 16,000 × g for 15 min and the aqueous phase was transferred to a new tube. The concentration of total protein was measured using the Pierce™ BCA Protein Assay Kit (Thermo Fisher Scientific) on an Epoch Microplate Spectrophotometer (BioTek Instruments). Samples were kept at -80 °C until used.

Red Fluorescent Protein Enzyme-Linked Immunosorbent Assay (RFP ELISA)

mCherry protein levels were detected with the aid of an RFP ELISA kit (Cell Biolabs, Inc., USA). All protein samples were diluted to equal concentration prior to loading on the antibody plate, and the experiment was carried out according to the protocol. An Epoch Microplate Spectrophotometer (BioTek Instruments) was used for absorbance measurements. The fit for the standard curve was found using CurveExpert Basic 2.2.3 (Hyams Development, USA).
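As a worked example of the comparative Ct analysis described in the RT-qPCR section above, the following sketch computes a 2^-ΔΔCt fold change from triplicate Ct values; all numbers are made up for illustration and are not measured data:

import statistics

def fold_change(ct_target, ct_hkg, ct_target_ref, ct_hkg_ref):
    """Comparative Ct: fold change = 2**(-ddCt), with dCt = Ct(target) -
    Ct(housekeeping) per group and ddCt = dCt(sample) - dCt(reference)."""
    return 2.0 ** (-((ct_target - ct_hkg) - (ct_target_ref - ct_hkg_ref)))

mcherry = statistics.mean([24.1, 24.3, 24.2])  # treated sample, mCherry
hkg4    = statistics.mean([19.0, 19.1, 18.9])  # treated sample, HKG4
mch_ref = statistics.mean([21.9, 22.1, 22.0])  # negative control, mCherry
hkg_ref = statistics.mean([19.0, 19.2, 19.1])  # negative control, HKG4
print(fold_change(mcherry, hkg4, mch_ref, hkg_ref))  # ~0.20 -> knockdown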
Rapid Amplification of cDNA Ends (3' RACE)

In order to reveal the exact length of the 3' end of the transgenic TBP::mCherry transcript, the SMARTer® RACE 5'/3' Kit was used (Takara Bio, Japan). Prior to the reaction, RNA was extracted from 3-month-old animals with one round of Tri-Reagent® (Merck Millipore). cDNA synthesis was conducted according to the manufacturer's protocol with 500 ng of RNA. Gene-specific primers were designed for the PCR and nested PCR reactions, which were carried out with Advantage® 2 Polymerase (Takara Bio). Final products were outsourced for Sanger sequencing (HyLabs, Israel).

mCherry mRNA generation

The mCherry mRNA template was ordered as a gBlock gene fragment (Integrated DNA Technologies). The sequence included the T7 promoter, the EF1α Kozak sequence TGTTAAACCAACCAACCACC and a 3' UTR with three seed sites 21 bases apart (two were inserted in addition to the original one). In addition, the 3' UTR included one site for the full match mimiR and two nucleotide changes to make gBlock synthesis efficient. A codon-optimized mCherry mRNA sequence was designed for expression in Hydractinia. The DNA fragment was dissolved in TE buffer to a final concentration of 20 ng/µl and incubated at 50 °C for 20 min. For injection into Nematostella, the template was cloned into the pGEM®-T Easy plasmid (Promega) and amplified with a forward primer to add the T7 promoter class II phi2.5. In-vitro transcription was conducted with the HighYield T7 Cap 1 AG (3'-OMe) mRNA Synthesis Kit (m5CTP) (Jena Bioscience, Germany) using 800 ng of amplified template, followed by Turbo DNase treatment (Thermo Fisher Scientific), incubating with 1 µl of DNase for 30 minutes at 37 °C, twice sequentially.

For injection into Hydractinia and zebrafish, the mRNA template was amplified from the gBlock (for the zebrafish mRNA with a 68 °C annealing temperature). mRNA was transcribed with the HiScribe T7 mRNA Kit with CleanCap Reagent AG (New England Biolabs) according to the manufacturer's protocol with 1 µg of amplified template. In-vitro transcription products were cleaned using RNA Clean & Concentrator-25 (Zymo Research) and eluted with 33 µl RNase-free water. Concentration was measured using the Qubit™ RNA Broad Range Assay Kit with the Qubit Fluorometer (Thermo Fisher Scientific). Poly-adenylation followed, using Escherichia coli Poly(A) Polymerase (New England Biolabs) for 30 minutes at 37 °C, and products were further cleaned with RNA Clean & Concentrator-5 (Zymo Research) and eluted with 8-10 µl RNase-free water. A single product was validated on a 1.5% agarose gel after incubation at 95 °C for two minutes in a thermocycler with heated lid, cooling to 22 °C and mixing with formamide (Merck Millipore) at a 1:3 ratio. The mRNA was stored at -80 °C until injected.

sfGFP mRNA generation

sfGFP-encoding mRNA with a 40-nucleotide polyA tail was in-vitro transcribed from a plasmid encoding the construct under the SP6 promoter. The plasmid was linearized by digestion with the SapI restriction enzyme (New England Biolabs) for 1 hour at 37 °C. mRNA was synthesized using the HiScribe SP6 RNA Synthesis Kit (New England Biolabs) according to the manufacturer's protocol. The resulting mRNA was quantified by NanoDrop (NanoDrop Microvolume Spectrophotometer, Thermo Fisher Scientific) and mRNA length was validated by gel electrophoresis. The mRNA was stored at -80 °C until injected.
Microscopy

Fluorescence of the mCherry and sfGFP proteins was detected with an SMZ18 stereomicroscope (Nikon) connected to an Intensilight fiber illumination fluorescent system (Nikon). Images were captured with a DS-Qi2 SLR camera (Nikon) and were analyzed and processed with NIS-Elements Imaging Software (Nikon). Zebrafish images were taken using both the mCherry and GFP channels and analyzed using ImageJ software (Schindelin et al, 2012). Between 17 and 29 zebrafish embryos were selected per field, with three biological replicates per treatment. For quantitative analysis, background intensity, calculated by averaging the intensity of the same five locations in each field, was subtracted before raw intensity was measured for each individual embryo. mCherry intensity was normalized to GFP by dividing the values. The average normalized mCherry intensity was calculated for each treatment in three biological replicates.

Statistical analysis

Comparisons between groups of injected animals in transcript levels and protein concentration were tested with one-way ANOVA with Tukey's HSD post-hoc test. Normality of the data was validated beforehand. Statistical analysis of protein levels or normalized mCherry intensity following mRNA injections was conducted with pairwise one-tailed Student's t-tests or one-tailed Welch's t-tests without the assumption of homogeneity of variances. P-values were adjusted by the FDR method. For normalized mCherry intensity, average intensities were compared between groups. For mCherry transcript levels, ΔCt values were compared between groups. All experiments included at least three biological replicates, with three technical replicates for RT-qPCR and two for ELISA. The tests were performed in RStudio 2021.09.0. Significance is shown for pairwise comparisons (one-tailed Student's t-test with FDR correction, n=3 biological replicates).
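A compact sketch of the zebrafish image quantification and FDR-adjusted pairwise testing described above is given below. It is a plain NumPy/SciPy approximation of the analysis (which was actually run in ImageJ and RStudio), and the intensity values are invented for illustration:

import numpy as np
from scipy import stats

def normalized_intensity(mcherry, gfp, background):
    """Per-embryo intensity: subtract the field background from both
    channels, then normalize mCherry to the sfGFP injection control."""
    return (mcherry - background) / (gfp - background)

def pairwise_fdr(groups):
    """One-tailed Welch t-tests (first group > second) for all pairs,
    with Benjamini-Hochberg FDR adjustment of the p-values."""
    names = list(groups)
    pairs = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]]
    p = [stats.ttest_ind(groups[a], groups[b], equal_var=False,
                         alternative="greater").pvalue for a, b in pairs]
    m, order, adj, running = len(p), np.argsort(p), {}, 1.0
    for rank, idx in enumerate(reversed(order)):      # largest p first
        running = min(running, p[idx] * m / (m - rank))
        adj[pairs[idx]] = running
    return adj

data = {"neg_ctrl":   [1.00, 0.95, 1.05, 0.98],   # made-up normalized values
        "seed_match": [0.70, 0.65, 0.72, 0.68],
        "full_match": [0.55, 0.60, 0.50, 0.58]}
print(pairwise_fdr(data))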
Figure legends

Figure 4 - Evolution of the miRNA system in plants, Cnidaria and Bilateria. Results presented here and in previous publications (Moran et al, 2014, 2017; Tripathi et al, 2022; Mauri et al, 2017; Modepalli et al, 2018) suggest that miRNA post-transcriptional regulation evolved before the separation of plants and animals. HYL1 and HEN1 were present in the common ancestor of plants and animals, where they had roles in miRNA biogenesis and in methylation of miRNAs to protect them from degradation, respectively. GW182 was mediating target translational inhibition before the separation of Cnidaria and Bilateria over 600 million years ago. While plant and cnidarian miRNAs match their targets with nearly-full complementarity, bilaterian miRNAs evolved to depend on a seed match. In addition, bilaterians lost HYL1, which was replaced by other miRNA biogenesis proteins. Animal silhouettes are from http://phylopic.org/.

Figure 2 - mRNA with multiple seed-match sites is not silenced by seed-match mimiR in Nematostella and Hydractinia. A. Schematic representation of the injected in-vitro transcribed mCherry mRNA, containing the EF1α Kozak sequence followed by the mCherry-encoding transcript with three seed-match sites in its 3' UTR. B. mCherry fluorescence observed in WT Nematostella embryos 24 hours after injection with mCherry mRNA combined with a different mimiR in each panel. Negative control group (left)

Figure 3 - mRNA with multiple seed-match sites is repressed by seed-match mimiR in zebrafish. A. Fluorescence observed in zebrafish embryos 10 hours after injection with mCherry mRNA combined with different mimiRs and sfGFP mRNA as a fluorescence intensity control. Top pictures show mCherry fluorescence in the different treatments: negative control group (left) injected with shRNA with no match to mCherry mRNA, displaying noticeable fluorescence; positive control group (middle) injected with the nearly-full complementarity mimiR, displaying weaker mCherry fluorescence; seed match group (right) injected with the mimiR matching the three seed sites in the 3' UTR, showing the weakest fluorescence of the groups. Bottom pictures show sfGFP fluorescence with variability in expression that does not follow the same trend as the mCherry fluorescence. Scale bars represent 1000 µm. B. mCherry protein concentration 10 hours after injection with mCherry mRNA combined with different mimiRs. Significance is shown for pairwise comparisons (one-tailed Student's t-test with FDR correction, n=4 biological replicates). C. Average fluorescence intensity of mCherry normalized to GFP 10 hours after injection with mCherry mRNA combined with different mimiRs and sfGFP mRNA as an intensity control.

Tables and their legends

Table 1 - Results of pairwise comparisons between injected groups, including the difference of means for the mCherry transcript (ΔCt), protein concentration or normalized fluorescence, the confidence interval, and the p-value adjusted for multiple comparisons. P-values were adjusted with Tukey's HSD for experiments with the TBP::mCherry transgenic line and with FDR correction for mRNA injections. Full match refers to the positive control group injected with the nearly-full complementarity mimiR. No match refers to the negative control group injected with shRNA with no match in the Nematostella genome.
Atmospheric Chemistry and Physics

Carbon monoxide, methane and carbon dioxide columns retrieved from SCIAMACHY

The near-infrared nadir spectra measured by SCIAMACHY on-board ENVISAT contain information on the vertical columns of important atmospheric trace gases such as carbon monoxide (CO), methane (CH4), and carbon dioxide (CO2). The scientific algorithm WFM-DOAS has been used to retrieve this information. For CH4 and CO2, column-averaged mixing ratios (XCH4 and XCO2) have also been determined via simultaneous measurements of the dry air mass. All available spectra of the year 2003 have been processed. We describe the algorithm versions used to generate the data (v0.4; for methane also v0.41) and show comparisons of monthly averaged data over land with global measurements (CO from MOPITT) and models (for CH4 and CO2). We show that elevated concentrations of CO resulting from biomass burning have been detected, in reasonable agreement with MOPITT. The measured XCH4 is enhanced over India, southeast Asia, and central Africa in September/October 2003, in line with model simulations, where it results from surface sources of methane such as rice fields and wetlands. The CO2 measurements over the Northern Hemisphere show the lowest mixing ratios around July, in qualitative agreement with model simulations, indicating that the large-scale pattern of CO2 uptake by the growing vegetation can be detected with SCIAMACHY. We also identified potential problems such as a too low inter-hemispheric gradient for CO, a time-dependent bias of the methane columns on the order of a few percent, and a few percent too high CO2 over parts of the Sahara.

Introduction

Knowledge about the global distribution of carbon monoxide (CO) and of the relatively well-mixed greenhouse gases methane (CH4) and carbon dioxide (CO2) is important for many reasons. CO, for example, plays a central role in tropospheric chemistry (see, e.g., Bergamaschi et al., 2000).

In addition, channel 4 has been used to determine the mass of dry air from oxygen (O2) column measurements using the O2 A band. Channels 4, 6 and 8 measure simultaneously the spectral regions 600-800 nm, 970-1772 nm and 2360-2385 nm at spectral resolutions of 0.4, 1.4 and 0.2 nm, respectively.
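The conversion from retrieved absolute columns to the column-averaged mixing ratios XCH4 and XCO2 mentioned in the abstract amounts to a simple ratio against the O2-derived dry-air column. A minimal sketch with made-up column values follows; the paper's actual normalization details may differ:

O2_MIXING_RATIO = 0.2095  # dry-air mole fraction of O2

def column_averaged_mixing_ratio(gas_column, o2_column):
    """Column-averaged dry-air mole fraction (e.g. XCO2): ratio of the
    retrieved gas column to the dry-air column inferred from the
    simultaneously retrieved O2 column (both in molecules/cm^2)."""
    return gas_column / (o2_column / O2_MIXING_RATIO)

# Illustrative (made-up) columns in molecules/cm^2:
xco2 = column_averaged_mixing_ratio(7.9e21, 4.4e24)
print(f"XCO2 = {xco2 * 1e6:.0f} ppm")  # ~376 ppm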
For SCIAMACHY the spatial resolution depends on the spectral interval and orbital position. For channel 8 data, the spatial resolution, i.e., the footprint size of a single nadir measurement, is 30×120 km², corresponding to an integration time of 0.5 s, except at high solar zenith angles (e.g., polar regions in the summer hemisphere), where the pixel size is twice as large (30×240 km²). For the channel 4 and 6 data used for this study the integration time is mostly 0.25 s, corresponding to a horizontal resolution of 30×60 km². SCIAMACHY also performs direct (extraterrestrial) sun observations, e.g., to obtain the solar reference spectra needed for the retrieval.

SCIAMACHY is one of the first instruments to perform nadir observations in the near-infrared (NIR) spectral region (i.e., around 2 µm). In contrast to the ultraviolet (UV) and visible spectral regions, where high performance Si detectors have been manufactured for a long time, no appropriate near-infrared detectors were available when SCIAMACHY was designed. The near-infrared InGaAs detectors of SCIAMACHY were a special development for SCIAMACHY. Compared to the UV-visible detectors they are characterized by a substantially higher pixel-to-pixel variability in quantum efficiency and dark (leakage) current. Each detector array has a large number of dead and bad pixels. In addition, the dark signal is significantly higher compared to the UV-visible range, mainly because of thermal radiation generated by the instrument itself. The in-flight optical performance of SCIAMACHY is overall as expected from the on-ground calibration and characterization activities (Bovensmann et al., 2004). One exception is the time dependent optical throughput variation in the SCIAMACHY NIR channels 7 and 8 due to the build-up of an ice layer on the detectors ("ice issue"). This effect is minimised by regular heating of the instrument (Bovensmann et al., 2004) during decontamination phases. The ice layers adversely influence the quality of the retrieval of all gases discussed in this paper as they result in reduced throughput (transmission) and, therefore, reduced signal and signal-to-noise performance. In addition, changes of the instrument slit function have been observed which introduce systematic errors. All these issues complicate the retrieval.

WFM-DOAS retrieval algorithm

The Weighting Function Modified Differential Optical Absorption Spectroscopy (WFM-DOAS) retrieval algorithm and its current implementation are described in detail elsewhere (Buchwitz et al., 2000a, 2004a). In short, WFM-DOAS is an unconstrained linear least-squares method based on scaling pre-selected trace gas vertical profiles. The fit parameters are the desired vertical columns. The logarithm of a linearized radiative transfer model plus a low-order polynomial is fitted to the logarithm of the ratio of the measured nadir radiance and solar irradiance spectrum, i.e., the observed sun-normalized radiance. The WFM-DOAS reference spectra are the logarithm of the sun-normalized radiance and its derivatives. They are computed with a radiative transfer model taking into account line absorption and multiple scattering (Buchwitz et al., 2000b). A fast look-up table scheme has been developed in order to avoid time-consuming on-line radiative transfer simulations.
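The essence of such a WFM-DOAS fit, a linear least-squares problem in the column scaling factors plus polynomial coefficients, can be sketched in a few lines. This is a schematic illustration only, with synthetic inputs; the operational algorithm's look-up tables, weighting function definitions and fit windows are abstracted away here:

import numpy as np

def wfmdoas_fit(lnr_meas, lnr_model, weighting_funcs, wavelengths, poly_order=2):
    """Unconstrained linear least-squares fit in the WFM-DOAS spirit:
    model the measured log sun-normalized radiance as the reference
    log-radiance plus column weighting functions times the fitted column
    corrections, plus a low-order polynomial in wavelength."""
    w = (wavelengths - wavelengths.mean()) / np.ptp(wavelengths)  # conditioning
    A = np.column_stack([*weighting_funcs, np.vander(w, poly_order + 1)])
    x, *_ = np.linalg.lstsq(A, lnr_meas - lnr_model, rcond=None)
    n_gas = len(weighting_funcs)
    return x[:n_gas], x[n_gas:], (lnr_meas - lnr_model) - A @ x

# Synthetic demo: one absorber (Gaussian-shaped weighting function) plus a
# smooth background that the polynomial absorbs.
wl = np.linspace(2359.0, 2370.0, 120)
wf = -0.02 * np.exp(-0.5 * ((wl - 2365.0) / 0.4) ** 2)[None, :]
y = wf[0] * 1.3 + 0.001 * (wl - 2364.0)          # "measurement", true column 1.3
cols, poly_coeffs, resid = wfmdoas_fit(y, np.zeros_like(wl), wf, wl)
print(cols)  # ~[1.3]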
In order to identify cloud-contaminated ground pixels we use a simple threshold algorithm based on sub-pixel information as provided by the SCIAMACHY Polarization Measurement Devices (PMDs) (details are given in Buchwitz et al., 2004a, b). We use PMD1, which corresponds to the spectral region 320-380 nm located in the UV part of the spectrum. Strictly speaking, the algorithm detects enhanced backscatter in the UV. Enhanced UV backscatter mainly results from clouds but might also be due to high aerosol loading or high surface UV spectral reflectance. As a result, ice or snow covered surfaces may be wrongly classified as cloud contaminated. This needs to be improved in future versions of our retrieval method.

The quality of the WFM-DOAS fits in the near-infrared is poor (i.e., the fit residuals are large) when applying WFM-DOAS to the operational Level 1 data products. In order to improve the quality of the fits and thereby the quality of our data products, we pre-process the operational Level 1 data products, mainly with respect to a better dark signal calibration (see Buchwitz et al., 2004a, b). In addition, there are indications that the in-orbit slit function of SCIAMACHY is different from the one measured on-ground (Hans Schrijver (SRON), personal communication) due to the "ice issue" (see Sect. 2). We use a slit function that has been determined by applying WFM-DOAS to the in-orbit nadir measurements. We selected the one that resulted in the best fits, i.e., the smallest fit residuum (see Buchwitz et al., 2004a, b).

WFM-DOAS data products: time coverage

The WFM-DOAS trace gas column data products have been derived by processing all consolidated SCIAMACHY Level 1 operational product files (i.e., the calibrated and geolocated spectra) of the year 2003 that have been made available by ESA/DLR (up to mid-2004). Figure 1 gives an overview of the number of orbits per day that have been processed. The maximum number of orbits per day is about fourteen. As can be seen (blue lines), all 14 orbits were available for only a small number of days. For many days no data were available. Many of the large data gaps are due to decontamination phases (see Sect. 2), which are indicated by red lines. For November and December 2003 no consolidated (i.e., full product) orbit files have been made available (for ground processing related reasons).

Carbon monoxide (CO)

For CO column retrieval a small spectral fitting window (2359-2370 nm) located in SCIAMACHY channel 8 has been selected which covers four CO absorption lines. The retrieval is complicated by strong overlapping absorption features of methane and water vapor. For details concerning pre-processing of the spectra (for improving the calibration), WFM-DOAS v0.4 retrieval, vertical column averaging kernels, quality of the spectral fits, and a quantitative comparison with MOPITT Version 3 CO columns (Deeter et al., 2003; Emmons et al., 2004) we refer to Buchwitz et al. (2004a). An initial error analysis using simulated measurements can be found in Buchwitz and Burrows (2004), where it is shown that the errors are expected to be less than about 20%. In Buchwitz et al. (2004a) it has been shown that strong plumes of CO can be detected with single overpass data which are in good qualitative agreement with MOPITT. Globally, for measurements over land, the standard deviation of the difference with respect to MOPITT was shown to be in the range 0.4-0.6×10¹⁸ molecules/cm² and the linear correlation coefficient between 0.4 and 0.7.
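The PMD1-based cloud screening described above, combined with the fit-error screening used before averaging, can be sketched as follows; the threshold handling and signal normalization are assumptions for illustration:

```python
import numpy as np

def cloud_free_mask(pmd1_signal, threshold):
    """Simple threshold cloud flag: pixels whose UV (320-380 nm) backscatter
    exceeds the threshold are treated as cloud contaminated. The threshold
    value and any normalization of the PMD signal are assumptions here."""
    return pmd1_signal < threshold

def averaging_mask(pmd1_signal, fit_error, pmd_threshold, max_fit_error=0.60):
    # Keep only cloud-free pixels with an acceptable relative fit error,
    # mirroring the "CO fit error < 60% and flagged cloud free" selection.
    return cloud_free_mask(pmd1_signal, pmd_threshold) & (fit_error < max_fit_error)
```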
The differences depend on time and location but are typically within 30% for most latitudes. Perfect agreement with MOPITT is, however, not to be expected for a number of reasons (differences in overpass time, spatial resolution, etc.). In this context it is important to point out that the sensitivity of the SCIAMACHY measurements is nearly independent of altitude, whereas the sensitivity of MOPITT to boundary layer CO is low. On the other hand, retrieval of CO from SCIAMACHY is not unproblematic. For example, the WFM-DOAS v0.4 CO columns are scaled with a constant factor of 0.5 to compensate for an obvious overestimation. This overestimation is most probably closely related to the difficulty of accurately fitting the weak CO lines. The fit residuals, which are on the order of the CO lines, are not (yet) signal-to-noise limited but dominated by (not yet understood) rather stable spectral artifacts.

Figure 2 shows a comparison of monthly averaged WFM-DOAS version 0.4 CO columns with CO from MOPITT (version 3). Because of the low surface reflectivity of water, the NIR nadir measurements are noisy over the ocean (outside sun-glint conditions). Therefore, we focus on SCIAMACHY measurements over land. Only these measurements are shown in Fig. 2. The same land mask as used for SCIAMACHY has also been used for MOPITT to ease the comparison. SCIAMACHY data have only been averaged if the CO fit error is less than 60% and when the pixels have been flagged cloud free. As can be seen from Fig. 1, there are large SCIAMACHY data gaps for April and August. Therefore, the SCIAMACHY data shown in Fig. 2 are not monthly averages obtained from bias-free sampling. The August data, for example, are strongly biased towards the beginning of August. Only for September were nearly all orbits available. As a result, nearly all land areas are covered by measurements during this month. But even for this month there are quite large gaps. There are nearly no data over Greenland because the cloud detection algorithm cannot discriminate between clouds and snow/ice covered surfaces. There are also large gaps in the tropics due to persistent cloud coverage. When comparing the September data from SCIAMACHY and MOPITT one can see that overall the agreement is good. Both sensors show that the columns are typically between 1.7 and 2.5×10¹⁸ molecules/cm² (shown in green). Both sensors show large regions of enhanced CO in the northern part of South America and over the central/southern part of Africa. It is well known that a lot of biomass burning is going on in these regions in September. In the northern hemisphere the overall agreement is reasonable, but there are also substantial differences. For example, SCIAMACHY sees much higher CO over the eastern part of the United States compared to MOPITT. Whether this is due to the higher sensitivity of the SCIAMACHY measurements to boundary layer CO or to other reasons is currently unclear. The April data of SCIAMACHY show large gaps in time and space. The agreement with MOPITT is not as good as for September. The main difference with respect to MOPITT is the significantly weaker inter-hemispheric difference as seen by SCIAMACHY. Both sensors give (on average) quite similar values over the northern hemisphere, but over the southern hemisphere there are significant differences. Here the MOPITT data are mostly in the range 1.
In summary, the agreement is reasonable but there are significant differences at certain locations during certain times of the year. More investigations are needed to explain the observed differences, taking into account the different altitude sensitivities of both sensors.

Methane (CH4)

The methane columns have been retrieved from a small spectral fitting window (2265-2280 nm) located in SCIAMACHY channel 8 which covers several absorption lines of CH4 and several but much weaker absorption lines of nitrous oxide (N2O) and water vapor (H2O). The main scientific application of the methane measurements of SCIAMACHY is to obtain information on the surface sources of methane. The modulation of methane columns due to methane sources is only on the order of about one percent. This is much weaker than the variation of the methane column due to changes of surface pressure (because methane is well-mixed, the methane column is highly correlated with the total air mass and, therefore, with surface pressure). To filter out these much larger air-mass related variations, the methane columns are converted into column averaged mixing ratios. For details concerning pre-processing, averaging kernels, quality of the spectral fits, and a quantitative comparison with global models we refer to Buchwitz et al. (2004b). An initial error analysis using simulated measurements is given in Buchwitz and Burrows (2004). According to this error analysis, errors of a few percent due to undetected cirrus clouds, aerosols, surface reflectivity, temperature and pressure profiles, etc., are to be expected. It has been shown in Buchwitz et al. (2004b) that the WFM-DOAS Version 0.4 methane columns have a time dependent (nearly globally uniform) bias of up to −15% (low bias of SCIAMACHY) for one of the four days that have been analyzed. The bias is correlated with the time after the last decontamination performed to get rid of the ice layers on the detectors. Therefore, Buchwitz et al. (2004b) concluded that the bias might be due to the "ice issue" (see Sect. 2). This is consistent with the finding of H. Schrijver (SRON) (see Sect. 2) that the ice build-up on the detectors results in a broadening of the instrument slit function (the wider the slit function compared to the assumed slit function, the larger the underestimation of the retrieved methane column). Because of this, we have generated an improved version of our methane data product (Version 0.41) by applying a bias correction to the v0.4 methane columns. We assume that the data can be sufficiently corrected by dividing the columns by a globally constant scaling factor which only depends on time (on the day of the measurement). The correction factor has been determined as follows: for each day, all cloud free v0.4 XCH4 measurements over the Sahara have been averaged. The ratio of these daily average mixing ratios to a constant reference value (chosen to be 1750 ppbv) is approximately the methane bias (because methane is not constant, this is not exactly the methane bias and a certain systematic error is introduced by this assumption). This time dependent methane bias is shown in Fig. 3 (black diamonds). This bias shows a similar time dependence as the independently measured channel 8 transmission loss, also shown in Fig. 3 (red diamonds). The transmission has been determined by averaging the signal of the channel 8 solar measurements normalized to a reference measurement at the beginning of the mission. The varying transmission is a consequence of the varying ice layer on the detectors. Figure 3 shows a third curve, the (daily) correction factor (magenta diamonds).
The correction factor curve has been obtained by linearly transforming the transmission curve. The coefficients of the linear transformation have been selected such that a good match is obtained with the methane bias curve. In order to correct the WFM-DOAS v0.4 methane columns for the systematic errors introduced by the ice layer, the correction factors are applied as follows: all v0.4 methane columns of a given day have been divided by the correction factor for this day. The corrected WFM-DOAS v0.4 methane columns are the new WFM-DOAS v0.41 (absolute) methane columns.
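The daily bias-correction bookkeeping just described can be summarized in a short sketch; the function and variable names are ours, and a, b stand for the coefficients of the linear transformation matched to the bias curve:

```python
import numpy as np

REF_XCH4_PPBV = 1750.0  # constant reference value used for the Sahara averages

def daily_methane_bias(sahara_xch4_daily):
    """Ratio of the daily mean cloud-free Sahara XCH4 to the reference value;
    approximately the (time dependent) methane bias."""
    return np.nanmean(sahara_xch4_daily) / REF_XCH4_PPBV

def correction_factor(transmission, a, b):
    # Linear transformation of the channel-8 transmission curve; a and b are
    # chosen so that the result matches the methane bias curve.
    return a * transmission + b

def correct_columns(v04_columns, factor):
    # v0.41 (absolute) methane columns: v0.4 columns divided by the daily factor.
    return v04_columns / factor
```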
In order to generate the new WFM-DOAS v0.41 XCH4 product, a second modification has been applied: instead of normalizing the methane columns by the oxygen column retrieved from the 760 nm O2 A band (as done for v0.4 XCH4), they have been normalized by the CO2 columns retrieved from the 1580 nm region (for details on CO2 retrieval see Sect. 7). Using CO2 rather than O2 for normalizing the methane columns was first proposed by C. Frankenberg et al. The reason why CO2 rather than O2 is expected to give better results for XCH4 is that the errors of CO2 and CH4 are similar and cancel much better when the ratio is computed. Errors due to aerosols, residual cloud contamination, surface reflection, etc., are expected to be the more similar, the more similar the radiative transfer is. In general, this requires that the two spectral intervals from which the two columns are retrieved are located as close as possible (in wavelength). As the CO2 fitting window (at 1580 nm) is much closer to the CH4 fitting window (2270 nm) than the O2 window (760 nm), the cancellation of errors will be better using CO2. In addition, when the column retrieval from the two spectral regions suffers from similar instrumental/calibration errors, these errors also cancel to a certain extent.

The drawback of this approach is that CO2 is not as constant as O2, mainly because of the surface sources and sinks of CO2. This approach requires that the variability of the column averaged mixing ratio of CO2 is small (ideally negligible) compared to the variability of the column averaged mixing ratio of CH4. According to global model simulations this is a reasonable assumption. The model simulations shown in Buchwitz et al. (2004a) indicate that the variability of the methane column is about 6% (±100 ppbv) and the variability of the CO2 column is about 1.5% (±5 ppmv), i.e., a factor of four smaller than for methane. This means that the error introduced by essentially assuming that CO2 is constant is less than about 1.5%. This is comparable to the estimated error on the WFM-DOAS v0.4 CO2 columns as reported in Buchwitz and Burrows (2004).

Figure 4 shows a comparison of WFM-DOAS v0.41 XCH4 with TM5 model simulations. The TM5 model is a two-way nested atmospheric zoom model (Krol et al., 2004). It allows zoom regions to be defined (e.g., over Europe) which are run at higher spatial resolution (1°×1°), embedded into the global domain, run at a resolution of 6°×4°. We employ the tropospheric standard version of TM5 with 25 vertical layers. TM5 is an off-line model and uses analyzed meteorological fields from the ECMWF weather forecast model to describe advection and vertical mixing by cumulus convection and turbulent diffusion. CH4 (a priori) emissions are as described by Bergamaschi et al. (2004). Chemical destruction of CH4 by OH radicals is simulated using pre-calculated OH fields based on CBM-4 chemistry and optimized with methyl chloroform; for the stratosphere, the reactions of CH4 with Cl and O(¹D) radicals are also considered.

The comparison is limited to observations over land, as the SCIAMACHY observations over ocean are less precise because of the low ocean reflectivity in the near-infrared. For SCIAMACHY, all measurements from cloud free pixels have been averaged where the CH4 column fit error is less than 10%. The bi-monthly averages have been computed from daily gridded data. For the model simulations, only those grid boxes where measurements were available have been used to compute the averages. The comparison with the model simulations shows similarities but also differences. Very interestingly, and in qualitative agreement with the model simulations, the measurements show high CH4 mixing ratios in the September to October 2003 average over India, southeast Asia, and over the western part of central Africa, which are absent or significantly lower in the March to April average. In the model, the high columns in these regions are a result of methane emissions mainly from rice fields, wetlands, ruminants, and waste handling. The good agreement with the model simulations indicates that SCIAMACHY can detect these emission signals. However, there are also differences compared to the model simulations. For example, in the March-April average the SCIAMACHY data are a few percent lower over large parts of South America, but over large parts of the northern hemisphere the SCIAMACHY data are a few percent lower. More investigations are needed to find out what the reasons for these discrepancies are.

Figure 5 shows a quantitative comparison of the daily data with the TM5 model, the correlation coefficient and the bias for the two versions of the SCIAMACHY XCH4 data product, namely v0.4 and v0.41. As can be seen, the bias is significantly smaller for the v0.41 data, although not zero. There still appears to be a systematic bias due to the ice issue, indicating that the bias correction applied to generate the v0.41 data is not perfect. Also the correlation with the model results is better for the version 0.41 data, especially in the middle of 2003, where the correlation coefficient can be as high as 0.9.
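The day-by-day comparison shown in Fig. 5 amounts to computing, for each day, the mean difference and the linear correlation between collocated gridded satellite and model values. A minimal sketch, assuming the gridding and screening have already been done:

```python
import numpy as np

def daily_comparison(sat_gridded, model_gridded):
    """Mean bias and linear correlation between collocated gridded XCH4 from
    SCIAMACHY and the model for one day (illustrative, not the actual tool)."""
    valid = np.isfinite(sat_gridded) & np.isfinite(model_gridded)
    s, m = sat_gridded[valid], model_gridded[valid]
    bias = np.mean(s - m)              # mean satellite-minus-model difference
    corr = np.corrcoef(s, m)[0, 1]     # linear correlation coefficient
    return bias, corr
```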
Carbon dioxide (CO2)

The CO2 columns have been retrieved using a small spectral fitting window (1558-1594 nm) located in SCIAMACHY channel 6 (which is not affected by an ice layer). This spectral region covers one absorption band of CO2 and weak absorption features of water vapor. As for methane v0.4, (air or) O2-normalized CO2 columns have been derived, i.e., the dry air column averaged mixing ratios XCO2. First results of CO2 from SCIAMACHY have been presented in Buchwitz et al. (2004b). For details concerning pre-processing of the spectra (for improving the calibration), WFM-DOAS (v0.4) processing, vertical column averaging kernels, quality of the spectral fits, and a quantitative comparison with global model simulations we refer to Buchwitz et al. (2004b). An initial error analysis using simulated measurements is given in Buchwitz and Burrows (2004). According to this error analysis, errors of a few percent due to undetected cirrus clouds, aerosols, surface reflectivity, temperature and pressure profiles, etc., are to be expected. In Buchwitz et al. (2004b) it has been shown that the WFM-DOAS v0.4 CO2 columns agree with model columns within a few percent. To compensate for a not yet understood systematic underestimation, the WFM-DOAS v0.4 CO2 columns have been scaled with a constant factor of 1.27 (see Buchwitz et al., 2004b, for details). It has been shown that the spatial and temporal pattern of the retrieved column averaged mixing ratio is in reasonable agreement with the model data except for the amplitude of the variability. The measured variability is about a factor of four higher than the variability of the model data (about 6% compared to about 1.5% for the model data).

In the following we restrict the discussion to the comparison of three months of data with TM3 model simulations, as shown in Fig. 6. More results can be found in Buchwitz et al. (2004b). TM3 3.8 (Heimann and Körner, 2003) is a three-dimensional global atmospheric transport model for an arbitrary number of active or passive tracers. It uses re-analyzed meteorological fields from the National Center for Environmental Prediction (NCEP) or from the ECMWF re-analysis. The modeled processes comprise tracer advection, vertical transport due to convective clouds, and turbulent vertical transport. For SCIAMACHY, the averages have been computed using only the cloud free pixels with a CO2 retrieval error of less than 10%. Shown in Fig. 6 are only the data over land because of the problems with measuring over the ocean in the near-infrared (see Sect. 5). For SCIAMACHY, Fig. 6 shows absolute column averaged mixing ratios of CO2 in the range 335-385 ppmv. For TM3, "uncalibrated" XCO2-offsets are shown, which are in the range 0-13.7 ppmv. These offsets do not include the (current) background concentration of CO2. Therefore, not the absolute values but only the variability in space and time should be compared. The model simulations show low columns (compared to the mean column) over the northern hemisphere in July 2003 compared to higher values in May and September. This is mainly due to uptake of CO2 by the biosphere, which results in minimum columns around July. Qualitatively, the SCIAMACHY data show a similar time dependence, with also lower columns in July compared to May and September. This indicates that SCIAMACHY is able to detect the uptake of CO2 by the biosphere over the northern hemisphere when the vegetation is in its main growing season. The measured variability (±25 ppmv) is about a factor of 3-4 higher than the variability of the model data (±7 ppmv).

Over large parts of the (mostly western) Sahara, SCIAMACHY sees "plumes" of relatively high CO2 (red colored areas) not present in the model simulations. These CO2 mixing ratios, probably a few percent too high, may result from the high surface reflectivity over the Sahara (probably in combination with aerosol variability). Currently, only a constant surface albedo of 0.1 is assumed for WFM-DOAS (and only one aerosol scenario). According to the error analysis presented in Buchwitz and Burrows (2004), the error on the CO2 column is +1.4% if the albedo is 0.3 instead of 0.1 (for a solar zenith angle of 50°). The corresponding error for O2 is −3.0%. The XCO2 error is to a good approximation the difference of these errors, i.e., +4.4% or 16 ppmv. This indicates that the high values seen by SCIAMACHY over the Sahara may be explained by retrieval algorithm limitations, as the current version does not take albedo (and aerosol) variations fully into account.
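The error budget quoted in this paragraph can be reproduced with one line of arithmetic; the background value used below is an assumption for illustration:

```python
# Albedo-related XCO2 error: for a surface albedo of 0.3 instead of the assumed
# 0.1 (solar zenith angle 50 degrees), the relative column errors approximately
# subtract when the ratio CO2/O2 is formed.
err_co2 = +1.4   # percent error on the CO2 column
err_o2 = -3.0    # percent error on the O2 column
err_xco2 = err_co2 - err_o2                 # +4.4 percent
xco2_background = 370.0                     # ppmv, assumed for illustration
print(err_xco2 / 100.0 * xco2_background)   # about 16 ppmv
```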
Conclusions

Nearly one year (2003) of SCIAMACHY nadir measurements has been processed with the WFM-DOAS retrieval algorithm (v0.4, for methane also v0.41) to generate a number of data products: vertical columns of CO, CH4, and CO2. In addition, O2 columns have been retrieved to compute dry air column averaged mixing ratios for the relatively well-mixed greenhouse gases CH4 and CO2, denoted XCH4 and XCO2, respectively. The data products have been compared with independent measurements (CO from MOPITT) and model simulations (for CH4 and CO2).

For the CO columns the agreement with MOPITT is mostly within 30%. SCIAMACHY detects enhanced concentrations of CO due to biomass burning, similarly to MOPITT. SCIAMACHY seems to systematically overestimate the CO columns over large parts of the southern hemisphere, at least for certain months where MOPITT sees systematically lower columns in the southern hemisphere compared to the northern hemisphere. This discrepancy is most probably related to the difficulty of accurately fitting the weak CO lines covered by SCIAMACHY. Investigations are ongoing on how to improve the precision and the accuracy of the CO retrieval. This includes using a larger fitting window to cover more CO lines.

The WFM-DOAS Version 0.4 methane columns have a time dependent bias of up to about −15% related to ice build-up on the channel 8 detector. Using a simple bias correction, an improved methane data product (v0.41) has been generated. The comparison with model simulations shows agreement within a few percent. The comparison also reproduces the enhanced methane observed over India, southeast Asia, and central Africa. The WFM-DOAS Version 0.4 CO2 columns show agreement with model simulations within a few percent. The comparison indicates that SCIAMACHY is able to detect the low columns of CO2 resulting from uptake of CO2 over the northern hemisphere when the vegetation is in its main growing season. Over highly reflecting surfaces such as the Sahara, SCIAMACHY seems to systematically overestimate the column averaged mixing ratio of CO2 by a few percent, most probably because of limitations of the current version of the retrieval algorithm.

A summary of our findings from the comparisons with independent data shown here and elsewhere (Buchwitz et al., 2004a, b; Gloudemans et al., 2004; de Maziere et al., 2004; Sussmann and Buchwitz, 2005; Warneke et al., 2005) is given in Table 1, which shows our current best estimates of the precision and accuracy of our data products. Our future work will focus on identifying the reasons for the observed biases and on improving the accuracy of the data products. This will also comprise the use of larger spectral fitting windows, especially for CO, to improve the precision. So far only a small subset of the large spectral region covered by SCIAMACHY has been analyzed.

Fig. 2. Monthly mean CO columns over land from SCIAMACHY/ENVISAT (left) and MOPITT/EOS-Terra (right). Only the columns over land are shown because the quality of the SCIAMACHY CO columns over water is low due to the low reflectivity of water in the near-infrared. For SCIAMACHY only data have been averaged where the CO fit error is less than 60% and where the PMD1 cloud identification algorithm indicates a cloud free pixel. For MOPITT cloudy pixels are also not included.
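For reference, the monthly-mean maps of Fig. 2 amount to quality-screened averaging onto a regular grid. A minimal sketch, where the grid resolution and binning convention are our assumptions:

```python
import numpy as np

def monthly_mean_grid(lat, lon, values, keep, res=1.0):
    """Average quality-screened pixel values onto a regular lat/lon grid.
    `keep` is the boolean quality mask (cloud free, fit error below threshold);
    `res` is the assumed grid resolution in degrees."""
    nlat, nlon = int(180 / res), int(360 / res)
    summed = np.zeros((nlat, nlon))
    counts = np.zeros((nlat, nlon))
    for la, lo, v, k in zip(lat, lon, values, keep):
        if not k:
            continue
        i = min(int((la + 90.0) / res), nlat - 1)
        j = min(int((lo + 180.0) / res), nlon - 1)
        summed[i, j] += v
        counts[i, j] += 1
    return np.where(counts > 0, summed / np.maximum(counts, 1), np.nan)
```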
2014-10-01T00:00:00.000Z
2005-01-01T00:00:00.000
{ "year": 2005, "sha1": "69c8cc819d2275237d7bbda693c672200adedcfd", "oa_license": "CCBYNCSA", "oa_url": "https://www.atmos-chem-phys.net/5/3313/2005/acp-5-3313-2005.pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "ad0c5fd87f12469559ffa0f3170bfae69965eaac", "s2fieldsofstudy": [ "Environmental Science", "Physics", "Chemistry" ], "extfieldsofstudy": [] }
255841813
pes2o/s2orc
v3-fos-license
Associations of streptococci and fungi amounts in the oral cavity with nutritional and oral health status in institutionalized elders: a cross-sectional study

Disruption of the indigenous microbiota is likely related to frailty caused by undernutrition. However, the relationship between undernutrition and the oral microbiota, especially normal bacteria, is not obvious. The aim of this study was to elucidate the associations of nutritional and oral health conditions with the prevalence of bacteria and fungi in the oral cavity of older individuals. Forty-one institutionalized older individuals with an average age ± standard deviation of 84.6 ± 8.3 years were enrolled as participants. Body mass index (BMI) and oral health assessment tool (OHAT) scores were used to represent nutritional and oral health status. Amounts of total bacteria, streptococci, and fungi in oral specimens collected from the tongue dorsum were determined from quantitative polymerase chain reaction (PCR) assay results. This study followed the STROBE statement for reports of observational studies. There was a significant correlation between BMI and streptococcal amount (ρ = 0.526, p < 0.001). The undernutrition group (BMI < 20) showed a significantly lower average number of oral streptococci (p = 0.003). In logistic regression models, streptococcal amount was a significant variable accounting for "not undernutrition" [odds ratio 5.68, 95% confidence interval (CI) 1.64-19.7 (p = 0.006)]. On the other hand, participants with a poor oral health condition (OHAT ≥ 5) harbored significantly higher levels of fungi (p = 0.028). Oral streptococci were found to be associated with systemic nutritional condition and oral fungi with oral health condition. Thus, in order to understand the relationship of frailty with the oral microbiota in older individuals, it is necessary to examine oral indigenous bacteria as well as etiological microorganisms.

Previous studies have reported relationships between malnutrition and poor oral health conditions in older individuals, in which dental disease status or oral function, including dental caries, periodontitis, tooth loss, and mastication or swallowing disorders, were mainly investigated as malnutrition risks [7-10]. In investigations that used a microbial approach, a relationship of the oral microbiome, consisting of not only pathogenic agents but also non-pathogenic indigenous bacteria, with nutritional status and oral health status has been demonstrated in older subjects [11,12]. It has also been shown that the indigenous oral microbiota, composed of various microorganisms, plays an important role in maintaining homeostasis of oral and systemic health [13]. In the oral cavity, indigenous microorganisms such as streptococci grow and develop a suitable microbiota based on the environment provided by the host [14]. Once the microbiota is established, it helps to protect the host from invasion by pathogenic microorganisms and subsequent infection [15,16]. Therefore, when the normal oral microbiota is disturbed, the population of pathogenic microorganisms is easily increased, which causes oral and systemic diseases [17]. On the other hand, the oral microbiota can be affected by changes in the oral environment as well as general conditions, such as aging, immunosuppression, and medication [18-20].
Recently, oral fungi have received attention as etiological microorganisms, especially in elderly individuals, and it has been shown that older individuals are more prone to colonization by fungi due to such factors as decreased immune response, reduced salivary flow, and denture use [20-22]. Candida species, the most prevalent fungi found in the human oral cavity, are normal commensal and asymptomatic organisms under healthy conditions [23]. However, in frail older individuals or immunocompromised patients, they can overgrow in the oropharyngeal or esophageal mucosa, which causes a burning sensation, taste disorder, severe mucositis, and/or dysphagia, resulting in poor nutrition [24,25]. Furthermore, even in the absence of such oral symptoms, a relationship of general malnutrition with certain fungi in the oral cavity of elderly individuals has been shown [26]. Fungal growth in the oral cavity as well as systemic malnutrition are thought to be associated with an alteration in the prevalence of oral indigenous bacteria. Oral dryness is a condition that can enhance fungal growth, though it is not suitable for growth of indigenous oral bacteria such as oral streptococci. In addition, a decrease in oral intake of food, which is generally the start of the frailty cycle, can have a negative effect on growth of oral streptococci, whose main energy source is carbohydrates from staple foods [14,27,28]. Thus, deterioration of general health and nutritional status may disrupt the normal microbiota by increasing fungal and decreasing indigenous bacterial amounts. Such an oral microbiome composition may increase problems with the teeth or oral mucosa, causing adverse effects on systemic nutritional intake. We speculated that simultaneous measurements of oral fungal and streptococcal levels would be useful for determining systemic and oral frailty. Although detailed analyses of the oral microbiota have been reported [29,30], few studies that investigated fungi and bacteria analyzed both in the same oral specimens, due to the different gene targets associated with the molecular microbiological methods used. In the present study, the amounts of fungi, total bacteria, and commensal streptococci in tongue samples of institutionalized older individuals were determined, and systemic and oral conditions were assessed. From those results, the relationships of systemic nutritional condition and oral health status with oral microbial prevalence were investigated.

Ethical considerations

The study was performed according to the principles of the Declaration of Helsinki and was approved by The Ethics Committee of Iwate Medical University School of Dentistry (approval no. 01340). Individuals with decision-making ability were given information regarding the study protocol and enrolled as participants after providing documentation indicating consent. For those who lacked the ability to make such a decision due to cognitive decline, the study protocol information was given to a closely related family member, who provided consent for participation. Individuals who refused to participate in the study survey or undergo analysis after providing consent were subsequently excluded.

Study design and participants

This was a cross-sectional observational study conducted at a nursing institution located in Iwate Prefecture, Japan. Informed consent was obtained from 42 of the 160 residents.
Exclusion criteria included administration of antimicrobial or antifungal drugs within one month of the survey, as well as a current status of undergoing parenteral nutrition. None of the 42 residents who provided consent to participate met those criteria. However, at the time of the oral examination, one rejected participation, after which none withdrew their consent. Consequently, 41 elderly individuals (8 males, 33 females) with an average age ± standard deviation (SD) of 84.6 ± 8.3 years (range 70-105 years) completed the present study protocol (Fig. 1). None had a smoking or drinking habit. Two of the participants were undergoing special medical care, as one received dialysis treatment three times a week and the other wore a cardiac pacemaker, though neither presented outliers in regard to body mass index (BMI), oral health assessment tool (OHAT) results, or oral microbial amounts, and their results were included in the analyses. In addition, none suffered from oral cancer or an oral potentially malignant disorder (OPMD).

Variables

The primary outcomes were the amounts of oral streptococci, oral fungi, and total bacteria, which were determined using a quantitative polymerase chain reaction (PCR) method. The outcome used to represent systemic frailty was nutritional condition assessed by BMI, calculated from measurements conducted at the institution using the following formula: weight (kg)/height (m)². OHAT results were used for determination of oral frailty. To reveal relationships between those outcomes, information regarding potential confounders or effect modifiers, including gender, age, height, weight, application of special medical care, smoking and drinking habits, intake of antibacterial or anti-fungal medication, eating independence, and long-term care need level, was provided by the institution. This information was obtained from the facility care records, followed by confirmation with the caregiver in charge of each participant. The long-term care needs level is widely recognized in Japan, as it is used in the national long-term care insurance system, which was established in 2000. Following receipt of an application from a community-dweller aged 65 years or older, their care needs level is determined by the responsible administrative department of their community based on examinations of mental and physical condition assessed by certification screening personnel, as well as a diagnosis from their primary physician. Long-term care needs consist of 7 levels, including support levels 1 and 2 and care levels 1-5, apart from "not applicable". Support level 1 represents the lowest level of care needed and care level 5 the highest [31]. For the present study, the range from support level 1 to care level 5 is referred to as long-term care needs levels 1-7. Differences in dietary habits were not considered in this study, because the facility provides nearly the same food menu to all residents for each meal service. Potential confounders or effect modifiers concerning oral health were assessed by oral examinations performed by two well-calibrated dentists, including present number of teeth, present tooth status, presence of oral cancer or OPMD, tongue moisture level, tongue coating deposits, and denture use frequency. Both examiners had received training in oral assessment for perioperative care of inpatients at Iwate Medical University Hospital prior to the study.

Oral examinations

Present tooth status, including decayed and filled teeth, was assessed according to the WHO criteria.
Residual roots with or without coping treatment were recorded as a filled or decayed tooth, respectively. The presence of oral cancer or OPMD, including leukoplakia, oral lichen planus, and erythroplakia, was determined using methods recommended by the WHO [32]. The moisture level of the tongue surface was measured using an intraoral moisture meter (Mucus®; Life, Saitama, Japan) at a point 1 cm posterior from the tip of the tongue along the median groove, as noted in a previous study [33]. Measurements were performed three times in each participant and the average value was used as the Mucus score. Tongue coating deposits were evaluated using the tongue coating index (TCI). The surface of the tongue dorsum was divided into three sections vertically and three sections laterally (nine sections in total). Each section was scored from 0 to 2 (0: tongue coating not visible; 1: thin tongue coating, papillae of tongue visible; 2: very thick tongue coating, papillae of tongue not visible) and the total score of the 9 sections was recorded as the personal TCI [34]. Comprehensive oral health status was assessed using the OHAT, a validated tool for assessment of oral health that comprises eight domains, including lips, tongue, gums and tissues, saliva, natural teeth, dentures, oral cleanliness, and dental pain, each stratified into three grades (healthy, oral changes, unhealthy) [35]. In addition, at the oral examinations, denture use frequency was obtained by oral questioning of the participant or a caregiver. Denture use frequency was categorized into three levels: 1: no use, 2: used for meals, and 3: always worn except at sleeping time.

Collection and genome purification of microbial samples

Microbial samples were collected immediately after the oral examination, as follows. The dorsum of the tongue was swabbed 20 times with a sterile cotton swab, which was then immersed in 1 ml of sterile saline. Collected samples were transferred to the laboratory on ice and stored at −80 °C until genome extraction. Genomic DNA was extracted and purified from the collected samples using a Wizard® Genomic DNA Purification Kit (Promega), according to the manufacturer's instructions. The microbial samples were lysed in 50 mM EDTA, 1 mg/ml lysozyme (Thermo Fisher Scientific, Waltham, USA), 0.5 mg/ml lysostaphin (Fujifilm Wako Pure Chemical), and 1 unit/ml lyticase (Sigma-Aldrich, St. Louis, MO, USA) at 37 °C for 30 min. In addition, samples were disrupted with an ISOFECAL for Beads Beating device (Nippon Gene, Tokyo, Japan) and a μT-12 beads crusher (TAITEC, Saitama, Japan) at 3200 rpm for 5 min. Purified genomic DNA was dissolved in TE buffer (10 mM Tris-HCl, 1 mM EDTA, pH 8.0) and stored at −20 °C.

Primer sets for polymerase chain reaction (PCR) assays

A specific primer for amplifying the genome of oral streptococci was designed based on the S. mutans ATCC 25175 gene (NCBI Accession No. EF536028) and subjected to Primer-BLAST (basic local alignment search tool) (http://www.ncbi.nlm.nih.gov/tools/primer-blast/). The primer sequences (5′-3′) targeting the elongation factor Tu (EF-Tu) gene of all streptococci were CCA ATG CCA CAA ACT CGT GAAC (forward) and GAT CAC GGA TTT CCA TTT CAACC (reverse). To test the specificity of the streptococci-specific primer, strains reported previously were used [36,37]. One ng of each bacterial DNA sample was tested by amplification with the streptococci-specific primer set using KOD-Plus-Neo DNA polymerase (TOYOBO, Tokyo, Japan).
The amplification cycles were as follows: 2 min at 98 °C for initial heat activation, followed by 30 cycles of 10 s at 94 °C, 10 s at 62 °C, and 5 s at 68 °C. The PCR products were subjected to agarose gel electrophoresis using a 3.0% gel and stained with ethidium bromide, with detection performed at 302 nm with a ChemiDoc XRS Plus system (Bio-Rad Laboratories, Hercules, CA, USA). DNA amplicons were observed for the ten oral streptococci, while no PCR products were obtained with DNA samples from the other six bacterial species (Additional file 1). For amplifying the universal bacterial 16S rRNA and eukaryotic 18S rRNA genes, primer sets reported in previous studies were used [38,39]. The primer sequences (5′-3′) targeting the bacterial 16S rRNA gene were CGC TAG TAA TCG TGG ATC AGA ATG (forward) and TGT GAC GGG CGG TGT GTA (reverse), and those (5′-3′) targeting the 18S rRNA gene were TCT CAG GCT CCY TCT CCG G (forward) and AAG CCA TGC ATG YCT AAG TATMA (reverse).

Determination of microbial amounts

Stored samples were thawed at room temperature. From those samples, genomic DNA was extracted and purified by the same method used for the reference microorganisms. To determine the microbial amounts in each sample, quantitative PCR assays were performed using a Thermal Cycler Dice Real-Time System II with the following thermal cycle, as recommended for the TB Green Ex Taq (Takara Bio, Kusatsu, Japan) mixture: 95 °C for 30 s, then 40 cycles of 5 s at 95 °C and 1 min at 58 °C for 16S rRNA and 18S rRNA, and at 62 °C for oral streptococcal EF-Tu. Ten ng of genomic DNA from each sample was used as the template. Standard curves for each organism were plotted using Ct values obtained from amplification of genomic DNA extracted from S. salivarius ATCC 7073 (1.0 × 10² to 1.0 × 10⁶ CFU) and C. albicans SC 5314 (1.0 × 10¹ to 1.0 × 10⁶ CFU) cells. The numbers of S. salivarius and C. albicans cells were determined by plating culture dilutions on trypticase soy agar and YPD agar plates, respectively [38,40]. These assays were linear, with correlation coefficients between Ct values and microbial amounts of 0.969 for 16S rRNA, 0.985 for oral streptococcal EF-Tu, and 0.979 for 18S rRNA. The amounts of the microorganisms are expressed as the logarithm of the CFU number.

Statistical analysis

Age, BMI, care needs score, number of teeth, TCI, Mucus score, OHAT score, and the amounts of oral microorganisms were used as continuous variables. After the distributions of all continuous variables were tested by the one-sample Kolmogorov-Smirnov test, variables with a normal distribution (age, BMI, total bacterial amount, streptococcal amount) were used as quantitative variables. Variables that did not show a normal distribution were treated as rank variables. Gender, need for special medical care, edentulous status, and denture wearer status were treated as categorical variables. For assessing nutritional condition, the participants were classified as BMI < 20 (undernutrition) or ≥ 20 (not undernutrition), according to the criteria for Asian elderly people aged 70 years or older in the Global Leadership Initiative on Malnutrition (GLIM) [41]. For OHAT, the participants were divided into low (score < 5) and high (score ≥ 5) OHAT groups, after referring to the median for the participants in this study and a cut-off value used in a previous study [42]. For comparison tests between the groups, the Mann-Whitney U test was used for rank variables and Fisher's exact test for categorical variables.
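As a sketch, the group comparisons just described might be run as follows with scipy; the input values are illustrative, not the study data:

```python
import numpy as np
from scipy import stats

def compare_groups(rank_var_a, rank_var_b, contingency_2x2):
    """Mann-Whitney U for a rank variable between two groups and Fisher's
    exact test for a categorical variable given as a 2x2 table."""
    u_stat, p_rank = stats.mannwhitneyu(rank_var_a, rank_var_b,
                                        alternative="two-sided")
    odds, p_cat = stats.fisher_exact(contingency_2x2)
    return p_rank, p_cat

# e.g., fungal amounts in high- vs. low-OHAT groups and a 2x2 gender-by-group
# table (toy numbers):
p1, p2 = compare_groups(np.array([2.4, 3.1, 0.0]), np.array([0.0, 1.2, 0.0]),
                        [[10, 16], [5, 10]])
```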
Single correlations for combinations of quantitative variables were tested using Pearson's correlation coefficient analysis, while Spearman's rank correlation analysis was used for combinations involving rank or categorical variables after the categorical variables were transformed into binary variables. Eating independence was entered as a three-level rank variable in Spearman's rank correlation analysis and as a categorical variable in the subsequent comparisons between groups. For multivariable analysis to elucidate factors related with BMI or OHAT, multiple logistic regression analyses were performed. In the model for BMI, the dependent variable was set as "not undernutrition" (BMI ≥ 20: 1, BMI < 20: 0) and three variables (streptococcal amount, eating independence level, TCI) were selected as independent variables based on the results of the single correlation analysis. For OHAT, a regression model was constructed using high OHAT as the dependent variable (OHAT ≥ 5: 1, OHAT < 5: 0) to determine the microbial amounts indicating a poor oral health condition. As for confounding factors, the number of decayed teeth and eating independence were excluded from the regression models, since those are included in the OHAT criteria or were evaluated as part of the determination of long-term care need. As a result, long-term care need level, denture use frequency, and fungal amount were used as independent variables. SPSS for Windows software, ver. 25.0 (IBM SPSS, Tokyo, Japan), was used for all data analyses. This study followed the STROBE statement for reports of observational studies (Additional file 2).

Comparisons of health conditions and microbial amounts between participants with and without undernutrition

Fifteen of the 41 participants were determined to have undernutrition based on a BMI cut-off value of < 20; the average ± SD BMI was 17.5 ± 1.43 for those participants and 23.5 ± 3.49 for the participants in the group without undernutrition. Other than BMI, there were no significant differences between the groups for other systemic or oral health conditions (Table 1). As for the microbial measurements, only streptococcal amounts showed a statistically significant difference between the groups with and without undernutrition, with average ± SD values of 7.80 ± 0.86 for the undernutrition group and 8.67 ± 0.83 for the group without undernutrition (Fig. 2).

Comparisons of health conditions and microbial amounts between participants with and without poor oral health condition

Twenty-six of the 41 participants had a high OHAT score (≥ 5), indicating a poor oral health condition; the median and range were 6 and 5-13, respectively, for the high OHAT group and 2 and 1-4, respectively, for the low OHAT group. As for systemic and oral health conditions, significant differences were found between the groups for care needs score, rate of participants with eating independence, and number of decayed teeth (Table 2). In the microbial analyses, fungal amount was significantly higher in the high OHAT group participants with poor oral health, with median and range values of 2.45 and 0-4.72, respectively, for the group with poor oral health, and 0 and 0-3.32, respectively, for the group without poor oral health. No association with OHAT score was found for total bacterial amount or streptococcal amount (Fig. 3).

Measurements related with microbial amounts

The association between each microbial amount and the other observed values was assessed by single correlation coefficient analysis.
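Before turning to the tabulated results, both this single-correlation screening and the logistic model for "not undernutrition" described in the Statistical analysis section can be sketched as follows; the per-participant values are toy data, and the actual analyses were run in SPSS:

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

# Illustrative per-participant values (assumed toy data, not the study's):
bmi   = np.array([23.1, 18.2, 21.5, 17.9, 19.0, 22.4, 20.3, 18.8])
strep = np.array([8.9, 7.5, 8.4, 7.1, 8.6, 7.8, 8.1, 8.3])   # log CFU
tci   = np.array([4, 9, 6, 11, 5, 7, 8, 10])

# Single-correlation screening: Pearson for normally distributed pairs,
# Spearman's rank correlation otherwise (as in Table 3).
r, p_r = stats.pearsonr(bmi, strep)
rho, p_rho = stats.spearmanr(tci, strep)

# Logistic model with "not undernutrition" (BMI >= 20) as dependent variable,
# shown with streptococcal amount only, as in the final stepwise model.
y = (bmi >= 20).astype(int)
X = sm.add_constant(strep)
fit = sm.Logit(y, X).fit(disp=0)
aor = np.exp(fit.params[1])              # adjusted odds ratio per unit log CFU
ci_low, ci_high = np.exp(fit.conf_int()[1])
```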
Table 3 shows the correlation coefficients of measurements significantly related with the amount of any microorganism, in addition to OHAT score. In the examination of correlations between microbial amounts, there was a significant positive relationship between total bacterial and streptococcal amounts (r = 0.593, p < 0.001). On the other hand, fungi showed a negative relationship with both total bacteria and streptococci, though that was slight and not significant. For nutritional condition, only streptococcal amount showed a significant correlation with BMI (r = 0.420, p = 0.006). Linear correlations are shown in Fig. 4. TCI was positively related with total bacterial amount (ρ = 0.346, p = 0.029) and negatively related with fungal amount (ρ = −0.331, p = 0.037). Moreover, fungal amount showed a significant negative correlation with Mucus score (ρ = −0.383, p = 0.018).

Multiple logistic regression analyses

In the full model with "not undernutrition" as the dependent variable, streptococcal amount was the only significant variable, with an adjusted odds ratio (AOR) of 6.10 (p = 0.006). After a stepwise procedure, streptococcal amount was selected for the final model (Table 4). As a result, the AOR was 6.32, with a 95% confidence interval (CI) of 1.71-23.3 (p = 0.006) (Table 5). In the multiple regression analyses related to high OHAT (poor oral health condition), long-term care need level and fungal amount were shown to be significant variables in the full enter model (AOR: 2.60 and 2.72; p = 0.002 and 0.039, respectively). After a stepwise procedure, long-term care need level and fungal amount were selected for the final model (Table 6). For fungal amount, the AOR was 2.44, with a 95% CI of 0.987-6.04 (p = 0.053).

Discussion

Among the elderly individuals who participated in this study, the amount of oral streptococci was found to be significantly lower in the undernutrition group (7.80 ± 0.86 vs. 8.67 ± 0.83), while a significant association (r = 0.420) between the amount of streptococci and BMI was found across all participants. Furthermore, using a multiple logistic regression model in which "not undernutrition" was the dependent variable, the amount of oral streptococci was selected as a significant independent variable with a high AOR (6.32) in the final model after a stepwise procedure. BMI, which is widely used to determine whether an individual is under- or overweight, and is also employed in a variety of nutrition assessment tools as a criterion for assessing undernutrition [41,43,44], was used in this study as an indicator of systemic nutritional condition. The average BMI value for the present participants was 21.3, within the average range (20.6-22.5) reported in previous investigations of institutionalized older individuals conducted in Japan [45-47]. Thus, their nutritional status was considered to be comparable to that of the institutionalized elderly in Japan. Bacteria belonging to the genus Streptococcus are the initial inhabitants of the oral cavity, as they are generally acquired right after birth, and they play an important role in the assembly of the oral microbiota [14]. Moreover, this dominance continues into old age [48]. Most oral streptococci are not pathogenic and are thought to contribute to oral homeostasis [14], though a change in eating conditions could have an influence on the normal Streptococcus-dominant balance of the oral microbiota. Takeshita et al.
reported that dominant genera such as Streptococcus were observed in much lower proportions in tube-fed subjects as compared to those fed orally [49]. Even when an oral-environmental change is not as drastic as tube feeding, a decrease in food intake could have a negative effect on the growth of oral streptococci, since eating by mouth supplies the nutrition to microbes necessary for maintenance of the indigenous oral microbiota. Staple foods in most human populations include starch-rich crops such as rice, wheat, and potatoes. Polysaccharides composing starch are degraded by cooking with heat and by salivary amylase into mono- or disaccharides, which are easily metabolized by oral streptococci [14,50,51]. A decrease in the amount of food intake via the oral cavity restricts the energy source for streptococcal organisms as well as for the human host. Such restriction results in energy source reduction, compounded by aging-related changes in the oral cavity, represented by decreased saliva flow and an altered oral immune system [52]. Saliva contains secretory IgAs and anti-microbial peptides; however, commensal oral streptococci, such as S. mitis, S. oralis, and S. sanguinis, have become adapted to those components [53-55]. Thus, a decrease in saliva may cause Streptococcus to fail to dominate the oral cavity, consequently allowing easier growth of other microorganisms. On the other hand, fungal amount is related with oral moisture condition and oral health. Wu et al. reported that an oral care intervention in patients with end-stage cancer reduced detection of Candida on the tongue, accompanied by a decrease in OHAT score [56], which is consistent with the present results. The OHAT criteria for assessments of the lips, tongue, gum tissues, and saliva include dryness, and oral dryness is one of the significant factors related to fungal colonization in the oral cavity [57,58]. Saliva contains glycoproteins, which act to inhibit adhesion of fungi to epithelial cells [59], and also antimicrobial proteins, histatins, defensins, and other components that can suppress Candida spp. [60]. Thus, a dry condition is favorable for fungal colonization. In contrast to streptococci, fungi have an ability to grow under altered mucosal conditions. Such differences in suitable conditions for colonization may allow fungi and streptococci to grow competitively in the oral cavity. However, previous in vitro studies of the relationship between fungal and streptococcal colonization have shown contradictory results. Some demonstrated that oral streptococci contribute to colonization of oral fungi or modification of their virulence [61,62], while others have reported that a specific streptococcus could inhibit Candida colonization in the oral cavity [63-65]. In the present study, both fungal and streptococcal amounts in the oral cavities of the same subjects were determined, and the results showed a negative, though not significant, relationship between oral streptococcal and fungal abundance. The actual balance of fungal and streptococcal colonization in the human oral cavity is unknown, and the present results likewise did not show an obvious relationship. Elucidation of the prevalence of both oral fungi and indigenous streptococci in the oral cavity is important for understanding the relationship between frailty and oral health in elderly individuals. It is considered that the oral streptococcus-specific primer established in the present study will be an effective tool for future surveys. This study has some limitations.
First, it is difficult to generalize the results due to the small number of participants, all residing at a single institution. Also, direct assessments of symptoms of frailty and amount of food intake were not performed; thus, the actual relationships of the oral microbiota with frailty or food intake amount were not clearly elucidated. Furthermore, because of the cross-sectional design, changes in BMI were not recorded, though it is known that a rapid decrease in BMI is as important as a stable state for assessment of frailty in elders [37]. Additionally, some potential biases are assumed. For example, this study showed that OHAT values were lower and oral health better in participants who were able to eat independently. However, dietary management for elderly individuals by a caregiver may result in stabilization of food intake and help the individual maintain a better BMI as compared to those who eat independently based on their own preferences. Thus, additional studies with greater numbers of subjects and multiple institutions that utilize longitudinal observations are required. Based on the results of this study, in order to prevent progression of frailty in elderly individuals, greater consumption of foods with moisturizing effects may be effective for maintaining a balance of streptococci and fungi that results in a streptococcus-dominant state. In the future, it will be desirable to develop probiotic foods that inhibit fungi and promote streptococcal growth. As an indicator of the effects of such interventions, simultaneous evaluation of streptococcal and fungal amounts is useful.

Conclusion

Oral streptococcal levels were found to be associated with the general nutritional status of institutionalized older individuals, while oral fungal levels were associated with oral health status. Therefore, it is necessary to survey both oral indigenous streptococci and fungi to understand the relationship of frailty with the oral microbiota, as well as to assess the effects of interventions on the oral health of elderly individuals.
2021-10-20T15:54:29.794Z
2021-09-16T00:00:00.000
{ "year": 2021, "sha1": "c8a84ea8832aefa59e997fc409e311911b89f251", "oa_license": "CCBY", "oa_url": "https://bmcoralhealth.biomedcentral.com/track/pdf/10.1186/s12903-021-01926-0", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "23f744a57d774099aa964fee4f600a05f9501f3e", "s2fieldsofstudy": [ "Medicine", "Environmental Science", "Biology" ], "extfieldsofstudy": [] }
53985698
pes2o/s2orc
v3-fos-license
Invariant Solutions to the Strominger System on Complex Lie Groups and Their Quotients

Using the canonical 1-parameter family of Hermitian connections on the tangent bundle, we provide invariant solutions to the Strominger system on complex Lie groups. Both flat and non-flat cases are discussed in detail.

Introduction

In [Str86], Strominger analyzed heterotic superstring backgrounds with nonzero torsion by allowing a scalar "warp factor" for the spacetime metric. Consideration of supersymmetry and anomaly cancellation imposes a complicated system of PDEs on the internal manifold known as the Strominger system. Ever since then, there has been much effort devoted to finding solutions to the Strominger system. In the case of threefolds, Strominger described some perturbative solutions in [Str86]. Many years later, Li and Yau [LY05] obtained the first smooth irreducible solution to the system for $U(4)$ or $U(5)$ principal bundles on Kähler Calabi-Yau manifolds, which was further developed in [AGF12]. As for the case of non-Kähler Calabi-Yau inner spaces, the first solution was constructed by Fu and Yau [FY08]. After the foundation laid by Fu and Yau, more non-Kähler solutions were found, especially on nilmanifolds; see [FIUV09], [Gra11] and the references therein. Some local models were studied in [FTY09].

From a mathematical point of view, the Strominger system can be formulated as follows. Let $(X^n, g, J)$ be an integrable Hermitian $n$-fold (not necessarily Kähler) with holomorphically trivial canonical bundle and let $\Omega$ be a nowhere-vanishing holomorphic $(n,0)$-form on $X$. We denote the positive $(1,1)$-form associated with $g$ by $\omega$ and the curvature form of $(T_{\mathbb{C}}X, g)$ with respect to a certain Hermitian connection by $R$. In addition, let $(E, h)$ be a holomorphic vector bundle over $X$ and $F$ its curvature form with respect to the Chern connection. The Strominger system consists of the following equations (mostly people are solely interested in the $n = 3$ case):

$$F \wedge \omega^{n-1} = 0, \qquad F^{0,2} = F^{2,0} = 0, \tag{1}$$
$$\sqrt{-1}\,\partial\bar{\partial}\omega = \frac{\alpha'}{4}\left(\operatorname{tr} R \wedge R - \operatorname{tr} F \wedge F\right), \tag{2}$$
$$d\left(\|\Omega\|_\omega \cdot \omega^{n-1}\right) = 0. \tag{3}$$

From now on, we will call Equations (1), (2) and (3) the Hermitian-Yang-Mills equation, the anomaly cancellation equation and the conformally balanced equation, respectively. If $\omega$ is a Kähler metric, then Equation (3) implies that $\|\Omega\|_\omega$ is a constant. That is to say, $(X, g)$ has $SU(n)$-holonomy. From Yau's theorem [Yau78], we know that there is a unique such metric in the given cohomology class. For a general Hermitian manifold, Equation (3) implies that the rescaled metric $\tilde{\omega} = \|\Omega\|_\omega^{1/(n-1)} \cdot \omega$ is balanced, i.e., $d(\tilde{\omega}^{n-1}) = 0$, in the sense of Michelsohn [Mic82]. This condition imposes a certain mild topological restriction on the inner manifold $X$ (see [Mic82] for the intrinsic characterization of balanced manifolds), which excludes, for instance, certain $T^2$-fiber bundles over Hopf surfaces constructed by Goldstein and Prokushkin [GP04]. As $\tilde{\omega}$ is balanced, it is also Gauduchon, i.e., $\partial\bar{\partial}(\tilde{\omega}^{n-1}) = 0$. Hence, by the theorem of Uhlenbeck-Yau [UY86] and Li-Yau [LY87], Equation (1) is equivalent to the fact that $E$ is poly-stable. Consequently, the main difficulty in the Strominger system problem is to deal with the anomaly cancellation equation. As an analogue of the Kähler situation we have seen, we can think of the Strominger system as a guide to finding canonical metrics on balanced manifolds, at least for non-Kähler Calabi-Yau's, which further sheds light on understanding Reid's fantasy [Rei87].

Reid's proposal basically says that all Calabi-Yau's are connected via conifold transitions by going into the non-Kähler territory. The prototype of a conifold transition is the transformation between the resolution and the smoothing of the conifold $\{z_1^2 + \cdots + z_4^2 = 0\} \subset \mathbb{C}^4$. Therefore it is of vital importance to understand the Strominger system on the smoothing of the conifold, which can be identified with the complex semisimple Lie group $SL(2,\mathbb{C})$. In 2013, Biswas and Mukherjee published a paper [BM13], claiming that they had found an invariant solution to the Strominger system on $SL(2,\mathbb{C})$. However, it was soon pointed out by Andreas and Garcia-Fernandez [AGF14] that there was a mistake in Biswas and Mukherjee's calculation and that there is actually no solution to the Strominger system in that setting. Furthermore, Andreas and Garcia-Fernandez proposed looking for solutions to the Strominger system using the Strominger-Bismut connection. Inspired by their idea, we are able to obtain a few interesting invariant solutions to the Strominger system on complex Lie groups and their quotients.

This paper is organized as follows. In Section 2 we briefly review the theory of Hermitian connections on an almost Hermitian manifold. Section 3 focuses on the flat (i.e., $F \equiv 0$) case. Using the canonical 1-parameter family of Hermitian connections described in Section 2, we obtain a few invariant solutions to the Strominger system on complex Lie groups, giving a rather complete answer to the problem discussed by [BM13] and [AGF14]. In Section 4 we take non-flat bundles $E$ into consideration. In particular, for the $SL(2,\mathbb{C})$ case, we construct invariant solutions to the Strominger system for non-flat trivial bundles $E$ of any rank.

The Canonical 1-parameter Family of Hermitian Connections

As argued in [Str86], the Strominger system requires the connection on $T_{\mathbb{C}}X$ to be Hermitian, i.e., it preserves both the metric $g$ and the complex structure $J$. A natural choice of such a connection is the Chern connection. However, as shown in [AGF14], the ansatz used by [BM13] always yields $R = 0$, violating the anomaly cancellation. Therefore we need to consider more general Hermitian connections other than Chern. Following [Gau97], we will review the general theory of Hermitian connections and the construction of the canonical 1-parameter family of Hermitian connections.

Let $(X, g, J)$ be an almost Hermitian $n$-fold. Using the Riemannian metric $g$, we may identify any real $T_{\mathbb{R}}X$-valued 2-form $B \in \Omega^2(T_{\mathbb{R}}X)$ with a real trilinear form which is skew-symmetric with respect to the last two variables: $B(U, V, W) = \langle U, B(V, W)\rangle$ for any vector fields $U, V, W$. In the case $J$ is integrable, $d^c$ coincides with the usual operator $\sqrt{-1}(\bar{\partial} - \partial)$, and we denote the $(+1)$-eigenspace of $M$ by $\Omega^{1,1}(T_{\mathbb{R}}X)$.

Definition 2.1. A Hermitian connection $\nabla$ on $T_{\mathbb{R}}X$ is an affine connection that preserves both the metric $g$ and the complex structure $J$, i.e., $\nabla g = 0$ and $\nabla J = 0$.

It is easy to see that the space of Hermitian connections forms an affine space modelled on $\Omega^{1,1}(T_{\mathbb{R}}X)$. The canonical 1-parameter family of Hermitian connections $\nabla^t$ is defined in terms of the Levi-Civita connection $D$ and the $(2,1)+(1,2)$-part $\alpha^+$ of a 3-form $\alpha$. The canonical 1-parameter family of Hermitian connections forms an affine line in $t$; to state the precise formula, one has to identify the 3-form $(d^c\omega)^+$ with an element of $\Omega^2(TM)$. This affine line parameterizes all the known "canonical" Hermitian connections:

(a). $t = 0$, it is known as the first canonical connection of Lichnerowicz.
Reid's proposal basically says that all Calabi-Yaus are connected via conifold transitions, by going into the non-Kähler territory. The prototype of the conifold transition is the transformation between the resolution and the smoothing of the conifold $\{z_1^2 + \cdots + z_4^2 = 0\} \subset \mathbb{C}^4$. Therefore it is of vital importance to understand the Strominger system on the smoothing of the conifold, which can be identified with the complex semisimple Lie group $SL_2\mathbb{C}$. In 2013, Biswas and Mukherjee published a paper [BM13], claiming to have found an invariant solution to the Strominger system on $SL_2\mathbb{C}$. However, it was soon pointed out by Andreas and Garcia-Fernandez [AGF14] that there was a mistake in Biswas and Mukherjee's calculation and that there is actually no solution to the Strominger system in that setting. Furthermore, Andreas and Garcia-Fernandez proposed looking for solutions to the Strominger system using the Strominger-Bismut connection. Inspired by their idea, we are able to obtain a few interesting invariant solutions to the Strominger system on complex Lie groups and their quotients.

This paper is organized as follows. In Section 2 we briefly review the theory of Hermitian connections on an almost Hermitian manifold. Section 3 focuses on the flat (i.e., $F \equiv 0$) case. Using the canonical 1-parameter family of Hermitian connections described in Section 2, we obtain a few invariant solutions to the Strominger system on complex Lie groups, giving a rather complete answer to the problem discussed by [BM13] and [AGF14]. In Section 4 we take non-flat bundles $E$ into consideration. In particular, for the $SL_2\mathbb{C}$ case, we construct invariant solutions to the Strominger system for non-flat trivial bundles $E$ of any rank.

The Canonical 1-parameter Family of Hermitian Connections

As argued in [Str86], the Strominger system requires the connection on $T_{\mathbb C}X$ to be Hermitian, i.e., to preserve both the metric $g$ and the complex structure $J$. A natural choice of such a connection is the Chern connection. However, as shown in [AGF14], the ansatz used by [BM13] always yields $R = 0$, violating the anomaly cancellation. Therefore we need to consider Hermitian connections more general than Chern. Following [Gau97], we will review the general theory of Hermitian connections and the construction of the canonical 1-parameter family of Hermitian connections.

Let $(X, g, J)$ be an almost Hermitian $n$-fold. Using the Riemannian metric $g$, we may identify any real $T_{\mathbb R}X$-valued 2-form $B \in \Omega^2(T_{\mathbb R}X)$ with a real trilinear form which is skew-symmetric with respect to the last two variables: $B(U, V, W) = \langle U, B(V, W)\rangle$ for any vector fields $U, V, W$. In the case $J$ is integrable, $d^c$ coincides with the usual operator, and we denote the $(+1)$-eigenspace of $M$ by $\Omega^{1,1}(T_{\mathbb R}X)$.

Definition 2.1. A Hermitian connection $\nabla$ on $T_{\mathbb R}X$ is an affine connection that preserves both the metric $g$ and the complex structure $J$, i.e., $\nabla g = 0$ and $\nabla J = 0$.

It is easy to see that the space of Hermitian connections forms an affine space modelled on $\Omega^{1,1}(T_{\mathbb R}X)$. The canonical 1-parameter family of Hermitian connections $\nabla^t$ is defined by modifying the Levi-Civita connection $D$ by an explicit torsion term built from $(d^c\omega)^+$, where $\alpha^+$ denotes the $(2,1) + (1,2)$-part of a 3-form $\alpha$. The canonical 1-parameter family of Hermitian connections forms an affine line; in the precise relation, we have to identify the 3-form $(d^c\omega)^+$ with an element of $\Omega^2(TM)$. This affine line parameterizes all the known "canonical" Hermitian connections:

(a) $t = 0$: known as the first canonical connection of Lichnerowicz.
(b) $t = 1$: known as the second canonical connection of Lichnerowicz. When $J$ is integrable, it is nothing but the Chern connection.
(c) $t = -1$: this is the Strominger-Bismut connection.
(d) $t = 1/2$: it has been called the conformal connection by Libermann.
(e) $t = 1/3$: this is the Hermitian connection that minimizes the norm of the torsion tensor.

When $X$ is Kähler, this line collapses to a single point, i.e., the Levi-Civita connection. In our case, $J$ is always integrable, thus $(d^c\omega)^+ = d^c\omega$, and the expression for $\nabla^t$ simplifies accordingly.

Flat Invariant Solutions

In this section, we will solve the Strominger system on complex Lie groups using the ansatz proposed in [BM13]. As we will see, this is the most natural and symmetric solution one can expect. The name "flat" comes from the assumption that the extra bundle $E$ is flat, i.e., $F \equiv 0$. Under such an assumption, the Hermitian-Yang-Mills equation (1) is satisfied automatically, and therefore the Strominger system reduces to the conformally balanced equation (3) together with the reduced anomaly cancellation equation

$$\sqrt{-1}\,\partial\bar\partial\omega = \frac{\alpha'}{4}\operatorname{Tr} R \wedge R. \tag{4}$$

Now we assume that $X$ is a complex Lie group and let $e \in X$ be the neutral element. Obviously $X$ is holomorphically parallelizable, hence it has trivial canonical bundle. Given any Hermitian metric on $T_eX$, we can translate it to get a left-invariant Hermitian metric on $X$. Let us still denote the associated Hermitian form by $\omega$. It follows that under such a metric, $\|\Omega\|_\omega$ is a constant and the conformally balanced equation (3) dictates that $\omega$ is a balanced metric. The straightforward calculation from [AG86] shows that $\omega$ is balanced if and only if $X$ is unimodular. Moreover, this condition is independent of the choice of the left-invariant metric. From now on we will assume that $X$ is a unimodular complex Lie group and $\omega$ is left-invariant. So Equation (3) holds and we only have to consider the reduced anomaly cancellation equation (4). The new idea here is to use the canonical 1-parameter family of Hermitian connections described in Section 2 to compute $R$.

In order to do that, let us fix some notation first. Let $\mathfrak g$ be the complex Lie algebra associated with $X$ and let $e_1, \dots, e_n \in \mathfrak g$ be an orthonormal basis under the given left-invariant metric. In addition, we define the structure constants $c^k_{ij}$ in the usual way: $[e_i, e_j] = c^k_{ij} e_k$. Let $\{e^i\}_{i=1}^n$ be the holomorphic 1-forms on $X$ such that $e^i(e_j) = \sqrt{2}\,\delta^i_j$. Then we can express the Hermitian form $\omega$ in terms of the $e^i$; furthermore, the Maurer-Cartan equations express each $de^k$ through the structure constants.

Now we shall compute the canonical 1-parameter family of Hermitian connections $\nabla^t$. We may trivialize the holomorphic tangent bundle $T_{\mathbb C}X$ by $\{e_i\}_{i=1}^n$. Under such a trivialization, the Chern connection $\nabla^1$ is simply $d$; writing $\nabla^t = d + A^t$ and identifying $A^t \in \Omega^2(T_{\mathbb R}X)$ with an element of $\Omega^1(\operatorname{End} T_{\mathbb C}X)$, we can express $A^t$ in terms of the following matrices. Here, $A_{ki}$ is the skew-symmetric matrix whose $(i,k)$-entry is 1 and whose $(k,i)$-entry is $-1$, $E_{ki}$ is the matrix whose $(k,i)$-entry is 1, and $S_{ki}$ is the symmetric matrix whose $(i,k)$- and $(k,i)$-entries are both 1. If $k = i$, $S_{kk}$ is the matrix with $(k,k)$-entry equal to 2. All other entries not mentioned above vanish. It is straightforward to verify that the above expression gives exactly the claimed connection forms. Consequently, $R^t = dA^t + A^t \wedge A^t$. As $\operatorname{Tr} A^t \wedge A^t = 0$, it follows directly from unimodularity that the first Chern form vanishes.

Remark. From the expression of $A^t$, we know that, as an element of $\Omega^1(\operatorname{End} T_{\mathbb C}X)$, $A^t$ does not depend on the left-invariant metric we begin with. It follows that $R^t = dA^t + A^t \wedge A^t$ does not depend on the metric either.
However, the canonical 1-parameter family of Hermitian connections does depend on the choice of the metric.

Now we want to compute $\operatorname{Tr} R^t \wedge R^t$. Expanding $R^t = dA^t + A^t \wedge A^t$, it is well-known that the last term $\operatorname{Tr} A^t \wedge A^t \wedge A^t \wedge A^t$ is 0. Let us compute the first two terms separately. The first term is an explicit sum of wedge products plus its conjugate, and its vanishing is governed by the following proposition.

Proposition 3.1. $\sum_{i,j} de^i \wedge de^j \cdot \operatorname{Tr}\!\left(\operatorname{ad}(e_i)^T \operatorname{ad}(e_j)^T\right) = 0$.

Proof. We first make two observations. For the dimension in which physicists are most interested, i.e., $n = 3$, Proposition 3.1 is trivially true since $de^i \wedge de^j = 0$. If $X$ is nilpotent (hence unimodular), the above equation holds because $\operatorname{Tr}(\operatorname{ad}(e_i)^T \operatorname{ad}(e_j)^T) = \kappa(e_i, e_j) = 0$, where $\kappa$ is the Killing form. For the general case, we use Equation (6) to expand the LHS, and we only have to prove an identity in the structure constants with indices $i, j, r, s, a, b, c, d$. (We adopt the Einstein summation convention here.) Like the Riemannian curvature tensor, $F_{abcd}$ has many symmetries, which follow straightforwardly from the definition. It follows that Proposition 3.1 is equivalent to the Bianchi identity $F_{abcd} + F_{acdb} + F_{adbc} = 0$. Using the Jacobi identity $c^i_{jk} c^r_{il} + c^i_{kl} c^r_{ij} + c^i_{lj} c^r_{ik} = 0$ repetitively, cyclically permuting the indices $(b, c, d)$ and summing up, after a rearrangement of terms we obtain the desired identity.

As a consequence of Proposition 3.1, the first term simplifies. Now we proceed to compute the second term. Like before, we have the following propositions.

Proposition 3.2. $\sum_{i,j,k} de^i \wedge e^j \wedge e^k \cdot \operatorname{Tr}\!\left(\operatorname{ad}(e_i)^T \operatorname{ad}[e_k, e_j]^T\right) = 0$.

Proof. This is actually equivalent to Proposition 3.1.

The proof of Proposition 3.3 is very similar to that of Proposition 3.1 but with less complexity, and Proposition 3.4 follows from a direct calculation. Combining Propositions 3.2, 3.3 and 3.4, we get an expression for $\operatorname{Tr} R^t \wedge R^t$ in terms of $de^i \wedge de^j \cdot \operatorname{Tr}(\operatorname{ad}(e_i)^T \operatorname{ad}(e_j))$.

Corollary 3.5. When we choose the Hermitian connection to be either the Chern connection ($t = 1$) or the first canonical Lichnerowicz connection ($t = 0$), we have $\operatorname{Tr} R \wedge R = 0$ and thus the Strominger system has no solution using our ansatz. This generalizes the result in [AGF14].

Remark. From the calculation above, it is tempting to conjecture that $\operatorname{Tr}(R^t)^k$ is always a real $(k,k)$-form. However, $R^t$ itself in general contains both $(2,0)$ and $(0,2)$ parts, and therefore it does not satisfy the so-called equation of motion derived from the heterotic string effective action.

The anomaly cancellation equation (2) thus reduces to an equation in the $de^i$, which we label Equation (8). It seems that, in general, whether Equation (8) has a solution or not is not easy to answer. However, we have the following result.

Theorem 3.6. If we further assume that $X$ is semisimple, then there is a unique left-invariant Hermitian metric up to scaling, i.e., the one coming from the Killing form, such that our ansatz of solution does exist. If we pick $t < 0$, for instance the Strominger-Bismut connection, we obtain solutions with $\alpha' > 0$; if we pick $t > 0$ with $t \neq 1$, we get solutions with $\alpha'$ negative.

Proof. When $X$ is semisimple, $\{de^1, \dots, de^n\}$ are linearly independent 2-forms. Therefore Equation (8) requires that $\operatorname{Tr}(\operatorname{ad}(e_i)^T \operatorname{ad}(e_j)) = c\,\delta_{ij}$ for some positive $c$. This determines the metric uniquely.

We can say a little more about Equation (8) in complex dimension 3. Actually, we can classify all the 3-dimensional complex unimodular Lie algebras.

Proposition 3.7. Let $\mathfrak g$ be a 3-dimensional unimodular Lie algebra over $\mathbb C$; then $\mathfrak g$ must be isomorphic to one of the following: (a) $\mathfrak g$ is abelian; (b) $\mathfrak g$ is nilpotent; (c) $\mathfrak g$ is solvable; (d) $\mathfrak g$ is semisimple.

Remark. Cases (a), (b), (c) and (d) correspond to the abelian, nilpotent, solvable and semisimple Lie algebras, respectively. They are listed on page 28 of [Kna02].
For their Lie groups, (a) and (d) are obvious, (b) corresponds to the Heisenberg group, and (c) corresponds to the complexification of the group of rigid motions on $\mathbb R^2$. For case (a), any invariant metric is actually Kähler and our ansatz solves the Strominger system because both sides of the anomaly cancellation equation (2) are 0. Case (d) has been treated in Theorem 3.6, so we only discuss the other two situations.

For case (b), $Z(\mathfrak g) = [\mathfrak g, \mathfrak g]$ is 1-dimensional, and we may assume it is spanned by $e_1$. Under such an assumption, the only nontrivial structure constant is $c^1_{23} = -c^1_{32} \neq 0$; all others are 0. It follows that $\operatorname{ad}(e_1) = 0$ and $de^2 = de^3 = 0$. One can calculate easily that $\operatorname{Tr} R^t \wedge R^t = 0$ while $R^t \neq 0$ for $t \neq 1$. This gives an example of a non-flat connection on a bundle such that all the Chern forms are 0. In particular, Equation (8) has no solution in this case.

For case (c), fix a basis $\{h, x, y\}$ of $\mathfrak g$. We then have the following formulae for computing exterior derivatives: $de^1$ and $de^2$ are determined by the structure constants, while $de^3 = 0$. It follows that $de^1$ and $de^2$ are linearly independent and

$$\sqrt{-1}\,\partial\bar\partial\omega = \frac{1}{2}\left(de^1 \wedge \overline{de^1} + de^2 \wedge \overline{de^2}\right).$$

It follows that the anomaly cancellation equation has a solution if and only if $\operatorname{ad}(e_1)$ and $\operatorname{ad}(e_2)$ are orthonormal (up to a positive scalar) under the metric $\langle x, y\rangle = \operatorname{Tr}(x \bar y^T)$. Or, equivalently, under the induced metric, $\operatorname{ad}(h) : [\mathfrak g, \mathfrak g] \to [\mathfrak g, \mathfrak g]$ is unitary (up to a positive scalar); note that this condition does not depend on the choice of $h$. To summarize, we have the following result:

Theorem 3.8. For any Lie group with Lie algebra (b), there is no flat invariant solution to the Strominger system. For any Lie group with Lie algebra (c), with the basis $\{h, x, y\}$ chosen as above: as long as $x$ and $y$ are orthogonal to each other in the Hermitian metric, our ansatz solves the Strominger system, with $\alpha' > 0$ for $t < 0$ and $\alpha' < 0$ for $t > 0$, $t \neq 1$.

Remark. As our ansatz is invariant under left translation, solutions to the Strominger system on $X$ descend to solutions on the quotient $\Gamma\backslash X$ for any discrete closed subgroup $\Gamma$. By Wang's classification theorem [Wan54], such quotients include all the compact complex parallelizable manifolds.

Non-flat Invariant Solutions

In this section, we consider invariant solutions to the Strominger system with nontrivial $F$. Let $\rho : X \to GL_n\mathbb C$ be a faithful holomorphic representation; then $X$ naturally acts on $\mathbb C^n$ from the right by setting $v \cdot g := \overline{\rho(g)}^T v$ for $g \in X$, which we abbreviate to $\bar g^T v$. Consider the following Hermitian metric $H$ defined on the trivial bundle $E = X \times \mathbb C^n$: at a point $g \in X$, the metric is given by an expression in a fixed positive Hermitian matrix $B = \bar B^T$ and arbitrary column vectors $v, w \in \mathbb C^n$. Choose the standard basis of $\mathbb C^n$ as a holomorphic trivialization, and let us compute the curvature $F$ of $H$ with respect to the Chern connection. Now $F^{0,2} = F^{2,0} = 0$ is satisfied automatically, and by the formula $F = \bar\partial(H^{-1}\partial H)$, we get an explicit expression; notice that $g^{-1}\partial g$ is the Maurer-Cartan form.

Remark. It is well-known that all the irreducible representations of $SL_2\mathbb C$ are generated by the standard representation on $\mathbb C^2$. Therefore, from any solution above, we can produce solutions to the Strominger system with trivial bundle $E$ of arbitrary rank. In addition, this argument generalizes to many other semisimple Lie groups.

Remark. Some new phenomena occur when $\mathfrak g$ is the Heisenberg algebra (b). For example, let $X = X_H$ be the Heisenberg group. It can be checked directly that the anomaly cancellation equation (2) can always be solved by choosing $\alpha' < 0$ properly. However, the Hermitian-Yang-Mills equation (9) is hard to solve.
For example, let $\rho_a : X_H \to GL_3\mathbb C$ be the representation given by "conjugation by $a$", i.e., $\rho_a(g) = a g a^{-1}$ for some $a \in GL_3\mathbb C$, $g \in X_H$. We would like to find $a$ such that Equation (9) has a solution. If we think of the entries of $a$ as unknowns, then Equation (9) can be rewritten as a system of degree-6 real polynomial equations, which is not easy to solve. In the very simple case $B = a = \operatorname{id}$, we can never find $e_1, e_2, e_3$ such that the Hermitian-Yang-Mills equation holds.

Remark. Let $\Gamma$ be a discrete subgroup of $X$; then $E' = \mathbb C^n \times_\Gamma X$ can be naturally viewed as a vector bundle over $X' = \Gamma\backslash X$. Moreover, one can see from our construction that the metric $H$ descends to a Hermitian metric on the vector bundle $E'$. Unfortunately, it seems that there is no natural holomorphic structure on the total space of $E'$, and we cannot naïvely obtain solutions of the Strominger system on $X'$ in this way. If we modify the right action of $X$ on $\mathbb C^n$ to $v \cdot g = \rho(g)^T v$, then the holomorphic structure does descend to $E'$ and we get a holomorphic vector bundle $E'$ over $X'$. However, the price to pay is that the similarly constructed metric on $E$ turns out to be flat, and it reduces to the situation of Section 3.
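For reference, the standard Chern-connection formulas underlying the curvature computation above, in a holomorphic trivialization (the name $A$ for the connection form is ours):

```latex
% Chern connection and curvature of a Hermitian metric H on a
% holomorphically trivialized bundle:
\[
  A = H^{-1}\partial H, \qquad
  F = \bar\partial A = \bar\partial\bigl(H^{-1}\partial H\bigr).
\]
% F is automatically of type (1,1), so F^{2,0} = F^{0,2} = 0 holds for
% any such metric H, as noted in the text.
```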
The prevalence of conduct disorders among young people in Europe: A systematic review and meta-analysis

Introduction: This systematic review estimates the pooled prevalence (PP) of Conduct Disorder (CD) among 5-to-18-year-old young people (YP) living in Europe, based on prevalence rates established in the last five years (LFY).

Objectives: Trends of prevalence rates across countries, gender and level of education were analysed. The random effects pooled prevalence rate (REPPR) for CD was calculated.

Methods: A search strategy was conducted on three databases. Studies were also identified from reference lists and grey literature. Eligible studies were evaluated for reliability, validity and bias, and REPPRs were calculated.

Results: The European REPPR for CD is calculated at 1.5% (Figure 1). The REPPR among males is 1.8%, whereas the rate among females is 1.0% (Figure 2). The prevalence rate of CD in primary school children is 1.4 times lower than that in secondary school children.

Conclusions: Gender, culture and socioeconomic inequality may contribute towards diagnostic inequality and prevalence differences. It is recommended that these aspects are addressed, and that routine screening and early intervention services are developed.

Disclosure: No significant relationships.

Introduction: Bipolar disorder in children and adolescents is distinguished by a variable and complex clinical expression. Mood is difficult to assess, mood symptoms are often masked, and signs of disorganization may be in the limelight. This can be more difficult when adolescents have intellectual disability (ID).

Objectives: This work aims to describe diagnostic and therapeutic features of bipolar disorder in adolescents with ID.

Methods: Case reports of five patients diagnosed with bipolar disorder associated with ID, all seen and treated in the child and adolescent psychiatry department of Razi Hospital, in Tunis.

Results: The study focused on three girls and two boys, all with mild to moderate ID. Four patients had a psychiatric family history of bipolar disorder and ID. Only one patient had been followed since childhood for mixed ADHD. The average age of onset of bipolar disorder was 14 years. Four cases began with a manic episode; the fifth began as a depressive disorder followed by a manic shift under sertraline. Only one case had a rapidly favorable course, under 10 mg of olanzapine, without any recurrence or relapse during 18 months of follow-up. Another case was slower but also favorable, under 10 mg of olanzapine. Two patients were resistant to the usual treatments: they did not improve on conventional mood stabilizers, on various antipsychotic molecules, or on combinations of two mood stabilizers plus an antipsychotic. One of them benefited from a combination of clozapine and lithium, with an excellent response.

Conclusions: Bipolar disorder comorbid with ID in adolescents is a difficult diagnostic entity and particularly hard to manage.

Disclosure: No significant relationships.
Introduction: Children in a prodromal state manifesting as truancy or social isolation (hikikomori) often complain of problems that are physical in nature and are subject to significant changes. We developed the Child Psychosis-Risk Screening System (CPSS), which incorporates childhood psycho-behavioral characteristics revealed through a retrospective survey of schizophrenia patients into its algorithm.

Objectives: Our research aimed to test the risk identification of pediatric and psychiatric clinic outpatients using the CPSS.

Methods: We conducted an epidemiological study involving 204 outpatients between the ages of 6 and 14 years who had been examined at a pediatric or psychiatric clinic, using the CBCL and clinical data from medical charts. Logistic regression analysis and t-tests were performed using each clinical data variable to clarify the CPSS risk calculated from the CBCL data and its contributing factors.

Results: The results of the logistic regression analysis demonstrated that the diagnostic category (physical illness or DSM-5 diagnosis) and chief complaint did not contribute to differentiating between the high-risk and low-risk groups. Meanwhile, the environmental factors of "abuse" and "social isolation" did contribute to the discrimination of the two groups.

Conclusions: The fact that the diagnostic category during childhood does not contribute to the discrimination of the high-risk group warrants attention. It is possible that the high-risk group only had a latent endophenotype that had not yet manifested during this period. The factors suggested to have an association with the high-risk group may reflect activators and the dynamic state of the critical period for psychosis.

Introduction: Attention deficit hyperactivity disorder (ADHD) is the most common neurodevelopmental disorder in childhood. ADHD is a risk factor for the development of overweight and obesity. One neuropsychological factor that may play a prominent role in the relationship between ADHD and obesity is executive functioning.

Objectives: The aim of this study is to investigate the relationship between comorbid obesity/overweight and cold executive functions, verbal short-term memory, and learning in children with ADHD. This is the first study to examine the relationship between verbal short-term memory and learning and obesity in patients with ADHD.

Methods: This study was conducted with 70 patients with ADHD and 30 healthy controls. Patients diagnosed with ADHD were divided into two groups according to body mass index (BMI): <85th percentile and ≥85th percentile. Cold executive functions were evaluated by the Stroop Test (ST) and the Cancellation Test (CT). The Serial Digit Learning Test (SDLT) was administered to measure verbal short-term memory and learning capacity.
To evaluate the severity of ADHD objectively, parents completed the Conners' Parent Rating Scale-Revised Short Version (CPRS-RS).

Results: The ST, SDLT and CT scores were significantly lower in both ADHD groups than in the control group. The CPRS-RS subscale scores were significantly higher in both ADHD groups than in the control group. There was no statistically significant difference in ST, SDLT and CT scores or in CPRS-RS subscale scores between the two ADHD groups.

Conclusions: This study shows that overweight/obesity comorbid with ADHD was not associated with cold executive functions, verbal short-term memory, learning, or ADHD symptom severity.

Disclosure: No significant relationships.

Keywords: executive functions; obesity; learning; attention deficit hyperactivity disorder
A Spatial–Spectral Joint Attention Network for Change Detection in Multispectral Imagery

Change detection determines and evaluates changes by comparing bi-temporal images, which is a challenging task in the remote-sensing field. To better exploit the high-level features, deep-learning-based change-detection methods have attracted researchers' attention. Most deep-learning-based methods only explore the spatial-spectral features simultaneously. However, we argue that the key spatially changed areas should be more important, and that attention should be paid to the specific bands which can best reflect the changes. To achieve this goal, we propose the spatial-spectral joint attention network (SJAN). Compared with traditional methods, SJAN introduces a spatial-spectral attention mechanism to better explore the key changed areas and the key separable bands. To be more specific, a novel spatial-attention module is designed to extract the spatially key regions first. Secondly, a spectral-attention module is developed to adaptively focus on the separable bands of land-cover materials. Finally, a novel objective function is proposed to help the model measure the similarity of learned spatial-spectral features from both the spectrum amplitude and angle perspectives. The proposed SJAN is validated on three benchmark datasets, and comprehensive experiments demonstrate its effectiveness.

Introduction

Different images of the same location acquired at two or more different times are referred to as multi-temporal images. The variations between multi-temporal remote-sensing images can be identified by change detection. A change-detection method determines whether each pixel in a scene has changed by extracting changed areas from multi-temporal images. Multispectral images have numerous bands, ranging from visible to infrared light, and their extensive spectral information allows for reliable object identification. As a result, multispectral change detection has found widespread application in the fields of environmental monitoring [1-4], resource exploration [5-7], urban planning [8-10], and natural catastrophe assessment [11-13].

The two primary categories of change-detection methods are traditional and deep-learning-based methods. For low-resolution images, the earliest change-detection methods mostly used pixels as the monitoring unit and carried out pixel-by-pixel difference analysis. With the development of machine-learning algorithms and the increase in spectral resolution, the unit of change detection shifted from pixels to objects. Prior to 2010, the majority of these technologies were traditional change-detection methods, which consist of algebra-based, image-transform-based, and classification-based methods, among others [14]. Change detection based on algebraic and image transforms detects changes by applying transformations and operations to image pixels, while the post-classification method classifies two co-registered temporal remote-sensing images separately and then compares the classification results to obtain change-detection maps. Although the above traditional methods have made important contributions to the development of multispectral change detection, most of them still use manually designed features and rely on professional visual observers for manual discrimination.
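As a concrete illustration of the pixel-by-pixel difference analysis these early methods build on, here is a minimal sketch (ours, not from the paper; the mean-plus-k-sigma threshold is one common heuristic):

```python
import numpy as np

def difference_change_map(img1: np.ndarray, img2: np.ndarray, k: float = 2.0) -> np.ndarray:
    """Naive algebra-based change detection on co-registered images.

    img1, img2: arrays of shape (H, W, bands).
    Returns a boolean (H, W) map; True marks 'changed' pixels.
    """
    # Per-pixel change magnitude across bands (CVA-style vector norm).
    diff = np.linalg.norm(img1.astype(np.float64) - img2.astype(np.float64), axis=-1)
    # Simple global threshold: mean + k * std of the magnitude image.
    thresh = diff.mean() + k * diff.std()
    return diff > thresh
```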
Deep learning can automatically extract abstract features and obtain spatial-spectral feature representations, which can effectively improve the accuracy of change-detection tasks. Therefore, deep-learning-based change detection has become a popular research direction. With the continuous improvement of satellite-remote-sensing image resolution, change-detection methods based on deep learning have also made a qualitative leap in the extraction of multispectral image features. Various network structures have been applied in the field of change detection, such as deep-belief networks (DBN) [15], stacked auto-encoders (SAE) [16], convolutional auto-encoders (CAE) [17], and PCANet [18].

Some methods aim at extracting spatial-spectral features to obtain better change-detection performance. Zhan et al. [19] proposed a three-way spectral-spatial convolutional neural network (TDSSC), which used convolution to extract spectral features along the spectral direction and spectral-spatial features along the spatial direction to fully extract discriminative HSI features, improving the accuracy of change detection. Zhang et al. [20] proposed a novel unsupervised change-detection method based on spectral transformation and joint spectral-spatial feature learning (STCD). It overcame the challenge of the same object appearing with different spectra across spatial-temporal periods and improved the robustness of the change-detection method. Liu et al. [21] introduced a dual-attention module (DAM) to exploit the interdependencies between channels and spatial positions. The method could obtain more discriminative features, and the authors conducted experiments on the WHU building dataset. By simultaneously evaluating the spatial-spectral-change information, Zhan et al. [22] constructed an unsupervised scale-driven change-detection framework for VHR images. The system generated a robust binary change map with high detection precision by fusing deep feature learning and multiscale decision fusion. To address the problem of "the same object with different spectra", Liu et al. [23] presented an unsupervised spatial-spectral feature learning (FL) method, which extracted hybrid spectral-spatial change characteristics through a 3D convolutional neural network with spatial and channel attention. For change detection in very-high-resolution (VHR) images, Lei et al. [24] proposed a network based on difference enhancement and spatial-spectral nonlocality (DESSN). To enhance the object's edge integrity and internal tightness, the spatial-spectral nonlocal (SSN) module in DESSN was proposed to depict large-scale object variations throughout change detection by incorporating multiscale spatial global features. The above-mentioned methods try to extract spatial-spectral features; however, they pay little attention to the subtle features of changed areas.

With the widespread use of attention mechanisms, change-detection methods based on attention modules have been proposed. To alleviate the problems of ineffective detection of small change areas and the poor robustness of simple network structures, Wang et al. [25] proposed an attention-mechanism-based deep-supervision network (ADS-Net) to obtain the relationships and differences between the features of bi-temporal images. To overcome the problem of insufficient resistance of current methods to pseudo-changes, Chen et al.
[26] proposed dual attentive fully convolutional Siamese networks (DAS-Net) to capture long-distance dependencies in order to obtain more discriminative features. Chen et al. [27] presented a spatial-temporal attention-based change-detection method (STA), which models the spatial-temporal relationship with a self-attention module. Chen et al. [28] proposed a novel network that pays more attention to the regions with significant changes and improves the model's anti-noise capability. Ma et al. [29] presented a dual-branch interactive spatial-channel collaborative attention enhancement network (SCCA-net) for multi-resolution classification. In this network, a local-spatial-attention module (LSA module) was developed for PAN data to emphasize the advantages of spatial resolution, and a global-channel-attention module (GCA module) was developed for MS data to improve the multi-channel representation. Chen et al. [30] proposed a dynamic receptive temporal attention module by exploring the effect of the temporal-attention dependence range size on change-detection performance, and introduced Concurrent Horizontal and Vertical Attention (CHVA) to improve the accuracy on strip entities.

The above deep-learning-based change-detection methods achieve good results, and some of them also extract spatial-spectral features. However, they do not attend to the key changed areas in the spatial dimension or the separable bands of land-cover materials in the spectral dimension when extracting spatial-spectral features. When the scene is complex, the effectiveness of the derived spatial-spectral features depends on identifying the key changed areas and the separable bands of land-cover materials. Moreover, the above-mentioned deep-learning-based change-detection methods only measure the similarity of learned spatial-spectral features through the spectral amplitude and do not consider the influence of the spectral angle. The spectral angle is an important index for evaluating spectral similarity. To address the above-mentioned problems, we propose the spatial-spectral joint attention network (SJAN). SJAN contains a spatial-attention module to focus on the key changed areas and a spectral-attention module to explore the separable bands when extracting spatial-spectral features. In order to better measure the similarity of learned spatial-spectral features, we measure it not only from the spectral-amplitude perspective but also from the spectral-angle perspective. As a result, the proposed SJAN can achieve better performance. The main contributions of our proposed SJAN method are as follows:

(1) A spatial-spectral attention network is proposed to extract more discriminative spatial-spectral features, which can capture the spatially key changed areas through the spatial-attention module and explore the separable bands of materials through the spectral-attention module.
(2) A novel objective function is developed to better distinguish the differences between the learned spatial-spectral features, which simultaneously calculates the similarity of learned spatial-spectral features from the spectrum amplitude and angle perspectives.
(3) Comprehensive experiments on three benchmark datasets indicate that the proposed SJAN can achieve superior performance compared to other state-of-the-art change-detection methods.

Change Detection

Change detection is the process of quantitatively analyzing and characterizing surface changes from remote-sensing data of different time periods.
Remote-sensing change detection (CD) is the process of identifying "significant differences" between multi-temporal remote-sensing images. Most current change-detection methods can be classified into two main categories: traditional methods and deep-learning-based methods.

Traditional change-detection methods include algebra-based, image-transform-based, and classification-based methods [14]. Algebra-based change-detection methods include change vector analysis (CVA) [31], image differencing, image comparison, and image grayscale differencing, which perform mathematical operations (e.g., differencing, comparing) on the images to obtain the change map. CVA measures the amount of change by performing a difference operation on the data from each band of the two images. However, as the number of bands increases, it becomes more and more difficult to determine the change types and select the change threshold. Change detection based on image transformation uses transformations of image pixels to detect changes, including principal component analysis (PCA) [32], independent component analysis (ICA), and multivariate alteration detection (MAD) [33]. The PCA algorithm can detect change information and clearly point out the changed region, but it is susceptible to noise and requires data preprocessing. The MAD method can effectively remove correlation, but noise has a significant impact on the results and the threshold needs to be adjusted manually. Morton [34] proposed the IR-MAD algorithm in combination with the EM algorithm to alleviate these issues; it can automatically obtain the change threshold. Classification-based change-detection algorithms involve post-classification comparisons, unsupervised change-detection methods, and artificial-neural-network-based methods. The main advantage of these methods is that they provide accurate information on changes independent of external factors such as atmospheric disturbances. Radhika and Varadarajan proposed a classification-based detection method using neural networks that provides better accuracy but can only be applied to small images [35]. Another novel unsupervised SVD-based trace-function clustering algorithm, which performs well in land-cover classification, was proposed by Vignesh et al.; the algorithm grouped images and used them as a training set for the ensemble minimization learning algorithm (EML) [36].

With the booming development of deep-learning techniques, many deep-learning-based change-detection algorithms have been proposed. For example, Liu et al. [37] proposed a deep convolutional coupling network (SCCN). The input images are connected to the two sides of the network and transformed into a feature space, and the distances of the feature pairs are calculated to generate the difference map. Zhan et al. [38] proposed a deep concatenated fully convolutional network (FCN), which contains two identical networks sharing the same weights, each independently generating feature maps for its spatial-temporal image. It exploited more spatial relationships between pixels and achieved better results. Mou et al. [39] proposed a novel recurrent convolutional neural network (RCNN) architecture, which combines CNN and RNN to form an end-to-end network that can be trained to learn joint spectral-spatial-temporal feature representations in a unified framework for multispectral image-change detection.
Zhang et al. [40] presented a spectral-spatial joint learning network (SSJLN), which jointly learns spectral-spatial representations and deeply explores the implicit information of the fused features. The direction of change detection is still well worth investigating.

Attention Mechanism

The attention mechanism aims to simulate human attention behavior in reading, listening, and seeing. The attention mechanism has been proved helpful for computer-vision tasks [41,42]. The performance of computer-vision tasks is effectively improved by combining the attention mechanism with deep networks; therefore, the attention mechanism has been widely used in computer-vision fields, such as image classification and semantic segmentation, in recent years [43-46]. At first, the attention mechanism was usually applied to convolutional neural networks. Fu et al. [47] proposed a CNN-based attention mechanism, which recursively learns discriminative region attention and region-based feature representation at multiple scales in a mutually reinforcing manner, and proved its effectiveness on fine-grained problems. Hu et al. [48] proposed the Squeeze-and-Excitation (SE) module, which enables the network to focus on the relationships between channels, automatically learning the importance of different channel features and improving the accuracy of image classification. Woo et al. [49] proposed the Convolutional Block Attention Module (CBAM), which introduced a spatial-attention mechanism to focus on the spatial features of the image in addition to the essential channel features, enhancing network stability and image-classification accuracy. Misra et al. [50] proposed a triplet attention mechanism to establish inter-dimensional dependencies, which can be embedded into standard CNNs for different computer-vision challenges.

Network Architecture

The Siamese network has two branching networks, and both branches have the same architecture and weights [51]. The Siamese network takes pairwise patches or images as input, extracts features through a series of layers, and outputs the similarity of the learned features. Hence, the Siamese network is a mainstream architecture in the field of change detection, and our proposed SJAN is based on it. SJAN contains four parts: the initial feature-extraction module, the spectral-attention module, the spatial-attention module, and the discrimination module, as shown in Figure 1. The initial feature-extraction module uses a simple CNN; its network structure and relevant parameters are shown in Table 1. The spatial-attention module and the spectral-attention module aim to optimize the learned initial features so that they focus on the spatially critical changed regions and the separable bands of the spectrum, as described in detail in the following sections. The discrimination module first fuses the extracted spatial-spectral features, then explores the implicit information of the obtained features, and finally gives the change-detection result with a sigmoid function. Its network structure and relevant parameters are also shown in Table 1.

First, the spatial-spectral features are extracted from the pairwise blocks at times T1 and T2 after a series of convolution and pooling operations, denoted as $F^1_{H\times W\times C}$ and $F^2_{H\times W\times C}$, where H, W, and C represent the height, width, and number of channels, respectively.
Second, the learned features $F^1_{H\times W\times C}$ and $F^2_{H\times W\times C}$ are each fed to the spectral-attention module to obtain the spectral-attention-based features, denoted as $F^1_{spectral\text{-}att}$ and $F^2_{spectral\text{-}att}$, which are obtained by multiplying the feature maps with the spectral-attention weights. Third, the features $F^1_{spectral\text{-}att}$ and $F^2_{spectral\text{-}att}$ are fed to the spatial-attention module to obtain the spatial-spectral features, denoted as $F^1_{spatial\text{-}spectral}$ and $F^2_{spatial\text{-}spectral}$. Finally, the differential information of the spatial-spectral features $F^1_{spatial\text{-}spectral}$ and $F^2_{spatial\text{-}spectral}$ is fed to the fully connected layers for classification to get the change-detection results.

Spatial-Attention Module

The spatial-attention module consists of two arithmetic operations and one convolutional layer. It aims to obtain spatial-attention features for each channel. The structure of the spatial-attention module is shown in Figure 2. First, the mean and maximum values over the feature (channel) dimension of the pairwise blocks are computed, producing two H × W maps. Second, the maximum map and the mean map are combined by point (element-wise) multiplication, reducing them to a single H × W × 1 map. The maximum and mean values of the feature dimension characterize the changed areas from different aspects; performing a point multiplication, rather than concatenation, yields an attention matrix with larger weight differences, allowing us to better integrate the acquired information. Third, the data are normalized using a 7 × 7 convolution and the sigmoid function to obtain the spatial-attention weights. Finally, the spatial-attention weights and the input features are multiplied to obtain the spatial-attention features. The features obtained from the spatial-attention module are more discriminative because the module focuses on the key changed regions in the spatial dimension.

Spectral-Attention Module

A spectral-feature-extraction network with an attention mechanism can automatically determine the importance of the different bands of pairwise blocks in complex scenes, which is useful for multispectral change-detection tasks. The spectral-attention module consists of two pooling layers and a shared MLP. It aims to explore which bands are most effective for detecting the target. Figure 3 depicts the network architecture of the spectral-attention module. First, the features of the pairwise blocks are downscaled using global maximum pooling and global average pooling to create 1 × 1 × C vectors (C is the number of channels). Second, they are fed into a shared MLP with two 1 × 1 convolutions to ensure that the detailed information of the pairwise blocks is captured. Third, these learned features are point-multiplied. Maximum pooling and average pooling focus on different aspects of the spectral information of the pairwise blocks, so we perform a point multiplication instead of an element-wise summation to make the gap between the separable bands of different features as wide as possible. The sigmoid function is then used to normalize the result, which gives the spectral-attention-weight matrix of the spectral-attention module. Finally, the spectral-attention weights and the input features are multiplied to obtain the spectral-attention features.
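A minimal PyTorch-style sketch of the two modules just described may help fix ideas (our reconstruction, not the authors' released code; the channel-reduction ratio of the shared MLP and all names are assumptions):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpectralAttention(nn.Module):
    """Spectral (channel) attention: global max/avg pooling through a shared
    MLP, fused by point multiplication and normalized with a sigmoid."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, C, H, W)
        mx = self.mlp(F.adaptive_max_pool2d(x, 1))        # (B, C, 1, 1)
        av = self.mlp(F.adaptive_avg_pool2d(x, 1))        # (B, C, 1, 1)
        w = torch.sigmoid(mx * av)  # point multiplication, not summation
        return x * w

class SpatialAttention(nn.Module):
    """Spatial attention: channel-wise max and mean maps are point-multiplied,
    then a 7x7 convolution and a sigmoid yield per-pixel weights."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 1, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, C, H, W)
        max_map = x.amax(dim=1, keepdim=True)             # (B, 1, H, W)
        mean_map = x.mean(dim=1, keepdim=True)            # (B, 1, H, W)
        w = torch.sigmoid(self.conv(max_map * mean_map))  # point mult., not concat
        return x * w
```

In the SJAN pipeline, the spectral module is applied first and the spatial module second, matching the ordering described above.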
The features acquired from the spectral-attention module are more discriminative because the module focuses on the separable bands in the spectral dimension.

Loss Function

The spectral angle is a critical criterion for determining whether two spectral vectors are similar, and most existing deep-learning-based change-detection methods do not take the spectral angle into consideration when calculating similarity. Therefore, the loss function in this paper is defined from both the spectral-magnitude and spectral-angle perspectives. The loss function of the proposed SJAN includes two kinds of terms: the spectral-amplitude terms and the spectral-angle term. The total loss function $L$ is defined as follows:

$$L = \lambda_1 L_{angle} + L_{amplitude},$$

where $L_{amplitude}$ represents the loss of spectral amplitude and $L_{angle}$ is the loss of the spectral angle of the multispectral images. $L_{amplitude}$ contains two parts, $L_1$ and $L_2$:

$$L_{amplitude} = \lambda_2 L_1 + \lambda_3 L_2,$$

where the parameters $\lambda_1$, $\lambda_2$, and $\lambda_3$ are the penalty parameters of the loss functions $L_{angle}$, $L_1$, and $L_2$. The optimal values of the three parameters are discussed in Section 4.1.

$L_1$ is the contrastive loss function, a common measure of the similarity of multispectral images. It considers the similarity of multispectral images from the spectral amplitude, constraining the distance of similar image-block pairs and expanding the distance of dissimilar image-block pairs:

$$L_1 = (1 - l)\,d^2 + l\,\big[\max(0,\; m - d)\big]^2,$$

where the value of $l$ represents the label of the input pairwise patch: $l = 1$ indicates that the patch pair is dissimilar, while $l = 0$ means that the patch pair is similar. $m$ represents the margin for dissimilar pairs; in our experiment, $m$ is set to 0.5. Furthermore, $d$ represents the distance between the two input patches. Note that only the distance of dissimilar pairs between 0 and $m$ is considered: if $l = 1$ and $d$ is greater than the margin, the $L_1$ loss is regarded as 0.

$L_2$ is the cross-entropy loss. The cross-entropy loss on the extracted spatial-spectral features aims to make the model predictions closer to the labeled values:

$$L_2 = -\frac{1}{N}\sum_{i=1}^{N}\Big[y_i \log \hat y_i + (1 - y_i)\log(1 - \hat y_i)\Big],$$

where the value of $y_i$ is 0 or 1 and denotes the label of the input pair: $y_i = 1$ means that the input image-block pair is changed, and $\hat y_i$ represents the predicted probability that the input pair is a changed sample pair.

$L_{angle}$ is a more comprehensive similarity metric that multiplies the spectral cosine and the Euclidean distance directly. To give the spectral cosine the same behavior as the Euclidean distance, we use the form $(1 - \text{cosine})$, so that a smaller value represents a closer proximity of similar image blocks:

$$L_{angle} = \left(1 - \frac{\sum_i A_i B_i}{\sqrt{\sum_i A_i^2}\,\sqrt{\sum_i B_i^2}}\right)\cdot \sqrt{\sum_i (A_i - B_i)^2},$$

where $A_i$ and $B_i$ represent the spectral values of the $i$th band.

Training Process

As shown in Figure 1, SJAN is trained in a supervised manner. The data are pre-processed and then trained in batches. The cross-entropy loss takes as input the feature difference after the fully connected layers, while the contrastive loss and the spectral-angle similarity are computed on the two-stream features output by the attention modules. Back propagation is used to update the network weights, with the Adam optimization algorithm as the weight-updating strategy. Through repeated training, the optimal model is obtained. Finally, the test data are fed directly to the obtained optimal model to produce the change-detection map. The complete end-to-end steps of the proposed SJAN are described in Algorithm 1.
Algorithm 1: Framework of SJAN.
Input: (1) a series of 11 × 11 pairwise blocks of two multispectral images of the same region at different times, with corresponding labels; (2) the size of the dataset.
Step 1: Randomly divide the dataset into training data and validation data in a 7:3 ratio.
Step 2: Feed the 11 × 11 pairwise blocks in the training set to the initial feature-extraction module to obtain the initial features $F^1_{H\times W\times C}$ and $F^2_{H\times W\times C}$ of the pairwise blocks at times T1 and T2.
Step 3: Feed $F^1_{H\times W\times C}$ and $F^2_{H\times W\times C}$ into the spectral-attention module to acquire the discriminative spectral features $F^1_{spectral\text{-}att}$ and $F^2_{spectral\text{-}att}$ of the pairwise blocks.
Step 4: Feed $F^1_{spectral\text{-}att}$ and $F^2_{spectral\text{-}att}$ into the spatial-attention module to obtain the spatial-spectral features $F^1_{spatial\text{-}spectral}$ and $F^2_{spatial\text{-}spectral}$ of the pairwise blocks.
Step 5: Feed the difference between $F^1_{spatial\text{-}spectral}$ and $F^2_{spatial\text{-}spectral}$ into the fully connected layers for classification.
Step 6: Optimize the network using the Adam optimizer to obtain the optimal model.
Step 7: Feed the test data directly into the trained model to get the change-detection results.
Output: (1) the changed map; (2) OA, Kappa, AUC.

Datasets

The effectiveness of the proposed SJAN is validated on three datasets, described in detail as follows. We used the Minfeng, Hongqi Canal, and Weihe River datasets acquired by the GF-1 satellite sensor. Each dataset contains two multispectral images with a spatial resolution of 2 m, acquired at different times, and each image contains four bands: red, green, blue, and near-infrared. Figure 4 shows the images of the Hongqi Canal dataset. The Hongqi Canal dataset, with an image size of 543 × 539, located in West Kowloon Village, Kenli County, Dongying City, Shandong Province, was acquired by the GF-1 satellite on 9 December 2013 and 16 October 2015. Figure 5 shows the Minfeng dataset, with an image size of 651 × 461, taken in Kenli County, Dongying City, Shandong Province; the acquisition times are the same as for the Hongqi Canal dataset. Figure 6 shows the Weihe River dataset, with an image size of 378 × 301, located in Madong Village, Xi'an City, Shaanxi Province, acquired on 19 August 2013 and 29 August 2015, respectively.

Evaluation Criteria

The proposed SJAN is quantitatively analyzed to demonstrate its robustness and effectiveness. Three evaluation metrics are used: the overall accuracy (OA), the Kappa coefficient, and the AUC (area under the ROC curve). First, the overall accuracy is used for evaluation; the value of OA lies within (0, 1), and a value closer to 1 means better detection performance:

$$OA = \frac{TP + TN}{TP + TN + FP + FN},$$

where TP refers to true positive, TN stands for true negative, FP stands for false positive, and FN represents false negative. Second, the accuracy of the classification is measured using the Kappa coefficient, which lies within (−1, 1) and usually within (0, 1), with values closer to 1 meaning better performance. The formula for calculating the Kappa coefficient based on the confusion matrix is:

$$Kappa = \frac{OA - P_e}{1 - P_e}, \qquad P_e = \frac{(TP + FP)(TP + FN) + (FN + TN)(FP + TN)}{N^2}, \qquad N = TP + TN + FP + FN.$$

Finally, a numerical accuracy measure is provided by the AUC. The larger the value of the AUC, the better the classification effect of the classifier.
With FPR as the horizontal axis and TPR as the vertical axis, the ROC curve is plotted, and the area under the curve is the AUC value, where TPR represents the true positive rate and FPR represents the false positive rate, calculated as follows:

$$TPR = \frac{TP}{TP + FN}, \qquad FPR = \frac{FP}{FP + TN}.$$

Competitors

The proposed SJAN is compared with the following methods:
(1) CVA [31] is a typical unsupervised change-detection method. Difference operations are performed on the two temporal images to identify the changed areas.
(2) IRMAD [34] assigns larger weights to the pixels that have not changed; after several iterations, the weights of the pixels are compared with the threshold value to determine whether they have changed. IR-MAD is better than MAD at identifying significant changes, and this method is widely used in multivariate change detection.
(3) SCCN [37] is a symmetric network, which includes a convolutional layer and several coupling layers. The input images are connected to the two sides of the network and transformed into a feature space. The distances of the feature pairs are calculated to generate the difference map.
(4) SSJLN [40] considers both spectral and spatial information and deeply explores the implicit information of the fused features. SSJLN is very good at improving change-detection performance.
(5) STA [27] designs a new CD method based on the self-attention module to model the spatial-temporal relationship. The self-attention module can calculate the attention weights between any two pixels at different times and locations, which can generate more discriminative features.
(6) DSAMNet [52] includes a CBAM-integrated metric module that learns a change map directly through the feature extractor, and an auxiliary deep-supervision module that generates change maps with more spatial information.

Performance Analysis

First, we compare the training time and the number of parameters of the SJAN method with those of other deep-learning-based methods to measure the performance of the proposed network. Due to the addition of the attention modules, the proposed SJAN method has more parameters and a longer training time than SCCN and SSJLN, as shown in Table 2.
Therefore, we can see that the detection results of SCCN include many white-noise spots. SSJLN learns the semantic difference between changed pixels and unchanged pixels by extracting spatialspectral joint features. From (c) and (d) of Figures 7-9, it is clear to see that the number of unchanged pixels incorrectly detected by SSJLN as changed pixels is significantly reduced. The STA method proposed in the last two years applies the attention module to the change detection, and it can be found that the attention module has a positive effect on the change-detection task. However, when extracting spatial-spectral features, the STA method does not take the spectral angle loss into account. SJAN performs the similarity measures from both the spectral angle and the spectral magnitude, which can exploit more discriminative information. Moreover, SJAN uses a fusion strategy of point multiplication to obtain attention weights. It can be observed that SJAN achieves the best results. The DSAMNet method employs a deep supervised network and an attention mechanism to extract more discriminative features. However, it can be seen from Figures 7-9 that the detection performance of DSAMNet on the Hongqi, Mingfeng, and Weihe datasets is not very good. Many changed pixels on the Weihe dataset are misclassified as unchanged pixels, as shown in Figure 9. In contrast, many unchanged pixels on the Minfeng dataset are detected as changed pixels by mistake, as shown in Figure 8. DSAMNet is more suitable for very high-resolution images such as 0.5 m aerial images that contain more spatial information. The spatial resolution of Hongqi, Mingfeng and Weihe datasets is 2 m. It can be concluded that SJAN is more suitable than DSAMNet for the change-detection task on the GF-1 dataset. As shown in Table 3 45, respectively. It can be clearly seen that SJAN has the best detection accuracy among these methods, which is consistent with the results of the qualitative analysis based on the changed detection maps. Therefore, it can be concluded that the proposed SJAN method has better performance than other comparison methods. Parameter Settings This subsection describes the settings of the relevant parameters of the network model, including the convolution, the kernel size for pooling, and the activation function used. First, the parameters of SJAN are shown in Table 1. Specifically, the Siamese network structure includes two convolutional layers (conv1 and conv2), one layer of maximum pooling (pool1) and two layers of convolution (conv3 and conv4), one layer of maximum pooling (pool2) to ensure that the essential features of the images can be fully extracted. The kernel size of the convolution is 3 × 3 and the kernel size of pooling is 2 × 2. The network structure of spectral-and spatial-attention modules has been described in detail and will not be repeated. The fully connected net is designed as two layers, each with dimensions 256, 128, and finally the fully connected layer with output dimension 1 is classified using the sigmoid function. What is more, the input and output dimensions are height × width × depth. BN is the number of bands, where the number of bands is 4 for GF-1. Second, the patch size can have an effect on the test results, so we discuss it in details. Effect of Patch Image blocks contain not only the spectral information of the pixel to be detected, but also the spectral information of its neighboring pixels. Therefore, we use image blocks as the basic processing unit. 
The size n of the image block greatly affects the accuracy of change detection. The larger the image block, the more detailed the spectral information it contains; however, if the image block is chosen too large, its local key information is more disturbed, and the exponential increase in data volume also puts very high pressure on training. In our experiments, we set the image block size to 5, 7, 9, and 11. The experimental results are shown in Figure 10, where blue, orange, gray, and yellow represent image block sizes of 11, 9, 7, and 5, respectively. It is obvious from Figure 10 that the detection accuracy is worst when n is 5, and the values of OA, Kappa, and AUC are best when n is 11. Moreover, when n is larger than 11, the volume of training data becomes very large and the training time cost increases exponentially. Therefore, we select a patch size of 11.

Third, the other relevant experimental parameters, such as training-data division, batch size, and learning rate, are introduced here. We select 70 percent of the changed samples and an equal number of unchanged samples to construct the training set. The training and validation data in the training set are further divided 7:3. In the training phase, a batching strategy is used and the number of samples in each batch is 32. The initial learning rate is set to 10^-4 using the Adam optimizer. During the experiment, the learning rate is continuously decreased according to a decay strategy, and after 20 iterations the respective optimal experimental results are obtained on the different datasets. The results on the validation set are shown in Table 4. They are slightly better than the results on the testing set shown in Table 3, because the data distribution of the validation set is more similar to that of the training set than that of the testing set.

Moreover, we test the effect of the penalty parameters of the loss function on the change-detection performance. As shown in Figure 11, some of the parameter combinations are listed. Status-a means that λ1, λ2, and λ3 are set to 1, 1, and 1. Status-b means that the three penalty parameters are set to 0.5, 0.5, and 0.75; the proposed SJAN achieves the best detection on the Weihe and Minfeng datasets with these settings. Status-c means that the three penalty parameters are set to 0.25, 0.25, and 0.5. Status-d means that the three penalty parameters are set to 0.25, 0.25, and 1; the Hongqi dataset has better performance results with these settings. In our experiments, λ1, λ2, and λ3 are set to 0.25, 0.25, and 0.5 on the Hongqi dataset, and to 0.5, 0.5, and 0.75 on the Minfeng and Weihe River datasets.

Comparison with CBAM

In this section, we discuss the difference between the point-multiplication operation in the proposed spatial-spectral-attention module and the element-wise summation and concatenation operations of the original CBAM. As shown in Figure 12, blue indicates the result of using point-multiplication operations in both the spectral-attention module and the spatial-attention module, denoted as dots. Orange indicates the result of using a point-multiplication operation between the MLP outputs instead of the element-wise summation in the spectral-attention module, denoted as spatial-concat. Gray indicates the result of using a point-multiplication operation between the max-pooling and average-pooling outputs instead of the concatenation operation in the spatial-attention module, denoted as spectral-sum. Yellow represents the results of the original CBAM method. It can be seen that using point multiplication instead of element-wise summation in the spectral-attention module achieves better detection performance on the Hongqi dataset, and using point multiplication instead of concatenation in the spatial-attention module gains better detection accuracy on the Minfeng dataset. Moreover, on the Weihe dataset, using the point-multiplication operation in both the spectral- and spatial-attention modules yields better results. As a result, the point-multiplication operation is chosen in both the spectral and spatial modules to explore more similar information.
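To make the distinction concrete, the sketch below contrasts the original CBAM-style channel (spectral) attention, which sums the two MLP outputs, with the point-multiplication variant described above. This is an illustrative PyTorch sketch under our own naming, not the authors' code; the reduction ratio is an assumption.

```python
import torch
import torch.nn as nn

class SpectralAttention(nn.Module):
    """Channel attention over bands; fuse='mult' is the point-multiplication
    variant, fuse='sum' is the original CBAM element-wise summation."""
    def __init__(self, channels, reduction=2, fuse="mult"):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        self.fuse = fuse

    def forward(self, x):                    # x: (N, C, H, W)
        avg = self.mlp(x.mean(dim=(2, 3)))   # MLP on average-pooled descriptor
        mx = self.mlp(x.amax(dim=(2, 3)))    # MLP on max-pooled descriptor
        fused = avg * mx if self.fuse == "mult" else avg + mx
        weights = torch.sigmoid(fused).unsqueeze(-1).unsqueeze(-1)
        return x * weights                   # re-weight each band
```

The same idea applies to the spatial branch, where the pooled maps can be multiplied element-wise instead of concatenated before the convolution.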
Ablation Experiment

• Effect of the spectral- and spatial-attention modules

The proposed SJAN includes a spatial-attention module and a spectral-attention module. When extracting spatial-spectral features, the spatial-attention module focuses on feature extraction in spatially key regions, and the spectral-attention module can identify separable bands of different land covers. This section conducts comparative experiments to verify the impact of the spectral- and spatial-attention modules on detection accuracy. Figure 13 shows the ablation experiment for the spectral-attention module and the spatial-attention module in detail. Blue indicates the feature-extraction method based on SJAN, and yellow indicates the feature-extraction method with the spatial- and spectral-attention modules removed, denoted as the base network. Orange indicates the feature-extraction method with the spectral-attention module only, denoted as base + spectral. Gray indicates the feature-extraction method with the spatial-attention module only, denoted as base + spatial. Both the base + spectral method and the base + spatial method achieve better detection accuracy than the base method, which proves the effectiveness of the spatial- and spectral-attention modules. Moreover, on the Hongqi Canal, Minfeng, and Weihe River datasets, SJAN achieves higher values of OA and Kappa than the other configurations, while the AUC values of SJAN are not significantly different from those of the comparison configurations. The results of the ablation experiment show that the spectral-attention module, which focuses on separable bands in the spectral dimension, and the spatial-attention module, which focuses on key change regions, both have beneficial effects on the change-detection task.

• Effect of L_angle

This section experimentally verifies the effect of the loss function with the spectral angular cosine-Euclidean distance on the detection accuracy of the different datasets. The proposed loss function considers not only the similarity measure of the spectral magnitude but also the similarity measure of the spectral angle. The contrastive loss function and the cross-entropy loss are used for the magnitude dimension, while the spectral angular cosine-Euclidean distance is used to explore the spectral-angle features of the images from the angle dimension. The OA, Kappa, and AUC values of the detection results on the different datasets are shown in Figure 14. Blue indicates the results of change detection using the L2 loss function alone, denoted as L_2. Orange indicates the results of the amplitude loss function that includes both L1 and L2, denoted as L_amplitude. Gray indicates the effect of the total loss function L_all, which includes L_amplitude and L_angle, on the detection results, denoted as L_all. It can be clearly seen that L_angle, which yields accurate detection of the more intricate details, has a positive effect on the change-detection task.
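A minimal sketch of such a combined loss is given below. The exact form of the authors' L1, L2, and L_angle terms is not reproduced here; we assume a contrastive magnitude term, a binary cross-entropy term, and a cosine-based spectral-angle term weighted by λ1, λ2, and λ3, with our own function names and margin value.

```python
import torch
import torch.nn.functional as F

def combined_loss(f1, f2, logits, labels, lam=(0.25, 0.25, 0.5), margin=1.0):
    """f1, f2: feature vectors of the two temporal patches, shape (N, D);
    logits: change scores from the classifier, shape (N,);
    labels: 1 = changed pair, 0 = unchanged pair."""
    labels = labels.float()
    # L1: contrastive term on spectral magnitude -- pull unchanged pairs
    # together, push changed pairs at least `margin` apart
    d = F.pairwise_distance(f1, f2)
    l1 = torch.mean((1 - labels) * d.pow(2)
                    + labels * torch.clamp(margin - d, min=0).pow(2))
    # L2: cross-entropy term on the classification output
    l2 = F.binary_cross_entropy_with_logits(logits, labels)
    # L_angle: spectral-angle term via cosine distance between the features
    ang = 1 - F.cosine_similarity(f1, f2)
    l3 = torch.mean((1 - labels) * ang
                    + labels * torch.clamp(margin - ang, min=0))
    return lam[0] * l1 + lam[1] * l2 + lam[2] * l3
```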
Conclusions

A multispectral-image change-detection method based on a spatial-spectral joint attention network is proposed. The spatial-attention module and the spectral-attention module are simultaneously incorporated into the Siamese network to extract more effective and discriminative spatial-spectral features: the spectral-attention module explores the separable bands, and the spatial-attention module captures spatially critical regions of variation. In addition, a new loss function is proposed that considers the loss of spatial-spectral features from both the spectral-magnitude and spectral-angle perspectives. The proposed SJAN method is validated on three real datasets, and the experimental results show that SJAN has better detection performance than the other existing methods compared. However, the proposed joint spatial-spectral attention network does not consider the correlation between images acquired at different moments when extracting features, although this correlation has an impact on change-detection performance. In the future, we will improve the attention module using a cross-attention mechanism to obtain the correlation of remote-sensing images at different moments, and we will further address the issue of sample imbalance.

Author Contributions: W.Z., Q.Z., S.L., X.P. and X.L. made contributions to proposing the method, doing the experiments and analyzing the results. W.Z., Q.Z., S.L., X.P. and X.L. were involved in the preparation and revision of the manuscript. All authors have read and agreed to the published version of the manuscript.
2022-07-17T15:11:19.899Z
2022-07-14T00:00:00.000
{ "year": 2022, "sha1": "1a00da98986b3747f9fec8e6a50732a0808d457f", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-4292/14/14/3394/pdf?version=1657871724", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "d1ddf4b929410620e5bc6f4ae0f4bd5e94439cad", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Computer Science" ] }
244440806
pes2o/s2orc
v3-fos-license
Lucius phenomenon: the importance of a primary dermatological care

Leprosy presents a varied clinical spectrum. Lucius phenomenon is a rare leprosy reaction characterized by erythematous, painful, slightly infiltrated macules and hemorrhagic bullae that progress to ulceration. This case report describes a patient whose diagnosis of leprosy occurred in the presence of Lucius phenomenon. Late diagnosis and delay in the implementation of specific therapy contributed to an unfavorable outcome, highlighting the importance of early identification and treatment of this disease, as well as of its complications.

Introduction

Leprosy is caused by Mycobacterium leprae. The clinical picture and reaction states are influenced by the individual's immune response. 1,2 There are two well-described forms of the reactions: type 1 and type 2. 3 Lucius phenomenon (LP) is a rare, severe, and diffuse manifestation, and its classification has been debated for years. 4 Some authors consider it a variant of the type 2 reaction, whereas others regard it as a third reaction pattern associated with a coagulation disorder. 1,3 Brazil continues to have high incidence rates of leprosy. Between 2014 and 2018, the average detection rate was 13.64 new cases for every 100,000 inhabitants. The high rate of endemicity reinforces the importance of establishing management measures to attain early detection, minimizing deformities and physical disabilities. 5 The authors describe a case of lepromatous leprosy (LL) that was diagnosed in the presence of LP, despite the patient having previously been assisted at other health services, and that showed an unfavorable outcome.

Case report

A 69-year-old man presented with an infectious condition in his left leg for 15 days, associated with fever and chills. He had received amoxicillin and clavulanate after an evaluation at a Basic Health Unit. He developed painless polygonal blisters, with spontaneous rupture and formation of ulcerated plaques with a necrotic background, initially on the lower limbs, with an ascending pattern. He sought care at an Emergency Unit, where treatment was initiated with meropenem, followed by ceftriaxone associated with clindamycin. He was admitted to the general ward of a referral hospital for contagious infectious diseases with a diagnostic hypothesis of pharmacoderma, with impaired general status and fever. He reported reduced tactile sensitivity, predominantly on the limbs. The antibiotic regimen was then suspended, prednisone was introduced at a dose of 40 mg/day, and a dermatology assessment was requested. On physical examination, there were ulcerated plaques with a necrotic background, with the presence of fibrin and purulent secretion, affecting the lower limbs, abdomen, buttocks, upper limbs, ear pinnae, and upper lip, as well as bilateral palpable and painful lymph nodes in the inguinal regions and scrotal sac edema (Figs. 1-3). Bilateral madarosis and burn injuries on the fingers were important findings, which raised the hypothesis of leprosy, and bacilloscopy of a dermal infiltrate was requested, showing a bacilloscopic index of 5. Once the diagnosis was confirmed, standard polychemotherapy for lepromatous leprosy was started.
The histopathological examination of the biopsies performed at two sites showed an extensive dermal infiltrate predominantly consisting of interstitial, superficial and deep foamy macrophages, full of bacilli, forming globia, in addition to neutrophils around and inside the vessel walls. After the histopathological examination, the dose of prednisone was adjusted to 1 mg/kg/day and thalidomide 100 mg/day was introduced. Despite repeated surgical debridement to remove devitalized tissues and the use of broad-spectrum antibiotics such as vancomycin and piperacillin/tazobactam, the lesions persisted with the secretion of purulent-fibrinous material, suggestive of secondary infection. The patient developed persistent fever and worsening of the general status. Because he had a recurrent pleural effusion, he underwent bronchoscopy with bronchoalveolar lavage analysis, which revealed a weakly positive GeneXpert. The RIPE regimen was prescribed; however, the patient died on the third day of probable sepsis from a cutaneous focus.

Discussion

The diagnosis of leprosy is essentially clinical and should preferably be performed during primary care assistance. An early diagnosis prevents sequelae and interrupts the transmission chain. However, the varied clinical spectrum of the disease, which depends on the individual immune response, makes diagnosis difficult for non-dermatologists. 1,3,6 LP is a reaction condition characterized by outbreaks of erythematous, painful, slightly infiltrated macules and hemorrhagic bullae that progress to ulceration. Several factors can precipitate it, such as infections, drugs, and pregnancy. 2,4 It usually progresses with the formation of atrophic stellate scars. 4,7 It usually appears three to four years after disease onset in untreated or inadequately treated patients. 3,4 The evolution pattern usually starts in the lower limbs, ascending to the buttocks, upper limbs, and hands, and rarely involves the back and face. 3 A triad of diagnostic criteria defines LP: skin ulceration, vascular thrombosis, and invasion of the blood vessel wall by Hansen's bacilli. 7 The pathophysiology of the occlusive thrombi is yet to be completely elucidated; it may be due to two mechanisms, one resulting from immune-mediated events and the other a direct effect of the presence of Mycobacterium leprae itself in the vessel. 3 Ischemia, infarction, and tissue necrosis can occur as a result of these events, including disseminated intravascular coagulation. 7 There have been reports of gastrointestinal alterations in the presence of LP, with necrotizing and ulcerative lesions in the digestive tract. 8 Therefore, the parenteral administration of corticoids is recommended. A controversial issue is recommending the use of thalidomide in cases in which LP occurs together with erythema nodosum leprosum (ENL); there is no consensus in the literature regarding the use of thalidomide in patients without previous ENL. 2,4,7 The presence of pleural effusions in LP patients has been confirmed in autopsies. 3,9 GeneXpert detects the rpoB gene, which is more specific for M. tuberculosis; however, weakly positive samples can occur in cases of leprosy with a high bacillary index. A study by the US Food and Drug Administration lists M. leprae as potentially causing a GeneXpert cross-reaction. 10 Therefore, it is questionable whether the pleural effusion in this case was also a manifestation of LP or a co-infection with another mycobacterium.
This case report describes a patient with undiagnosed LL who initially went untreated and evolved with LP. The unfavorable outcome described herein is relevant, as it reinforces the need for continuing education of professionals working at all levels of health care, aiming at early diagnosis of leprosy and immediate access to adequate treatment, as well as the management of complications. Likewise, the importance of dermatologists in the care team for treating leprosy and other dermatoses in the hospital environment is emphasized.

Financial support

None declared.

Authors' contributions

Juliana Viana Pinheiro: Drafting and editing of the manuscript; intellectual participation in propaedeutic and/or therapeutic conduct of the studied cases; critical review of the literature; critical review of the manuscript. Maria Araci de Andrade Pontes: Approval of the final version of the manuscript; critical review of the literature; critical review of the manuscript. José Urbano de Medeiros Neto: Approval of the final version of the manuscript; collection, analysis, and interpretation of data. Heitor de Sá Gonçalves: Approval of the final version of the manuscript; effective participation in research orientation; critical review of the literature; critical review of the manuscript.
2021-11-21T16:12:01.481Z
2021-11-01T00:00:00.000
{ "year": 2021, "sha1": "29202df3deb6d048b3898020a1c6c44e8dda0008", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.abd.2020.08.033", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "72a4335f868a2386548d641e6b5685e36f36d399", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
16674328
pes2o/s2orc
v3-fos-license
Transcriptome profiling of immune responses to cardiomyopathy syndrome (CMS) in Atlantic salmon

Background: Cardiomyopathy syndrome (CMS) is a disease associated with severe myocarditis primarily in adult farmed Atlantic salmon (Salmo salar L.), caused by a double-stranded RNA virus named piscine myocarditis virus (PMCV) with structural similarities to the Totiviridae family. Here we present the first characterisation of host immune responses to CMS assessed by microarray transcriptome profiling.

Results: Unvaccinated farmed Atlantic salmon post-smolts were infected by intraperitoneal injection of PMCV and developed cardiac pathology consistent with CMS. From analysis of heart samples at several time points and of different tissues at the early and clinical stages by oligonucleotide microarrays (SIQ2.0 chip), six gene sets representing a broad range of immune responses were identified, showing significant temporal and spatial regulation. Histopathological examination of cardiac tissue showed myocardial lesions from 6 weeks post infection (wpi) that peaked at 8-9 wpi and were followed by recovery. Viral RNA was detected in all organs from 4 wpi, suggesting a broad tissue tropism. The high correlation between viral load and cardiac histopathology score suggested that the cytopathic effect of infection was a major determinant of the myocardial changes. Strong and systemic induction of antiviral and IFN-dependent genes from 2 wpi that levelled off during infection was followed by a biphasic activation of pathways for B cells and MHC antigen presentation, both peaking at clinical pathology. This was preceded by a distinct cardiac activation of complement at 6 wpi, suggesting a complement-dependent activation of humoral antibody responses. The peak of cardiac pathology and viral load coincided with cardiac-specific upregulation of T cell response genes and splenic induction of complement genes. Preceding the reduction in viral load and pathology, these responses were probably important for viral clearance and recovery.

Conclusions: By comparative analysis of gene expression, histology and viral load, the temporal and spatial regulation of immune responses were characterised and novel immune genes identified, ultimately leading to a more complete understanding of host-virus responses and of pathology and protection in Atlantic salmon during CMS.

Background

Cardiomyopathy syndrome (CMS) is a severe cardiac disease affecting Atlantic salmon (Salmo salar L.). Since its first diagnosis in Norway in 1985 [1], it has also been diagnosed in sea farms in Scotland, the Faroe Islands, Denmark and Canada [2]. CMS primarily affects farmed fish from 12 to 18 months after transfer to sea water [3,4], but cases of CMS in wild salmon have also been observed [5]. The diagnosis of CMS is based on cardiac histopathology, characterised by severe inflammation and necrosis of the spongy myocardium of the atrium and ventricle [6]. Inflammatory infiltrates consist of mononuclear cells, probably lymphocytes and macrophages. The compact layer of the ventricle is usually less affected, and changes there always occur later than in the spongious layer [6,7]. Farmed salmon suffering from CMS often lack clinical signs and may die suddenly due to rupture of the atrium or sinus venosus resulting in cardiac tamponade [1,6]. Other symptoms such as skin haemorrhages, raised scales and oedema have also been reported [3,5]. At necropsy, ascitic fluid, fibrinous perihepatitis and blood clots on the liver and heart are typical findings [3,5,6].
The first study indicating a transmissible nature of the disease showed typical cardiac lesions in salmon post-smolts six weeks post injection of cardiac and kidney homogenate from CMS-diseased fish [7]. Recently, a novel virus associated with CMS was cultured and identified [8]. The proposed virus, named piscine myocarditis virus (PMCV), is a double-stranded RNA virus with structural similarities suggesting assignment to the Totiviridae family. In that study, viral RNA could be detected by quantitative real-time RT-PCR (qPCR) from 2 weeks post challenge, peaking at 6-8 weeks post challenge, coinciding with the increase of histopathological lesions in the heart. Virus particles were also detected by in situ hybridization in degenerate and necrotic cardiac myocytes from field outbreaks of CMS. In the present study, the same PMCV inoculum was used to experimentally reproduce CMS and to characterise the host immune response in infected salmon post-smolts. To gain an understanding of the immune response and host-virus interaction, a genome-wide approach based on oligonucleotide microarrays was used [9]. Six gene sets representing different arms of the immune response were identified, and temporal and spatial regulation was evaluated in combination with histology and relative quantification of viral RNA. The findings provide a comprehensive understanding of the immune response against PMCV in Atlantic salmon, and of pathological and protective correlates thereof.

Experimental CMS infection

No mortality or clinical signs associated with CMS were observed. Potential contamination by other pathogens was excluded by qPCR for known viruses and bacteria from relevant organs and numbers of samples. Histopathological examination of the heart was scored 0-3 according to the severity of CMS lesions, as summarised in Figure 1.

[Figure 1: histopathology scores are shown as colours from green (score 0) to red (score 3); grey indicates missing tissue. Asterisks indicate the level of significance between groups (*/** = p < 0.05/0.01, two-sided t-tests); the middle row represents the difference between the two infected groups and the two control groups.]

Results were used for evaluation of the infection challenge and for the design of gene expression analyses. In the control groups, one fish had moderate to severe cardiac lesions at 10 wpi and was graded score 2 in the spongy layer of the ventricle and score 3 in the atrium. For all other control fish, only scores 0 and 1 were observed. No statistical difference between replicate control groups was found. Groups receiving the PMCV inoculum developed cardiac lesions consistent with CMS from 6 wpi onwards. At 6 wpi, 63% of the infected fish had moderate lesions (score 2) in the atrium (percentages refer to observations, excluding missing values). Lesions were first found in the atrium and subsequently in the spongy layer of the ventricle. The peak of histopathological lesions was observed at 8 wpi, with moderate atrial lesions (score 2) in 36% and severe lesions (score 3) in 32% of the fish. At subsequent time points, fewer fish had cardiac lesions, and at 9, 10 and 11 wpi, respectively, 7.4%, 4.3% and 3.8% of the fish were scored 3. At 12 wpi, only mild focal lesions (scores 0 and 1) were described in the atrium and spongy ventricle. In general, atrial lesions appeared earlier and were more severe than those in the spongious layer. Differences between groups 3 and 4 were significant for atrial lesions at 10 wpi and epicardial lesions at 11 wpi. Atrial lesions in control groups 1 and 2 versus infected groups 3 and 4 were statistically different at all time points except 12 wpi, with the highest significance between 4 and 11 wpi (p < 0.01). A similar difference was found for spongious lesions, with the highest significance between 6 and 11 wpi.
Lesions in the epicardium differed significantly between infected and control fish at 4, 6 and 9 wpi.

Viral load

PMCV levels were analysed by qPCR to document viral replication in heart during infection, and in the different tissues at the early infection (4 wpi) and peak pathology (8 wpi) stages (Figure 2). The same six individuals per time point as used for the gene expression analyses were tested. Since 0 wpi and the two latest time points (11 and 12 wpi) were not included in the microarray analysis, six randomly chosen samples from groups 3 and 4, respectively, were tested for these. At 2 wpi, 5 out of 6 fish were positive for viral RNA in heart (median relative copy number = 20.5 fold, Figure 2a). Levels increased strongly until 4 wpi and then gradually until 6 wpi (median relative copy number = 11,583 fold), concurrent with the onset of histopathological changes. Levels reached a plateau phase between 6 and 10 wpi, with no significant changes in viral RNA. From 10 to 11 wpi, levels were significantly reduced, indicating clearance of virus. One week later (12 wpi), both viral load and individual variance were further reduced. For most time points, individual variation in viral RNA was observed, analogous to the variation observed for histopathology score. The correlation between histopathology scores and viral CT levels in heart was highly significant (correlation coefficient: 0.75, p = 5.5 × 10^-11) (Figure 3). Comparison of viral loads between tissues showed the highest, and mutually similar, viral loads in heart, spleen and kidney (Figure 2b). Significantly lower, and mutually similar, levels of viral RNA were found in blood cells (PBL and RBC) and liver. Except for heart (p = 0.030), viral loads were not significantly different between 4 and 8 wpi in any of the tissues investigated.
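The relative copy numbers and the correlation reported above can be computed as in the sketch below. This is a minimal example assuming Ct values from the PMCV qPCR assay normalized to a reference gene via the 2^-ΔΔCt approach; the variable names and example values are hypothetical, not data from the study.

```python
import numpy as np
from scipy.stats import pearsonr

def relative_copy_number(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Fold change by the 2^-ddCt method: target Ct normalized to a
    reference gene, relative to a calibrator sample."""
    d_ct = ct_target - ct_ref
    d_ct_cal = ct_target_cal - ct_ref_cal
    return 2.0 ** -(d_ct - d_ct_cal)

# Correlation between cardiac histopathology score and viral Ct level
# (hypothetical arrays; lower Ct means more viral RNA, so score and Ct
# are inversely related while score and viral load correlate positively)
scores = np.array([0, 1, 2, 2, 3, 3])
viral_ct = np.array([30.0, 27.5, 24.0, 23.1, 19.8, 19.2])
r, p = pearsonr(scores, viral_ct)
print(f"r = {r:.2f}, p = {p:.3g}")
```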
Identification of gene sets representing immune pathways

The main purpose of the gene expression study was to identify gene sets representing different immune pathways and to characterise their regulation over the time course of CMS in the infected organs. Fish were challenged by injection to ensure simultaneous infection and an equal virus dose. Since histomorphological changes were investigated in cardiac tissue, RNA from infected versus control heart samples from six time points (2, 4, 6, 8, 9 and 10 wpi) was used for microarray analysis. In order to examine responses in fish with similar disease status and infection level, individuals with the highest histology scores and viral loads were selected from the time points when pathological changes were significant (6-10 wpi). After the microarray experiments, 5712 differentially expressed genes with a mean log2-ER > |0.65| in at least one time point were selected. Genes implicated in different immune pathways were identified in the resulting list using the STARS software package [9], which contains custom annotation of genes on the microarray based on GO classes, KEGG pathways, mining of literature and public databases, and experimental evidence (transcription profiles/meta-analysis). The immune genes were then arranged in seven sets, taking into account both function and expression profile. Six gene sets (Additional file 1) showed differential expression between at least two subsequent time points (one-way ANOVA with Newman-Keuls test, Additional file 2), while one gene set (inflammatory components) was excluded since no significant temporal changes were found. The log2-ERs for all genes per gene set and time point were combined from the microarray results of the two sample pools (2, 6, 9, 10 wpi) and four sample pools (4 and 8 wpi). The resulting expression profiles of the six gene sets are shown as box plots in Figure 4. Gene composition and temporal regulation for each gene set are presented in the following section.

Composition and temporal regulation of immune pathways

1: Early antiviral and interferon response

This gene set included 85 genes associated with nonspecific innate immunity related to the early antiviral and interferon (IFN) responses. It also included predicted pattern recognition receptors (e.g. toll-like receptors and RIG helicases) and associated genes, as well as early induced virus-responsive genes known from other salmonid viral disease profiles in our microarray database (e.g. inflammasome/pyrin-like genes such as VHSV-induced and TRIM/RING finger genes). The expression profile showed the strongest upregulation at the early stages, which levelled off during infection (Figure 4a). A median log2-ER of +2.1 at 2 wpi decreased to +0.7 at 6 wpi. This level remained unchanged until 9 wpi, followed by a significant decrease to +0.5 at 10 wpi. A heat map showing the expression of ten genes is given in Figure 5. These were selected either at random or based on their functional importance as evidenced from other studies in fish or higher vertebrates. Early upregulation of the cytoplasmic RNA helicases retinoic acid inducible gene I (rigI) and melanoma differentiation-associated gene 5 (mda5), involved in sensing and degradation of viral RNA, as well as of a gene similar to the membrane-bound toll-like receptor 3, implied activation of virus recognition receptors and antiviral signalling. Several genes known to be activated in response to IFN signalling were upregulated, such as signal transducer and activator of transcription 1a (stat1a), the myxovirus resistance gene Mx, interferon-inducible protein Gig2-like, and radical S-adenosyl methionine domain-containing protein 2 (rsad2), also known as viperin. A similar expression profile was observed for a suite of genes known to be induced by IFN but with unknown roles in fish immunity, such as interferon-induced protein with tetratricopeptide repeats 5 (ifit5) and very large inducible GTPase 1 (vlig1). A transcript encoding the 52 kDa Ro protein was one of several TRIM/RING finger genes highly induced at 2 and 4 wpi, supporting the role of this multi-gene family in early virus recognition and host defence [10].
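A sketch of the gene-set selection step described earlier in this section (mean log2-ER threshold combined with a one-way ANOVA across time points) is given below. The data layout, the column grouping and the function names are our own assumptions, and the Newman-Keuls post-hoc step is omitted.

```python
import pandas as pd
from scipy.stats import f_oneway

def select_degs(log2_er, timepoint_of, threshold=0.65, alpha=0.05):
    """log2_er: DataFrame (genes x sample pools) of log2 expression ratios;
    timepoint_of: dict mapping each column name to its time point (wpi).
    A gene is kept if its mean |log2-ER| exceeds the threshold at some
    time point AND a one-way ANOVA across time points is significant."""
    cols_by_tp = {}
    for col, tp in timepoint_of.items():
        cols_by_tp.setdefault(tp, []).append(col)

    keep = []
    for gene, row in log2_er.iterrows():
        means = [row[cols].mean() for cols in cols_by_tp.values()]
        if max(abs(m) for m in means) <= threshold:
            continue  # never passes the expression-ratio threshold
        samples = [row[cols].values for cols in cols_by_tp.values()]
        _, p = f_oneway(*samples)  # temporal variation across time points
        if p < alpha:
            keep.append(gene)
    return log2_er.loc[keep]
```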
2: Complement response

Twenty-two genes associated with the complement system were not differentially regulated at 2 wpi; from 4 wpi they showed a gradual upregulation, which peaked at 6 wpi (median log2-ER +1.5), concurrent with the onset of cardiac pathology. At subsequent time points expression levelled off, with a weak but significant induction at 9 wpi coinciding with the pathology peak (Figure 4b). The upregulation at 6 wpi was significantly stronger compared with earlier and later time points. The heat map of representative genes (Figure 5) shows activation of genes with different roles in the complement system: antigen:antibody complex binding by C1q; the activating enzymes C2b and C1r/s; the membrane-binding proteins and peptide inflammatory mediators C3a/4a/5a, C3, C3-4 and C5 pre-protein; and membrane attack by C8b.

3: B cell response

This gene set included 37 genes involved in differentiation and regulation of B cells and antigen recognition by immunoglobulins. The expression profile was characterised by upregulation at two time points: during early infection at 4 wpi and at peak pathology at 9 wpi (Figure 4c). The later peak was stronger, with a median log2-ER of +0.93 compared with +0.57 at 4 wpi. The two peaks were separated by the complement activation at 6 wpi. Immunoglobulin-related genes, represented by 21 distinct transcripts, comprised a large part of this group (Additional file 1). Genes related to antigen receptor signalling included hematopoietic lineage cell-specific (Lyn substrate 1) protein (hs1) and kelch-like protein 6 (klhl6) (Figure 5). Similar functions were predicted for several genes with Src homology-3/2 (Sh3/2) domains and activities, such as src kinase-associated phosphoprotein 2 (skap2) and the dual adapter for phosphotyrosine and 3-phosphoinositide (dapp1). The tyrosine-protein kinase lyn also plays a regulatory role in the B cell receptor response after antigen binding. The CD9 antigen, which is expressed in many B cell subsets and in plasma cells in mammals [11,12], was most strongly induced at the early peak. The opposite was found for the CD97 antigen precursor, suggesting that it may have a role in activated B and T cells.

4: MHC antigen presentation

This gene set included 34 genes involved in processing and presentation of viral antigens via MHC class I and II. The expression profile was similar to that of the B cell response, but with less difference in average induction levels between the two peaks at 4 and 9 wpi, respectively log2-ER +1.09 and +1.21 (Figure 4d). Moreover, these genes were already significantly upregulated at the earliest time point (2 wpi). The gene set was dominated by genes related to the MHC class I pathway, covering antigen processing by the proteasome components PSMBs/TAPs and antigen presentation by the MHC class I heavy chain and the light chain beta-2-microglobulin (Figure 5). Examples of MHC class II related genes were a salmon homologue of the HLA class II histocompatibility antigen gamma chain and cathepsin S precursor, a lysosomal cysteine peptidase involved in degradation of peptides for antigenic presentation on MHC class II molecules [13].

5: T cell response

The fifth gene set included 69 genes with known or presumed roles in the regulation and effector functions of T lymphocytes. The expression profile showed a slight but significant upregulation from 2 to 4 wpi, which increased by an additional +1 median log2-ER at 8 wpi and reached a maximum induction of +1.4 log2-ER at 9 wpi (Figure 4e). This peak coincided with the highest levels of the MHC antigen presentation and B cell response genes, and with the time points when viral load and cardiac pathology were peaking. From 9 to 10 wpi, gene expression dropped significantly. All classes of effector T cells seemed to be activated from 8 wpi onwards: cytotoxic (CTL) cells, by induction of interferon gamma, granzyme and CD8 beta, and T helper cells, by induction of the CD4 T cell surface glycoprotein (Figure 5).
Upregulation of other genes with common regulatory roles in T cell activation included CD3 antigens, T cell receptor genes, the CD28 T-cell-specific surface glycoprotein and the proto-oncogene tyrosine-protein kinase lck.

6: Apoptosis

A group of 25 genes functionally linked to apoptotic pathways showed a coregulated expression pattern with the T cell response gene set, and was assumed to be involved in controlling cell death of T lymphocytes and/or host target cells, as their maximum induction coincided with the histopathology peak (Figure 4f). This gene set included several genes from the family of TNF receptors and caspases, with central roles in the execution phase of apoptosis (Figure 5). Interestingly, the majority of genes were linked to the family of Rho GTPases, with recently established roles in controlling T cell regulation and apoptosis, e.g. rho-related GTP-binding protein RhoF and G precursors, CDC42 small effector 2, rho GTPase-activating protein 15, regulator of G-protein signalling 1, and several genes related to the Ras superfamily (Additional file 1). Other activated regulators of programmed cell death in immunity included the tnf decoy receptor (tnfrsf6b) and the programmed cell death 1 ligand 1 precursor CD274.

Tissue regulation of immune pathways

Next, we analysed the tissue-specific features of immune transcriptome responses during CMS. Two RNA sample pools (n = 3 per pool, from the same individuals as analysed in the time course study) from the same organs as tested for viral load were analysed by microarrays at two time points: before the onset of cardiac pathology at 4 wpi, and at the peak of cardiac pathology/viral load at 8 wpi. The six gene sets outlined in the time course study were examined (Additional file 1), and their expression profiles are shown as box plots in Figure 6. Early antiviral and IFN-dependent genes were induced in all tissues, with significantly higher median log2-ERs at 4 wpi compared with 8 wpi (Figure 6a). Levels at 4 wpi were similar in kidney, heart, spleen and blood, being lower in the liver. MHC antigen presentation also responded to infection in all examined tissues and, except for heart, levels were generally higher at 4 than at 8 wpi (Figure 6d). The remaining functional groups showed restricted expression changes. The complement response was upregulated in spleen at the peak of pathology, 8 wpi (Figure 6b). Genes associated with B cells were upregulated in heart at both time points (Figure 6c). They also showed a weak but significant induction in kidney at 4 wpi and in RBC at 8 wpi. The T cell and apoptosis gene sets showed similar expression profiles, with induction in heart that was stronger at peak pathology (8 wpi) than at 4 wpi (Figure 6e-f). In addition, a significant though relatively weak increase was found in RBC between 4 and 8 wpi. Similar to the B cell response, kidney showed a transient induction at 4 wpi.

Real-time qPCR analyses

To verify the microarray results, six differentially expressed genes were analysed by qPCR in the four sample pools from 4 wpi. The results produced with the two independent methods were in close concordance (Figure 7). The coefficient of linear regression was close to unity (0.97), while correlation and linear dependency were highly significant (Pearson r: 0.85, p = 3.8 × 10^-6).
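The microarray-qPCR concordance reported above (regression slope near unity, significant Pearson correlation) can be checked as in the sketch below, which assumes paired log2 ratios from the two platforms; the arrays are hypothetical placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import pearsonr, linregress

# Hypothetical paired log2 expression ratios for the validated genes
microarray_log2 = np.array([2.1, 1.6, 3.0, 2.4, 4.1, 1.2])
qpcr_log2 = np.array([2.3, 1.4, 3.2, 2.2, 4.5, 1.0])

r, p = pearsonr(microarray_log2, qpcr_log2)   # correlation between platforms
fit = linregress(microarray_log2, qpcr_log2)  # slope ~1 indicates concordance
print(f"Pearson r = {r:.2f} (p = {p:.2g}); slope = {fit.slope:.2f}")
```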
The qPCR analyses also assessed the individual variation and the relationship between viral load and gene expression changes at 4 wpi. Six putative antiviral and IFN-dependent genes from gene set 1 were selected due to their high induction levels at this early time point. Relative expression of rig-I, mda5, stat1a, ifit5, rsad2 and baf was determined in 20 individuals from the CMS-infected groups 3 and 4 versus the same control pool as used for the microarray experiments (n = 10) (Figure 8). These genes were strongly induced in all fish, with median fold changes from +3 (mda5) to +52.5 (baf). At this time point, no significant histopathological changes were observed, and equal numbers of individuals had histopathology scores of 0 or 1. As expected, none of the analysed genes showed significantly different expression between fish with histopathological scores 0 and 1 (both corresponding to a normal state of the heart). Viral CT values in heart varied between 19 and 25 in these individuals, and gene expression levels and viral CT values were strongly correlated for all six genes (Table 1). This implied that the genes were activated as a result of increased viral replication, and suggested that they might represent markers of early infection status.

Discussion

This study addressed the temporal and spatial development of immune responses assessed by transcriptome changes during experimental piscine myocarditis virus infection. The regulation of immune pathways was compared with the disease status evaluated by histopathology and viral load, aiming at a comprehensive understanding of the host-virus interactions. These results provide a framework for in-depth functional studies on immunity and for evaluation of preventive strategies such as vaccination and nutritional intervention during CMS in Atlantic salmon.

Challenge trial and infection

Since the discovery of CMS, its diagnosis has been based on clinical findings and cardiac histopathology [4]. A virus with structural similarities to the Totiviridae family, named PMCV, was recently identified as the presumed causative agent of CMS [8]. Thus, pathogenesis and disease progression can now be more thoroughly evaluated by combining virus-specific qPCR with histology. It should be noted that, due to difficulties with PMCV cultivation in vitro, virus titration has not yet been successful (M. Rode, personal communication). Consequently, the relative expression of viral RNA in this study could not be related to actual numbers of viral particles. Cardiac histopathology showed moderate to severe lesions consistent with CMS (score 2 or 3) exclusively in the infected groups, with only one exception in the control groups at 10 wpi. Furthermore, replicate groups were very similar to each other with respect to histopathology score. Significant differences between group replicates were observed only between the infected groups at two time points (atrium at 10 wpi and epicardium at 11 wpi). The differences between infected and control groups were mainly associated with lesions in the atrium and the spongy layer of the ventricle, which were highly significant from 4 to 11 wpi and from 6 to 11 wpi, respectively. As expected, no lesions were observed in the compact layer of the ventricle, and lesions in the epicardium were less prominent. These results are coherent with the pathology described from clinical outbreaks and previous challenge trials with CMS [4,7,8]. After the peak in histology score at 8-9 wpi, lesions declined gradually, suggesting the onset of a recovery phase.
This was supported by qPCR analysis of viral load, which followed the same pattern: increased replication until 6 wpi, followed by a plateau phase until 10 wpi, and finally decreasing levels towards the end point of the challenge trial. Thus, the strong correlation between histopathology and viral load, which peaked concurrently with the activation of T cell pathway genes, suggests that the observed cardiac lesions resulted from virus cytopathic effects and necrosis of infected myocytes, triggering an inflammatory response followed by activation of T-cell-mediated immunity. Examination of viral loads across different tissues showed equally high levels of viral RNA in kidney and spleen as compared with heart, while liver and blood cells had lower levels. However, increased replication from 4 to 8 wpi was observed only in heart, supporting that this organ was the main site of virus propagation [8]. Nevertheless, heart may not be the primary replication site, since viral RNA was detected in all tissues and blood cells early after infection. High infection levels in kidney and spleen are typical for viral diseases in salmon and are probably related to their roles in attracting primary infected/antigen presenting cells and priming lymphocytes for specific immunity. In a challenge trial with the recently described piscine reovirus (PRV) [14], higher viral loads were found in spleen and kidney, compared with lower but similar levels in heart and liver [15]. While belonging to different families, both PRV and PMCV cause necrosis and inflammation in heart muscle. The lower levels of PRV in heart may reflect a more persistently infecting nature compared with PMCV [8,15]. While clinical CMS outbreaks typically give 5-20% mortality [4], no fish died in the present challenge trial. This suggests that during natural CMS outbreaks, either larger fish (or different life stages), higher numbers of infectious viral particles, or possibly additional stressors must be present to give mortality. Coherence between distinct stress factors and viral infection resulting in higher mortality has been shown for other diseases in farmed Atlantic salmon [16,17]. An interesting observation was the high proportion of fish with no or moderate cardiac lesions at time points when viral loads and histopathological scores were significant. Thus, fish obviously exhibited different outcomes of infection; comparison of these groups is currently under investigation. In the present study, fish with the strongest pathology and viral infection were selected at each time point, in order to characterise immune responses at the transcriptome level in fish at similar stages of the disease process and with representative CMS pathology. Fish were challenged by injection to ensure simultaneous infection of all fish, and because cohabitation had been shown to give slower development of disease and weaker overall pathology ([8] and unpublished results). The individual qPCR analysis of six antiviral genes in 20 fish at 4 wpi showed similar levels of upregulation, supporting that all fish had mounted equal antiviral responses following infection and were in a similar disease state. This was further supported by a significant increase in viral loads between weeks 2 and 4.

Temporal development of immune responses

Early antiviral and IFN responses were activated at every time point and across tissues during infection. However, the overall expression profile showed declining mRNA levels with time, in spite of increased virus replication.
This contrasted with the strong correlation between expression levels of six selected genes and viral load observed at 4 wpi, implying that activation of these genes with increased production of viral RNA was predominant at the early stage and possibly related to autocrine effects, such as pathogen recognition and induction of signalling pathways. The subsequent reduction in transcriptional activity might be due to increased paracrine effects of proteins in the induced innate responses, including both effector functions for clearance of virus and recruitment of immune cells for development of humoral and adaptive immunity. IFN type I responses to different viral diseases have been reported in salmonids, with particular focus on IFN alpha and the Mx protein [18][19][20][21]. We identified a suite of putative IFN-dependent genes with stronger upregulation that have been unknown or scarcely investigated in salmon. Most of these genes have shown responses to other viral diseases in salmon [9]. The strongest induction at 2 wpi was found for ifit5 and rsad2, also known as viperin; both genes are known to be induced by IFN and involved in defence against viruses [22,23]. Little is known about gig1- and gig2-like genes in fish, but they were induced by viral infection in grass carp cells [24], and members of this gene family were also strongly induced in rainbow trout 24 h post infection with the parasite causing whirling disease [25]. Four different genes from the tripartite motif (TRIM) family C-IV were also significantly induced over several time points. One of them, TRIM25, has been implicated in the RIG-I pathway by regulating the capability of RIG-I to activate type I IFN [26,27]. Several genes belonging to families of IFN-inducible GTPases were also induced early, including two transcripts similar to very large inducible GTPase 1 (VLIG1) and eight transcripts similar to GTPase IMAP family member 7. The role of these novel GTPases in vertebrate infection is gaining interest [28,29]. Proteins of the complement system bind and opsonize viral particles, marking them for phagocytosis by APCs. Binding to antigen-antibody complexes makes the complement system a bridge between the innate and the adaptive immune systems. This is in line with the results of the present study, where upregulation of complement genes at 6 wpi took place shortly after the first activation of B cell and MHC antigen presentation genes and the onset of cardiac histopathology. At subsequent time points, activation of the adaptive immune response was most prominent. This distinct sequence of immune events was evidence of a coordinated regulation of responses and of the 'bridging' role of the complement system between the early innate response and the fully activated adaptive response. Coincidence with the first occurrence of moderate cardiac lesions (histopathology score 2) suggests complement genes as candidate disease markers for the early clinical stage of CMS. The immediate activation of antigen presentation, as has been observed during early virus infection in salmon [19], was supported by the upregulation of proteasome and MHC class I pathway genes that coincided with the early IFN/antiviral response at 2 wpi. This was analogous to the typical development of an adaptive immune response: while IFNs are most strongly activated and elicit antiviral effects very early after infection, they also have an activating effect on antigen processing and presentation [30].
Activation of antigen presentation is also the first step in the cellular immune response mediated by B and T lymphocytes. The first peak of B cell activity was detected at 4 wpi, following the typical pattern of a humoral immune response in teleost fish, usually expected between 4 and 6 weeks after infection [31]. However, the co-regulated B cell and MHC antigen presentation genes showed a biphasic expression, with a second and even stronger activation at 8 and 9 wpi, when the clinical signs were also peaking. This observation is probably explained by the higher influx of leukocytes and the level of inflammatory reactions in heart tissue, as supported by histology. Interestingly, the stronger second peak of induction occurred after the activation of complement genes at 6 wpi. This may indicate that a potential humoral response based on antibody-dependent cellular cytotoxicity and virus neutralization is complement-dependent; future development of tools for assessment of virus-specific antibody titers may confirm this. Most of the representative genes (Figure 5) followed the typical regulatory pattern for B cell and antigen presentation components. For example, the strongest induction of CD9 was found at 2 and 4 wpi, before the strongest T cell activation was detected. It has been shown that CD9 is induced downstream of the antigen receptor during the T-independent humoral B cell response [32]. However, the majority of genes showed the highest upregulation in heart at peak pathology and viral load, 8 and 9 weeks after infection, indicating their role in B cell responses and presentation of viral antigens to effector T cells. One example was CD97, a surface protein of both B and T lymphocytes that is expressed at low levels in inactive cells but rapidly induced after activation [33]. Thus, it can be used as a marker of general activation of lymphocytes; in this study it was a representative marker gene for the overall expression profile of B/T lymphocyte-related responses. The co-regulatory pattern of T cell- and apoptosis-related genes correlating with histopathology score was a prominent feature of the immune response in CMS hearts. Control of cell death by apoptosis is a fundamental process for regulation of the T cell response and for maintaining homeostasis in the immune system after it has expanded to combat infections [34]. Importantly, dysfunctional control of T cell function and apoptosis is associated with immunopathology [35]. Thus, the apoptosis-related profile coinciding with the T cell profile in this study may represent novel genes involved in regulation of effector function and controlled cell death of T cells in salmon. Of particular interest were several genes encoding TNF-related proteins and programmed cell death ligand 1 (aka B7-H1/CD274). The dominance of genes encoding Rho GTPases was interesting, since they have been implicated in the regulation of TCR signaling, T cell cytoskeletal reorganization, T cell migration and T cell apoptosis [36]. A turning point seemed to occur between 6 and 8 wpi, when expression of T cell/apoptosis pathways was significantly induced, coinciding with the first occurrence of histopathology score 3 and viral CT values below 20. According to this pattern, the first severe inflammation and cytopathic effects caused by the virus (histopathology score 2) at 6 wpi were probably the priming event for a strong influx of lymphocytes to the infected heart tissue.
Cardiac elevation of mRNA levels for CD8, granzyme and IFN gamma at 8-10 wpi indicated activity of cytotoxic CD8+ T cells. Genes encoding CD4 were also induced, but at lower levels. One week after the elevation of T cell activity (9 wpi, median relative fold change of PMCV = 10,021), viral load and histopathology score were decreasing (10 wpi, median PMCV fold change = 8,404), and the first significant decrease was evident at 11 wpi (median PMCV fold change = 983). This indicated that the cellular effector response mediated by T cells, and in particular CD8+ T cells, contributed to a successful clearance of the virus infection. Among other interesting genes in this group, the TNF decoy receptor showed the highest correlation with histopathology score and viral load; however, the function of this receptor in salmon immunity is not known.

Tissue regulation of immune responses

The systemic induction of early antiviral and IFN-dependent genes was expected, given the observed replication of PMCV in all tissues and the fact that most of these genes are presumably activated in the presence of viral RNA. The stronger induction at early infection (4 wpi) compared with peak viral load (8 wpi) was common for all tissues and blood, and has already been discussed. The functional relation between IFNs and MHC antigen presentation pathways was supported by their similar expression profiles across tissues and time points. The only exception was the cardiac expression of the latter, which was equally induced at both time points (4 and 8 wpi). Proteasome and MHC components are commonly induced by IFNs during viral infection [19,37]. In addition to heart tissue, where pathology developed, these genes were equally induced in spleen and kidney at 4 wpi, supporting the importance of these tissues for lymphocyte maturation and priming of the immune response [38]. The observation that this induction was not in sync with viral load may further suggest that these responses were time-dependent, i.e. related to the stage of disease rather than to viral load and pathology. Little is known about the expression of complement components in Atlantic salmon during viral infections. In common carp and channel catfish, the highest expression of complement was found in the liver [39,40]. In humans, the liver is also the main source of complement component C3, but production in macrophages and endothelial cells has also been shown [41]. During CMS, complement genes were activated only in extrahepatic tissue, and more specifically in cardiac tissue, where virus infection was most prominent. Interestingly, complement genes were induced in the spleen during the clinical phase, suggesting that splenocytes (e.g. macrophages) represent an important source of complement and can play a role in this response in salmonid virus infection. This induction of complement was also independent of viral load, which was equal between 4 and 8 wpi. In humans, the complement component C3 has an important role in regulating the maturation of B cells in the spleen [42]. Thus, the induction of splenic complement might reflect signalling events between activation of antigen presenting cells, such as B cells, and possibly production of virus-specific antibodies; however, more research is needed to understand this process. Tissue regulation of the adaptive immune responses, as represented by expression of the B cell, T cell and apoptosis gene sets, shared some common features.
Most notable was the opposite regulation of these responses in heart and kidney between the early and clinical stages, characterised by induced expression from 4 to 8 wpi in heart and reduced expression from 4 to 8 wpi in kidney. Interestingly, twelve genes (among them CD8, CD37, granzyme and the TNF decoy receptor) showed no regulation in heart but induced expression in kidney at 4 wpi. On the contrary, at 8 wpi induction was restricted to heart, while no expression changes were found in kidney. This could be evidence of an early clonal expansion and maturation of effector T cells in kidney, which then migrated to the heart for elimination of virus-infected cells four weeks later. The adaptive immune responses in kidney were activated at the early stage of infection despite equally high levels of viral RNA at both 4 and 8 wpi, further suggesting a specific role for kidney in the early priming and maturation of cellular immunity.

Conclusions

We used oligonucleotide microarrays to assess transcriptome changes in Atlantic salmon experimentally infected with PMCV, which induced cardiac pathology consistent with CMS and transient viraemia. From comparative analysis of gene expression, histology and viral load, the temporal and spatial regulation of immune responses was characterised and novel immune genes were identified, ultimately leading to a more complete understanding of host-pathogen responses and of pathology and protection in Atlantic salmon during CMS.

Experimental infection and sampling

The infection trial was performed at VESO Vikan (Veterinary Science Opportunities, Namsos, Norway), a GLP-certified research station for infectious challenge experiments on aquatic organisms. The trial was approved by The National Animal Research Authority (http://www.fdu.no) according to the 'European Convention for the Protection of Vertebrate Animals used for Experimental and other Scientific Purposes' (EST 123). The experimental design, with selection of sampling times and PMCV inoculum, was based on results from two previous pilot trials [7] (and unpublished results). In both of these experiments, histopathological lesions associated with CMS were significant from week 6 until week 10 post challenge (injection). Therefore, in the present study we sampled weekly from 8 until 12 weeks post challenge, aiming to cover the period with CMS pathology. Biweekly sampling after infection (2, 4, 6 wpi) was done in order to cover the early phase before pathology. Unvaccinated Atlantic salmon (Salmo salar L., standard strain from Aqua Gen AS) were smoltified (seawater-adapted) according to standard procedures and kept at 12°C under standardised conditions (light, feeding, water flow, salinity and fish density). Fish were acclimatised in their respective tanks for at least one week before challenge. The trial was conducted in four separate tanks: one infected and one control group, in duplicate. Each tank contained 120 fish with an average weight of 50 g at the beginning of the experiment. Injection of PMCV was performed after sedation (benzocaine, 30-40 mg L^-1). Infected groups received an intraperitoneal (i.p.) injection dose (0.2 ml) of a supernatant from a GF-1 cell line (derived from the fin tissue of orange-spotted grouper, Epinephelus coioides [43]) infected with PMCV as described [8]. In short, heart tissue from freshly dead Atlantic salmon was collected from a clinical field outbreak of CMS (diagnosed by histopathological examination, score > 3).
Tissue was homogenised, centrifuged to remove cellular debris (4000 g at 4°C for 20 min) and filtered (0.22 μm filter) before inoculation onto GF-1 cells. Cells were grown in plug seal cap culture vessels at 15°C in L-15 supplemented with 1% L-glutamine (2 mM), 0.1% gentamicin sulphate (50 μg ml⁻¹) (all from Sigma Aldrich, St. Louis, USA) and 10% fetal bovine serum (Invitrogen, CA, USA). Cytopathic effect (CPE) was evident as accumulation of cytoplasmic vacuoles from 6 until 21 days post inoculation, when supernatant and cell lysate were harvested. CPE was reproduced when passaged onto fresh cells. Inoculation of cells with heart tissue homogenate prepared from healthy Atlantic salmon (confirmed by histopathology, score 0) did not give CPE. Tanks with control groups were injected i.p. with the same dose of conditioned medium from uninfected cell culture prepared as described above. Both PMCV and control inocula tested negative for salmonid alphavirus subtype 3, infectious pancreatic necrosis virus, infectious salmon anaemia virus, piscine reovirus and grouper nervous necrosis virus by qPCR. An overview of samplings and analyses is given in Additional file 3. Tissues and blood were sampled at eight time points: 2, 4, 6, 8, 9, 10, 11 and 12 wpi, in order to cover the early infection phase (biweekly sampling) and the clinical phase with improved coverage (weekly sampling). In addition, samples were taken from fish before the experiment started (0 wpi). At each time point, 15 fish from each of the four tanks were sedated (as described above) and euthanized by decapitation. Standardised samples from heart, mid-kidney, liver and spleen were snap-frozen in liquid nitrogen for RNA, and fixed in formalin (10% neutral phosphate-buffered) for histology. Blood was sampled from the caudal vein in heparinized vacutainers and kept on ice. Peripheral blood leukocytes (PBL) were separated from red blood cells (RBC) as described [44] and stored at -80°C until RNA was extracted. Histopathology Formalin-fixed heart samples were prepared by paraffin wax embedding and standard histological techniques [45]. Sections were stained with haematoxylin and eosin. From each fish, a longitudinal section of the whole heart was evaluated. All cardiac compartments were examined and classified histologically using a visual analogue scale. Atrium, epicardium, compact and spongy layers of the ventricle were graded from 0 to 3 according to the severity of the lesions [7]. Scores 0 and 1 were considered normal, corresponding to no histopathological findings (score 0) or a single or few focal lesions (score 1). Score 2 represented several distinct lesions and increased mononuclear infiltration. Score 3 represented multifocal to confluent lesions in > 50% of tissue and moderate to severe leukocyte infiltration. Sections were coded and evaluation was randomised and blinded. RNA extraction Tissue samples for microarray hybridization and qPCR were stored at -80°C prior to RNA extraction. Standardised tissue sections of 10 mg from each organ and 5-10 × 10⁶ blood cells (PBL and RBC) were prepared under sterile/RNase-free conditions. Tissue sections from heart consisted of an equal mix of ventricle and atrium with all compartments included. Frozen sections were transferred directly to 1 ml chilled TRIzol (Invitrogen) in 2 ml tubes with screw caps (Precellys® 24, Bertin Technologies, Orléans, France).
Two steel beads (2 mm diameter) were added to each tube and tissue was homogenized in a Precellys® 24 homogenizer twice for 25 sec at 5000 rpm with a 5 sec pause between runs. Blood samples were homogenized in 1 ml chilled TRIzol by repetitive pipetting up and down. RNA was extracted from the homogenized tissues using PureLink RNA Mini kits according to the protocol for TRIzol-homogenised samples (Invitrogen). The concentration of extracted total RNA was measured with a NanoDrop 1000 Spectrophotometer (Thermo Scientific, Waltham, MA, USA). The integrity of total RNA was determined using an Agilent 2100 Bioanalyzer with RNA Nano kits (Agilent Technologies, CA, USA). Samples with an RNA integrity number (RIN) of 8 or higher were accepted. Design of microarray experiments An overview of microarray analyses is given in Additional file 3. The salmonid oligonucleotide microarray (SIQ2.0, NCBI GEO platform GPL10679) was used, consisting of 21K features printed in duplicate on 4 × 44K chips from Agilent Technologies [9]. A two-color design was used, where pooled infected fish labelled with fluorescent Cy5 dye and pooled non-infected control fish from the same time point labelled with Cy3 dye were competitively hybridised on the array. Microarray hybridizations were divided into two experimental lines. The first was a time course study in heart tissue from 2, 4, 6, 8, 9 and 10 wpi. These time points were selected based on the results from histopathological examination (Figure 1), and covered the early infection phase and peak of cardiac pathology. For each time point, biological replicates of test (infected) samples consisted of two RNA pools, each pool consisting of three individual fish. For time points 4 and 8 wpi, representing respectively the early and clinical phase of infection, two new pools were added, each consisting of two fish (different from those used in the first two pools). The individual fish were selected for maximum heart histopathology score at the time points when this was significant (from 6 wpi onwards, see Figure 1). Reference samples were pooled RNA (equimolar amounts of total RNA) from eight to ten fish from groups 1 and 2 (non-infected controls) for each time point. The second experimental line focused on tissue responses in mid-kidney, liver, spleen, PBL and RBC (in addition to heart as described above) at the early and clinical phase of infection, respectively 4 and 8 wpi. Similar to the time course study, biological replicates were two pools of RNA, each consisting of three individual fish, for each tissue and time point. The two pools were RNA from the same six individuals as used for the time course study. Reference sample pools were prepared in the same manner, with RNA from the same non-infected control individuals as used for the time course study. Recording of microarray experiment metadata was in compliance with the Minimum Information About a Microarray Experiment (MIAME) guidelines [46]. Microarray hybridization and data processing Unless specified otherwise, all reagents and equipment used for microarray analyses were from Agilent Technologies and used according to the manufacturer's protocols. Labelling and amplification of RNA were performed on 500 ng total RNA using Quick Amp Labeling Kits, Two-Color and RNA Spike-In Kits, Two-Color. For fragmentation of labelled RNA, the Gene Expression Hybridization Kit was used. Hybridizations were performed for 17 hours in an Agilent hybridization oven set to 65°C with a rotation speed of 10 rpm.
Arrays were washed for one minute with Gene Expression Wash Buffer I at room temperature, and one minute with Gene Expression Wash Buffer II at 37°C. Slides were scanned immediately after washing using a GenePix Personal 4100A scanner (Molecular Devices, Sunnyvale, CA, USA) at 5 μm resolution and with manually adjusted laser power to ensure an overall intensity ratio close to unity between the Cy3 and Cy5 channels and minimal saturation of features. The GenePix Pro software (version 6.1) was used for spot-grid alignment, feature extraction of fluorescence intensity values and assessment of spot quality. After filtration of low-quality spots, data were exported into the STARS platform [9] for data transformation, normalization and quality filtering. The values in spot replicates were averaged, Lowess normalization of log₂-expression ratios (ER) was performed, and differentially expressed genes (DEG) were selected based on mean |log₂-ER| > 0.65 in at least one time point and tissue, a spot signal quality threshold, the number of positive spots and a one-sample t-test (p < 0.05, H₀: log₂-ER = 0). Corrections for false discovery rate were not employed, as previous microarray studies in Atlantic salmon have demonstrated them to be overly conservative [47,48]. The final list of DEG used for further analysis included 5712 genes. Data were submitted to GEO (submission number GSE28843). Gene sets and annotations For this work, functional subgroups or gene sets were compiled from the list of 5712 differentially expressed genes (Additional file 1). These were created by the use of the STARS software package customized for mining of microarray gene expression data [9]. STARS contains custom annotations of genes on the microarray based on GO classes, KEGG pathways, mining of literature and public databases and experimental evidence (transcription profiles/meta-analyses). Quantitative real-time RT-PCR The following section relates to the analysis of host gene expression. Experiments were conducted according to the MIQE guidelines [49]. Synthesis of cDNA was performed on 0.2 μg DNAse-treated total RNA (Turbo DNA-free™, Ambion, Austin, TX, USA) using the TaqMan® Gold Reverse Transcription kit (Applied Biosystems, Foster City, CA, USA) in 25 μl reactions with random hexamer priming according to the manufacturer's protocol. Complementary DNA was stored undiluted at -80°C in aliquots to avoid repeated freeze-thawing. To control for residual DNA contamination, reactions without RT were tested on the respective tissues, and qPCR primers were designed to span introns where possible. Oligonucleotide primers were designed with the program eprimer3 from the EMBOSS program package (version 5.0.0, http://emboss.sourceforge.net/). Amplicon size was set to 80-160 bp and melting temperature to 59-61°C. Primers were purchased from Invitrogen (Additional file 4). In silico analysis of gene targets was performed using a customised program for BLAST and sequence alignments [9]. PCR amplicon size and specificity were confirmed by gel electrophoresis and melting curve analysis (Tm calling; LightCycler 480, Roche Diagnostics, Mannheim, Germany). QPCR was conducted using 2× SYBR® Green Master Mix (Roche Diagnostics) in an optimised 12 μl reaction volume, using 5 μl of 1:10 diluted cDNA and primer concentrations of 0.42 μM.
PCR reactions were prepared manually and run in duplicates in 96-well optical plates on a LightCycler 480 (Roche Diagnostics) with the following conditions: 95°C for 5 min (pre-incubation), 95°C for 5 sec, 60°C for 15 sec, 72°C for 15 sec (amplification, 45 cycles) and continuous increase from 65°C to 97°C with standard ramp rate (melting curve). Cycle threshold (CT) values were calculated using the second derivative method. For evaluation of the results, the mean of duplicates was used. Duplicate measurements that differed by more than 0.5 CT were removed and reanalysed. Relative expression ratios of test samples versus the average of the controls were calculated according to [50]. Elongation factor 1α (GenBank ID: BT072490.1) was used as reference gene [51], and was found to be stably transcribed in control and test samples according to the BestKeeper software [52]. The efficiency of the PCR reactions was estimated for all primer pairs by a six-step 1:5 dilution series of a cDNA mix of all samples used. The efficiency values were estimated using the LightCycler® 480 Software (version 1.5.0.39). All measured efficiencies were between 1.905 and 1.999. Viral load Relative quantification of PMCV was performed by qPCR on RNA isolated as described above from selected samples (heart: weeks 0, 2, 4, 6, 8, 9, 10, 11 and 12, n = 6; kidney/liver/spleen/PBL/RBC: weeks 4 and 8, n = 6). Each sample's RNA concentration was normalized to 62.5 ng per 20 μl cDNA synthesis reaction, which was part of the SuperScript® III Platinum® Two-Step qRT-PCR Kit with SYBR® Green (Invitrogen). In order to reduce secondary structures, RNA was heat denatured at 95°C for 5 min and then cooled down to 4°C prior to addition of RT enzyme and master mix. cDNA synthesis reaction conditions and temperature cycling were kept in line with the manufacturer's guidelines. qPCR was performed on each sample in triplicate reactions containing 12.5 μl 2× Platinum® SYBR® Green qPCR SuperMix-UDG (with ROX reference dye added to the master mix to a final reaction concentration of 50 nM), 1.25 μl of 6 μM ORF2-3F (5'-GGAAGCAGAAGTGGTGGAGCGT-3') and 1.25 μl of 6 μM ORF2-3R (5'-CCGGTTTTGCGCCCTTCGTC-3'). Ten μl of a 1:10 dilution of cDNA was added per reaction. The reaction conditions were UDG incubation at 50°C for 2 min, activation of the hot-start polymerase at 95°C for 2 min, followed by 45 cycles of 95°C for 15 sec, primer annealing for 15 sec and extension for 45 sec at 60°C. Melting curve analysis was performed to confirm formation of the expected amplicon. The viral loads were expressed as a relative copy number with non-infected controls (0 wpi) set to 1, calculated by the formula 2^(CT(0 wpi median) − CT(sample)). Statistical analyses Histopathology scores and pair-wise comparisons of gene sets were tested for significant differences by an independent two-sample t test using the t.test() function in the R stats package (R version 2.10.1, http://www.cran.r-project.org/). In addition, one-way ANOVA followed by the Newman-Keuls test was used to assess differences between time points and tissues for each gene set. Correlations and respective p-values were calculated by the cor.test() function in R. For all tests, p-values with 0.01 < p < 0.05 are marked with a single asterisk (*), and p < 0.01 with a double asterisk (**), in all figures.
The regression line and the respective p-value for the qPCR confirmation of the microarray experiments were calculated with the lm() ("linear model") function in R.
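To make the relative quantification formula and the R-based tests above concrete, a minimal sketch (not the authors' code; all CT values and histopathology scores are made-up illustrative numbers) could look like this:

```r
# Relative viral load by the delta-CT method, with non-infected
# controls (0 wpi) set to 1, following the formula above.
ct_0wpi <- c(35.1, 34.8, 35.4, 35.0, 35.2, 34.9)  # hypothetical control CT values
ct_test <- c(22.3, 23.1, 21.8, 22.7, 23.4, 22.0)  # hypothetical infected CT values
rel_load <- 2^(median(ct_0wpi) - ct_test)
summary(rel_load)

# Correlation of viral load with histopathology score, analogous
# to the cor.test() analyses described above.
histo_score <- c(2.1, 1.8, 2.6, 2.0, 1.5, 2.4)    # hypothetical scores
cor.test(rel_load, histo_score)
```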
Increased prevalence of impulse control disorder symptoms in endocrine diseases treated with dopamine agonists: a cross-sectional study Introduction Impulse control disorders (ICDs) have been described as a side effect of dopamine agonists (DAs) in neurological as well as endocrine conditions. Few studies have evaluated the neuropsychological effect of DAs in hyperprolactinemic patients, and these have reported a relationship between DAs and ICDs. Our objective was to screen for ICD symptoms in individuals with DA-treated endocrine conditions. Materials and methods A cross-sectional analysis was conducted on 132 patients with pituitary disorders treated with DAs (DA exposed), as well as 58 patients with pituitary disorders and no history of DA exposure (non-DA exposed). Participants responded to the full version of the Questionnaire for Impulsive-Compulsive Disorders in Parkinson's disease (QUIP). Results Compared with the non-DA-exposed group, a higher proportion of DA-exposed patients tested positive for symptoms of any ICD or related behavior (52% vs. 31%, p < 0.01), any ICD (46% vs. 24%, p < 0.01), any related behavior (31% vs. 17%, p < 0.05), compulsive sexual behavior (27% vs. 14%, p < 0.04), and punding (20% vs. 7%, p < 0.02) by QUIP. On univariate analysis, DA treatment was associated with a two- to threefold increased risk of any ICD or related behavior [odds ratio (OR) 2.43] and any ICD (OR 2.70). In a multivariate analysis, independent risk factors for any ICD or related behavior were DA use (adjusted OR 2.22) and age (adjusted OR 6.76). Male gender was predictive of the risk of hypersexuality (adjusted OR 3.82). Discussion Despite the QUIP limitations, a clear sign of increased risk of ICDs emerges in individuals with DA-treated pituitary disorders. Our data contribute to the growing evidence of DA-induced ICDs in endocrine conditions. Introduction Impulse control disorders (ICDs) are psychopathological conditions characterized by difficulty resisting urges to engage in behaviors that are excessive and potentially harmful to oneself or others [1]. These disorders can cause significant impairment in social and occupational functioning, as well as legal and financial difficulties [1]. The four major ICDs are pathological gambling, hypersexuality, compulsive buying/shopping, and binge eating [2]. ICDs have been described as a side effect of treatment with dopamine agonists (DAs) since 2000, when the first cases of pathological gambling were reported in patients treated for Parkinson's disease (PD) [3]. Up to 20% of PD patients experience ICDs over the course of their illness [4]. An increased frequency of ICDs has also been described in individuals with restless leg syndrome treated with DAs [5]. Pharmacological stimulation by DAs of the D3 dopamine receptors in the mesocorticolimbic dopaminergic pathway seems to be the mechanism underlying the activation of the reward system that leads to ICDs [6]. DAs are also used to treat endocrine conditions, such as prolactin (PRL)-secreting adenoma, growth hormone (GH)- and GH/PRL-secreting adenoma, non-functioning pituitary adenoma, and idiopathic hyperprolactinemia. Therapy with DA aims to reduce levels of PRL and, in the case of adenoma, to induce tumor shrinkage. The dose of DA is generally lower for endocrine conditions (e.g., cabergoline 0.25-3.0 mg/week, up to 11 mg/week in resistant patients) than that used to treat PD [7].
Nevertheless, some cases of ICDs have been reported in prolactinomas treated with DA [8][9][10][11][12][13][14]. Only a few cross-sectional studies [15][16][17] and one prospective study [18] have evaluated the neuropsychological effect of DAs in hyperprolactinemic subjects; these studies found a relationship between DAs and ICDs. The present study aimed to screen for ICDs and related behavior symptoms in patients with DA-treated endocrine conditions through a neuropsychological screening questionnaire. Materials and methods This was an observational cross-sectional study including 190 patients enrolled in the outpatient neuroendocrine clinic of the "City of Health and Science of Turin" University Hospital (Turin, Italy) from October 2016 through February 2019. Of these, 132 patients were affected by functional hyperprolactinemia or PRL-, GH-, or GH/PRL-secreting adenoma with present or past use of DAs, defined as DA exposed; cabergoline was the only DA employed in the study. The reference group consisted of 58 patients with functional hyperprolactinemia or PRL-, GH-, or GH/PRL-secreting adenoma never treated with DAs, or with hypothalamic/pituitary disease without hyperprolactinemia, defined as non-DA exposed. Exclusion criteria were a history of PD and known psychiatric illness. Clinical information was collected through a review of medical records and during the survey. All participants in the DA-exposed and non-DA-exposed groups completed the following self-administered, validated neuropsychological tools: (1) the full version of the Questionnaire for Impulsive-Compulsive Disorders in Parkinson's disease (QUIP) to screen for symptoms of ICDs (compulsive gambling, compulsive sexual behavior, compulsive buying, compulsive eating) and related behaviors (hobbyism, punding, walkabout) [19]. The Italian QUIP version herein employed was used in an international, multicenter study on Parkinson's disease patients [20], with the questionnaire's last section on PD medication modified to non-PD medication. The full version assessed ICDs not only currently but also at any time during DA treatment, to include in the analysis those who discontinued cabergoline. (2) Test Your Memory was used to evaluate intellectual efficiency [21] and exclude severe cognitive impairment. Lastly, (3) the validated Italian version of the Hospital Anxiety and Depression Scale (HADS) was used to assess the presence of anxiety and depressive symptoms [22]. Data are presented as medians (first and third quartiles) for skewed variables. Statistical analysis was performed using the Stata program (version 15; StataCorp LLC, College Station, TX, USA). The DA-exposed and non-DA-exposed groups were compared using the Mann-Whitney U test for continuous variables and using the chi-squared and Fisher tests for categorical variables. Multivariate analysis was performed using a logistic regression model, after logarithmic transformation of all variables with a skewed distribution. A p value < 0.05 was considered statistically significant. The study was approved by the local ethics committee. All patients gave written informed consent. Results The population characteristics are summarized in Table 1. In the DA-exposed group, the median age was 40.5 years (27.5, 52.0) at diagnosis and 48.9 years (39.1, 62.7) at questionnaire completion; 58% were women. At study enrollment, current medications included DAs for 81% of DA-exposed individuals; 19% had previously discontinued DA treatment.
Median weekly cabergoline dose was 1 mg (0.75, 1.5) and median treatment duration was 4 years (1.5, 8). Compared with the DA exposed, the non-DA exposed had a higher median age at the time of diagnosis (52.0 years; 38.5, 66.0; p < 0.01) and of questionnaire completion (66.8 years; 52.0, 73.1; p < 0.01), as well as a trend toward a lower percentage of women (43%; p = 0.06). Hyperprolactinemia (PRL levels > 25 ng/mL) was observed in 84% of the DA exposed. Of these, 90% had PRL levels within a normal range after DA therapy. Normal PRL, testosterone, and both PRL and testosterone levels were found in 85%, 80%, and 76% of male DA exposed, respectively, after DA alone or in combination with testosterone replacement therapy. In the non-DA-exposed group, only 18% of patients had hyperprolactinemia. Of these, 90% had PRL levels within a normal range, spontaneously or after neurosurgery; one subject had borderline-elevated PRL values without the need for DA treatment. The results of the QUIP screening are summarized in Table 2. Compared with the reference group, the DA-exposed individuals showed a higher prevalence of compulsive sexual behavior (27% vs. 14%; p < 0.04) and punding (20% vs. 7%; p < 0.02), as well as a trend toward a higher prevalence of compulsive buying (18% vs. 9%; p < 0.07) and compulsive eating (22% vs. 12%; p < 0.08), but no difference in compulsive gambling (6% vs. 3%; NS), hobbyism (21% vs. 14%; NS), walkabout (8% vs. 5%; NS), excessive amount of time spent on impulsive behaviors (7% vs. 3%; NS), difficulty in controlling the amount of time spent on impulsive behaviors (8% vs. 5%; NS), or non-PD excess therapy (4% vs. 3%; NS). [Table 1: Clinical characteristics of DA-exposed (n = 132) and non-DA-exposed (n = 58) individuals; data as median (interquartile range).] [Table 2: Prevalence of impulse control disorder and related behavior symptoms in DA-exposed (n = 132) and non-DA-exposed (n = 58) individuals.] In a stratified analysis of DA-exposed men, compared with individuals not achieving normal levels of PRL and testosterone, those with a combined restoration of both hormones did not show a higher prevalence of compulsive sexual behavior (42% vs. 23%; NS), positive screening for any ICD (57% vs. 38%; NS), or positive screening for any ICD or related behavior (62% vs. 46%; NS). Several models of logistic regression were used to identify predictors of any ICD or related behavior (Table 3a). After adjusting for gender, screening positivity was associated with DA treatment (β 0.37; adjusted OR 2.1, 95% CI 1.06-4.26) and age as a continuous variable (β 0.03; per change in regressor over the entire range: adjusted OR 6.65, 95% CI 1.58-30.06). The association remained significant after adjusting for the HADS depression score. In an unadjusted analysis, screening positivity was associated with DA duration but not dose; however, this association did not remain significant after adjusting for age and gender.
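As a rough illustration of how such adjusted odds ratios are obtained (the study itself used Stata, not R; the data frame and column names below are hypothetical stand-ins), a logistic regression could be fitted along these lines:

```r
# Logistic regression for a positive QUIP screen (any ICD or related
# behavior) with DA exposure, age and gender as covariates.
# 'df' with columns quip_positive (0/1), da_exposed (0/1), age, gender
# is a hypothetical stand-in for the study data.
fit <- glm(quip_positive ~ da_exposed + age + gender,
           data = df, family = binomial)
summary(fit)

# Adjusted odds ratios with Wald 95% confidence intervals
exp(cbind(OR = coef(fit), confint.default(fit)))
```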
Discussion This study revealed that DA treatment for endocrine conditions is associated with a higher risk of testing positive for ICDs by QUIP. Aside from case series published in the literature [8][9][10][11][12][13][14], only a few endocrinological trials have investigated the relationship between DA and ICDs. The first cross-sectional study, conducted in the United States by Bancos et al., included 77 patients with prolactinomas treated with DA and 70 patients with DA-naïve, non-functioning pituitary adenomas [15]. The use of DA was associated with an increased rate of hypersexuality but not with the overall prevalence of ICDs. Interestingly, men with prolactinomas treated with DAs showed an unadjusted OR of 9.9 for any ICD compared with those with non-functioning pituitary adenomas. A multicenter cross-sectional study included 308 patients with DA-treated prolactinoma but lacked a control group [16]. A modified QUIP showed a prevalence of 17% for any ICD; hypersexuality was most common. Independent predictive factors for ICD were male gender and alcohol use; nadir PRL did not reach statistical significance. Another multicenter cross-sectional study enrolled 113 DA-treated hyperprolactinemic patients and 99 healthy controls [17]. Patients were more likely than controls to test positive by the QUIP-Shortened Version for any ICD, hypersexuality, compulsive buying and punding, and by the Hypersexual Behavior Inventory for hypersexuality. Independent risk factors for hypersexuality were male sex, eugonadism, a lower score on Hardy's classification of pituitary adenomas, and psychiatric comorbidity. A higher stress score was associated with compulsive buying and punding. A Turkish prospective study included 25 patients with prolactinomas and 63 controls (31 non-functioning pituitary adenomas and 32 healthy individuals) [18]. During a 1-year follow-up, 8% of the patients with prolactinomas developed hypersexuality, which reversed fully or decreased upon discontinuation of DA treatment. Our study represents one of the largest samples described in the literature, enrolling patients only from a single, tertiary referral center. Compared with the reference group, the DA-exposed patients were almost 11.5 years younger at diagnosis and 18 years younger at questionnaire completion. Because there was a trend toward a higher prevalence of women in the DA-exposed group, gender and age were used as covariates in a multiple logistic regression. The DA-exposed individuals were more likely to test positive by QUIP for any ICD, any related behavior, any ICD or related behavior, compulsive sexual behavior, and punding. Trends toward higher rates of compulsive buying and compulsive eating were also found in the case group. No difference in the prevalence of compulsive gambling, hobbyism, walkabout, excessive amount of time spent on impulsive behaviors, difficulty in controlling the amount of time spent on impulsive behaviors, or non-PD excess therapy was found between the two groups. Independent, positive predictors of any ICD or related behavior were DA use and age at questionnaire completion, but not male gender, which was an independent risk factor for sexual behavior only. The increased ICD risk at older rather than younger age was not in line with previous studies of the general population or of PD and endocrine patients.
This may be related to the clinical characteristics of our specific sample with endocrine disorders and cabergoline-induced ICDs and may, thus, not be applicable in other clinical settings. The implications of these findings are still unclear, as the impact of age on the type and severity of ICD needs further investigation. After adjusting for confounding factors, DA dose and duration did not correlate with the presence of any ICD or related behavior. The lack of association between DA dose and ICD risk is most likely a result of the limited statistical approach, because a resolution of ICDs has been described after DA dose reduction, not only after DA cessation [14]. To be consistent with a previous study on PD conducted by some of us [20], we planned to use the QUIP as a screening tool for ICDs. This questionnaire was validated in a sample of PD patients undergoing a diagnostic interview by an investigator blinded to the QUIP results [19]. The QUIP is designed to be sensitive for the detection of ICDs and related disorders, but it is not highly specific. In fact, Weintraub et al. combined the four ICDs (compulsive gambling, compulsive sexual behavior, compulsive buying, compulsive eating) to increase the sensitivity to 97% in identifying an individual with any ICD. Similarly, they combined the ICDs with compulsive behaviors to increase the sensitivity to 96% in identifying an individual with any disorder/behavior. The negative predictive values for each ICD were very high, whereas the positive predictive values were low overall. Thus, with a high degree of certainty, a negative screen corresponds to the absence of ICD. To deal with the low positive predictive value, an individual who screens positive should undergo a clinical interview. In our study, the high prevalence of positive screens could be explained by two hypotheses. First, the QUIP cut-off values validated by Weintraub et al. in a PD sample might not be adequate in our population, thus leading to an overestimation of the actual prevalence of ICDs. Second, the different personality profiles observed in patients with non-functioning pituitary adenoma [23] might have influenced the QUIP results in our non-DA-exposed group. It would be preferable to have the QUIP validated in an endocrine setting; the lack of a follow-up interview represents a major issue. As a screening tool, the QUIP does not provide specific information about the severity of ICDs or related behavior symptoms. Other authors used a specific questionnaire to assess the consequences of hypersexuality [17]. This limitation means that a positive screen needs to be followed by a clinical interview to verify whether a patient truly has clinically significant ICDs or other compulsive behaviors and how severe they are. Another advantage of a clinical interview is to better discriminate between a diagnosis of hypersexuality and a normal return of libido. Another limitation of the QUIP is the exclusion of other impulsive activities that would fit a non-neurological clinical setting. For example, De Sousa et al. suggested the inclusion of impulsive activities like exercise, caffeine consumption, and video game use [17], and we propose mobile and social network use, trichotillomania, kleptomania, and nail biting.
Also, it would be interesting to assess the ability to focus attention or concentrate, as DA-treated hyperprolactinemic patients, compared with controls, had a higher attentional impulsiveness subscale score, meaning a tendency for quicker impulsive decisions or cognitive impulsiveness [24]. Based on the QUIP characteristics, we postulate that this screening tool may be of value in detecting subclinical ICD symptoms in patients at risk for developing ICDs before starting DA treatment; however, this should be confirmed in an ad hoc study. Further studies are warranted to validate both the English and Italian QUIP in an endocrine setting, before and after DA therapy. Aside from the questionnaire performance, a clear sign of increased risk of ICDs emerges in individuals treated with DA and appears to be consistent across PD and endocrine populations. In fact, the adjusted OR for any ICD or related behavior in our study (2.22) was similar to that described in PD (2.72) [25]. Also, an increased risk for hypersexuality was shown in our sample (adjusted OR 2.62) and in a previous study (OR 5.07 in the whole group) [15]. The gender difference in the risk of developing hypersexuality or other ICDs raises some interesting questions. As male gender is a recognized independent risk factor for ICDs [26], we speculate that this could be due to a different neurobiological substrate between men and women, reflected in clinical disorders linked to the dopamine system. In the human brain, sex differences have been described in the extrastriatal dopamine D2 receptors of healthy individuals [27] and in the striatal dopamine D2/D3 receptors of smokers and nonsmokers [28]. Moreover, the mesostriatal and mesolimbic DA systems of rats show different sensitivities to circulating estrogens and androgens, because of specific subsets of midbrain DA neurons immunopositive for estrogen receptor β or androgen receptors. These findings provide an anatomical model with separate effects for androgens and estrogens over the mesostriatal and mesolimbic DA systems [29]. In summary, gonadal hormones appear to act like neuromodulators contributing to the sex differences in impulsive and compulsive behaviors [30]. Testosterone has been hypothesized to be permissive in the development of any ICD and hypersexuality, but no statistical association between hypersexuality and testosterone increase has been found [17]. In hypersexual men, an independent trend toward higher testosterone was found at assessment, but it did not reach statistical significance. In our study, normal PRL and testosterone levels were not associated with hypersexuality, any ICD, or any ICD or related behavior. However, our statistical power was limited, because 24% of our patients did not reach normal testosterone and PRL levels. It is noteworthy that in the literature, the relationship between testosterone and hypersexuality has been described as a trend or association that did not persist upon multivariate analysis; however, these findings may be related to limitations in the study design, the type of control group, and the lower percentage of men compared with women, which would considerably reduce statistical power. Finally, the cross-sectional nature of the present study does not address the direction of causality.
A prospective study is needed to examine any chronological relationship with the onset of ICD, because the lack of correlation between ICDs and duration of DA treatment might be due to recall bias, especially for those who had discontinued DA treatment. We also believe that the individual threshold dose for DA-induced ICDs cannot be addressed by a cross-sectional design but requires assessment in a longitudinal study. In conclusion, we confirm that DA treatment for endocrine conditions is associated with a higher prevalence of symptoms of any ICD or related behavior and, separately, any ICD, any related behavior, compulsive sexual behavior, and punding, as well as a trend toward a higher prevalence of compulsive buying and compulsive eating, as assessed by the full QUIP questionnaire. Although lower doses of agents with low D3 receptor affinity are used in an endocrine setting, the effects are similar to those seen in PD and restless leg syndrome, confirming that ICD development is not related to a preexisting alteration of the dopaminergic system but is secondary to DA exposure. DA use and age were predictive of the risk of any ICD or related behavior, whereas male gender was predictive of the risk of hypersexuality only. DA dose and duration were not associated with ICD risk. Although the QUIP is not validated in an endocrine setting and does not provide us with the actual frequency of ICDs as defined by standard criteria, it represents an easy-to-use, self-administered screening tool. This large, cross-sectional study from a single, tertiary referral center contributes to the growing evidence of DA-induced ICDs in endocrine conditions. Funding Open access funding provided by Università degli Studi di Torino within the CRUI-CARE Agreement.
Effects of Administrative Climate and Interpersonal Climate in University on Teachers' Mental Health A self-made questionnaire was used to test the influence of university administrative and interpersonal climate on teachers' mental health. A total of 826 teachers were selected by stratified random sampling from 20 universities across China, and the survey data were analyzed by correlation analysis and hierarchical regression analysis. The results show that the administrative and interpersonal climate in universities is a significant positive predictor of teachers' mental health. The study suggests that administrators and educational practitioners should strengthen the construction of soft power in universities, such as promoting culture and a positive organizational climate and building up a good administrative and interpersonal climate, so as to promote the development of teachers' mental health. Introduction Every university has its unique culture and climate, shown in such things as the atmosphere of the campus, the arrangement of the classrooms, the condition of the libraries and laboratories, the activity on the playgrounds, the students' clothes, ways of walking and tones of talking, people's manner of greeting each other, and even the bearing of the president (Zhu, 1982). These unique characteristics of each university are referred to as its organizational climate. In fact, the climate of an organization may be roughly conceived as the "personality" of the organization; that is, climate is to an organization as personality is to an individual (Halpin & Croft, 1963). A university's organizational climate is a set of lasting internal psychological features which distinguish one university from another (Robert, 1975; Hoy, Hannum, & Tschannen-Moran, 1998; Pan & Qin, 2007a).
There is much research on school organizational climate at home and abroad. It mainly focuses on the following aspects: (1) describing and measuring school organizational climate, with instruments such as the OCDQ (Halpin & Croft, 1963), the OCDQ-RE and OCDQ-RM (Hoy et al., 1991, 1996), the OCI (Stern, 1963), the POS (Likert & Bowers, 1968), and so on; (2) studying the relationship between school organizational climate and organizational outcomes, such as school effectiveness (Hoy et al., 1990; Gelade & Gilbert, 2003; Griffith, 2006; Van Houtte, 2005), organizational health (Cullen et al., 1999), student achievement (Hoy & Hannum, 1997; Dumay, 2009; Yin & Ma, 2009), teachers' job satisfaction (Nalcaci, 2012; Pan & Qin, 2007a), job burnout (Tian & Li, 2006) and teacher commitment (Riehl & Sipple, 1996; Zhu et al., 2011); (3) trying to predict and manipulate school organizational climate, for example, using school climate to predict school effectiveness (Hoy et al., 1990), school health (Cullen et al., 1999), school disorder (Gottfredson, 2005), teachers' job satisfaction and mental health (Deng, Pan & He, 2006; Pan & Qin, 2007b; Ou, Pan, & Huang, 2008) and student achievement (Hoy & Hannum, 1997; Dumay, 2009; Yin & Ma, 2009). In China, some scholars discussed school organizational climate early on; Zhu (1982), for example, analyzed it as early as the 1940s. He believed that school organizational climate is the spirit of a school, and he distinguished five meanings of the "spirit" in the term "school spirit". Pan (2007) holds that organizational climate in universities is a set of lasting internal psychological features which can distinguish one university from another, and found that school organizational climate has four dimensions: administrative climate, teaching climate, studying climate and interpersonal climate. At the same time, Pan found that school administrative climate and interpersonal climate had a distinct positive correlation with the mental health of teachers in secondary schools (Pan & Qin, 2007b; Ou, Pan, & Huang, 2008). However, is the mental health of teachers affected by organizational climate in universities? The answer is still uncertain; it is therefore well worth studying the relationship between the mental health of teachers and the organizational climate in universities. A university is a unique place to cultivate people. How good a school's organizational climate is will influence the mental health of its members directly or indirectly, just as Owens et al.
(1987) state that teachers appear smarter and more confident in a school with a pleasant environment than in a school with tense relationships. This assumption is based on the following theories. (1) Dialectical materialism (Gollobin, 1986). This theory holds that a person's subjective world originates from the objective world: one transforms oneself while trying to change the objective world. This relationship between subject and object provides the basis for the current study. (2) The theory of the "cognitive map" put forward by Tolman. According to Tolman and Lewin (Tang & Chen, 2001), a person only exists in interaction with the environment. People form a certain "cognitive map" or "conscious sense" of the environment when interacting with it, and this environment mainly consists of the social relationships, organizations and natural settings related to one's experiences. This "conscious sense" further governs a person's behavior and acts upon the environment and mental activities. (3) The "field theory" proposed by Lewin (1951). Lewin holds that a person's behavior is a function of his living space, and that the living space is made of "all possible elements, including people (P) and environment (E)". A formula can be used to show its meaning: B = f(P, E), where B is behavior, P is the person, E is the environment and f is a function. In the process of people's interaction with the environment, an interactive "field" is bound to arise. This "field" reacts upon people's minds and behavior; therefore, people's minds and behavior are constrained by it. Accordingly, in a school, a special place to educate the young and the most direct environment acting upon teachers, school organizational climate is certain to play a very important role in teachers' physical and psychological development. Based on the above analysis, the following hypotheses are proposed: Hypothesis 1: There is a positive relationship between university administrative climate (UAC) and teachers' mental health (TMH); UAC is a significant positive predictor of TMH. Hypothesis 2: There is a positive relationship between university interpersonal climate (UIC) and teachers' mental health (TMH); UIC is a significant positive predictor of TMH. Hypothesis 3: UIC mediates the relationship between UAC and TMH.
Participants This study uses a survey to examine the relationships among university administrative climate, interpersonal climate and teachers' mental health in China. The questionnaire is in Chinese, to ease the process of answering and to improve the quality of the responses. Using stratified random sampling, 4 universities were chosen from each of five districts of China (east, west, south, north and central), giving 20 universities in all. Fifty teachers in each university were chosen at random to answer the questionnaire. Of the 1000 questionnaires delivered, 826 were valid. […] open in the process of doing their duties), and administrative efficiency (AE: do managers possess good qualities and skills to manage, and how efficient is their management?). Interpersonal climate includes interpersonal action (IAc: the relationship between persons, showing itself in whether they are united and help each other in their work), interpersonal harmony (IH: a feeling of the environment, such as a harmonious and peaceful interpersonal relationship), interpersonal attitude (IAt: a tendency to recognize and attract each other, such as being friendly, kind and enthusiastic to each other), and interpersonal distance (ID: what persons perceive about the remoteness or closeness of the interpersonal relationship) (Pan and Song, 2014). A five-point scale (1 = never, 5 = always) is used. Exploratory and confirmatory factor analyses show that the scale's structural validity is good (χ²/df = 2.06, CFI = 0.92, GFI = 0.94, RMSEA = 0.045). The internal consistency reliability is 0.89. Following the standardization method of Hoy et al. (1991), scores were standardized using 500 as the average (the average of the score of each subscale in the school) and 100 as the deviation, resulting in a standardized score (SDS). The formula used for each subscale was SDS = 100(x − y)/SD + 500, where x is the score for each dimension, y is the average, and SD is the standard deviation. Standard values are interpreted as follows: an administrative or interpersonal climate standard value smaller than 500 is undesirable, and the smaller it is, the less desirable; a standard value greater than 500 is good, and the higher it is, the better. Self-Rated Health Measurement Scale (SRHMS) A subscale of the Self-Rated Health Measurement Scale (Xu, 1999) is used in this study to evaluate teachers' mental health. The scale is composed of three factors: positive emotion, negative emotion and cognitive function. It includes 15 items, each rated from 0 ("very unhealthy") to 10 ("very healthy"); negative emotion items take negative values, so the larger the total value, the healthier the respondent. The content validity, structural validity, criterion validity and reliability were shown to be good by Xu (1999). In this study, the internal consistency reliability is 0.86. Procedure The researchers are trained to collect the data. The teachers in each university complete the questionnaire as a group, on their own and anonymously. All measurements are processed with SPSS for Windows 20.0 and analyzed by means of description, partial correlation and hierarchical regression.
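As a small illustration of this standardization step, assuming the reconstructed formula SDS = 100(x − y)/SD + 500 above, a sketch in R with made-up raw scores is:

```r
# Standardized score: mean rescaled to 500, standard deviation to 100.
sds <- function(x) 100 * (x - mean(x)) / sd(x) + 500
raw <- c(3.2, 3.8, 2.9, 4.1, 3.5)  # hypothetical subscale scores
round(sds(raw))                    # values above 500 indicate a good climate
```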
Correlation Analysis of University Administrative Climate and Teachers' Mental Health • Descriptive statistics and correlation analysis of the valid data yield the results shown in Table 1. • Table 1 demonstrates a very significant positive correlation between the four factors of university administrative climate and teachers' mental health. This indicates that the better the university administrative climate, the better the teachers' mental health. Hypothesis 1 is supported. Correlation Analysis of University Interpersonal Climate and Teachers' Mental Health • Descriptive statistics and correlation analysis of the valid data yield the results shown in Table 2. • Table 2 demonstrates a very significant positive correlation between the four factors of university interpersonal climate and teachers' mental health. It shows that the better the university interpersonal climate, the better the teachers' mental health. Hypothesis 2 is supported. Hierarchical Regression Analysis of UAC, UIC and Teachers' Mental Health • The significant positive correlations among UAC, UIC and teachers' mental health only show a tendency of influence, and this tendency may be the result of other variables. To address this problem, hierarchical regression analysis is applied to further confirm the relationship among UAC, UIC and teachers' mental health. The first step enters demographic information (gender, teaching years, educational background and rank) as independent variables. The second step adds UAC as an independent variable on top of the demographic information. The third step adds UIC as an independent variable on top of the demographic information and UAC. The prediction of teachers' mental health by these factors is then analyzed. The results are shown in Table 3. • The data in Table 3 show that: 1) when the demographic information alone is used to predict teachers' mental health, only gender is significant, and its effect is weak, accounting for only 0.6% of the variation; 2) UAC significantly predicts teachers' mental health (βUAC = 0.371, p < 0.001) when the demographic information is controlled, accounting for 11.7% of the variation, which shows that the better the UAC, the better the teachers' mental health; 3) UIC significantly predicts teachers' mental health (βUIC = 0.391, p < 0.001) when the demographic information and UAC are controlled, accounting for 9.8% of the variation, which shows that the better the UIC, the better the teachers' mental health; 4) when UIC is added in the third step to the regression model of UAC and teachers' mental health, the standardized regression coefficient of UAC on teachers' mental health is sharply reduced from 0.371 to 0.148, though the t value is still significant, while the standardized regression coefficient of UIC on teachers' mental health is 0.391 (t = 5.39, p < 0.01). Hence, the third-step criterion for a mediating variable is met, and UIC exerts a partial mediating effect. Hypothesis 3 is partly supported.
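A hedged sketch of this three-step hierarchical regression (the study itself used SPSS; the data frame and variable names below are hypothetical) might look as follows in R:

```r
# Step 1: demographic variables; Step 2: add UAC; Step 3: add UIC.
m1 <- lm(tmh ~ gender + teaching_years + education + rank, data = df)
m2 <- update(m1, . ~ . + uac)   # administrative climate
m3 <- update(m2, . ~ . + uic)   # interpersonal climate

anova(m1, m2, m3)               # significance of each added block
sapply(list(m1, m2, m3), function(m) summary(m)$r.squared)  # R-squared per step
```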
Effects of the Sub-Factors of UAC and UIC on Teachers' Mental Health • If only the main effects of UAC and UIC on teachers' mental health are analyzed, the effects of their sub-factors may be obscured. Therefore, a multiple regression analysis is carried out with the four sub-factors of UAC and the four sub-factors of UIC as independent variables and teachers' mental health as the dependent variable. The results are shown in Tables 4 and 5. • Table 4 shows that management style has a very significant effect on teachers' mental health, accounting for 13.7% of the variation. Management style refers to managers' managing behaviors, such as whether management is democratic or autocratic, rigorous or flexible, open or closed. This means that management style is a key factor influencing teachers' mental health. • The data in Table 5 show that interpersonal action and interpersonal harmony are the key factors influencing teachers' mental health, together accounting for 19.5% of the variation. Interpersonal action refers to the relationship between persons, showing itself in whether they are united and help each other in their work. Interpersonal harmony is a feeling of the environment, such as a harmonious and peaceful interpersonal relationship. Interpersonal action and interpersonal harmony have a very significant effect on teachers' mental health. This shows that the better the interpersonal action and interpersonal harmony, the better the teachers' psychological health. Discussion In this study, we demonstrate that there is a positive correlation between UAC and TMH and that UAC plays a positive predictive role in TMH, consistent with relevant studies at home and abroad. For example, Pan (2004) found a close relationship between the administrative climate of a school and teachers' SCL-90 results. In Pan's view, the core of administrative climate is the leadership of the president; the president's consideration and influence correlate significantly with, and greatly affect, teachers' mental health (Pan & Cheng, 2001). Cheng and Tang (1997) found a strong positive correlation between the president's leadership and aspects of organizational climate. They argue that good leadership fosters good relationships among president, teachers and students, and that a good interpersonal relationship in turn promotes good leadership, which greatly influences the healthy development of school organizational climate and teachers' psychology. On the other hand, with the development of bureaucratic administration and the president's prime responsibility system, power in most universities is gradually concentrated in a few persons, who manage the university autocratically. This style of management is closely related to the future and development of a school, and such leadership obviously influences teachers' behavior. Task goals, rules, working procedures, measures of rewarding and punishing, welfare, fairness and justice, and so on all have a great impact on teachers' psychology and behavior. There are three classic types of leadership: authoritarian, democratic and laissez-faire (Kreitner, 1989). Different leadership leads to different leading styles and behaviors, which form different organizational climates, and different organizational climates influence members' psychological climate. For example, autocratic leadership concentrates power in one person, who exercises all the power himself without staff members' participation and with little respect for or trust in the staff
members. In such a case, a leader never listens to the opinions of the staff members and thinks only of the work, not of the needs and feelings of the staff. In such a school, teachers are often treated unfairly and their social needs are not met; therefore, they carry out the assigned tasks passively. Morale in such a school is low, and staff members feel depressed. On the other hand, democratic leadership is based on equality and cooperation. Management and staff are cooperators with equal status. Leaders respect teachers' different abilities and qualifications, trust them, and invite them to participate in school management. Important school issues are decided by leaders and staff members together. Leaders are concerned not only with the work, but also with the life and personal development of staff members. In such a school, staff members feel the sincerity of the leaders, and a sense of trust and admiration develops. They are ready to carry out any task given. People help and learn from each other, work actively and enthusiastically, and everyone has a strong sense of responsibility. In such a school, staff members are happy to work and work efficiently. They are positively encouraged and usually possess a strong sense of pride, group honor, success and happiness. All these contribute to teachers' mental health. The study also finds that university interpersonal climate is a significant factor influencing teachers' mental health. As we know, interpersonal relationships are always a very important factor in a person's mental health. Ding Zan believes that psychological adaptation is mainly the adaptation of interpersonal relationships, and that psychological morbidity is mainly caused by inharmonious interpersonal relationships (Li & Zhao, 2004). Festinger (1957) believes that affinity between persons can be an effective means of dissolving the unhappy aspects of an interpersonal relationship. When people discuss and interact with each other, the introduction of cognitive factors, such as a piece of new information or a suggestion, can help to remove the inharmonious elements and thus greatly lessen worries. Friendly actions between group members can effectively promote communication, as staff members can express their depression and satisfaction through communication. Ou, Pan, and Huang (2008) found that university organizational climate is significantly correlated with teachers' mental health, that interpersonal action and interpersonal harmony are significant predictors of teachers' mental health in regression analysis, and that interpersonal climate is a positive predictor of teachers' mental health. Interpersonal relationships in a school usually show themselves in the interaction among leaders, staff members and students. If there is a tense relationship between management and staff, if they do not respect, trust, care for and support each other, a conflicting or even hostile psychological state is likely to develop. In this case, psychological clashes easily arise, leading to a greater psychological distance. In such a school, staff members guard against each other; estrangement, suspicion and hostility describe them well. They often find excuses not to do the job and place obstacles in the way of others' work. They do their work according
to their own wills, and no one follows the rules of the school. In such a school, staff members feel worried, disappointed, lonely, and helpless. They live without a sense of security or social support. If a person lives and works in such an environment for a long time, it is quite easy for him to develop mental problems, which influences his mental health. On the other hand, in a school with a desirable organizational climate, staff members care for, help, and depend on each other; they work together cooperatively and agree to a relatively high degree on their goals. They feel proud to be in such a school and are willing to work there. They regard school issues as part of their own business. At the same time, management members, teachers, and students interact freely and equally. Therefore, their needs for social security, success, and respect are met, and they work happily and efficiently. All of this certainly promotes the development of their mental health.

This study has its limitations. First, the control over common method variance is insufficient. Because the test was conducted with single-source subjects, individual differences among the subjects, the measuring tools, and the testing situations may have affected the accuracy of the test. Moreover, the same data source, testing situation, item context, and the items themselves may lead to a spurious relationship between predictor variables and criterion variables. Such an artificial relationship would interfere with the findings of the research and generate potential systematic error in the results. Second, this is not an experimental study, which may limit the extrapolation validity of the results. Future studies could therefore control common method bias with new methods, adopt experimental designs, and expand the scope of the study.

Conclusion

Teachers' mental health is affected by the organizational climate in universities. The administrative and interpersonal climates in universities are significantly positive predictors of teachers' mental health. The style of administration, interpersonal communication, and the interpersonal harmony climate are especially significant influences on teachers' mental health in universities. Therefore, administrators and educational practitioners should strengthen the construction of soft power in universities, for example by promoting the culture and building a positive organizational climate, so as to promote the healthy development of teachers' mental health.

Table 1. Correlation analysis of university administrative climate and teachers' mental health (n = 826). Note: UAC: University Administrative Climate.

Table 2. Correlation analysis of university interpersonal climate and teachers' mental health (n = 826).

Table 3. Hierarchical regression analysis of UAC, UIC and Teachers' Mental Health.

Table 5. Regression analysis of UIC to teachers' mental health (n = 826). The relationship between teachers and students is good; the students respect the teachers, and the teachers care for the students.
2019-03-03T16:40:24.652Z
2015-06-18T00:00:00.000
{ "year": 2015, "sha1": "2f58792661f2fbd67ef492e57f024c4bc62d4fef", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=57534", "oa_status": "GOLD", "pdf_src": "Unpaywall", "pdf_hash": "2f58792661f2fbd67ef492e57f024c4bc62d4fef", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Psychology" ] }
244482051
pes2o/s2orc
v3-fos-license
Reformulation of Public Help Index θ Using Null Player Free Winning Coalitions This paper proposes a new representation for the Public Help Index θ (briefly, PHI θ). Based on winning coalitions, the PHI θ index was introduced by Bertini et al. (2008). The goal of this article is to reformulate the PHI θ index using null player free winning coalitions. The set of these coalitions unequivocally defines a simple game. Expressing the PHI θ index by the winning coalitions that do not contain null players allows us to show, in a transparent way, the parts of the power assigned to null and non-null players in a simple game. Moreover, this new representation may imply a reduction of computational cost (in the sense of space complexity) in algorithms to compute the PHI θ index if at least one of the players is a null player. We also discuss some relationships among the Holler index, the PHI θ index, and the $g^{np}$ index (based on null player free winning coalitions) proposed by Álvarez-Mozos et al. (2015).

Introduction

In the literature, a power index is defined as a measure that is meant to assess the a priori power of players in a simple game, understood as their influence or as their payoff expectation. This paper concentrates on the Public Help Index θ (briefly, PHI θ, or just θ) introduced by Bertini et al. (2008) and based on winning coalitions. The PHI θ index was born as a modification of the Public Good Index (briefly, PGI), which is based on minimal winning coalitions. The PGI index was introduced by Holler (1982); for this reason, it is also known as the Holler index. Even though the structures of the two indices mentioned above are very similar, their characterizations differ (see Sect. 5, for example, where a comparison of the two indices can be found). The PHI θ index belongs to the so-called public power indices (see Bertini and Stach (2015) and Stach (2016)), as θ is well-defined in the social context where goods are public. The PHI θ index considers all winning coalitions, not only minimal winning coalitions (as in the PGI index). Hence, as Bertini and Stach (2015) remarked, "θ rather describes power relationships in the consumption of public goods, whereas the PGI analyzes the production of public goods. In production, one must take care to exclude freeriding; this is why the PGI considers minimal winning coalitions; in the consumption of public goods, you cannot avoid free-riding." The PHI θ index is much closer to the König and Bräuninger (1998) index (briefly, KB) or the Z index introduced by Nevison (1979). Both indices (KB and Z) are not only based on winning coalitions and closely related to the Banzhaf (1965) index but are also proportional to each other and to the θ index. In particular, the quotient of each of these indices and the sum of its values taken over all players is equal to the θ index, which is not difficult to show (see Bertini et al. (2013) and Stach (2016), for example). Moreover, if we consider voting rules used to make yes/no choices (acceptance and rejection) by a voting body and assume that all vote configurations are equally probable, then the KB index for a particular voter i is interpreted as the conditional probability that "i is successful" given "the proposal is accepted" (see Valenciano et al. (2004), for example). The Z index was defined by Nevison (1979) as an absolute measure of satisfaction. So, from this point of view, the PHI θ index can be seen as a relative measure of being successful (having the result for which one voted).
PHI θ takes all winning coalitions into account. This also provides a non-negative power to null players (if any), reducing the power of the others. However, considering the consumption of public goods and social help, we cannot exclude free-riders in many cases (especially when we consider the health of the whole society, an important issue nowadays during the COVID-19 pandemic, for example). Conventional methods that measure the voting power of players, such as the Banzhaf (1965) and Shapley and Shubik (1954) indices, for example, are unsuitable for this purpose: when distributing the public good, they do not take the null players into account. The θ index is solidary with weak players (not only null players), while the above-mentioned indices are not. In their paper that introduced the solidarity value, Nowak and Radzik (1994) provided a very convincing example about solidarity with "weaker" players. In this example, there are three brothers who live together. One of them is a disabled person who can contribute nothing to any coalition. The Shapley-Shubik, Banzhaf, and PGI indices assign zero power to the "weak" brother. Nowak and Radzik (1994) posed the following question: "Should the disabled brother leave his family?" The θ index represents solidarity with the "weak" brother (see Example 5 in Sect. 4). The object of this article is to find a new formula for PHI θ based on so-called null player free winning coalitions, a concept first used in Álvarez-Mozos et al. (2015). In this way, we can show, in a transparent way, the parts of the power assigned to the null and non-null players in the presence of null players in a simple game. It should be noted that the set of null player free winning coalitions unequivocally defines a simple game. We also discuss some relationships among the Holler index, the θ index, and the $g^{np}$ index (based on null player free winning coalitions) proposed by Álvarez-Mozos et al. (2015). In particular, we prove that, just like the PHI θ index, the $g^{np}$ index satisfies the dominance and bicameral meet properties. In addition, the θ and $g^{np}$ indices are equal when a game is free of null players (see Sect. 2.1). In the forthcoming paper by Stach and Bertini (2021), a representation based on the information contained in the set of null player free winning coalitions is given for some well-known indices like the Banzhaf (1965) index, the Rae (1969) index, Coleman's (1971) indices to prevent action and to initiate action, Nevison's (1979) Z index, and the König and Bräuninger (1998) index. Moreover, some relationships among the Banzhaf index and these power indices are also established by using the new reformulations. The rest of the paper is structured as follows. In Sect. 2, we provide some basic definitions and notations. Section 3 contains the new formula of PHI θ using the concept of null player free winning coalitions. In Sect. 4, some examples of simple games are considered to compare PHI θ with the $g^{np}$ and PGI indices, in order to reveal the possible application fields in which the θ index is suitable and to show how the new formula of PHI θ can be useful in its algorithmic calculation in games with at least one null player and in games that are determined by a set of null player free winning coalitions. Section 5 is dedicated to comparing the three power indices (h, PHI θ, and $g^{np}$) by taking some desirable properties of power indices into account. With Sect. 6, we conclude.
Definitions and Notations

A cooperative n-person game is a pair (N, v), where N = {1, 2, ..., n} is a finite set of n players and v is a real-valued function $v: 2^N \to \mathbb{R}$ with $v(\emptyset) = 0$. $2^N$ denotes the set of all subsets of N. Each $S \in 2^N$ is called a coalition, and N is called the grand coalition. Hereafter we call both (N, v) and v a game, since N is inherent in the definition of v. v(S) stands for the worth of coalition S in game v. |S| = s denotes the cardinality of S, so |N| = n. A simple game is a monotonic cooperative game such that $v(S) \in \{0, 1\}$ for all $S \subseteq N$ and v(N) = 1. In simple games, we call coalitions with the property v(S) = 1 winning coalitions, while those with v(S) = 0 are called losing coalitions. By W, we denote the set of all winning coalitions in a simple game (N, v). Any simple game may be unequivocally described by its set of winning coalitions. $W_i$ stands for the set of all winning coalitions that contain player i. A winning coalition is minimal if none of its proper subsets is winning; $W^m$ denotes the set of all minimal winning coalitions, and $W^m_i$ the set of all minimal winning coalitions that contain player i. A simple game is proper if the following condition holds: for all $S \subseteq N$, if v(S) = 1, then $v(N \setminus S) = 0$. In this paper, we analyze only proper simple games (for a proper simple game, see (Stach 2011), for example). By $S^N$, we denote the set of all simple games on N. The quantity $v(S \cup \{i\}) - v(S)$ is called the marginal contribution of player i to $S \in 2^{N \setminus \{i\}}$. A player i is a null player if his marginal contribution to every coalition is zero, i.e., $v(S \cup \{i\}) = v(S)$ for all $S \subseteq N \setminus \{i\}$. A coalition $S \in 2^N$ is called a null player free winning coalition if $S \in W$ and none of its members is a null player. By $W^{n-}$, we denote the set of all null player free winning coalitions, and $W^{n-}_i$ denotes the set of all null player free winning coalitions that contain player i. Like W and $W^m$, $W^{n-}$ unequivocally determines a simple game. A simple game (N, v) is called a weighted game, represented by $[q; w_1, ..., w_n]$, if there exist a non-negative vector of player weights $(w_1, ..., w_n)$ and a majority quota $0 < q \le \sum_{i \in N} w_i$ such that S is winning if and only if $\sum_{i \in S} w_i \ge q$. A power index f is a function that assigns a unique vector $f(v) = (f_1(v), ..., f_n(v))$ to each game $v \in S^N$, where $f_i(v)$ is a measure of the (a priori) power of player i in the game v, specified by its player set N and its characteristic function v. The power index is thus an attempt to quantify the differences in power between players: the bigger the value $f_i(v)$, the more significant the power of player i in simple game v.

Some considered power indices

In this section, we give the definitions of the three power indices considered in this paper: the θ index, the $g^{np}$ index, and the Public Good Index (h). Let $v \in S^N$ and $i \in N$. The Public Help Index θ (introduced and axiomatized by Bertini et al. (2008)) is given by:

$\theta_i(v) = \frac{|W_i|}{\sum_{j \in N} |W_j|}$.

The $g^{np}$ power index (also called the null player free index), defined in Álvarez-Mozos et al. (2015), is given by:

$g^{np}_i(v) = \frac{|W^{n-}_i|}{\sum_{j \in N} |W^{n-}_j|}$.

The Public Good Index (also called the Holler index), h, was proposed by Holler (1982). For all $v \in S^N$ and $i \in N$, the h index is defined as follows:

$h_i(v) = \frac{|W^m_i|}{\sum_{j \in N} |W^m_j|}$.

For an axiomatic characterization of the Public Good Index, see Holler and Packel (1983). Moreover, Napel (1999, 2001) showed the independence and non-redundancy of the Holler and Packel axioms.

Some Properties of Power Indices in Simple Games

In this section, we provide definitions of some well-known and desirable properties of power indices in simple games. In Sect. 5, we will compare the power indices defined in Sect. 2.1 (θ, $g^{np}$, and h) by taking these properties into consideration.
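Because h, θ, and $g^{np}$ share a single structure (a player's count of coalitions in a given family, normalised by the sum of all players' counts), all three can be computed by one helper once the relevant family is enumerated. The following Python sketch is our own illustration rather than code from the paper; the function names are ours. It enumerates the winning, minimal winning, and null player free winning coalitions of a weighted game, and uses the fact that a player of a simple game is null if and only if he belongs to no minimal winning coalition.

```python
from fractions import Fraction
from itertools import combinations

def winning_coalitions(quota, weights):
    # All coalitions S with w(S) >= quota; players are labelled 1..n.
    n = len(weights)
    return [frozenset(c)
            for r in range(1, n + 1)
            for c in combinations(range(1, n + 1), r)
            if sum(weights[i - 1] for i in c) >= quota]

def share_index(family, n):
    # Common shape of h, theta and g_np: player i's number of coalitions
    # in the family, normalised by the sum of all players' numbers.
    counts = [sum(1 for S in family if i in S) for i in range(1, n + 1)]
    total = sum(counts)
    return [Fraction(c, total) for c in counts]

def indices(quota, weights):
    n = len(weights)
    W = winning_coalitions(quota, weights)
    Wm = [S for S in W if not any(T < S for T in W)]   # minimal winning
    nulls = {i for i in range(1, n + 1)
             if not any(i in S for S in Wm)}           # null players
    Wnf = [S for S in W if not (S & nulls)]            # null player free
    return share_index(Wm, n), share_index(W, n), share_index(Wnf, n)

# The game [3; 3, 1, 0] analysed in Example 1 of Sect. 4 below:
h, theta, gnp = indices(3, [3, 1, 0])
print(theta)  # [Fraction(1, 2), Fraction(1, 4), Fraction(1, 4)]
print(gnp)    # [Fraction(1, 1), Fraction(0, 1), Fraction(0, 1)]
```

Exhaustive enumeration is exponential in n, so the sketch only mirrors the definitions on small games; it is not meant as an efficient algorithm.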
We say that f satisfies the following:

- the efficiency property if, for all simple games (N, v), $\sum_{i \in N} f_i(v) = 1$;
- the non-negativity property if $f_i(v) \ge 0$ holds for each player $i \in N$, and the positivity property if $f_i(v) > 0$ holds for each player $i \in N$;
- the null player property if $f_i(v) = 0$ holds for each null player $i \in N$, and the null player removable property if the power of the non-null players does not change when the null players are removed from the game;
- the symmetry (anonymity) property if, for all simple games (N, v), each player $i \in N$, and every permutation $\pi: N \to N$, the following equation holds: $f_{\pi(i)}(\pi v) = f_i(v)$;
- the dominance (local monotonicity) property if, for all weighted games $[q; w_1, ..., w_n]$ and any two distinct players $i, j \in N$ with $w_i \ge w_j$, it holds that $f_i(v) \ge f_j(v)$;
- the transfer property if, for all simple games v and v' on N, $f(v) + f(v') = f(v \vee v') + f(v \wedge v')$, where $v \vee v'$ and $v \wedge v'$ are defined as the games whose sets of winning coalitions are $W(v) \cup W(v')$ and $W(v) \cap W(v')$, respectively;
- the strong monotonicity property if, for two simple games v and v' having the same grand coalition N and a player i such that $v(S \cup \{i\}) - v(S) \ge v'(S \cup \{i\}) - v'(S)$ for every $S \subseteq N \setminus \{i\}$, it holds that $f_i(v) \ge f_i(v')$;
- the bicameral meet property if, for any two simple games $v_1 = (N_1, v_1)$ and $v_2 = (N_2, v_2)$ with disjoint player sets and their bicameral meet v on $N = N_1 \cup N_2$ (where a coalition wins in v if and only if its intersections with $N_1$ and $N_2$ win in $v_1$ and $v_2$, respectively), the ratio of the powers of any two non-null players $i, j \in N_1$ is the same in $v_1$ and in v.

A New Formula for the Public Help Index

Let us consider a simple game $v \in S^N$ such that $n \ge 2$. Let k stand for the number of null players in v, $0 \le k < n$. By $N^{n-}$, we denote the set of non-null players in v, i.e., the set of players without the nulls. By the way, the coalition $N^{n-}$ is a carrier in the sense of Shapley (1953).

Theorem 1 If $v \in S^N$ and $0 \le k < n$, then the PHI θ index can be expressed as follows:

$$\theta_i(v) = \begin{cases} \dfrac{|W^{n-}_i|}{\sum_{j \in N^{n-}} |W^{n-}_j| + \frac{k}{2}|W^{n-}|} & \text{if } i \in N^{n-}, \\[2mm] \dfrac{|W^{n-}|}{2\sum_{j \in N^{n-}} |W^{n-}_j| + k|W^{n-}|} & \text{if } i \notin N^{n-}. \end{cases} \quad (1)$$

Proof. Let us fix an n-player simple game with k null players ($0 \le k < n$) such that $W \ne \emptyset$. The set of all winning coalitions (W) can be seen as the union of two disjoint subsets: $W = W^{n-} \cup (W \setminus W^{n-})$, where $W \setminus W^{n-}$ is the set of all winning coalitions that contain at least one null player. Note that if $0 < k < n$ holds in a simple game, then $N \in (W \setminus W^{n-})$. All winning coalitions with null players ($W \setminus W^{n-}$) can be obtained simply from the null player free winning coalitions $W^{n-}$. Namely, if we have k null players, we can form $(2^k - 1)$ different non-empty coalitions that contain only null players. Null players without other players cannot form a winning coalition. A null player (or a coalition of null players) can take part in winning coalitions only by joining one of the null player free winning coalitions. So, if we have $|W^{n-}|$ null player free winning coalitions in a game, it is possible to create $|W^{n-}|(2^k - 1)$ different winning coalitions with at least one null player. The number of all winning coalitions in a simple game is therefore given by the following equation: $|W| = |W^{n-}| + |W^{n-}|(2^k - 1)$. From this, we immediately obtain $|W| = 2^k |W^{n-}|$.

Let us assume that player i is a null player in game W. In total, i takes part in $2^{k-1}$ different coalitions consisting only of null players, and i belongs to $|W^{n-}| 2^{k-1}$ winning coalitions. Hence, $|W_i| = |W^{n-}| 2^{k-1}$, and the k null players belong to $\sum_{j \text{ null}} |W_j| = k |W^{n-}| 2^{k-1}$ winning coalitions in total.

Now, let us assume that i is a non-null player in game W, $i \in N^{n-}$. In total, i takes part in $|W^{n-}_i|$ null player free winning coalitions, and i belongs to $|W^{n-}_i|(2^k - 1)$ winning coalitions with at least one null player. Hence, $|W_i| = |W^{n-}_i| 2^k$. If we take all non-null players into consideration, we have $\sum_{j \in N^{n-}} |W_j| = 2^k \sum_{j \in N^{n-}} |W^{n-}_j|$.

From the above considerations for null and non-null players, we have $\sum_{j \in N} |W_j| = 2^k \sum_{j \in N^{n-}} |W^{n-}_j| + k 2^{k-1} |W^{n-}|$, and we can express the PHI θ index for all $v \in S^N$ as $\theta_i(v) = \frac{2^k |W^{n-}_i|}{2^k \sum_{j \in N^{n-}} |W^{n-}_j| + k 2^{k-1} |W^{n-}|}$ if $i \in N^{n-}$, and $\theta_i(v) = \frac{2^{k-1} |W^{n-}|}{2^k \sum_{j \in N^{n-}} |W^{n-}_j| + k 2^{k-1} |W^{n-}|}$ if $i \notin N^{n-}$. We can still simplify the above formula by dividing the numerator and denominator of the first part (if $i \in N^{n-}$) by $2^k$ and then similarly dividing the numerator and denominator of the second part (if $i \notin N^{n-}$) by $2^{k-1}$. After these operations, Formula (1) easily follows. □

Note that if k = 0 in a simple game (i.e., all players are non-null players), the above Formula (1) of the PHI θ index is equal to the $g^{np}$ index proposed by Álvarez-Mozos et al. (2015) (see also Sect. 2).
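To make Theorem 1 concrete, here is a minimal Python sketch of Formula (1) (our own illustration, with hypothetical function and argument names, not code from the paper). Both cases of Formula (1) are put over the common denominator $2\sum_{j} |W^{n-}_j| + k|W^{n-}|$, which leaves the values unchanged.

```python
from fractions import Fraction

def phi_theta(null_free_winning, non_null_players, k):
    """PHI theta via Formula (1).

    null_free_winning -- the coalitions of W^{n-}, as sets of players
    non_null_players  -- the players of N^{n-}
    k                 -- the number of null players
    """
    W_nf = [frozenset(S) for S in null_free_winning]
    counts = {i: sum(1 for S in W_nf if i in S) for i in non_null_players}
    # common denominator: 2 * sum_j |W^{n-}_j| + k * |W^{n-}|
    denom = 2 * sum(counts.values()) + k * len(W_nf)
    theta_non_null = {i: Fraction(2 * c, denom) for i, c in counts.items()}
    theta_null = Fraction(len(W_nf), denom)  # the same for every null player
    return theta_non_null, theta_null

# Example 1's game [3; 3, 1, 0]: W^{n-} = {{1}}, N^{n-} = {1}, k = 2.
print(phi_theta([{1}], {1}, 2))   # ({1: Fraction(1, 2)}, Fraction(1, 4))
```

Note how the input consists only of the null player free winning coalitions, which is exactly the space saving the paper argues for: the $2^k$ winning coalitions generated by each member of $W^{n-}$ never need to be stored.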
Thanks to (1), for a given simple game (N, v) with k null players, we can easily separate the part of the power assigned by PHI θ to all null players from the part assigned to the non-null players (see Corollary 1).

Corollary 1 If $v \in S^N$ and $0 \le k < n$, then the total power assigned to all null players (TPNP) is equal to $\frac{k|W|}{2\sum_{j \in N} |W_j|}$. If $0 < k < n$, then this value is equal to

$$\text{TPNP} = \frac{k|W^{n-}|}{2\sum_{j \in N^{n-}} |W^{n-}_j| + k|W^{n-}|}. \quad (2)$$

Proof. The demonstration is immediate from Formula (1) (see Theorem 1 and its proof). Note that for each null player i we have $|W_i| = 2^{k-1}|W^{n-}| = \frac{|W|}{2}$ (see the proof of Theorem 1), which also results from the identity proposed by Dubey and Shapley (1979, p. 127). □

As a consequence of Corollary 1, we can also immediately calculate the total power assigned by the PHI θ index to all non-null players (see Corollary 2).

Corollary 2 If $v \in S^N$ and $0 \le k < n$, then the total power assigned to all non-null players (TPNNP) is equal to

$$\text{TPNNP} = \frac{2\sum_{j \in N^{n-}} |W^{n-}_j|}{2\sum_{j \in N^{n-}} |W^{n-}_j| + k|W^{n-}|}. \quad (3)$$

Proof. The demonstration immediately follows from Formula (1) by summing $\theta_j(v)$ over all $j \in N^{n-}$. Since the denominator is the same for all $j \in N^{n-}$, one just needs to sum over $|W^{n-}_j|$ in the numerator. □

Public Help Index and Null Player Free Index in Examples

In this section, we present some examples to illustrate the use of the new Formula (1), to compare PHI θ and $g^{np}$, and to show the application fields in which the θ index is suitable. It is worth noticing that, in all of the examples presented below, as well as in the general case of simple games with null players, the PHI θ can be calculated from both the original formula and the new one. However, when we would like an algorithm to calculate the PHI θ index in an automatic way, Formula (1) seems more suitable for games with null players. This formula explicitly induces one to identify the null players first, eliminate them from consideration, and find the set of winning coalitions without them (which means the set of null player free winning coalitions), thus saving the space complexity of the algorithm. This is particularly valid for simple games with a large number of players and null players. Furthermore, when a simple game is defined by the set of null player free winning coalitions, the calculation of the θ index by Formula (1) is immediate.

Example 1 Let us consider the weighted game [3; 3, 1, 0]. In this game, we have three players. The game can model the national health service (i.e., a publicly funded healthcare system) of a certain country. Player 1 (with a weight equal to $w_1 = 3$) can represent those groups of society that guarantee themselves health service by paying a regular fee. Player 2 (with $w_2 = 1$) can represent a group that contributes to the health service, but whose contribution is insufficient to guarantee all services. Player 3 ($w_3 = 0$) represents a group that is unable to guarantee themselves health services (the homeless, the underprivileged, migrants, and the unemployed, for example). In this game, Players 2 and 3 are null players; their marginal contribution to all winning coalitions is null. During the time of a pandemic, we cannot exclude null players (players that contribute nothing) from the common prosperity in the "division" of the rights to the health service. According to the Public Help Index θ and Formulas (2) and (3), the total power assigned to the null players is equal to $\frac{2 \cdot 1}{2 \cdot 1 + 2 \cdot 1} = \frac{1}{2}$. So, since the two null players are symmetric, due to the anonymity property, each null player's power is equal to $\theta_2(v) = \theta_3(v) = \frac{1}{4}$.
As the θ index satisfies the efficiency property (see Sect. 2), the power of Player 1 follows immediately: $\theta_1(v) = 1 - \frac{1}{4} - \frac{1}{4} = \frac{1}{2}$; this is in line with the calculation made using Formula (1). Table 1 shows the distribution of power according to the three indices considered in this paper.

Example 2 Let us consider another example of a weighted game that illustrates how much faster we can find the distribution of the PHI θ index in simple games with null players. Namely, let us consider the weighted game [9; 5, 5, 1, 1, 1]. In this game, we have three null players: 3, 4, and 5. So, k = 3. Next, since $W^{n-} = \{\{1, 2\}\}$, Formula (1) immediately gives $\theta_1(v) = \theta_2(v) = \frac{2}{7}$ and $\theta_3(v) = \theta_4(v) = \theta_5(v) = \frac{1}{7}$. This example, even with only five players, perfectly shows how the computation of PHI θ in games with numerous players can be simplified by applying the new formula; this means considering only null player free winning coalitions instead of all winning coalitions in the calculation of the θ index. For example, in a weighted game, it is not that difficult to identify all null players (see Chakravarty et al. (2014, p. 234), for example). Namely, let a weighted game $[q; w_1, ..., w_n]$ with non-increasing weights be given. Let $\alpha_i = \max\{w(S) : S \subseteq \{1, 2, ..., i\}$ and $w(S) < q\}$ be the maximum weight of any losing coalition that is a subset of {1, 2, ..., i}, and let $\beta_i = \sum_{j=i}^{n} w_j$ denote the sum of all weights from player i to player n. According to the necessary and sufficient condition provided by Matsui and Matsui (2000), a player i > 1 is a null player if and only if $\alpha_{i-1} + \beta_i < q$. Concluding, the new formula of θ can reduce the space complexity of calculating θ in simple games with at least one null player. Generally speaking, the space complexity of an algorithm (computer program) quantifies the amount of memory space that is required by the algorithm to run as a function of the length of the input (see Kuo and Zuo (2003), for example). Hence, a lower number of players in the input implies a reduction in the space complexity of an algorithm to calculate the PHI θ index.

Example 3 Let us consider a real-world political example of a voting system: namely, the 1958 European Union voting system, which is described by the weighted game [12; 4, 4, 4, 2, 2, 1]. In this system, the approval of a decision required at least 12 votes of the total 17. The corresponding weights were four votes each for Germany, France, and Italy; two votes each for the Netherlands and Belgium; and only one vote for Luxembourg. The assessment of Luxembourg's voting power is null by standard power indices such as those proposed by Shapley and Shubik (1954) and Banzhaf (1965). The role of Luxembourg in the Council of Ministers in the first period of the European Economic Community certainly was not null (see Mayer (2018), for example). The Holler and $g^{np}$ indices also assign zero power to Luxembourg. Of course, the evaluation of voting power is sensitive to the measurement concepts applied. The PHI θ index assigns the lowest non-null voting power to Luxembourg, i.e., 1/9 (see Table 2).

Example 4 Let us turn to the COVID-19 example and modify it a bit, considering only two groups of players: Polish citizens and a set of null players. The set of null players can represent groups of immigrants, the unemployed, and businesses closed by COVID-19, for example. Each Polish citizen has a weight equal to 1, and each null player has a weight equal to 0.
As was mentioned before, in this kind of problem all players should have the right to medical service, access to the vaccine (where available), etc., and the surplus would be shared among the non-null players (subsidies for closed businesses, for unemployment caused by the pandemic, etc.). If so, we can apply the PHI θ index to share the total pandemic budget. Let us assume that in this weighted game we have m Polish citizens ($0 < m < n$, with m on the order of 37-38 million) and k null players ($0 < k < m$). Regardless of the adopted majority threshold, power indices such as Banzhaf, Shapley-Shubik, and $g^{np}$ distribute the available budget equally among the Polish citizens only, so each citizen gets 1/m of the total medical service budget available. The θ index distribution depends on the adopted majority quota and the number of null players. If we assume an absolute majority of the Polish citizens, then the game representation depends on whether m is an even or odd number. So, let us assume (without loss of generality) that m is odd and the quota is placed at q = (m + 1)/2. Then, the game can be represented in the following way: $[\frac{m+1}{2}; 1, ..., 1, 0, ..., 0]$, with m weights equal to 1 and k weights equal to 0.

Note that the computation of the PHI θ index for each player in this game is easy using both the new and the former formulas; we use Eq. (1) here. So, let us find $|W^{n-}|$ and $|W^{n-}_i|$ for each non-null player i (Polish citizen). In this example, all null player free winning coalitions must have at least $\frac{m+1}{2}$ non-null players when m is an odd number. Therefore, using Formula (1), each null player receives $\frac{|W^{n-}|}{2|W^{n-}_i|}$ of what a citizen receives. Since $\frac{1}{2} < \frac{|W^{n-}_i|}{|W^{n-}|} < 1$, each null player obtains a higher solidary value here than in the case discussed in Example 1 (where the null player obtains half of what the non-null player does). Then, if m tends towards infinity, $\theta_1(v)$ tends towards zero, and the value assigned to each "weaker" player tends towards the value assigned to a non-null player. Of course, the same is valid when m is an even number.

Example 5 Let us return to the example of the three brothers (Nowak and Radzik 1994) mentioned in Sect. 1, which can be represented by the weighted game [2; 1, 1, 0]. In this example, we have $|W^{n-}| = 1$ and $|W^{n-}_i| = 1$ for i = 1, 2; Player 3 is a null player. From Formula (1), or simply from the former one, we immediately obtain $\theta_1(v) = \theta_2(v) = \frac{2}{5}$ and $\theta_3(v) = \frac{1}{5}$. "Should the disabled brother leave his family?" This was the question posed by Nowak and Radzik (1994). If Players 1 and 2 take responsibility for their "weaker" brother (Player 3), then the PHI θ(v) distribution (2/5, 2/5, 1/5) seems to be a "better" solution in this case than that given by the $g^{np}$, Banzhaf, or Shapley-Shubik indices: (1/2, 1/2, 0).

Public Help, Null Player Free, and Holler Indices in Comparison

The PHI θ and $g^{np}$ indices were born as modifications of the Holler index (h). In this section, we compare the θ, $g^{np}$, and h indices by taking the ranges of the indices and some desirable properties of power indices into account. All of these indices have the same structure, but they are based on different families of coalitions: winning coalitions (W), null player free winning coalitions ($W^{n-}$), and minimal winning coalitions ($W^m$), respectively. Given a player set N, each of these sets unequivocally determines the simple game, and $W^m \subseteq W^{n-} \subseteq W \subseteq 2^N$. Therefore, $W^{n-}$ can be seen either as a restriction of W or as an extension of $W^m$. The aim of comparing these three indices is to try to explain how changes in the types of coalitions considered by an index influence the characteristics of the index. By definition, all of the indices considered (h, θ, and $g^{np}$) are non-negative and relative (i.e., range from 0 to 1).
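Before turning to the formal comparison, the distributions quoted in Example 5 can be checked mechanically. Assuming the helper `indices` from the sketch after Sect. 2.1 (our illustration, not the authors' code), the three-brothers game runs as follows; the final assertion previews the inequality established in Theorem 2 below.

```python
# Three-brothers game of Example 5, represented as [2; 1, 1, 0].
h, theta, gnp = indices(2, [1, 1, 0])
print(h)      # [Fraction(1, 2), Fraction(1, 2), Fraction(0, 1)]
print(gnp)    # [Fraction(1, 2), Fraction(1, 2), Fraction(0, 1)]
print(theta)  # [Fraction(2, 5), Fraction(2, 5), Fraction(1, 5)]

# Theorem 2: for every non-null player (here, any player with g_np > 0,
# since a non-null player belongs to at least one null player free
# winning coalition), g_np gives at least as much power as theta.
assert all(g >= t for g, t in zip(gnp, theta) if g > 0)
```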
Theorem 2 For each simple game v and every non-null player $i \in N$, the following inequality holds: $g^{np}_i(v) \ge \theta_i(v)$.

Proof. The demonstration immediately follows from the definition of the $g^{np}$ index and Formula (1): if k = 0, then $g^{np}_i(v) = \theta_i(v)$ for each non-null player $i \in N$. If we have k > 0 null players in a simple game, then for each non-null player i the denominator of $\theta_i(v)$ in Formula (1) strictly exceeds $\sum_{j \in N^{n-}} |W^{n-}_j|$, so $\theta_i(v) < g^{np}_i(v)$. □

Let us consider some known properties of the power indices mentioned in Sect. 2. For every simple game v and each player $i \in N$, $g^{np}_i(v) \ge 0$ and $\theta_i(v) \ge 0$; thus, the non-negativity property is satisfied by both indices, θ and $g^{np}$. Moreover, PHI θ satisfies the positivity property, which is stronger than non-negativity, i.e., $\theta_i(v) > 0$ for every $i \in N$, while $g^{np}$ and h fail this property. The symmetry property imposes that a power index does not depend on the labeling of the players; thus, "symmetric" players should be assigned the same power in the game. The h, θ, and $g^{np}$ indices satisfy the symmetry property. The efficiency property requires that the players' power values add up to 1 for all simple games. Like symmetry, the efficiency property is used in the axiomatic characterizations of the h, θ, and $g^{np}$ indices (see Holler and Packel 1983; Bertini et al. 2008; Álvarez-Mozos et al. 2015). The null player property requires that a power index assigns zero power to a player with no influence on the worth of any coalition $S \in 2^N$. The $g^{np}$ index satisfies the null player postulate by definition. It is actually pretty obvious that θ does not satisfy the null player property: v(N) = 1 for every simple game, so it is immediately clear that every null player is part of at least one winning coalition and thus enjoys positive power according to θ (see Formulas (1), (2), and (3)). The h index satisfies the null player property (see Holler (1982), for example). The power value assigned to player i by the PHI θ index depends on the number of null players in the game (see Formula (1)); so, the null player removable property is not satisfied by θ, but the h and $g^{np}$ indices satisfy it by definition. Even if the dominance property is not satisfied by the Holler index (1982), θ satisfies it (see Bertini et al. 2008; Bertini et al. 2013), and the same can be shown for $g^{np}$.

Theorem 3 The $g^{np}$ index satisfies the dominance property.

The occurrence of the fattening paradox violates Young's (1985) strong monotonicity condition. So, the h, θ, and $g^{np}$ indices violate the strong monotonicity property, which can be observed in the same example of games used to show the failure of the fattening postulate. Bertini et al. (2013) provided an example that showed that h and θ violate the transfer property proposed by Dubey (1975). The same example can be used to show the failure of the transfer property for $g^{np}$, since the games considered there are free of null players. Bertini et al. (2013) demonstrated that h and θ satisfy the bicameral meet property. Similarly, it can be shown that $g^{np}$ also fulfills this property.

Theorem 4 The $g^{np}$ index satisfies the bicameral meet property.

Proof. Let us consider three simple games $v_1 = (N_1, v_1)$ and $v_2 = (N_2, v_2)$ with $N_1 \cap N_2 = \emptyset$, and their bicameral meet v on $N = N_1 \cup N_2$. By the definitions of N and W(v), it follows that the set of null player free winning coalitions in N is a Cartesian product of the null player free winning coalitions in the two separate games ($v_1$, $v_2$). So, the following equations hold: $|W^{n-}(v)| = |W^{n-}(v_1)| \cdot |W^{n-}(v_2)|$ and, for every $i \in N_1$, $|W^{n-}_i(v)| = |W^{n-}_i(v_1)| \cdot |W^{n-}(v_2)|$. Thus, for any non-null players $i, j \in N_1$, we have:

$$\frac{g^{np}_i(v)}{g^{np}_j(v)} = \frac{|W^{n-}_i(v)|}{|W^{n-}_j(v)|} = \frac{|W^{n-}_i(v_1)|}{|W^{n-}_j(v_1)|} = \frac{g^{np}_i(v_1)}{g^{np}_j(v_1)}. \quad \Box$$

Bertini and Stach (2015) showed that, for any simple game $v \in S^N$, $\frac{1}{2n-1} \le \theta_i(v) \le \frac{2}{n+1}$ for any $i \in N$. The minimum PHI θ index a null player i can achieve in a game $v \in S^N$ is shown in Theorem 5 and Corollary 3.
Theorem 5 If $v \in S^N$ and $1 \le k < n$, then the following inequality holds: $\theta_i(v) \ge \frac{1}{2n-k}$, where i is a null player and k is the number of null players in game v.

Proof. Let us fix an n-player simple game with k null players ($1 \le k < n$) such that the set of null player free winning coalitions is not empty, i.e., $W^{n-} \ne \emptyset$. Let us consider an arbitrary null player i. Then, from Formula (1), we have

$$\theta_i(v) = \frac{|W^{n-}|}{2\sum_{j \in N^{n-}} |W^{n-}_j| + k|W^{n-}|} = \frac{1}{2\sum_{j \in N^{n-}} \frac{|W^{n-}_j|}{|W^{n-}|} + k}. \quad (4)$$

Now we demonstrate that the minimal power an arbitrary null player i can obtain in a simple game is equal to $\frac{1}{2n-k}$. The PHI θ index for null player i has a minimal value if the denominator of (4) attains a maximal value. The maximal value of the denominator of (4) is attained for the maximal value of $\sum_{j \in N^{n-}} \frac{|W^{n-}_j|}{|W^{n-}|}$. Since the maximal value of $\frac{|W^{n-}_j|}{|W^{n-}|}$ is 1 for every $j \in N^{n-}$, we see that the maximal value of the sum is (n − k). Thus, from (4), we have $\theta_i(v) \ge \frac{1}{2(n-k) + k} = \frac{1}{2n-k}$. □

Conclusions

The important contribution of this paper is the novel approach to calculating and expressing the PHI θ index. The new formula proposed in Sect. 3 for calculating PHI θ uses only null player free winning coalitions. It should be noted that the set of these coalitions unequivocally defines a simple game. In a transparent way, the new formula shows the power assigned to null and non-null players. The set of null player free winning coalitions has a cardinality no greater than that of the set of all winning coalitions. Therefore, Formula (1) given in Sect. 3 can facilitate the calculation of PHI θ for a large number of players in the presence of null players in a simple game (see Sect. 4 in particular). This means that eliminating the null players from a game gives the advantage of reducing the space complexity of the algorithm used. Furthermore, if the representation of a weighted game is transformed into a minimal weighted representation in integers, then the null players are exactly those who have a weight equal to zero (see Freixas and Kurz (2014), for example); if there are some of them, then it makes sense to apply the new formula proposed in this paper. An alternative, easy way to identify null players is by means of classical indices, such as the Banzhaf or Shapley-Shubik ones: the null players are exactly the ones who are assigned 0 as a payoff. Moreover, when a simple game is defined by the set of null player free winning coalitions, the calculation of the θ index by Formula (1) is immediate. We also compared the h, θ, and $g^{np}$ indices by taking some desirable properties of power indices in simple games into account (see Sect. 5); in this way, we obtained a picture of the differences between the considered indices. Note that, for simple games without null players, θ and $g^{np}$ are equal. In particular, we proved that the $g^{np}$ index satisfies the dominance and bicameral meet properties (see Theorems 3 and 4). One possible further development is to follow the ideas of Freixas (2005a, 2005b, 2012, 2020) and Freixas and Pons (2021) and provide an extension of the PHI θ index for games with several levels of approval, for example. Another idea is a modification of θ for simple games with a known probability distribution over coalitions, followed by a study of the properties of the obtained index; see Freixas and Pons (2017), for example.
2021-11-23T16:08:31.200Z
2021-11-21T00:00:00.000
{ "year": 2021, "sha1": "658ae1a413b7e09552895cc5d47e166c7b8d4c7f", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s10726-021-09769-4.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "a3b505a9b0f3a720d743bdef21f4d08ed404a69a", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Medicine" ] }
232098881
pes2o/s2orc
v3-fos-license
Antibody Response to Canine Parvovirus Vaccination in Dogs with Hypothyroidism Treated with Levothyroxine (1) Background: No information is available on how dogs with hypothyroidism (HypoT) respond to vaccination. This study measured pre- and post-vaccination anti-canine parvovirus (CPV) antibodies in dogs with HypoT treated with levothyroxine and compared the results to those of healthy dogs. (2) Methods: Six dogs with HypoT and healthy age-matched control dogs (n = 23) were vaccinated against CPV with a modified-live vaccine. Hemagglutination inhibition was used to measure antibodies on days 0, 7, and 28. The comparison of the vaccination response of dogs with HypoT and healthy dogs was performed with univariate analysis. (3) Results: Pre-vaccination antibodies (≥10) were detected in 100% of dogs with HypoT (6/6; 95% CI: 55.7–100) and in 100% of healthy dogs (23/23; 95% CI: 83.1–100.0). A ≥4-fold titer increase was observed in none of the dogs with HypoT and in 4.3% of the healthy dogs (1/23; 95% CI: <0.01–22.7). Mild vaccine-associated adverse events (VAAEs) were detected in 33.3% of the dogs with HypoT (2/6; 95% CI: 9.3–70.4) and in 43.5% (10/23; 95% CI: 25.6–63.2) of the healthy dogs. (4) Conclusions: There was no significant difference in the dogs' pre-vaccination antibodies (p = 1.000), in the vaccination response (p = 0.735), or in the occurrence of post-vaccination VAAEs (p = 0.798). The vaccination response in dogs with levothyroxine-treated HypoT seems to be similar to that of healthy dogs. Introduction Canine parvovirus (CPV) is highly contagious, and infection can be fatal if unprotected dogs are exposed to the virus [1]; thus, all dogs should be protected at all times [2]. Vaccination with this core component induces excellent immunity against infection, at least in healthy dogs; nearly all of these dogs develop anti-CPV antibodies, indicating protection [2-4]. There is a complex relationship between the immune system and the neuroendocrine system. Several immune cells contain receptors for neuroendocrine hormones, and recent evidence indicates that thyroid hormones, such as L-thyroxine (T4), maintain specific immune responses, including cell-mediated immunity, natural killer cell activity, the antiviral action of interferons, as well as the proliferation of T and B lymphocytes [5]. Hypothyroidism (HypoT) is a common endocrinopathy in dogs [6]. Although its true prevalence remains largely unknown, many dogs are presented with or are treated for HypoT [7,8]. It is currently unknown whether these dogs develop and maintain (long-lasting) immunity after vaccination with modified live CPV vaccines. So far, only a few experimental studies exist on the effect of thyroid hormones on the humoral immune response. In one of these studies, raising or lowering circulating thyroid hormones had no effect on the antibody response in domestic fowl [9]. Further studies evaluating the antibody response in hypothyroid rodents are contradictory, showing either an enhanced [10,11] or a suppressed antibody response [12,13]. So far, there are no data in dogs or in humans. Furthermore, it has been questioned whether the vaccination of dogs with HypoT is safe. It has been suggested that the common occurrence of HypoT in the dog population might be related to the increased use of modified live virus (MLV) vaccines and the induction of autoantibodies [14]; signs of HypoT could thus be triggered after vaccination, even in dogs that are well-controlled at the time of vaccination [3].
Therefore, the aim of this study was to measure pre- and post-vaccination anti-CPV antibodies in dogs with HypoT treated with levothyroxine and compare the results to those for healthy dogs.

Study Population

Dogs with HypoT (n = 6) were patients of the Clinic of Small Animal Medicine, Centre for Clinical Veterinary Medicine, LMU Munich, or of a private practice in Southern Germany. Healthy dogs (n = 23) were presented for their annual vaccination to the same clinic or private practice or to a charity organisation. The study protocol was approved by the Government of Upper Bavaria, reference number 55.2-1-54-2532.3-61-11. Dogs were only included if they had received their last vaccine >12 and ≤15 months ago. Dogs that had received antibody preparations within the last 12 months were excluded. Dogs with HypoT had to have a diagnosis of HypoT, and the disease had to be well-controlled at the time of vaccination. Suspicion of HypoT was based on the history, physical examination findings, and the results of laboratory testing (hematology and biochemistry profile) that are typically reported for dogs with HypoT [6,7,15,16]. A diagnosis of HypoT was confirmed if endogenous thyroid-stimulating hormone (TSH) was increased and T4 was below the reference range. If endogenous TSH was within the reference range, the diagnosis was based on a low T4 and free thyroxine (fT4) value and, additionally, the resolution of clinical signs with levothyroxine treatment. Dogs with an elevated TSH were classified as having primary HypoT. Dogs with a normal TSH were classified as having unclassified HypoT [15,17]. Dogs being fed a homemade diet or bones and raw meat were excluded to avoid any influence of ingested thyroid tissue on their thyroid hormone levels [18]. Dogs from the HypoT group were examined for the presence of concurrent diseases. The control of HypoT was assessed based on the resolution of clinical signs (e.g., lethargy, exercise intolerance, weight gain), physical examination findings (e.g., haircoat, general appearance), laboratory data (e.g., hypercholesterolemia, elevated liver enzymes, elevated fructosamine), and a T4 post-pill concentration that was within the reference range or slightly above; control was classified as "good" or "moderate". The group of healthy dogs was age-matched (≥4 years of age) and included only dogs that had (1) no history of illness, anesthesia, surgery, or medical treatment (besides deworming) during the last 4 weeks and (2) no remarkable findings on physical examination. Information on the dogs' signalment, origin, and environment, as well as data on the vaccination history (previous vaccinations, complete vaccination series, time since last vaccination) and medical history, were collected.

Study Protocol

Physical examination was performed on days 0, 7, and 28 in order to determine the health status of all dogs. All the dogs received a single dose of a combined MLV vaccine containing CPV-2 strain 154, with a viral titer of 10^7.0-8.4 TCID50, as well as canine distemper virus (CDV) and canine adenovirus-2 (CAV-2) (Nobivac® SHP, MSD) on day 0. Owners had to pay special attention to the occurrence of vaccine-associated adverse events (VAAEs) or further abnormalities related to the dog's health and behavior. Serum samples were taken on days 0, 7, and 28 for the evaluation of pre- and post-vaccination anti-CPV antibodies.
Detection of Antibodies by Hemagglutination Inhibition

For the detection of anti-CPV antibodies, serum samples were frozen at −20 °C and tested by hemagglutination inhibition (HI) at the end of the study, with a protocol based on Carmichael and coworkers (1980), using 8 hemagglutinating units of CPV-2, strain vBI 265 (provided by the James A. Baker Institute for Animal Health, College of Veterinary Medicine, Cornell University, 235 Hungerford Hill Road, Ithaca, NY 14853, USA) [19]. The highest dilution that completely inhibited the hemagglutination of CPV antigen was defined as the endpoint. The evaluation of HI was performed by 2 independent investigators blinded to the history of the patients; divergent results were rechecked by a third independent and blinded investigator. Anti-CPV antibody titers ≥10 (with 10 being the first dilution) were considered positive. Dogs with an at least 4-fold titer increase (= 2 titer steps) were considered as "responding to vaccination" [20]. Dogs that had no detectable anti-CPV antibodies before and after vaccination were defined as "non-responders".

Statistical Analysis

Data analysis was performed using R 4.0.3 (2020-10-10, R Foundation for Statistical Computing, Vienna, Austria). Basic non-parametric bootstrapping with 1000 resamples and replacement was used to calculate the means and 95% confidence intervals for the numeric variables age, body weight, and time since last vaccination of dogs with HypoT and healthy dogs. Bayesian logistic regression was used to determine significant differences in age, body weight, and time since last vaccination between dogs from the 2 respective groups; due to its non-normally distributed and heteroscedastic characteristics, the variable time since last vaccination was logarithmized. In order to verify the modelling results, differences were additionally studied using classic statistical tests. Therefore, the normality of the data distribution was assessed with the Shapiro-Wilk normality test and visually using Quantile-Quantile plots. Non-normally distributed data were further analyzed using the non-parametric 2-sample Mann-Whitney test, while normally distributed data were further checked for the homogeneity of variance across groups via the Bartlett test. Normally distributed and homogeneous data were assessed by Student's t-test; normally distributed non-homogeneous data were assessed by Welch's t-test. Bayesian logistic regression was also used to compare (1) the presence of anti-CPV antibodies before vaccination between dogs with HypoT and healthy dogs, (2) the anti-CPV antibody response after vaccination, and (3) the occurrence of VAAEs. The normality and homoscedasticity of the residuals of all Bayesian models were assessed via visual residual diagnostics. Results with a p-value < 0.05 were considered statistically significant; results with a p-value between 0.05 and 0.1 were considered suggestive.

Dog Population

The present study included 6 dogs with HypoT (Table 1). Of these dogs, 4 were male (66.7%) and 2 were female (33.3%). Their ages ranged between 7 and 13 years (mean age: 9.32 years; 95% CI: 7.97-11.2). Body weight ranged between 26 and 46 kg (mean weight: 37.2 kg; 95% CI: 32.3-42.1). Three dogs lived in an urban and 3 dogs in a rural area (each 50%). Two dogs had >5 daily contacts with other dogs, 2 dogs had 3-5 daily contacts, and 2 dogs had ≤2 daily contacts with other dogs (each 33.3%). Primary HypoT was diagnosed in 4 dogs (66.7%). Unclassified HypoT was diagnosed in 2 dogs (33.3%).
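As a side note on the Statistical Analysis section above: the basic non-parametric bootstrap described there (1000 resamples with replacement) can be sketched in a few lines of Python. This is our own illustration, not the authors' code; the interval below is a percentile-type interval, the exact interval construction the authors used may differ, and the input data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_mean_ci(x, n_resamples=1000, alpha=0.05):
    # Resample with replacement, recompute the mean each time, and take
    # the empirical 2.5% and 97.5% quantiles of the resampled means.
    x = np.asarray(x, dtype=float)
    means = np.array([rng.choice(x, size=x.size, replace=True).mean()
                      for _ in range(n_resamples)])
    lo, hi = np.quantile(means, [alpha / 2, 1 - alpha / 2])
    return x.mean(), (lo, hi)

# hypothetical ages (years) of six dogs, for illustration only
print(bootstrap_mean_ci([7.0, 8.0, 9.0, 9.5, 10.0, 13.0]))
```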
In all dogs, HypoT was well controlled during the whole study course. The median dose of levothyroxine (Forthyron®, Dechra, Aulendorf, Germany) was 9.4 µg/kg twice daily (BID) (range: 7.7-11 µg/kg). Treatment with levothyroxine had been started immediately after the diagnosis of HypoT in all dogs. The median time between the establishment of the HypoT diagnosis and vaccination (on study day 0) was 997 days (range: 253-2492 days). Two dogs had concurrent diseases. One dog suffered from symmetrical lupoid onychodystrophy, which was well-controlled. At the time of study entry, increased liver enzymes were noted in another dog, which had previously been treated for mitral valve disease and bradyarrhythmia. During the course of the study, this dog was diagnosed with lymphoma, and additional medications were given (Table 1). All dogs with HypoT and all healthy dogs had been vaccinated in the past (>12 and ≤15 months ago). None of the dogs with HypoT and only 7 of the dogs in the healthy group (30.4%) had received a primary immunization series according to expert guidelines [2,3]. Dogs were considered to have received a full primary vaccination if they had received CPV vaccinations at 3-4 week intervals, with the last vaccination at an age of at least 14-16 weeks, followed by a booster after 11-13 months. In dogs ≥12 weeks of age, immunization was considered complete if they had received 2 vaccinations 3-4 weeks apart and a booster after 11-13 months. After the primary immunization series, subsequent boosters had to be given at least every 3 years [2,3]. The mean time since the last vaccination was 1.04 years (95% CI: 1.02-1.08) in dogs with HypoT and 1.06 years (95% CI: 1.04-1.08) in healthy dogs. There was no significant difference between the mean age of the dogs with HypoT (9.3 years) and the healthy dogs (7.4 years) (Bayesian p-value = 0.091; Student's t-test p-value = 0.089). There was also no significant difference between the median time since last vaccination of dogs with HypoT (1.03 years) and healthy dogs (1.06 years) (Bayesian p-value = 0.540; Mann-Whitney p-value = 0.552). The mean body weight of the dogs with HypoT (37.2 kg) and the healthy dogs (21.7 kg) differed significantly (Bayesian p-value = 0.005; Student's t-test p-value = 0.005).

Comparison of Dogs with HypoT and Healthy Dogs

No significant difference in the presence of pre-vaccination antibody titers ≥10 on day 0 (p = 1.000) could be found between dogs with HypoT and healthy dogs. In addition, there was no significant difference in the response to vaccination between dogs with HypoT and healthy dogs (p = 0.735). Additionally, no difference in the occurrence of VAAEs (p = 0.798) was detected.

Discussion

All the dogs with HypoT in the present study had anti-CPV antibodies before vaccination, suggesting that they had effectively reacted to previous vaccinations (or infection) and were protected against parvovirosis. It has been proposed that altered thyroid hormone levels could have an influence on the maintenance of immune function and thus on the vaccination response [21]. Several immune cells contain receptors for thyroid hormones, indicating relationships between thyroid hormones and the immune system. In human medicine, thyroid hormones in higher concentrations, but still within normal physiological ranges, were shown to be positively associated with immune function (e.g., with the proliferation of monocytes and several lymphocyte subpopulations [5]), leading to a greater responsiveness of the immune system.
Furthermore, thyroid hormones can affect endogenous glucocorticoid levels [22-24]. The administration of triiodothyronine (T3) and T4 suppressed the basal and ACTH-stimulated levels of blood cortisol, at least in rats [23]; in contrast, low levels of thyroid hormones could lead to a chronic elevation of endogenous blood cortisol and thus to impaired immune function [22], although a previous study revealed that the immune response of dogs with treated hyperadrenocorticism (HAC) to MLV vaccination against CPV was not significantly impaired in comparison to that of healthy dogs [25]. This is the first study that examines the response to modified live CPV vaccination in dogs with HypoT and compares it to that of healthy dogs. The findings of the present study are especially important for implementing vaccination guidelines in dogs with HypoT; in addition, the results could serve as a model for vaccination in humans with HypoT. Functional canine HypoT is mainly caused by primary diseases of the thyroid gland [26]. Lymphocytic thyroiditis is the leading cause, affecting more than 50% of cases [27-30]. In humans, Hashimoto thyroiditis, a chronic inflammation of the thyroid gland, represents the most common cause of hypothyroidism; it is also considered to be autoimmune in origin. In dogs as well as in humans, it is currently unknown whether HypoT affects (long-lasting) immunity after vaccination with modified live vaccines. An adequate vaccination response (≥4-fold titer increase) could not be observed in any of the dogs in the HypoT group; only one dog developed a titer increase (but only one titer step, which is considered to be negligible). Even so, most of the healthy dogs (95.7%) did not develop an adequate vaccination response either. The most likely cause for the absence of a ≥4-fold titer increase in both groups is pre-existing antibodies. A previous study has already demonstrated that healthy dogs with pre-existing anti-CPV antibody titers ≥80 are more likely to lack a vaccination response than dogs with titers <80 [20], since pre-existing neutralizing antibodies can bind to the vaccine virus and thus prevent an active immune response. This is why regular CPV re-vaccinations are not advised in adult dogs with pre-existing antibodies. It is presently uncertain whether vaccination in dogs with HypoT is safe. Similar to autoimmune thyroiditis in humans, a primarily immune-mediated disease is suspected in dogs with HypoT [26,31], or at least a process with defective immune regulation [30,32]. Although the destruction of canine thyroid tissue is largely explained by direct T-cell toxicity, autoantibodies are also thought to be important in the pathogenesis of canine HypoT [33,34]. Due to the high frequency of lymphocytic thyroiditis in dogs and the frequent use of vaccines in veterinary medicine (at least in the past), it has been suggested that lymphocytic thyroiditis could be related to or triggered by a type II hypersensitivity reaction following the overstimulation of the immune system by vaccines, leading to the induction of autoantibodies [14,35-37]. In humans, autoantibodies against thyroid antigens (thyroperoxidase and thyroglobulin) are usually present, although the role of those antibodies in the disease process still remains unclear [38,39]. A comparable diagnostic approach has been taken in hypothyroid dogs, in which, in addition to thyroglobulin autoantibodies (TgAA) [32,40-42], antibodies against T3 [40,41,43,44] and T4 [41,44] could also be found.
In contrast, antibodies against thyroid peroxidase have not been detected [14,45]. HogenEsch and colleagues were able to demonstrate that various autoantibodies were induced when dogs were vaccinated according to a standard vaccination protocol. However, an increase in thyroglobulin antibodies could not be detected in the vaccinated dogs, all of them beagles. Furthermore, no clear indication of thyroid dysfunction could be detected. A "nodule" was found in the thyroid gland in only one dog in that study [14]. According to the authors, this is known as a common lesion in beagles [46] and could be interpreted as an early manifestation of thyroiditis [14]. Besides the induction of autoantibodies due to hypersensitivity, it is also conceivable that the contamination of vaccines with foreign thyroglobulin, primarily from bovine serum, could induce thyroglobulin antibodies in vaccinated dogs [4]. Bovine serum is commonly used in the production of cell culture-based vaccines, especially in high-titer CPV (and CDV) vaccines [47-49]. It could therefore be assumed that HypoT and subsequent clinical disease could be triggered after vaccination in dogs predisposed to HypoT, or that signs of HypoT reoccur in dogs with HypoT that were actually well-controlled at the time of vaccination [3,50]. Scott-Moncrieff and colleagues investigated canine and bovine thyroglobulin antibodies in laboratory beagles (n = 20) as well as in privately owned adult dogs (n = 16) at defined times before and after vaccination. Post-mortem histopathological examinations of the beagles at the age of 5.5 years found no evidence that repeated routine vaccinations lead to immune-mediated thyroiditis in dogs. However, since an unexpectedly high prevalence of thyroiditis also occurred in the unvaccinated control group, the scope of the investigation for the detection of such an association was limited [51]. All the dogs from the HypoT group in the present study were well-controlled, and signs of HypoT had not reoccurred after vaccination during the study course. However, since cell destruction can be caused by different mechanisms (e.g., by the complement system, phagocytosis, or natural killer cells), clinical consequences can occur at different time points; destruction via macrophages or natural killer cells can take days to weeks, whereas cytolysis via the complement system is much faster [52]. Taking a possible combination of predisposing factors for the development of immune-mediated HypoT into account, antibody testing against important infectious diseases should be recommended particularly in dogs with HypoT, and regular re-vaccinations should only be considered when antibodies cannot be detected. Two of the dogs in the HypoT group in the present study had concurrent diseases. One dog was presented with newly diagnosed lymphoma and was treated with prednisolone at an anti-inflammatory dose during the study course. In dogs, lymphoma can lead to reduced T-cell numbers [53] and even to changes in antibody production, especially when the tumor cells secrete paraproteins (i.e., abnormal immunoglobulins), which simultaneously interfere with normal antibody production [54,55].
Although a human medicine meta-analysis revealed that tumor patients develop an impaired humoral immune response to vaccinations before tumor therapy [56], the immune competence of the dog with lymphoma might have been additionally impaired due to the glucocorticoid treatment, as the suppression of pituitary and adrenal function was noted in dogs that were treated for 35 days with anti-inflammatory doses of prednisone [57]. However, the vaccination response of the dog with lymphoma in the present study did not differ from that of the healthy dogs, and the dog showed no VAAEs. The second dog from the HypoT group had concurrent symmetrical lupoid onychodystrophy (SLO), which has also been hypothesized to be activated by repeated vaccinations [48]. That dog was well-controlled with oral pentoxifylline and vitamin E, and signs of SLO did not worsen after vaccination; however, the dog developed mild gastrointestinal signs for a few days after vaccination. To date, there are no data on whether individuals with HypoT are more likely to develop VAAEs after MLV vaccination. Lethargy and gastrointestinal signs after vaccination were commonly observed in the present study, which can partly result from the owners' special attention toward VAAEs. The occurrence of VAAEs did not differ significantly between dogs from the HypoT group and healthy dogs. Only one third of the dogs in the HypoT group showed mild gastrointestinal signs. Since no other signs of HypoT were reported in the dogs after vaccination, and due to the mild and self-limiting nature of the gastrointestinal signs, they presumably resulted from active CPV replication in the gastrointestinal cells and not from a worsening of HypoT. However, an increased replication of MLVs in dogs with HypoT might occur due to a decreased function of the innate immune system (e.g., monocytes), which is a first-line defense mechanism against viral infections [58]. Data from the present study suggest that MLVs can therefore be considered safe for dogs with HypoT, at least when the disease is well-controlled. In 4 dogs, primary HypoT was diagnosed based on increased TSH and low T4 concentrations in addition to the presence of clinical signs. In 2 dogs, the origin of HypoT could not be determined, as their endogenous TSH concentrations were low. Their diagnosis was based on low T4 and free thyroxine (fT4) concentrations and the presence of clinical signs that resolved with levothyroxine treatment. In dogs, HypoT is the consequence of primary disease in almost all cases; however, in up to 40% of cases, increased TSH levels are not present [59]. It has been demonstrated that TSH can decrease over time in dogs with surgically induced hypothyroidism as a consequence of vacuolar changes of the thyrotrophic cells of the adenohypophysis [60]. Thyroid function decreases with age in both humans and dogs [61,62]. Concerning the age distribution, lymphocytic thyroiditis is rarely described in dogs younger than 2 years and reaches its highest peak at 4-5 years, whereas idiopathic atrophy (or TgAA-negative HypoT) affects dogs from 4 years onwards and peaks around 8-9 years [6]. Therefore, only dogs ≥4 years were included in the present study. The main limitation of the study was the low number of dogs in the HypoT group, making the assessment of the vaccination response difficult. Long-term studies involving larger numbers of dogs with HypoT and different stages of disease and/or its medical control would be useful.
Conclusions
All the dogs with HypoT had pre-vaccination antibodies against CPV, indicating protection. The vaccination response in dogs with well-controlled HypoT was similar to that of healthy dogs, so their immune function appears comparable; however, mild VAAEs were commonly observed after vaccination. Measurement of antibodies against CPV would therefore be an excellent option in dogs with HypoT to confirm that protection is present, instead of routine re-vaccination. Long-term studies involving larger numbers of dogs with HypoT and different stages of disease are needed.

Southern Germany. Healthy dogs were presented for their annual vaccination to the same clinic or private practice or to a charity organisation. The study protocol was approved by the Government of Upper Bavaria, reference number 55.2-1-54-2532.3-61-11. Dogs were only included if they had received their last vaccine >12 and ≤15 months ago. Dogs that had received antibody preparations within the last 12 months were excluded.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement: The authors confirm that the datasets analyzed during the study are available from the first author or the corresponding author upon reasonable request.
2021-03-04T05:45:17.240Z
2021-02-01T00:00:00.000
{ "year": 2021, "sha1": "20b8598b82e20293b7fc0613673fd4a901293049", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-393X/9/2/180/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "20b8598b82e20293b7fc0613673fd4a901293049", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
257842629
pes2o/s2orc
v3-fos-license
A Panoramic Review of Traditional Use, Phytochemical Composition and Pharmacology of Justicia Gendarussa Burm f

Justicia gendarussa Burm. f., also known as willow-leaved justicia (family Acanthaceae), is a plant species native to China. This shrub has been used in Indian and Chinese traditional medicine and can be found in several Asian countries. Various phytoconstituents have been identified from this plant, including flavonoids, alkaloids, steroids, triterpenoids, phenolic compounds, saponins and glycosides. The main biologically active constituents are lupeol, stigmasterol, aromadendrin, naringenin, vitexin, apigenin, justiprocumins A and B, patentiflorin A and justidrusamides A-D. These constituents contribute to various pharmacological activities, including anti-inflammatory, anti-oxidant, anti-viral, anthelmintic, anti-bacterial, anti-fungal and anti-angiogenic activities, as well as hepatoprotective action and antifertility activity. The current review focuses on the botanical description, traditional uses, phytochemical constituents and pharmacological activities of the plant Justicia gendarussa Burm f.

Introduction
Justicia gendarussa Burm. f. is a herb or shrub in the family Acanthaceae, distributed in Asian countries such as India, China, the Philippines, Indonesia, Malaysia, Sri Lanka, Pakistan, Thailand and the Andaman Islands. The genus Justicia is distributed in tropical regions of the world, and the species is native to China. There are around 300 species worldwide, of which about 50 are reported in India (1). Some of the species belonging to the genus Justicia are Justicia betonica L., Justicia glabra J. Koenig ex Roxb., Justicia diffusa Willd., Justicia glauca Rottler, Justicia prostrata (C.B. Clarke) Gamble, Justicia procumbens L., Justicia simplex D. Don, Justicia betonica, Justicia tranquebariensis L.f., Justicia spicigera and Justicia beddomei (Clarke) Bennett (2). In Indian and Chinese traditional medicine, this plant is generally used for the treatment of rheumatism, fever, headache, hemiplegia, ear discomfort, muscle pain, gastrointestinal problems, eczema, bronchitis, dyspepsia, symptoms of vaginal discharge and eye diseases (3). The plant shows various bioactivities, including anti-inflammatory, antimicrobial (antifungal, antiviral, antibacterial), antitumor, anti-sickling, anthelmintic and analgesic properties. A wide variety of biologically active compounds have been found, such as alkaloids, flavonoids, steroids, triterpenoids, phenolic compounds, saponins and glycosides.

Justicia gendarussa is native to China and is scattered around many Asian countries, including India, Malaysia, Thailand, Indonesia, the Philippines, Sri Lanka and Pakistan. The plant is found in tropical and subtropical regions of Asia; in India it is located in seashore areas such as Surat and Valsad and in hills such as the Khasi hills and Pavagadh. In Indian gardens it is frequently grown as a border plant or fence, and it often occurs as a garden escape. It grows very fast and is propagated by cuttings. The plant can survive adverse conditions, resists heavy rainfall and thrives in shade (8). It has been used by native medical practitioners and tribes to treat different ailments, including inflammation, liver disorders, tumours and skin diseases. In Ayurveda, the plant is considered beneficial for the treatment of inflammation, myringitis, bronchitis, vaginal discharges, eye diseases, dyspepsia and fever (9). The plant has a strong pungent odour and is bitter, hot and dry; it is considered emetic, febrifuge, emmenagogue and diaphoretic. The leaves and tender shoots are assumed to be diaphoretic and are given in the treatment of chronic rheumatism.
The leaves are utilized as antiperiodic, insecticidal and alterative agents (10). The fresh leaves are applied topically in rheumatism and in the oedema of beriberi. An infusion of the leaves is given internally for hemiplegia, cephalalgia and facial paralysis (7). The juice prepared from fresh leaves is held to possess the property of stopping internal bleeding; it is dropped into the ear for earache and into the corresponding nostril on the side of the head affected by hemicrania, and it is also used for colic in children. Oil prepared from the leaves of Justicia gendarussa is useful in dermatitis (11). The Malays employ the plant as an antipyretic and use it in the treatment of lunacy, frailty and snake bite; it is also given for stomach troubles and amenorrhoea. In La Réunion, a decoction of the leaves is used as a stimulant and emetic. The roots also possess many medicinal properties (12). An extract obtained from the roots of willow-leaved justicia is prescribed for constipation; its laxative action helps easy bowel movement. In Madagascar, the plant is mainly used for arthritis, and a decoction of the root boiled with milk is used in dysentery, rheumatism and jaundice (13). A decoction of the flower tops is generally used for fumigation or as a drink. The bark is also used as an emetic. The leaves are utilized as contraceptive agents in both males and females: chewing the leaves decreases sperm count in males, and in females it helps to postpone pregnancy.

Steroids and triterpenoids
Identification of stigmasterol, lupeol and 16-hydroxylupeol. A study on the identification and isolation of different phytochemicals was carried out on samples of the plant Justicia gendarussa collected in Kishoreganj, Bangladesh, in November 2009 and identified by an expert taxonomist. The air-dried powder of the whole plant (0.5 kg) of Gendarussa vulgaris Nees was soaked for 10 days in 2.5 L of methanol at room temperature and then filtered through a cotton plug, followed by Whatman No. 1 filter paper (16). A rotary evaporator was used to concentrate the extract. An aliquot of the concentrated methanolic extract (5.0 g) was fractionated into petroleum ether (PEF, 1.8 g), chloroform (CLF, 0.7 g), carbon tetrachloride (CTF, 0.9 g) and aqueous (AQF, 1.1 g) soluble fractions using the modified Kupchan partitioning method. A series of steps, comprising solvent-solvent partitioning, purification of the methanol extract via the petroleum ether fraction and repeated chromatographic separation, led to the identification of stigmasterol, lupeol and 16-hydroxylupeol. Sephadex LH-20 was used for chromatography of the fraction, eluting with a dichloromethane:methanol:n-hexane mixture (5:1:2), followed by dichloromethane:methanol mixtures (9:1, then 1:1) and finally methanol (100%) (17). Structures were elucidated by examination of NMR data, comparison with reported values and co-TLC with standards.
Glycosides
Bioassay-directed separation of methanolic extracts of the stalk and bark of willow-leaved justicia led to the identification of two anti-HIV compounds. Compounds 1 and 2 are glycosides belonging to the class of arylnaphthalene lignans (ANLs). HPLC analysis of active fractions of the plant extract disclosed that ANL glycosides were present. The active fraction F26, obtained from chromatographic fractionation of the methanol extract on a silica gel column, was subjected to preparative HPLC separation to give 8 different fractions (F4 to F48). The two ANL glycosides were obtained from fractions F45 and F48 (19). When the 1H and 13C NMR data were compared with known arylnaphthalene lignan compounds, both compounds showed the presence of a methylenedioxy group and two methoxy groups. Compounds 1 and 2, with molecular formula C35H38O17, were isolated as white powders and can be differentiated only by the position of the acetyloxy group; they were named justiprocumins A and B (19).

Flavonoids
5.3.1 Apigenin and vitexin
The flavonoid components were separated and characterized using reverse-phase high-performance liquid chromatography (RP-HPLC) and thin-layer chromatography (TLC). The configurations and chemical bonds were identified using ultraviolet-visible spectrophotometry, nuclear magnetic resonance (13C and 1H NMR), Fourier-transform infrared spectroscopy (FTIR), differential scanning calorimetry (DSC), liquid chromatography-mass spectrometry (LC-MS) and scanning electron microscopy (SEM) (20). In the procedure, ethyl ether, petroleum ether and ethyl acetate were used to partition the methanol extract of the leaves, and the latter fraction was concentrated for column chromatography. The compounds recovered from column chromatography were analyzed by TLC. The HPLC investigation showed that the TLC-identified compounds 1 and 2 had the same retention times as the reference flavonoids apigenin and vitexin. Finally, spectral data made possible the identification of the two bioactive flavonoids apigenin and vitexin (21).

Naringenin and kaempferol
Naringenin and kaempferol were discovered in methanol extracts of mature and young Justicia gendarussa leaves from 4 areas in Malaysia. Gas chromatography with flame-ionisation detection (GC-FID) was used to determine their identities. Mature leaves from the Skudai and Muar districts had the highest quantities, measuring 507.692 and 1226.964 mg/kg, respectively (22). Data analysis revealed that the ratio of naringenin to kaempferol in the leaf extracts was proportional. According to the research, geographical differences among plant samples and the physiological stage of organ parts may be responsible for the variability in flavonoid content within a plant species (23). Early in the flavonoid biosynthetic pathway, naringenin is produced via chalcone isomerase (CHI); it is then hydroxylated to give dihydrokaempferol, which is converted into kaempferol by flavonol synthase (FLS) (24). In mature leaves, naringenin is almost entirely transformed into kaempferol, which might be the cause of the higher levels of these flavonoids in mature foliage.
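As a small, hedged illustration of how per-sample GC-FID quantities can be tabulated and the naringenin:kaempferol ratio computed: the district pairings below are placeholders, since only the two highest reported values come from the text, and their pairing with specific analytes is an assumption.

```python
# Hedged sketch: naringenin:kaempferol ratios from GC-FID quantities (mg/kg dry
# leaf). The paired values per district are illustrative placeholders.
samples = {
    "Skudai (mature)": {"naringenin": 507.692, "kaempferol": 1100.0},
    "Muar (mature)": {"naringenin": 620.0, "kaempferol": 1226.964},
}

for district, q in samples.items():
    ratio = q["naringenin"] / q["kaempferol"]
    print(f"{district}: naringenin/kaempferol = {ratio:.3f}")
```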
Patentiflorin A
DGP, also known as patentiflorin A, was first discovered in the plant Justicia gendarussa (Acanthaceae). As a glycosylated diphyllin, it demonstrated anti-ZIKV activity, activity against other flaviviruses and broad-spectrum antiviral action in vitro and in vivo (26). Mechanism-of-action (MOA) studies demonstrated that DGP prevents ZIKV from fusing with cellular membranes and infecting host cells by limiting the acidification of endosomal/lysosomal compartments in the target cells. Additionally, it exhibits strong activity against a variety of HIV strains, with IC50 values between 15 and 21 nM; MOA studies showed that it functions as a possible HIV-1 reverse transcription inhibitor. With IC50 values between 24 and 37 nM, patentiflorin A from J. gendarussa showed anti-HIV activity against a variety of HIV strains, outperforming the first clinically used anti-HIV medication, zidovudine (AZT; IC50 77-95 nM) (26).

Anti-oxidant activity
The extract of the aerial part (leaf) of Justicia gendarussa was tested for its anti-oxidant action using the DPPH free-radical scavenging assay at a concentration of 10 µg/mL, and the methanolic extract of the plant was investigated for its anti-oxidant properties using in vitro models (14). Callus generated from the methanolic stem extract of Justicia gendarussa on solid and liquid media demonstrated significant antioxidant activity at concentrations of 145 ± 5.00 µg/mL and 185 ± 8.66 µg/mL, respectively (27). The methanolic leaf extract demonstrated exceptional anti-oxidant activity in the DPPH radical-scavenging assay relative to the standards (ascorbic acid, gallic acid and butylated hydroxytoluene). In a hydrogen peroxide scavenging assay, a methanolic leaf extract demonstrated anti-oxidant activity at concentrations of 50-200 µg/mL (27). The ethyl acetate leaf extract showed impressive anti-oxidant activity in the ferric reducing antioxidant power (FRAP) assay, and leaf extracts also demonstrated anti-oxidant action through nitric oxide scavenging activity as well as significant activity in the DPPH free-radical scavenging assay (16)(28).

Anti-viral activity
The diphyllin glycosides found in the leaves and stems of Justicia gendarussa exhibit anti-HIV activity against a variety of HIV strains. The ethanolic extract of Justicia gendarussa leaves has an anti-HIV effect on HIV-infected MT-4 cells (a human T-cell leukaemia line). In a standardized human peripheral blood mononuclear cell (PBMC) assay, the arylnaphthalene lignan (ANL) glycoside patentiflorin, isolated from a MeOH extract of the plant leaf and stem, exhibited remarkable anti-HIV activity against the HIV-1 clinical isolates BAL and SF162 (both M-tropic), LAV0.04 (T-tropic) and 89.6 (dual-tropic) (19). Additionally, justiprocumin B demonstrated potent activity against a variety of HIV strains, with IC50 values between 15 and 21 nM (AZT, IC50 77-95 nM), an IC50 of 185 nM against the AZT-resistant isolate HIV-1 1617-1 and an IC50 of 495 nM against the nevirapine-resistant isolate HIV-1 N119. The compound strongly inhibited both the NNRTI-resistant isolate (HIV-1 N119) of the analogue nevirapine and the NRTI-resistant isolate of the analogue AZT (19).
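The DPPH readouts above follow the standard percent-scavenging formula. A hedged Python sketch follows; all absorbances and concentrations are placeholders, and the IC50 is read off by simple linear interpolation, which is one of several options.

```python
import numpy as np

def dpph_scavenging(abs_control: float, abs_sample: float) -> float:
    """Percent DPPH radical scavenging = (Ac - As) / Ac * 100."""
    return (abs_control - abs_sample) / abs_control * 100.0

# Hypothetical dose-response readings at 517 nm
concentrations = np.array([25.0, 50.0, 100.0, 200.0])  # ug/mL
absorbances = np.array([0.82, 0.65, 0.44, 0.21])       # sample readings
abs_control = 0.95

inhibition = np.array([dpph_scavenging(abs_control, a) for a in absorbances])
# Linear interpolation of the concentration giving 50% inhibition
ic50 = np.interp(50.0, inhibition, concentrations)
print(f"IC50 ~ {ic50:.1f} ug/mL")
```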
Anti-arthritic activity
At a dosage of 100 mg/kg, a 95% ethanolic extract of Justicia gendarussa leaf showed anti-arthritic activity in male albino Wistar rats subjected to Freund's complete adjuvant (FCA)-induced arthritis as well as bovine type II collagen-induced arthritis. In the FCA-induced and collagen-induced arthritis models, the treated rats demonstrated significant reductions in paw volume of 43% and 47%, respectively, compared with 26% and 38% for the standard (aspirin). Increased WBC counts and C-reactive protein levels (which rise dramatically during inflammation) were significantly suppressed in extract-treated rats, which also showed a potent recovery from anaemia (27).

Cytotoxic activity
The root of Gendarussa vulgaris was tested for cytotoxicity on Vero and MCF-7 cell lines. The effect of a defatted methanolic extract of Justicia gendarussa root at 100, 500 and 1000 µg/mL was compared with standard vinblastine using viability and trypan blue assays on the MCF-7 cell line. In the MTT cell viability assay, the methanolic root extract showed 7.3% cytotoxicity in the MCF-7 cell line and 24.9% cytotoxicity in the Vero cell line at 1000 µg/mL; thus, the methanolic root extract of Gendarussa vulgaris showed no appreciable cytotoxic activity in Vero and MCF-7 cell lines (29). Other studies showed that the methanolic leaf extract of Justicia gendarussa exhibits excellent cytotoxic activity against breast cancer; tamoxifen was used as the standard drug in the MTT assay. The plant extract also exhibits cytotoxic action against human cancer cell lines (HepG2, a human liver carcinoma line, and HeLa). MTT assay testing demonstrated that the leaf extract showed significant cytotoxic effects against the human cancer cell lines HT-29, HeLa and BxPC-3. Using the brine shrimp lethality bioassay, the hydroalcoholic extract of the root and leaves showed cytotoxic activity; the test substances were assayed at various concentrations (1-1000 µg/mL), the methanolic extract was the most cytotoxic, and the LD50 value was determined to be 25.44 µg/mL. The IC50 values were determined to be 16 µg/mL and 5 µg/mL, respectively. J. gendarussa leaf extract may have cytotoxic activity against human cancer cell lines, particularly BxPC-3 cells (22).

Analgesic activity
Using the hot plate and acetic acid-induced writhing test methods, a 95% (v/v) ethanolic extract of the aerial part of Justicia gendarussa demonstrated significant analgesic action. In the acetic acid-induced writhing experiment and the hot plate method, the 95% ethanolic leaf extract exhibited an analgesic effect in Swiss albino mice at concentrations of 125, 250 and 500 mg/kg (30).
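A hedged sketch of the percent-cytotoxicity computation behind MTT readouts like those above; the optical densities are placeholders, not study data.

```python
# Hedged sketch of the MTT readout: percent cytotoxicity relative to
# untreated control wells (placeholder absorbances at 570 nm).
def percent_cytotoxicity(od_treated: float, od_control: float) -> float:
    """Cytotoxicity (%) = (1 - OD_treated / OD_control) * 100."""
    return (1.0 - od_treated / od_control) * 100.0

od_control = 1.20  # untreated wells
od_treated = 0.90  # wells treated at 1000 ug/mL
print(f"cytotoxicity = {percent_cytotoxicity(od_treated, od_control):.1f}%")
# 25.0% here would be of the same order as the ~24.9% reported for Vero cells.
```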
Anti-angiogenic activity
The plant's leaves demonstrated strong anti-angiogenic activity. Angiogenesis, also known as neovascularisation, is the activation, proliferation and migration of endothelial cells from pre-existing blood vessels and is essential for wound healing. The chorioallantoic membrane (CAM) assay was used to determine the anti-angiogenic activity of the aqueous and ethanolic extracts of leaves obtained from Gendarussa vulgaris Nees (16). The acute toxicity of the aqueous and ethanol extracts was also investigated using a brine shrimp lethality bioassay, which revealed LC50 values greater than 1000 ppm for both extracts. β-1,4-Galactan sulphate was used as a control. Neither extract had an effect at low concentrations (less than 10 µg/mL), but both inhibited neovascularisation in a dose-dependent manner at concentrations of 10-100 µg/mL, and this applied to both methods.

Anti-bacterial activity
Justicia gendarussa phytochemical extracts, such as the alkaloid, flavonoid, terpenoid and glycoside extracts, demonstrated significant antibacterial activity against gram-positive and gram-negative strains of microorganisms. The alkaloid extract was found to be the most relevant inhibitor of both gram-negative and gram-positive bacteria compared with all other extracts in that study (31). At 20 µL/mL, the chloroform extract of Justicia gendarussa demonstrated significant (p < 0.005) antibacterial activity against both Pseudomonas aeruginosa and Staphylococcus aureus, but not against E. coli, Vibrio cholerae and Proteus mirabilis (32). At a concentration of 1000 µg/mL, ethyl acetate and ethanol extracts of Justicia gendarussa leaves exhibited potent antibacterial activity against gram-positive and gram-negative strains. The disc diffusion method was also used to assess the antibacterial activity of stem extracts (hexane and aqueous) against Escherichia coli and Staphylococcus aureus. Using the disc diffusion method, the petroleum ether, methanolic and chloroform extracts displayed anti-bacterial activity at volumes of 25, 50, 75 and 100 µL. Leaves extracted with diethyl ether, hexane, ethyl acetate, dichloromethane and methanol were likewise examined by disc diffusion against gram-positive and gram-negative bacteria, with zones of inhibition of Staphylococcus aureus (26.33 mm), Salmonella paratyphi A (19.50 mm), Bacillus subtilis (20.25 mm), Salmonella typhimurium (17.20 mm), Escherichia coli (21.40 mm), Shigella flexneri (26.20 mm) and Proteus mirabilis (24.50 mm).

Anti-inflammatory activity
The anti-inflammatory activity and mechanism(s) of action of an ethyl acetate fraction (EJG) purified from a methanolic extract of Justicia gendarussa roots were investigated. Partitioned fractions obtained from the methanolic extract of J. gendarussa were used in anti-inflammatory tests on rats (33). Compared with the other fractions and Voveran, the ethyl acetate fraction inhibited oedema by 80% and 93% during the third and fifth hours of carrageenan-induced rat paw oedema. Male Wistar rats weighing 120-150 g were used for the tests, and carrageenan was employed to cause rat paw oedema (34). Oedema was generated on the right hind paw by injecting 0.1 mL of 1% carrageenan in 0.9% saline beneath the plantar aponeurosis (a sketch of the standard inhibition calculation follows below).
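A hedged sketch of the oedema-inhibition calculation; the paw-volume values are placeholders, not the study's measurements, and the exact formula the authors used is an assumption based on standard practice.

```python
# Hedged sketch: percent inhibition of carrageenan-induced paw oedema
# relative to the untreated control group (placeholder volumes).
def percent_inhibition(vol_control: float, vol_treated: float) -> float:
    """Inhibition (%) = (Vc - Vt) / Vc * 100, using paw-volume increases."""
    return (vol_control - vol_treated) / vol_control * 100.0

# Mean increase in paw volume (mL) at the 3rd hour, hypothetical values
vc = 1.00  # control group
vt = 0.20  # ethyl acetate fraction group
print(f"oedema inhibition at 3 h = {percent_inhibition(vc, vt):.0f}%")  # 80%
```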
The crude methanolic extract of J. gendarussa root was fractionated under the guidance of bioassays to produce a hexane fraction (HJG), a butanol fraction (BJG), an ethyl acetate fraction (EJG), a dichloromethane fraction (DJG) and an aqueous fraction (AJG). One hour before the injection of carrageenan, these fractions were given orally at a dose of 50 mg/kg alongside the conventional medication (Voveran) at a dose of 20 mg/kg. A paw oedema meter was used to measure the volume of the right paw prior to injection as well as three and five hours after inflammation was induced (35).

The in vitro anti-inflammatory activity was studied using the HRBC (human red blood cell) membrane stabilization method. Blood was drawn from healthy subjects and combined with an equal amount of sterile Alsever's solution. The packed cells in this blood solution were separated by centrifugation at 3000 rpm (36), washed with isosaline solution and made up as a 10% v/v suspension, which was used for the assessment of the anti-inflammatory properties. Mixtures of various extract concentrations, reference samples and controls were separately added to 1 mL of phosphate buffer, 2 mL of hyposaline and 0.5 mL of HRBC suspension. After 30 minutes of incubation at 37 °C, all of the assay solutions were centrifuged at 3000 rpm, the supernatant liquid was decanted, and the haemoglobin concentration was determined with a spectrophotometer at 560 nm. The percentage haemolysis was estimated by taking the haemolysis produced in the control as 100% (37).

Anti-anxiety activity
An ethanolic extract of the plant's aerial part was evaluated in Swiss albino mice for anti-anxiety activity using the light-dark test as well as the elevated plus maze test. The ethanolic extract was administered orally at 200-500 mg/kg body weight for 21 days. The elevated plus maze test revealed that the ethanolic extract increased the time spent in the open arms as well as the number of entries into the open arms (38); the extract also increased the time spent in the light area in the light-dark test model. The outcome was compared with the standard medication diazepam. The study found that the ethanolic extract of the plant has significant (p < 0.01) anti-anxiety activity.

Anthelmintic activity
Methanolic extracts of the stems and leaves were found to be effective against Pheretima posthuma. Various concentrations of stem and leaf extract (10, 20, 30, 40 and 50 mg/mL) were applied to earthworms in petri dishes (5). The times to paralysis and death were measured and compared with those of the standard drug albendazole. At a concentration of 10 mg/mL, the leaf and stem extracts produced paralysis at 35.33 min and 41.33 min, respectively, and death at 70.67 min and 89.33 min, compared with paralysis at 17 minutes and death at 48 minutes for the reference drug (12).
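The HRBC membrane-stabilization readout described earlier in this section reduces to a simple ratio against the control tube. A hedged Python sketch, with placeholder absorbances:

```python
# Hedged sketch of the HRBC readout (placeholder 560 nm absorbances).
def percent_protection(od_sample: float, od_control: float) -> float:
    """Membrane stabilization (%) = 100 - (OD_sample / OD_control) * 100,
    with the control haemolysis taken as 100%."""
    return 100.0 - (od_sample / od_control) * 100.0

od_control = 0.80  # hyposaline-induced haemolysis, control tube
od_sample = 0.28   # tube with a hypothetical 200 ug/mL extract
print(f"protection = {percent_protection(od_sample, od_control):.1f}%")
```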
Anti-fungal activity
The antifungal activity of different extracts of Justicia gendarussa against dermatophytic species was tested in vitro using the agar cup diffusion technique. Trichophyton mentagrophytes, Trichophyton rubrum, Microsporum gypseum and Microsporum fulvum were tested with chloroform, methanol and aqueous extracts of the whole plant. In terms of inhibition-zone diameter, the antifungal activity of the solvent extracts decreased in the order chloroform > methanol > water (39). The disc diffusion method was used to investigate the antifungal activity of aqueous and hexane extracts of the stem. The Candida albicans inoculum was prepared with potato dextrose broth, and potato dextrose agar medium was prepared by autoclaving 3.9 g in 100 mL. The test microorganisms were inoculated onto potato dextrose agar plates using sterile cotton swabs, aqueous and hexane extracts of Justicia gendarussa Burm stem were placed on sterile discs, and the discs were dried aseptically under laminar air flow to remove the solvent. The dried discs were placed on the surface of the culture-inoculated potato dextrose agar plates and incubated for 48 hours at room temperature (40). A Himedia zone reader was used to assess the antifungal activity. The results clearly show that the aqueous extract of Justicia gendarussa and the standard had more potent antifungal activity against Candida albicans than the hexane extract: the zone of inhibition of the aqueous extract was 11 mm, that of the hexane extract 7 mm, and that of the standard only 2 mm (39).

Hepatoprotective activity
In vitro and in vivo models of carbon tetrachloride-induced liver injury demonstrated a strong hepatoprotective activity of the methanolic extract of Justicia gendarussa. Using primary rat hepatocytes and carbon tetrachloride as the hepatotoxin, researchers examined the in vitro hepatoprotective effect of Justicia gendarussa Burm (12). In addition to CCl4 (10 mM), the isolated primary rat hepatocytes were treated with different doses of the plant extract (10, 50 and 100 µg/mL) and silymarin (100 µg/mL) (18). Cell viability was measured by the trypan blue exclusion assay, and the transaminase enzymes in the cell suspension were measured. The methanolic extract produced a significant, moderate hepatoprotective effect: it significantly decreased the levels of the liver enzymes AST and ALT while increasing the levels of antioxidant enzymes. At a dose of 300 mg/kg, the effect was found to be meaningful (41).

Conclusion:
As evidenced by the studies reviewed, Justicia gendarussa is an alluring specimen with a long history of traditional medicinal use. This review broadly elaborates the traditional uses, phytochemical constituents and pharmacological activities of the plant Justicia gendarussa Burm f. The studies show that the species possesses various pharmacological activities, including anti-inflammatory, anti-oxidant, anti-viral, anti-bacterial, anti-fungal and anti-angiogenic activity. The chemical constituents identified in this species include saponins, terpenoids, flavonoids, glycosides and alkaloids. From this it can be concluded that the plant could be useful for the development of different commercial drugs.
2023-03-31T15:23:34.649Z
2023-03-26T00:00:00.000
{ "year": 2023, "sha1": "acbe38ffbb978a1f02a7867407ffeece89561d57", "oa_license": "CCBYSA", "oa_url": "https://www.ijfmr.com/papers/2023/2/1983.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "b2071161a9f639a181ef54d16aee9f4684930cf7", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [] }
256678824
pes2o/s2orc
v3-fos-license
Spatiotemporal dynamics of clonal selection and diversification in normal endometrial epithelium

It has become evident that somatic mutations in cancer-associated genes accumulate in the normal endometrium, but spatiotemporal understanding of the evolution and expansion of mutant clones is limited. To elucidate the timing and mechanism of the clonal expansion of somatic mutations in cancer-associated genes in the normal endometrium, we sequence 1311 endometrial glands from 37 women. By collecting endometrial glands from different parts of the endometrium, we show that multiple glands with the same somatic mutations occupy substantial areas of the endometrium. We demonstrate that "rhizome structures", in which the basal glands run horizontally along the muscular layer and multiple vertical glands rise from the basal gland, originate from the same ancestral clone. Moreover, mutant clones detected in the vertical glands diversify by acquiring additional mutations. These results suggest that clonal expansions through the rhizome structures are involved in the mechanism by which mutant clones extend their territories. Furthermore, we show that clonal expansions and copy-neutral loss-of-heterozygosity events occur early in life, suggesting that such events can be tolerated for many years in the normal endometrium. Our results on the evolutionary dynamics of mutant clones in the human endometrium will lead to a better understanding of the mechanisms of endometrial regeneration during the menstrual cycle and the development of therapies for the prevention and treatment of endometrium-related diseases.

Through regeneration, the endometrium accumulates somatic mutations that can lead to diseases like endometriosis and cancer. Here, the authors use genomics to analyse normal endometrial glands from different patient cohorts, detect rhizome structures with common clonal ancestors and infer clonal expansion dynamics.

Cancer is a collection of diseases characterized by uncontrollable cellular growth and spread caused by somatic mutations 1,2. Genes whose mutations confer selective growth advantages on cells are called cancer drivers or cancer-associated genes 3. With the advent of next-generation sequencing technologies, large-scale cancer genome projects have identified hundreds of cancer-associated genes 4,5. The lifetime incidence of cancer in different tissues is correlated with the number of stem cell divisions in the corresponding tissues, suggesting that cancer-associated gene mutations randomly arise and accumulate in human adult stem cells during life 6,7. Recent studies have demonstrated that cancer-associated gene mutations lurk not only in benign tumors but also in histologically normal tissues 8 such as bladder 9, blood 10, colon 11-13, endometrium 14-17, esophagus 18-20, liver 21, lung 22, skin 23, and ureter 24. Cells carrying cancer-associated gene mutations are thought to manifest increased cellular fitness over their neighboring cells and to be positively selected within normal tissues 25. However, it remains to be elucidated when and how mutant clones expand their territories in normal tissue. It is crucial to investigate the distribution of cancer-associated gene mutations across time and space 26. Apart from higher primates, menstruation is quite rare in the animal kingdom 27. The human endometrium has a unique capability to cyclically regenerate and remodel throughout a woman's reproductive life.
Histologically, the human endometrium is stratified into the stratum functionalis and the stratum basalis. The functionalis is eroded during menstruation and regenerates from the remaining basalis 28. The highly regenerative nature of this tissue poses risks for developing endometrium-related diseases such as adenomyosis, endometrial hyperplasia, endometriosis, and endometrial and ovarian cancer in adult women. The normal endometrium can be a source of mutant clones that lead to the development of endometrium-related diseases. By sequencing histologically normal endometrial glands, we and others have identified numerous somatic mutations in cancer-associated genes, such as PIK3CA and KRAS 15-17. Individual glands carry distinct somatic mutations in clonal states, verifying that the composition of each gland is monoclonal 29, but gland-to-gland variation shapes the mosaic-like genomic composition of the uterine endometrial epithelium 15,17. Additionally, we found that several glands had identical mutation profiles, even though we randomly collected glands by enzymatically dissociating nearly the entire surgically resected part of the endometrium, implying the presence of mutant clones occupying certain areas of histologically normal endometrium 15. The latest three-dimensional (3D) imaging analyses have revealed that the morphology of the human endometrium is much more complicated than previously believed 30,31. In particular, plexus network structures of endometrial glands within the basalis were discovered in humans 30,31 but not in mice 32,33. Mutation profiling of endometrial glands guided by 3D imaging techniques can help illuminate mechanisms by which somatic mutations spread within the endometrial epithelium. In this work, we perform target-gene sequencing, whole-exome sequencing (WES), and whole-genome sequencing (WGS) for 1311 endometrial glands from 37 women across a wide range of ages. Combined with the estimation of chronological ages at which genome events such as clonal expansions and somatic copy number alterations occurred, our sequencing analyses preserving the spatial information of endometrial glands provide fundamental clues for understanding the spatiotemporal dynamics of clones with cancer-associated gene mutations in the normal endometrium.

Somatic mutations accumulate with age and cumulative number of menstrual cycles (CNMCs). To dissect active mutational processes, we classified the somatic single-nucleotide variants (SNVs) with high MAF (≥0.25) into 96 mutation classes constituted by the six pyrimidine substitutions (C>T, C>A, C>G, T>C, T>G, and T>A) in combination with the flanking 5′ and 3′ bases (Fig. 1f). We explored mutational signatures characterizing the mutational processes operative in normal endometrial glands, in which the spectrum of somatic SNVs with high MAF was fitted to a set of the Catalogue of Somatic Mutations in Cancer (COSMIC) mutational signatures 34 ("Methods"). Three mutational signatures (SBS1, SBS5, and SBS18) were significantly overrepresented (Fig. 1g). SBS1 is a clock-like mutation signature characterized by C>T transitions at CpG motifs, indicative of the deamination of methyl-cytosines 35. SBS5 is another clock-like mutation signature with unknown etiology 36. SBS18, represented by C>A transversions, has been attributed to DNA damage induced by reactive oxygen species 37.
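The fit of a 96-class spectrum to COSMIC signatures can be illustrated with non-negative least squares; the sketch below is a hedged toy version, since the authors' actual fitting procedure is described only in their Methods, and both the signature matrix and the spectrum here are synthetic placeholders.

```python
import numpy as np
from scipy.optimize import nnls

# Hedged sketch: fit an observed 96-class mutation spectrum to a set of
# reference signatures by non-negative least squares.
rng = np.random.default_rng(0)

# Placeholder reference signatures (96 classes x 3 signatures, standing in for
# SBS1/SBS5/SBS18); each column is a distribution over the 96 classes.
signatures = rng.dirichlet(np.ones(96), size=3).T  # shape (96, 3)

# Placeholder observed spectrum: a mixture of the signatures plus noise.
true_exposures = np.array([120.0, 300.0, 60.0])
observed = signatures @ true_exposures + rng.normal(0, 1, 96)

exposures, residual = nnls(signatures, observed)
for name, e in zip(["SBS1", "SBS5", "SBS18"], exposures):
    print(f"{name}: ~{e:.0f} mutations attributed")
```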
We calculated the burden of somatic mutations for each subject over glands. The burden varied substantially among the subjects (Fig. 2a and Supplementary Fig. 1). We examined whether the burden of somatic mutations was associated with the clinical characteristics of the subjects. The burden of somatic SNVs with high MAF showed strong linear relationships with age (Pearson's correlation coefficient, r = 0.79; P = 7.5 × 10−8) and CNMCs (r = 0.81; P = 2.4 × 10−8) (Fig. 2b). CNMCs explained the burden of somatic SNVs with high MAF better than age, probably because age at menarche was negatively associated with the burden after adjustment for age (r = −0.30; P = 0.09) (Fig. 2c, Supplementary Fig. 2a). Moreover, both age and CNMCs were significantly associated with the burden of C>T transitions at CpG motifs (Fig. 2d and Supplementary Fig. 2d) and with the burdens of C>T, C>A, C>G, and T>C substitutions, but not with the burden of T>G or T>A substitutions (Fig. 2e and Supplementary Fig. 2e), reminiscent of the two abovementioned mutational signatures with clock-like properties (SBS1 and SBS5). Next, we examined the burden of driver mutations, defined as non-silent SNVs and short insertions and deletions (indels) with high MAF in cancer-driver genes (ARID1A, CTNNB1, FBXW7, KRAS, PIK3CA, PIK3R1, PPP2R1A, PTEN, and TP53). The burden of driver mutations was significantly associated with age (r = 0.63; P = 9.9 × 10−5) and CNMCs (r = 0.69; P = 1.3 × 10−5) (Supplementary Fig. 3). Again, CNMCs explained the burden of driver mutations better than age, because age at menarche showed a significant negative correlation after the adjustment for age (r = −0.50; P = 3.4 × 10−3). The effects of parity, body mass index (BMI), pack-years of smoking and disease status were not significantly associated with the burdens of somatic SNVs with high MAF and driver mutations after the adjustment for age (Supplementary Figs. 2b, c and 3). The average age and CNMCs of patients with KRAS mutations exhibiting allelic imbalance (MAF ≥ 0.8) were higher (P = 0.041 and P = 0.021, respectively; Wilcoxon rank-sum test). Additionally, mutations exhibiting allelic imbalance were overrepresented in patients with endometriosis (P = 0.012; Fisher's exact test). Further studies are needed to validate the associations between KRAS mutations exhibiting allelic imbalance and clinical variables.

Strong positive selection acting on cancer-associated genes. The distributions of the cancer-associated gene mutations in normal endometrial glands were similar to those in cancers based on the COSMIC database 38 (Fig. 3a-f and Supplementary Fig. 4). Additionally, nonsilent mutations predominated over silent mutations in the cancer-associated genes. Therefore, we assessed the strength of selection by estimating the ratio of nonsynonymous to synonymous substitutions (dN/dS) for all targeted genes, the Cancer Gene Census (CGC) genes and the pan-gynecologic cancer-associated genes (Fig. 3g). The dN/dS ratios of nonsense mutations for these sets of genes were 6.15 (95% CI, 4.35-8.68; P < 10−16), 5.88 (95% CI, 3.63-9.53; P = 6.7 × 10−13) and 11.1 (95% CI, 4.95-24.7; P = 4.7 × 10−9), respectively (Fig. 3g). Notably, the extent of positive selection was strongest in the pan-gynecologic cancer-associated genes. According to the dN/dS ratios, up to 93.1% of the missense and 91.0% of the nonsense mutations in the pan-gynecologic cancer-associated genes were estimated to be drivers with strong selective advantages in the normal endometrium. On the contrary, the dN/dS ratios of missense and nonsense mutations in genes other than cancer-associated genes were 0.98 (95% CI, 0.63-1.51; P = 0.91) and 1.20 (95% CI, 0.52-2.75; P = 0.67), respectively (Fig. 3g). This result suggests that the mutations in the non-cancer-associated genes were selectively neutral (passengers).
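A hedged, much-simplified version of the dN/dS logic above: the authors presumably used a context-aware estimator, whereas this toy normalizes observed counts by an assumed neutral nonsynonymous:synonymous expectation of 3:1. The counts are the Pangyn missense/synonymous totals given in the Fig. 3 caption.

```python
# Toy dN/dS sketch. Under neutrality, the nonsynonymous:synonymous ratio is
# fixed by sequence composition; a ~3:1 expectation is assumed here.
expected_ns_ratio = 3.0            # assumed neutral expectation
n_missense, n_synonymous = 385, 8  # Pangyn counts from the Fig. 3 caption

dnds = (n_missense / n_synonymous) / expected_ns_ratio
# Fraction of nonsynonymous mutations in excess of neutrality = drivers
driver_fraction = max(0.0, 1.0 - 1.0 / dnds)
print(f"dN/dS ~ {dnds:.1f}; estimated driver fraction ~ {driver_fraction:.1%}")
# Under the assumed 3:1 expectation this lands near the paper's 93.1% figure.
```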
Although we randomly collected endometrial glands from the entire uterus, 15 out of 32 subjects had at least one pair of glands that identically shared multiple high-MAF mutations (Supplementary Fig. 5). Glands from two subjects aged 47 showed higher burdens of identically shared multiple high-MAF mutations: seven out of 27 glands from Subject 27 harbored identical KRAS (p.G12C) and PIK3CA (p.G118D) mutations, and four out of 16 glands from Subject 28 shared KRAS (p.G12D) and SPEG (p.A2522=) mutations. These findings imply that glands with the same clonal origin spread over a wide area of the endometrium, particularly in aged women.

Spatially resolved single endometrial gland sequencing. We conducted target-gene sequencing for spatially resolved single endometrial glands, in which surgically resected specimens of normal endometrium from four subjects were subdivided into grids, and endometrial glands were collected from the grids; thus, spatial information for each gland was retained at the grid level (Figs. 4 and 5, Supplementary Fig. 6 and Supplementary Table 2). We sought informative mutations that were shared among multiple endometrial glands and then performed hierarchical clustering of the MAF profiles of the informative mutations. We divided the endometrium from a 38-year-old subject with endometriosis (Subject 33) into 24 grids of 3.5 mm squares and extracted three glands from each grid (Fig. 4a). We identified two clusters, characterized by PIK3CA (p.E545A) and KIF26A (p.P326L) and by KRAS (p.G12S) and PIK3CA (p.E542V); the glands in each cluster were located within the same grid (Fig. 4b, c). For a 41-year-old subject with myoma uteri (Subject 34), we examined five glands from each of 24 grids of 5 mm squares (Fig. 4d) and detected four clusters of glands with distinct mutation profiles (Fig. 4e, f). The largest cluster, characterized by two mutations (ZFHX4 [p.A527V] and PPP2R1A [p.S219L]), occupied five adjacent grids (3, 11, 12, 14, and 15), spanning 2 cm in the largest diameter. Glands carrying only ZFHX4 (p.A527V) were present in the central region (grids 7 and 12) linking the abovementioned five grids. We further investigated the clonal origin of these glands with a larger number of somatic mutations by WES. The glands with only ZFHX4 (p.A527V) shared additional somatic mutations with the glands harboring both ZFHX4 (p.A527V) and PPP2R1A (p.S219L) (Supplementary Fig. 6e). The phylogenetic tree obtained by WES suggests that the glands in these two groups have a common ancestral clone harboring ZFHX4 (p.A527V), from which two descendant clones diversified, one of which acquired PPP2R1A (p.S219L) and then spread to the broad area (Fig. 4g).

Fig. 1 Landscape of somatic mutations in normal endometrial glands. a An outline flowchart of the isolation of single endometrial glands. First, the samples were collected from endometrial biopsies using suction catheters or by endometrial curettage. Second, endometrial tissues were digested with collagenase to separate glands from stroma. Third, the solution of collagenase-digested tissue was poured into a cell strainer. The epithelial glands remaining on the strainer were transferred into a cell culture dish. Finally, individual glands were picked up precisely under a microscope. Scale bar in microscopic images, 100 µm. b Heatmap of the prevalence of nonsilent somatic mutations in 891 normal endometrial glands from the uteri of 32 subjects. Nonsilent mutations are color-coded as follows: missense SNVs (red), nonsense SNVs (blue), splice-site SNVs (yellow), in-frame indels (green) and frameshifting indels (black). Color density indicates the MAF of each somatic mutation. c Bar charts representing proportions of endometrial glands with somatic mutations in 15 cancer-associated genes, stacked by mutation type: SNVs (black), indels (gray), and both (white).

The anterior and posterior walls of the endometrium from a 46-year-old subject with cervical carcinoma in situ (Subject 35) were resected, and each of them was divided into 24 grids of 4 mm squares (Fig. 4h). The sequencing of three glands from each grid showed six and nine clusters with distinct mutations in the anterior and posterior walls, respectively (Fig. 4i-k). Glands in nine clusters were distributed among multiple neighboring grids, suggesting a high level of mosaicism in the endometrium of this subject. Glands in six clusters had the same FGFR2 (p.S252W) mutation. However, WES showed that the glands in different clusters did not share any mutations other than FGFR2 (p.S252W) (Supplementary Fig. 6f). Additionally, glands in these six clusters were localized in spatially separated regions of the endometrium: cluster 3 was distributed in the center of the anterior wall, cluster 12 in the upper right of the posterior wall, and clusters 8 and 13-15 in the lower left of the posterior wall (Fig. 4i). It is not likely that the FGFR2 mutation initially occurred in a single ancestral clone that expanded to these three separated regions as depicted by the phylogenetic tree (Fig. 4l), because glands with the FGFR2 mutation were not detected in regions connecting the three regions. Therefore, a plausible explanation may be that at least three mutational events at FGFR2 occurred independently in the three separated regions of the endometrium, and the mutant clones diversified at each region by acquiring region-specific mutations. More intensive screening was conducted in the endometrium from a 50-year-old subject with myoma uteri, where a 1 cm-square region was divided into four grids (A to D), and one of them (A) was further partitioned into four subgrids (A-I to A-IV) (Fig. 5a, Supplementary Fig. 6g-i). We extracted 13 or 15 glands from each of the subgrids and 20 glands from the remainder of the grids (B, C, and D). We identified five clusters: two were limited to within a grid or subgrid, and the other three were prevalent among multiple grids and subgrids (Fig. 5b, c). Two clusters (1 and 3) accounted for 18 glands spreading across three grids (B, C, and D) and one subgrid (A-IV), and 16 glands spreading across two grids (C and D), respectively. Cluster 3 had a frameshift insertion in PTEN with high MAF (>0.5), implying an allelic imbalance at PTEN, which is known to occur frequently in endometrial carcinoma 42. Using WGS, we confirmed that glands in the same clusters originated from common ancestral clones (Fig. 5d, e). Additionally, we identified two arm-level copy-neutral loss-of-heterozygosity (CN-LOH) events, at chromosomes 3 and 10, in all glands of cluster 3 (Fig. 5f and Supplementary Fig. 7). As reflected in the branch lengths of the phylogenetic tree, the number of private mutations unique to single glands was larger in cluster 1 than in cluster 3 (Fig. 5d, e). We leveraged the intracluster heterogeneity to infer the chronological ages at which genomic events emerged in the endometrium ("Methods").
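The gland clustering above groups MAF profiles of informative (shared) mutations. A hedged sketch with synthetic data follows; the linkage and distance choices are assumptions, as the authors' exact parameters are not stated here.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hedged sketch: hierarchical clustering of gland-by-mutation MAF profiles.
# Rows = glands, columns = informative mutations shared by >= 2 glands.
maf = np.array([
    [0.48, 0.51, 0.00, 0.00],  # glands sharing mutations 1-2
    [0.45, 0.47, 0.00, 0.00],
    [0.00, 0.00, 0.52, 0.49],  # glands sharing mutations 3-4
    [0.00, 0.00, 0.46, 0.50],
    [0.00, 0.00, 0.00, 0.00],  # gland with no shared mutations
])

# Ward linkage on Euclidean distances (an assumption, not the paper's choice)
Z = linkage(maf, method="ward")
labels = fcluster(Z, t=3, criterion="maxclust")
print(labels)  # e.g. two shared-mutation clusters plus a singleton
```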
The most recent clonal expansions of clusters 1 and 3 were estimated to have occurred at the ages of 38.3 (95% CI, 37.1-39.6) and 49.2 (95% CI, 48.9-49.5), respectively (Fig. 5g). This implies that the stem/progenitor cells giving rise to the glands of cluster 1 had spread their territories for more than 10 years and diverged by acquiring mutations, which led to local variability. In sharp contrast, the clone producing the glands of cluster 3 had undergone rapid growth within a short period. We investigated whether the CN-LOHs at chromosomes 3 and 10 contributed to the rapid expansion by examining the MAF profiles of public SNVs that were shared among all the glands in the cluster and located in the regions affected by the CN-LOHs (Fig. 5h). The rationale behind this analysis is that the MAFs of mutations that occurred before a CN-LOH are expected to become predominant, whereas mutations that occurred subsequently are expected to be half as prevalent because they are present on only one of the two copies of the duplicated chromosome. Therefore, we can estimate the timing of the CN-LOH based on the proportion of public mutations with predominant status among the overall public mutations in the CN-LOH region 43,44. Surprisingly, the burden of somatic SNVs that preceded the CN-LOH at chromosome 10 was lower than that of SNVs that occurred subsequently, whereas the burden of somatic SNVs that preceded the CN-LOH at chromosome 3 exceeded that of those that occurred subsequently. Consequently, the CN-LOHs at chromosomes 10 and 3 were estimated to have occurred at the ages of 17.3 (95% CI, 11.7-22.8) and 35.5 (95% CI, 30.8-40.1), respectively (Fig. 5i, j), both decades earlier than the most recent clonal expansion.
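A hedged sketch of the timing logic just described: with a roughly constant (clock-like) mutation rate, the age at the CN-LOH is approximately the sampling age scaled by the fraction of public mutations in the region that predate the event (those with predominant MAF). The counts below are placeholders, and the authors' exact estimator and CI procedure are in their Methods.

```python
# Hedged sketch: timing a CN-LOH from public SNVs in the affected region.
# Mutations acquired before the CN-LOH are duplicated (predominant MAF in the
# clone); later mutations sit on one of two copies (MAF ~ 0.5 in the clone).
def cnloh_age(age_at_sampling: float, n_predominant: int, n_half: int) -> float:
    """Age at CN-LOH ~ sampling age * fraction of mutations predating the
    event, assuming a constant mutation rate over life."""
    fraction_before = n_predominant / (n_predominant + n_half)
    return age_at_sampling * fraction_before

# Placeholder counts for a chromosome-10-like event in a 50-year-old subject:
print(f"estimated age at CN-LOH ~ {cnloh_age(50.0, 12, 23):.1f} years")
```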
3D imaging of normal endometrial glands. To elucidate the mechanisms by which endometrial glands with the same clonal origin expanded across spatially distant regions, we performed 3D imaging analysis on normal proliferative endometrial tissues from four middle-aged women who underwent hysterectomy due to gynecological diseases (Supplementary Table 3), using a recently developed tissue-clearing technique combined with light-sheet fluorescence microscopy 31,45. By examining continuous tomographic images, we visualized the plexus structures linking a set of glands with their root at the basal layer (Fig. 6a, b and Supplementary Video 1). Specifically, the glandular structure at the bottom of the endometrium ran horizontally along the muscular layer, similar to a rhizome of grass (hereinafter called the "rhizome structure"), and several branches rose from the rhizome structure toward the luminal epithelium (Fig. 6c and Supplementary Video 1). The rhizome structures expanded linearly rather than radially. These properties were found in all four cases, suggesting that the presence of rhizome-sharing glands is common in middle-aged women (Supplementary Fig. 8). To quantify the space occupied by a continuous set of rhizome structures and the glands derived from the rhizome, we selected a representative rhizome from each patient. The number of rhizome-sharing glands ranged from 4 to 19 (Supplementary Fig. 8a-d). We determined the XYZ coordinates of the tips of the rhizome-sharing glands to measure the longest distance between glands and the area occupied by the glands (Fig. 6d and Supplementary Fig. 8e-l). The rhizome-sharing glands reached a length of 1.3-3.8 mm and occupied an area of 0.3-4.7 mm2, indicating that they occupied a substantial part of the endometrium.

WGS for selectively isolated endometrial glands based on 3D images. To analyze the genomes of rhizome structures and the vertical glands derived from them, we examined 70 serial cryosections (12 µm thick) from the proliferative-phase endometrium of a 43-year-old woman who underwent hysterectomy for endometriosis (Fig. 7a-c). We detected two independent groups of glands that were in close proximity but separated from each other; the first group comprised rhizome G1 and vertical glands G2-G7, and the second group comprised rhizome G8 and vertical glands G9-G12 (Fig. 7b, c and Supplementary Fig. 9). Additionally, we found a single vertical gland (G13) that was independent of the two gland groups. The 3D image was then reconstructed to validate the spatial locations and continuity of the 13 glands (Fig. 7d and Supplementary Video 2). Guided by the 3D image analysis, we microdissected the 13 glands for WGS (Fig. 7a). The WGS showed that the first and second groups shared different sets of somatic mutations, but G13 did not share mutations with the other glands, suggesting that each of the groups comprising a rhizome and vertical glands originated from a unique mutant clone (Fig. 7e). The profile of cancer-associated gene mutations also showed a group-specific segregation pattern (PIK3CA [p.E453K] in the first group, PTEN [p.Y188X] and PIK3R1 [p.R574del] in the second group, and KRAS [p.G12V] in G13) (Fig. 7f). We investigated the clonal populations in the 13 glands by using PyClone 46. The somatic SNVs were classified into 11 clusters (Supplementary Fig. 10a). Two clusters consisted of public mutations that were shared among all the glands in either of the two groups, six clusters encompassed partially shared mutations among some glands in a group, and three clusters contained private mutations unique to single glands. We then performed clone ordering of the identified clusters with ClonEvol 47. The fish plots showed that ancestral clones (A and G) characterized by public mutations initially expanded, and their descendant clones characterized by partially shared or private mutations subsequently emerged (Fig. 7g). Some of the partially shared mutations that reached fixation in vertical glands were observed in subclonal states in their respective rhizomes (Fig. 7g, h). The most recent clonal expansions in the first and second groups were estimated to have occurred at ages 34.5 (95% CI, 33.0-36.0) and 35.1 (95% CI, 33.7-36.6), respectively (Fig. 7i). These results imply that the two rhizome structures were independently shaped by clonal expansions of ancestral clones approximately 9 years earlier, and that stem/progenitor cells with self-renewal and differentiation capabilities residing in different zones of the rhizomes subsequently diversified by acquiring additional mutations.

Fig. 3 caption (continued). c ARHGAP35, d FBXW7, e PIK3R1, and f PPP2R1A, along with the known domain structures of the proteins encoded by these genes. Numbers refer to amino acid residues, and the heights of the lollipops correspond to the number of mutations at each amino acid residue; black bar charts indicate the number of somatic mutations deposited in the COSMIC database. g Estimated values of the dN/dS ratios for missense and nonsense mutations acting on sets of genes: all 112 targeted genes, the 48 genes in the Cancer Gene Census (CGC), the 15 pan-gynecologic cancer-associated genes (Pangyn) and the non-cancer-associated genes, defined by excluding the CGC and Pangyn genes and ARHGAP35 from the 112 targeted genes. The numbers (n) of missense, nonsense, and synonymous substitutions are 543, 76, and 56 in all 112 genes; 444, 38, and 29 in the CGC; 385, 23, and 8 in the Pangyn; and 81, 7, and 27 in the non-cancer-associated genes. Data are presented as estimates of the dN/dS ratio with their 95% CIs. h Estimated values of dN and dS for missense and nonsense mutations at the level of individual genes; dN and dS values are shown rather than dN/dS ratios because the dS values for some genes were zero owing to the absence of synonymous substitutions. Significant genes at FDR < 0.1 are shown. i Scatterplot of the estimated dN − dS values for missense mutations against those for nonsense mutations in the 16 cancer-associated genes in (h); oncogenes in red, tumor suppressor genes in blue, and other genes in gray. Source data are provided as a Source data file.

Next, we focused on the clonal evolution of three glands (G3-G5) that diverged near the center of the functional layer (Fig. 7c). From the background of an ancestral clone (A), two distinct mutant clones (C and D) proliferated to generate the lower glands (G3 and G4, respectively). Both of these mutant clones coexisted in subclonal states in the upper gland (G5) (Fig. 7g, h). Specifically, the MAFs of mutations observed in either G3 or G4 were close to 0.5 in the corresponding glands but approximately 0.25 in G5 (Supplementary Fig. 10b). These results suggest that G3 and G4 independently rose from the rhizome (G1) and that G5 was formed by the confluence of G3 and G4 on their way to the luminal side.
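A hedged sketch of the dilution logic behind this confluence inference, with synthetic MAFs: a clone-specific heterozygous mutation sits at MAF ~0.5 in a pure gland but at ~0.25 in a gland that is a roughly equal mixture of two clones.

```python
import numpy as np

# Hedged sketch: expected MAF of a heterozygous, clone-specific mutation in a
# gland mixing two clones is MAF ~ 0.5 * clone_fraction.
def expected_maf(clone_fraction: float) -> float:
    return 0.5 * clone_fraction

print(expected_maf(1.0))  # 0.50 in a pure gland (e.g., G3 or G4)
print(expected_maf(0.5))  # 0.25 in an equal mixture (e.g., G5)

# Synthetic observed MAFs of G3-specific mutations measured in G5:
observed = np.array([0.23, 0.27, 0.24, 0.26])
print(f"mean MAF ~ {observed.mean():.2f} -> consistent with ~50% admixture")
```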
The extent of the colonization by mutant clones differed across endometrium samples. In the endometrium from a 50-year-old woman, two distinct mutant clones occupied substantial areas, one of which had a frameshift insertion in PTEN accompanied by arm-level CN-LOHs and had undergone rapid clonal expansion. Seemingly, this clone might have been only a few steps away from turning malignant. However, there was a long latency between the CN-LOHs and the most recent clonal expansion, implying that even arm-level copy number alterations are not sufficient to immediately trigger tumorigenesis. Our finding corroborated recent studies in kidney and lung cancers showing structural rearrangements that occurred during childhood and adolescence, many decades before disease diagnosis 43,44 . Nongenetic factors such as epigenetic alterations and complex interplays with stromal cells, hormonal influence and the immune system may also contribute to the transformation of normal cells into malignant cells 8 . The missing piece is how incipient mutant clones colonize normal tissues 26 . Clonal expansions in normal tissues may be constrained by anatomical features of the corresponding tissues; therefore, tissue-specific mechanisms should be elucidated 51 . Clonal expansion by crypt fission is a striking anomaly in the unraveled tissue-specific mechanism, where mutations that occur in a colonic stem cell are fixed in a whole crypt by monoclonal conversion and then the wholly mutated crypt divides into two mutated daughter crypts 52,53 . To approach the endometriumspecific mechanisms of clonal expansion, we leveraged 3D imaging analysis to inspect the continuum of glandular epithelium in the normal human endometrium. Although the presence of basal glands running horizontally and parallel to the luminal surface has been documented since at least the 1920s 54,55 , plexus rhizome-like structures, in which multiple vertical glands were linked with the horizontal basal gland, were discovered only recently with the advent of 3D imaging analyses 30,31 . Tempest et al. have proposed that rhizome-structures assist self-preservation, self-renewal, and scarless regeneration of the human endometrium as a niche of endometrial epithelial stem/progenitor cells 30 . In this study, we demonstrated that the continuum of a rhizome and vertical glands had a monoclonal origin. Taking into consideration the widely accepted idea that the functional layer regenerates from the glands remaining in the basalis after menstruation 56,57 , we propose a plausible model of clonal expansion in the normal endometrium. Residual basal glands extend horizontally along the muscular layer to shape monoclonal rhizomes. Then, the rhizome gives rise to vertical glands that have the same clonal origin. Some rhizome structures persist for many cycles of repair and regeneration during menstrual cycles and further expand their territories. Several rounds of clonal conversion might occur when new clones acquire selective advantages by cancer-associated gene mutations. At the same time, stem/progenitor cells resident in the rhizome acquire unique mutations, which leads to local variability of the vertical glands. The clonal expansion through rhizome structures proposed in this study is by no means the only mechanism. Since rhizome structures are thought to be specific to animals that menstruate [30][31][32][33] , the development of rhizome structures and clonal expansion through rhizome structures might be byproducts of the evolution of menstruation. 
The elucidation of the mechanisms of clonal expansion in both menstruating and nonmenstruating species will provide fundamental clues for understanding homeostatic tissue remodeling and oncogenesis. Another important finding is the admixed genomic composition of branched glands, which implies that two glands from a rhizome join together and open into the superficial layer in a bottom-up manner. A previous study reported individual endometrial glands containing both cytochrome c oxidase (CCO)-positive and CCO-negative cells 30. Our findings give an important perspective on the monoclonal and polyclonal cellular compositions of single endometrial glands. The mechanisms of rhizome structure formation in the human endometrium are totally unknown. Anatomical, embryological, and physiological studies are required to elucidate when and how the rhizome structures develop. For this purpose, it will be helpful to clarify the spatial relationship between the rhizome structure and the vascular network by 3D imaging. The biological and medical significance of the rhizome structures also remains obscure. We speculate that the rhizome structures act as a double-edged sword. The presence of the rhizome structures might be beneficial for post-menstrual endometrial repair by protecting the endometrial stem/progenitor cells from shedding at the menstrual phase 30,31. We presume that clones with cancer-associated gene mutations may confer a proliferative advantage and contribute to stable tissue regeneration by expanding the area of the rhizome structure. On the other hand, the rhizome structures might confer deleterious effects that predispose women to endometrium-related diseases by accumulating cancer-associated gene mutations. A recent study showed that adenomyosis and histologically normal endometrium adjacent to the adenomyotic lesions had identical KRAS hotspot mutations, suggesting that KRAS-mutated adenomyotic clones originate from the normal endometrium 58.

Fig. 4 Sequencing of spatially resolved single endometrial glands. The results for three subjects are represented in different panels: a-c 38-year-old woman with endometriosis (subject 33), d-g 41-year-old woman with myoma uteri (subject 34) and h-l 46-year-old woman with cervical carcinoma in situ (subject 35). The anterior and posterior walls of the endometrium from subject 35 were used. a, d, h Macroscopic images of uterine tissue samples obtained from three subjects who underwent a total hysterectomy, in which the endometrium is highlighted in light blue. Schematic layouts show the partitioning of the resected endometrium into square grids with their identification numbers. R: right side of uterus. L: left side of uterus. b, e, i Clusters detected by hierarchical clustering analyses mapped by projecting on the schematic layout of the endometrium sample. Pie charts show numbers and proportions of glands grouped in the clusters. The colors of clusters are the same as in (c, f, j, k). c, f, j, k Hierarchical clustering of the mutation profiles of endometrial glands based on MAFs for a set of high-MAF mutations shared among at least two glands in the target-gene sequencing. Glands marked with an asterisk were used for WES. Color density indicates the MAF of each somatic mutation. g, l Phylogenetic tree for binary mutation profiles based on WES.
In the WES study of a patient with ovarian clear cell carcinoma with concurrent endometriosis, we showed that many somatic mutations, including cancer-associated gene mutations, were shared among epithelium samples from the uterine endometrium, endometriotic lesions distant from and adjacent to the carcinoma, and the carcinoma itself. As clonal hematopoiesis increases the risks for blood cancer and cardiovascular disease 10, there is a possibility that the overrepresentation of endometrial glands derived from a single clone raises the risks for endometrium-related diseases. Long-term follow-up studies are needed to determine whether mutant clones with cancer-associated gene mutations extend their territories over time and whether a situation in which the population of endometrial epithelial cells is dominated by a small number of mutant clones increases the risks for endometrium-related diseases such as adenomyosis, endometriosis and endometrial cancer. Several lines of evidence show that oral contraceptives decrease the risks for endometrial and ovarian cancers 60 and endometriosis 61. The biological mechanism behind these preventive effects may be partially explained if we can demonstrate that the regulation of the menstrual cycle by oral contraceptives alleviates the mutational burden and the aberrant proliferation of the rhizome structure in the endometrium. Further efforts are needed to unlock the mystery of the rhizome structures. We demonstrated the somatic evolutionary dynamics of mutant clones in the human endometrium at a microscopic spatial resolution. Three-dimensional mapping of mutant clones in the endometrium will illuminate the path toward a more precise understanding of the mechanisms of endometrial regeneration during the menstrual cycle and the development of therapies for the prevention and treatment of endometrium-related diseases.

Methods

Human sample collection. This study was approved by the institutional ethics review boards of Niigata University (G2017-0010 and G2019-0038), Nagaoka Chuo General Hospital (331), the National Institute of Genetics, and the Sasaki Institute (ER2020-04). We recruited study participants at the Niigata University Medical and Dental Hospital and the Nagaoka Chuo General Hospital between December 2015 and September 2019. All subjects provided written informed consent for the collection of samples and analyses, and for the publication of their clinical information. We collected 1087 single uterine endometrial gland samples from 32 patients (aged 21-53 years) with no endometrial gynecological disease (Subject No. 1-32). Among them, 196 glands were excluded at the quality control step of sequence data analysis. The samples were from endometrial biopsies using suction catheters in gynecological patients (n = 23) or endometrial curettage in patients undergoing hysterectomy (n = 9). Endometrial suction was performed under lumbar anesthesia or general anesthesia during surgery for gynecological disease. Endometrial curettage was performed on surgically resected uteri using a disposable scalpel. Diagnoses of the sampled cases included myoma uteri (n = 11), ovarian dermoid cyst (n = 7), adenomyosis or ovarian endometriosis (n = 9), cervical intraepithelial neoplasia (CIN)3 or carcinoma in situ (CIS) (n = 5), cervical cancer (n = 1) and ovarian clear cell carcinoma (n = 1) (there was some overlap). Clinicopathological information is shown in Supplementary Table 1.
We performed multisegmental sampling for a total of four patients (aged 38-50 years) who underwent hysterectomy (Subject No. 33-36). Diagnosis of the sample cases included myoma uteri (n = 2), ovarian endometriosis (n = 1), and CIN3 or CIS (n = 1). We divided the endometrium into 7-48 segments and selected 451 single endometrial gland samples, with a sampling depth range of 3-20 samples per segment. The area of each segment was 6.25-25 mm². Among them, 44 glands were excluded at the quality control step of sequence data analysis. Clinicopathological information is shown in Supplementary Table 2. For the 3D imaging analysis, we collected four uterine endometrial samples from 30- to 52-year-old premenopausal women who underwent hysterectomy (Subject No. 37-40). Diagnosis of the samples included myoma uteri (n = 2) or cervical cancer IA1 (n = 1) or IB1 (n = 1). Clinicopathological information is shown in Supplementary Table 3. For the WGS analysis using laser-microdissected tissues, we collected uterine endometrial samples from a 43-year-old premenopausal woman who underwent hysterectomy because of endometriosis (Subject No. 41). We confirmed that the endometrial tissues collected from all the individuals were clinically and histologically normal. Peripheral blood samples were also collected from each patient and used as matched controls for the delineation of somatic versus germline variations.

Isolation of single endometrial glands. The isolation of single endometrial glands from bulk endometrial tissue was performed as described in our previous studies 15,29, with some modifications. Briefly, minced fresh endometrial tissues, consisting of endometrial glands and stroma, were placed in Dulbecco's modified Eagle's medium (Thermo Fisher Scientific) containing 180 U/ml collagenase type 3 (Worthington Biochemical Corporation). After shaking gently at 37°C for 40 min, this solution of collagenase-digested tissue was poured into a 40-micron EASYstrainer (Greiner Bio-One), and cold phosphate-buffered saline (PBS) was poured over the screen to wash away the digestion-loosened endometrial stromal cells. The tissue remaining on the strainer, mostly epithelial cell glands, was transferred into a cell culture dish. After gentle pipetting, the separated endometrial glands sank to the bottom of the dish. Individual glands were picked up precisely under a microscope. Each gland was collected into a 0.2 ml microtube containing ATL buffer (QIAGEN) and stored at −80°C until DNA extraction. A total of 1538 glands from 36 patients were used for target-gene sequencing analyses. Thirteen glands from subject No. 34 and 15 glands from subject No. 35 were also used for WES analysis. Thirteen glands from subject No. 36 were also used for WGS analysis.

Laser microdissection of each entire gland. All endometrial tissues were immediately separated from the surgical specimen, embedded in Tissue-Tek O.C.T. compound (Sakura Finetek) in a Tissue-Tek Cryomold (Sakura Finetek), frozen in liquid nitrogen, and stored at −80°C. We cut 12-µm-thick serial frozen sections with a Cryotome FSE (Thermo Fisher Scientific) and mounted them on PEN-Membrane Slides (Leica). For laser microdissection, the cryosections were fixed with 100% methanol for 3 min and then stained with toluidine blue for 30 s. Before laser microdissection, all images of cryosections were stored using the Specimen Overview function of the LMD7 laser microdissection microscope (Leica) to assess glandular continuity.
We performed laser microdissection using the LMD7 (Leica), distinguishing the vertical and horizontal glands, which often branched from or connected with each other (Fig. 7). The isolated epithelial tissues of each gland were collected in the caps of 0.2 ml PCR tubes (Axygen). The median number of frozen sections for sampling each gland was 13 sections (range: 8-18).

DNA extraction. DNA extraction from the isolated endometrial glands and the laser-microdissected glands was performed using a QIAamp DNA Micro Kit (QIAGEN) according to the manufacturer's protocol. Case-matched control DNA was extracted from peripheral blood with a QIAamp DNA Blood Maxi Kit (QIAGEN) according to the manufacturer's instructions. All purified DNA samples were stored at −80°C until subsequent analyses.

Target-gene sequencing. The target-gene sequencing of single endometrial glands for 112 genes was performed as described in our previous studies with some modifications 15,59,62,63. Briefly, 112 genes were selected (Supplementary Data) based on WES data for ovarian endometriosis and normal uterine endometrium 15, the mutation profiles in endometriosis-related ovarian cancer 64 and in endometrial cancer 48, and genes involved in DNA repair pathways 65,66. DNA samples were fragmented using a KAPA Frag Kit (KAPA Biosystems). Sequencing libraries were constructed with a NEBNext Ultra II DNA Library Prep Kit for Illumina (New England Biolabs). Libraries of up to 96 samples were pooled in equimolar amounts and then hybridized to probes of a SeqCap EZ Prime Choice System (Roche Diagnostics) in a single enrichment reaction. The DNA probe set was selected by using NimbleDesign (Version 3.8) (http://design.nimblegen.com). The quantity and size distribution of the captured libraries were assessed by a Qubit 2.0 Fluorometer (Thermo Fisher Scientific) and Bioanalyzer 2100 (Agilent Technologies), respectively. The libraries were then sequenced via the Illumina HiSeq 2500, HiSeq 4000 or NovaSeq 6000 platform with 2 × 100-bp or 2 × 150-bp paired-end modules (Illumina).

Data preprocessing. As a quality control step, the Illumina adapter sequences were trimmed by using Trim Galore (Version 0.6.3) (https://www.bioinformatics.babraham.ac.uk/projects/trim_galore/). Low-quality sequences were excluded or trimmed with Trimmomatic (Version 0.39) 67. The filtered sequence reads were aligned to the human reference genome (GRCh38) containing sequence decoys and virus sequences generated by the Genomic Data Commons (GDC) of the National Cancer Institute (NCI) using BWA-MEM (Version 0.7.17) 68,69. The sequence alignment map (SAM) files were sorted and converted to the binary alignment map (BAM) file format with SAMtools (Version 1.9) 70. The BAM files were processed using Picard tools (Version 2.20.6) (http://broadinstitute.github.io/picard/) to remove PCR duplicates. Base quality recalibration was conducted using GATK (Version 4.1.3.0) 71,72. The average depths and the coverages of the target regions were calculated with SAMtools. BEDOPS (Version 2.4.36) 73 and BEDTools (v2.28.0) 74 were used in the handling of FASTA, VCF and BED files. We used endometrial glands for subsequent analyses if more than 70% of the target bases were covered by at least 20 reads.
Figure legend (fragment): ...-year-old woman (subject 40) with myoma uteri who underwent a total hysterectomy. Scale bar, 500 µm. b Reconstructed 3D image of the endometrial tissue. The red object represents the 3D structure of the glands sharing a rhizome. c 3D and XY cross-sectional images capturing a rhizome structure that runs horizontally along the muscular layer and gives rise to multiple vertical glands. From the reconstructed 3D image (left panel), XY cross-sectional images (right panel) with a slice thickness of 200 µm are extracted. The XY slice is enclosed by orange. d 3D image quantifying the longest distance between glands sharing the rhizome (red line) and the area occupied by the glands (light blue area). Red object: 3D image of the structures of the glands sharing a rhizome. Yellow objects: the tips of the glands. 3D images were obtained by light-sheet fluorescence microscopy. Autofluorescence and CK7-expressing endometrial epithelial cells were measured by excitation with 488 nm and 532 nm lasers, respectively. Red and yellow objects were made by the Surface module in Imaris software. Autofluo, autofluorescence; CK7, cytokeratin 7. Source data are provided as a Source data file.

For the 32 subjects used in the analyses of mutational burden, mutational signatures and selection acting on cancer-associated gene mutations, the means of the average sequencing depth and the coverage at ≥20 reads for the target regions were 87.7 and 93.5%, respectively. For the four subjects used in the analysis of spatially resolved single endometrial gland sequencing, the means of the average sequencing depth and the coverage at ≥20 reads for the target regions were 80.9 and 93.1%, respectively.

Variant calling. Somatic SNVs and short indels were called in each pair of endometrial gland and matched blood samples by using Strelka2 (Version 2.9.10) 75. For somatic indel calling, we utilized the information about candidate indel sites provided by Manta (Version 1.6.0) 76. The variants whose empirical variant scores provided by Strelka2 were greater than 13.0103 (= −10 × log10(0.05)) were used for subsequent analyses. In addition, we excluded variants whose frequencies were greater than or equal to 0.001 in any of the general populations from the 1000 Genomes Project 77, the National Heart, Lung, and Blood Institute (NHLBI) GO Exome Sequencing Project 78, and the Genome Aggregation Database (gnomAD) 79 to prevent false-positive variant calls. Functional annotations for the protein-coding and transcription-related effects of the identified variants were implemented by the Ensembl Variant Effect Predictor (VEP) 80.

Mutational signature analysis. We used the identified somatic SNVs with high MAF (≥0.25) from the 32 subjects for mutational signature analysis. The somatic SNVs with high MAF were classified into 96 mutation classes defined by the six pyrimidine substitutions (C>A, C>G, C>T, T>A, T>C, and T>G) in combination with the flanking 5′ and 3′ bases. In our mutational signature analysis, we fitted the 96-mutation catalog to a predefined list of known signatures 81,82. We did not select an approach for de novo signature extraction because the number of somatic mutations was not large enough in this study. As a reference set of known mutational signatures, we used the COSMIC mutational signatures version 3 34. We selected a total of eleven SBS signatures (SBS1, SBS2, SBS3, SBS5, SBS10a, SBS10b, SBS13, SBS15, SBS18, SBS40, and SBS44) whose activities were estimated to be present in at least 10% of samples in any of three gynecologic cancers (cervical cancer, endometrial cancer and ovarian cancer) based on the COSMIC mutational signatures (https://cancer.sanger.ac.uk/cosmic/signatures/). We implemented a fitting approach by using sigfit 83. We ran four Markov chains with a total of 50,000 iterations, including a burn-in of 25,000 samples. We estimated the highest posterior density (HPD) interval for each of the SBS signatures. If the 90% lower end of the HPD interval for an SBS signature was above the threshold (0.01, default value), we considered that the SBS signature was a significantly active signature.
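As a rough illustration of the fitting step just described, the sketch below uses the sigfit package in R. The mutation catalog object (`catalog`, a subjects-by-96 count matrix) is a placeholder, and the exact parameterization of the published analysis may differ; this is a minimal sketch under those assumptions, not the authors' code.

```r
library(sigfit)

# Reference signatures shipped with sigfit (COSMIC v3).
data("cosmic_signatures_v3")

# The eleven SBS signatures considered in the text.
sbs <- c("SBS1", "SBS2", "SBS3", "SBS5", "SBS10a", "SBS10b",
         "SBS13", "SBS15", "SBS18", "SBS40", "SBS44")

# `catalog` is assumed: one row per subject, 96 columns of mutation counts.
# Chain settings approximate the description in the text.
fit <- fit_signatures(counts     = catalog,
                      signatures = cosmic_signatures_v3[sbs, ],
                      iter = 50000, warmup = 25000, chains = 4)

# Posterior exposures with 90% HPD intervals; a signature is treated as
# significantly active if the lower HPD bound exceeds the 0.01 threshold.
exposures <- retrieve_pars(fit, par = "exposures", hpd_prob = 0.90)
```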
Calculation of mutational burden. The mutational burden ($b_i$) was calculated for the $i$-th subject over endometrial glands as $b_i = \sum_j n_{ij} / \sum_j l_{ij}$, where $n_{ij}$ and $l_{ij}$ are the number of somatic mutations and the number of bases within the target region that were covered by at least 20 reads, respectively, in the $j$-th gland of the $i$-th subject. We assessed the association between the burden of somatic mutations and the clinical features of the subjects (Supplementary Table 1). The Pearson correlation coefficient and one-way analysis of variance were used for quantitative and categorical clinical variables, respectively. Linear regression analysis was conducted to assess the effects of clinical features on the burden of somatic mutations with adjustment for age. The statistical tests were performed as two-tailed tests using the stats package of the R software (https://www.R-project.org/).
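A minimal sketch of this burden calculation in R. The pooled form reconstructed above (total mutations over total callable bases per subject) is itself an assumption, since the original equation was lost in extraction; the per-gland table `glands` (columns `subject`, `n` for mutation count, `l` for callable bases) and the matching `clinical` table with `age` are likewise hypothetical.

```r
library(dplyr)

burden <- glands %>%
  group_by(subject) %>%
  summarise(b = sum(n) / sum(l) * 1e6)  # scaled to mutations per callable Mb
                                        # (the per-Mb scaling is for readability
                                        # and is an assumption, not from the text)

# Age dependence with a simple linear regression, as in the text.
dat <- left_join(burden, clinical, by = "subject")
fit <- lm(b ~ age, data = dat)
summary(fit)
```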
Calculation of the burden of driver mutations. We selected cancer driver genes that were included in the Cancer Gene Census 40 and the pan-gynecologic cancer-associated genes 41 from the 112 genes (Supplementary Data). As a result, nine genes were selected: ARID1A, CTNNB1, FBXW7, KRAS, PIK3CA, PIK3R1, PPP2R1A, PTEN, and TP53. The burden of driver mutations is defined as the number of non-silent mutations in these nine genes per gland in each subject. The difference in the average age or CNMCs between carriers of KRAS mutations exhibiting allelic imbalance (MAF ≥ 0.8) and non-carriers was examined by the Wilcoxon rank-sum test with the exactRankTests package of the R software.

Identification of genes under positive and negative selection. We searched for genes under the pressures of positive and negative selection based on the dN/dS ratio. To estimate the dN/dS ratio, we patterned our approach after the Poisson framework developed by previous studies 39,84. We modeled the numbers of SNVs resulting in missense ($n^{m}_{i}$), nonsense ($n^{n}_{i}$) and synonymous ($n^{s}_{i}$) substitutions in the $i$-th trinucleotide context as $n^{m}_{i} \sim \mathrm{Poisson}(\omega\, t\, r_{i}\, L^{m}_{i})$ (and analogously for nonsense substitutions) and $n^{s}_{i} \sim \mathrm{Poisson}(t\, r_{i}\, L^{s}_{i})$, where $t$ is the baseline substitution rate per site, $r_{i}$ is the relative substitution rate in the $i$-th trinucleotide context, $L_{i}$ is the number of sites at which the $i$-th substitution results in a (m)issense, (n)onsense, or (s)ynonymous mutation, and $\omega$ is the dN/dS ratio. The number of identified somatic mutations in this study was not large enough to estimate substitution rates in trinucleotide contexts. Therefore, we relied on the COSMIC database as an unbiased catalog of somatic mutations. We downloaded the file "CosmicNCV.tsv" of v94 (May 2021). First, we retrieved noncoding somatic SNVs from the file by using the following filtering criteria: (i) inclusion of SNVs confirmed to be somatic, and (ii) exclusion of known single nucleotide polymorphisms. Then, we selected the SNVs if the primary site of the sample with the SNV was the cervix, endometrium or ovary. As a consequence, about 1.1 million somatic SNVs in noncoding regions from the COSMIC database were classified into 96 mutation classes constituted by the six pyrimidine substitutions in combination with the flanking 5′ and 3′ bases. Then, the relative substitution rate for each trinucleotide context ($r_{i}$) was calculated. The baseline substitution rate per site ($t$) was selected to satisfy $\sum_{i} n^{s}_{i} = \sum_{i} t\, r_{i}\, L^{s}_{i}$, that is, so that the expected number of synonymous substitutions matched the observed number. The values $n^{m}$, $n^{n}$ and $n^{s}$ were evaluated for several sets of genes or for each gene. We considered three sets of genes: (i) all 112 genes, (ii) 48 genes included in the Cancer Gene Census 40, and (iii) 15 genes included in the pan-gynecologic cancer-associated genes 41. Poisson regression analysis was implemented, where $n^{m}$ (or $n^{n}$) and $n^{s}$ were modeled by including the log-transformed value of their expected numbers (i.e., $\sum_{i} t\, r_{i}\, L_{i}$) as the offset and the $\omega$ term. When examining the sets of genes, the significance of $\omega$ was assessed by Poisson regression analysis. According to a previous study 39, the fraction of genuine driver mutations in a group of genes was calculated as $f = (\omega - 1)/\omega$. When the signatures of positive selection were evaluated at the level of individual genes, we identified 27 genes with at least five SNVs with high MAF in their coding sequences. Additionally, we considered genes if the observed numbers of missense or nonsense mutations ($n^{m}$ or $n^{n}$) were larger than their expected numbers calculated by using $t$, $r_{i}$ and $L_{i}$. Then, the likelihood ratio test was conducted to compare the two Poisson regression models with and without the $\omega$ term. To account for multiple hypothesis testing, genes satisfying a false discovery rate based on the Benjamini-Hochberg procedure < 0.1 were considered to be significant 85. The Poisson regression analyses were performed by the glm function from the MASS package in the R environment.
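The regression just described can be sketched in R as follows. The input table `counts` is hypothetical (one row per trinucleotide context and substitution class, with the observed count `n` and the expected count `expected` = t·r_i·L_i), and the exact parameterization used by the authors may differ; treat this as a minimal sketch of a Poisson regression with an offset, not the published code.

```r
# Hypothetical input: columns n (observed SNVs), expected (t * r_i * L_i),
# and class ("missense" or "synonymous").
counts$class <- relevel(factor(counts$class), ref = "synonymous")

fit  <- glm(n ~ class, offset = log(expected), family = poisson, data = counts)
null <- glm(n ~ 1,     offset = log(expected), family = poisson, data = counts)

# With synonymous substitutions as the reference level, exp(coefficient) for
# the missense term estimates the dN/dS ratio (omega).
omega <- exp(coef(fit)["classmissense"])

# Likelihood ratio test comparing the models with and without the omega term.
anova(null, fit, test = "LRT")

# Fraction of genuine driver mutations, f = (omega - 1) / omega.
f <- (omega - 1) / omega
```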
Analyses of spatially resolved single endometrial gland sequencing. To evaluate the extent to which somatic mutations were shared between glands located in spatially separated regions of the endometrium, we compiled MAF profiles of all the mutation sites for all the glands by counting the sequence reads supporting the reference and mutant alleles with SAMtools mpileup 70. For this analysis, the reads mapped with high confidence (mapping quality > 30) were used. Then, the allele-specific counts were measured by using only high-confidence base calls (base quality > 20) at the mutation sites. We excluded sites whose MAFs in the matched blood sample exceeded 0.05. We selected informative mutations based on the following criteria: (i) a set of mutations that were shared among a group of glands at a MAF of greater than or equal to 0.10; and (ii) mutations with MAF values greater than or equal to 0.25 in at least two glands. The MAF profiles of the informative mutation sites were analyzed by hierarchical clustering analysis with the superheat R package 86.

Experimental procedures for WES and WGS. The libraries were prepared as described above. For WES, we used a hybridization capture method with the xGen Exome Research Panel v2 (Integrated DNA Technologies), in which precapture libraries from at most six samples were pooled and then hybridized in a single enrichment reaction. For the WGS of endometrial glands isolated by laser microdissection, DNA samples were repaired by using NEBNext FFPE DNA Repair Mix (New England Biolabs). The precapture libraries were prepared as described above and used for a subsequent sequencing step. The libraries were sequenced via the Illumina NovaSeq 6000 platform with a 2 × 150-bp paired-end module (Illumina).

Fig. 7 Genomic evolution of endometrial glands connected through rhizome structures. a Toluidine blue-stained images of endometrial glands from a 43-year-old woman after laser microdissection. Scale bar, 500 µm. b Evaluation of the continuity between glands by serial section images before laser microdissection. Top images, continuity between glands 1 and 6. Middle and bottom images, continuity between glands 8 and 11. Scale bar, 300 µm. c Schematic illustration of glands isolated by laser microdissection. According to their continuity, the first group (rhizome: G1, and vertical glands: G2 to G7), the second group (rhizome: G8, and vertical glands: G9 to G12), and an independent vertical gland (G13) are color-coded in red, yellow, and light blue, respectively. d Reconstructed 3D image of glands isolated by laser microdissection, based on serial section images. The color code assignment is as described in (c). e Sharing pattern of somatic SNVs based on WGS among 13 glands. Color density indicates the MAF of each somatic mutation. f Heatmap of the prevalence of representative cancer-associated gene mutations in 13 glands. Color density indicates the MAF of each somatic mutation. g Fish plots showing the clonal evolution of 11 mutant clones (A-K) within 13 glands. h Branch-based consensus clonal evolution tree of 11 mutant clones (A-K) among 13 glands. Nodes and edges correspond to mutant clones and somatic mutations accumulated during evolution between the connected clones, respectively. The identifiers of glands are indicated beside a node if the clone was observed in the corresponding glands at the time when the sample was taken. i Chronological ages at which clonal expansions occurred in mutant clones of the first and second groups of glands. The dashed line corresponds to the average proportion of public mutations to the overall mutational burden in each cluster. Source data are provided as a Source data file.

Bioinformatics pipeline for WES and WGS. In WES, we used target regions in the BED format provided by the manufacturer. The means of the average sequencing depth and the coverage at ≥10 reads for the target bases over the 30 WES data were 49.0 and 96.6%, respectively. To filter somatic mutations in WGS, we used a "universal mask" outlined in previous studies 87-89. The universal mask encompasses regions in which false-positive variants are recurrently detected. Low-complexity regions in the human reference genome (GRCh38) were determined by mdust (https://github.com/lh3/mdust). DNA satellites and low-complexity regions were based on the RepeatMasker track from the UCSC Genome Browser (https://genome.ucsc.edu/). Homopolymeric stretches (≥7 bp) were sought by seqtk (https://github.com/lh3/seqtk). The low-mappability mask was generated by following the SNPable Regions procedure (http://lh3lh3.users.sourceforge.net/snpable.shtml).
Briefly, at each position in the human reference genome (GRCh38), all possible 75-mers overlapping the position were extracted and mapped back to the reference genome with BWA. Low-mappability regions were defined as genomic positions for which 37 or fewer of the overlapping 75-mers were mapped elsewhere with at most one mismatch or gap. The means of the average sequencing depth and the coverage at ≥10 reads for the target bases over the 30 WGS data were 27.2 and 96.1%, respectively. To identify somatic variants with high confidence, we used the following criteria: (i) the empirical variant score provided by Strelka2 was greater than 13.0103; (ii) the frequencies in the abovementioned general populations were smaller than 0.001; (iii) the sequencing depth was greater than or equal to 20; (iv) eight or more reads supported the mutant allele; (v) the MAF was greater than or equal to 0.25; and (vi) the MAF in the matched blood sample did not exceed 0.05.

Detection of somatic copy number alterations. Somatic copy number alterations were sought by using FACETS based on the information about the total sequence read count and allelic imbalance in endometrial glands and the matched blood samples 90. Germline polymorphic sites were retrieved from the VCF file generated by the 1000 Genomes Project 77. The absolute value of the log odds ratio of the variant allele read count in the gland and blood pair was used as the degree of allelic imbalance. After excluding the regions affected by somatic copy number alterations, the mean of the absolute value of the log odds ratio over the germline heterozygous SNVs in the genome was calculated. Then, the mean was subtracted from each of the absolute values of the log odds ratio to normalize the data to have a mean of zero. Finally, the moving averages of the normalized absolute value of the log odds ratio over 100 heterozygous SNVs were calculated and used for the visualization.

Reconstruction of phylogenetic trees. Based on the binary mutation profile according to the presence or absence of somatic SNVs, genetic distances between endometrial glands were computed as the pairwise Hamming distance. The neighbor-joining method and the maximum parsimony method were used to reconstruct phylogenetic relationships between the endometrial glands in each subject by using the APE and phangorn packages in the R environment 91-93.

Identification of putative clonal populations. The clonal populations present in endometrial glands were explored by clustering somatic SNVs with PyClone 46. The SNVs were selected by the following criteria: (a) the depth was greater than or equal to 10 in all endometrial glands from a patient; (b) the MAF was greater than or equal to 0.25 in at least one endometrial gland; (c) the mutations did not overlap with the somatic copy number alterations detected by FACETS; and (d) SNVs with low MAF (<0.1) in multiple samples were excluded as putative false positives. In WES, we excluded TTN, MUC3A, TAS2R19, TAS2R31, EDEM2, PABPC3, and OR8U1 because the SNVs in these genes had low MAF values, and the mutant alleles were found in the matched blood sample, indicating false-positive variant calls. For the mutation sites satisfying these criteria, we compiled MAF profiles of all the mutation sites for all the glands by counting the sequence reads supporting the reference and mutant alleles as described above. The MAF profiles were used for PyClone. Then, clusters with ≥15 mutations were used for subsequent analysis.
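The gland-level selection criteria (a)-(d) above can be expressed compactly in R. The inputs below are hypothetical (`depth` and `maf` as site-by-gland matrices, `in_cna` as a logical vector of CNA overlap from FACETS), and the exact operationalization of criterion (d) is an assumption:

```r
# (a) depth >= 10 in all glands; (b) MAF >= 0.25 in at least one gland;
# (c) outside somatic copy number alterations; (d) drop sites whose MAF is
# low (0 < MAF < 0.1) in two or more glands, as putative false positives.
keep <- apply(depth >= 10, 1, all) &
        apply(maf >= 0.25, 1, any) &
        !in_cna &
        rowSums(maf > 0 & maf < 0.1) < 2

selected_sites <- which(keep)
```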
Then, the results of the SNV clustering together with their MAFs were used as the input to ClonEvol 47. The polyclonal seeding model was implemented. The number of bootstrap samples was set to 10,000.

Estimation of timing of genomic events. Motivated by previous studies 43,44, we estimated the timing of clonal expansions and somatic copy number alterations. The somatic SNVs detected in each group or cluster of glands in a subject were divided into three types: (i) public mutations shared by all glands in a group or cluster, (ii) partially shared mutations shared by a part of the glands in a group or cluster, and (iii) private mutations that were detected only in single glands. Public mutations are thought to precede the most recent clonal expansion that gave rise to a group or cluster of glands, followed by diversifying events, including partially shared mutations and subsequent private mutations. We assumed that the mutation rate for somatic SNVs differed across groups or clusters of glands but remained constant between birth and the age at sampling. By using somatic SNVs in regions that were not affected by somatic copy number alterations, the timing of the most recent clonal expansion was estimated as the age at sampling multiplied by the proportion of public mutations to the overall mutational burden in the respective group or cluster of glands. Let $m = \mathrm{MB}_{\mathrm{pub}} + \mathrm{MB}_{\mathrm{ps}} + \mathrm{MB}_{\mathrm{priv}}$ and $p = \mathrm{MB}_{\mathrm{pub}} / (\mathrm{MB}_{\mathrm{pub}} + \mathrm{MB}_{\mathrm{ps}} + \mathrm{MB}_{\mathrm{priv}})$; the 95% CI of $p$ was calculated as $p \pm 1.96\sqrt{p(1-p)/m}$.

Next, we analyzed the timing of two CN-LOH events at chromosomes 3 and 10 in the endometrium of a subject, which were detected by FACETS. We sought public mutations in the regions affected by these two events. In principle, the MAFs of public mutations that occurred before a CN-LOH are expected to be 1, whereas the MAFs of public mutations that occurred after the CN-LOH are expected to be 0.5 when the cellular fraction of the CN-LOH ($\rho$) is 1. We computed the joint probabilities of observing $d^{\mathrm{mut}}_{i,j}$ reads supporting the mutant allele in sequence depth $d_{i,j}$ at the $j$-th public SNV site ($j = 1, \ldots, l$) in the $i$-th gland ($i = 1, \ldots, n$), given that the public SNV occurred before ($P^{\mathrm{pre}}_{j}$) or after ($P^{\mathrm{post}}_{j}$) a CN-LOH, as $P^{\mathrm{pre}}_{j} = \prod_{i=1}^{n} \mathrm{Binom}(d^{\mathrm{mut}}_{i,j} \mid d_{i,j},\, \rho_i + (1-\rho_i)/2)$ and $P^{\mathrm{post}}_{j} = \prod_{i=1}^{n} \mathrm{Binom}(d^{\mathrm{mut}}_{i,j} \mid d_{i,j},\, 1/2)$. The value of $\rho_i$ was estimated for each gland by FACETS. Then, the relative ratios of these joint probabilities were used as the weights for the $j$-th public SNV: $w_j = P^{\mathrm{pre}}_{j} / (P^{\mathrm{pre}}_{j} + P^{\mathrm{post}}_{j})$. In order to estimate the timing of the CN-LOH and its 95% CI by incorporating the uncertainty of the assignments of the SNVs to before or after the CN-LOH event, we conducted a simple simulation study with 10,000 iterations. By generating random numbers between 0 and 1, we assigned whether each of the variants occurred before or after the CN-LOH event based on these probabilities; the mutational burdens before and after the CN-LOH were $\mathrm{MB}_{\mathrm{pre}} = 2 \times$ (the number of somatic SNVs that occurred before the CN-LOH) and $\mathrm{MB}_{\mathrm{post}}$ = (the number of somatic SNVs that occurred after the CN-LOH), respectively. Somatic SNVs that occurred before the CN-LOH resided on the retained allele, and somatic SNVs on the other allele were lost during the CN-LOH. This indicates that the number of somatic SNVs that occurred before the event is expected to have decreased by half. Therefore, we doubled the number of SNVs that occurred before the CN-LOH. The proportion $\mathrm{MB}_{\mathrm{pre}} / (\mathrm{MB}_{\mathrm{pre}} + \mathrm{MB}_{\mathrm{post}})$ and its variance were calculated for each iteration. Let $m = \mathrm{MB}_{\mathrm{pre}} + \mathrm{MB}_{\mathrm{post}}$ and $p = \mathrm{MB}_{\mathrm{pre}} / (\mathrm{MB}_{\mathrm{pre}} + \mathrm{MB}_{\mathrm{post}})$; the variance of $p$ was calculated as $p(1-p)/m$.
Next, we estimated the timing of the CN-LOH as follows. Let $X = \mathrm{MB}_{\mathrm{pub}} / (\mathrm{MB}_{\mathrm{pub}} + \mathrm{MB}_{\mathrm{ps}} + \mathrm{MB}_{\mathrm{priv}})$ and $Y = \mathrm{MB}_{\mathrm{pre}} / (\mathrm{MB}_{\mathrm{pre}} + \mathrm{MB}_{\mathrm{post}})$; treating $X$ and $Y$ as independent, the variance of $XY$ can be written as $\mathrm{Var}(XY) \approx X^2\,\mathrm{Var}(Y) + Y^2\,\mathrm{Var}(X)$. Based on this variance, we calculated the 95% upper and lower limits of $T_{\mathrm{CNLOH}}$ (the age at sampling multiplied by $XY$) in each iteration. After the simulation study, the means over the 10,000 iterations were calculated as the estimate for $T_{\mathrm{CNLOH}}$ and its 95% CI.
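The clonal-expansion timing estimate reconstructed above reduces to a one-line calculation with a binomial confidence interval. A minimal sketch in R follows; the mutation counts in the example call are toy values, not data from the study:

```r
# T = age_at_sampling * p, where p = MB_pub / (MB_pub + MB_ps + MB_priv),
# with a normal-approximation 95% CI for the binomial proportion p.
expansion_timing <- function(age, mb_pub, mb_ps, mb_priv) {
  m  <- mb_pub + mb_ps + mb_priv
  p  <- mb_pub / m
  se <- sqrt(p * (1 - p) / m)
  c(estimate = age * p,
    lower    = age * (p - 1.96 * se),
    upper    = age * (p + 1.96 * se))
}

expansion_timing(age = 43, mb_pub = 120, mb_ps = 40, mb_priv = 80)  # toy values
```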
Whole-mount 3D staining of endometrial tissue. For the whole-mount 3D staining of human endometrial tissue, we used the updated clear, unobstructed brain/body imaging cocktails and computational analysis (CUBIC) protocols described in our previous studies 31,45. Briefly, endometrial blocks (55 to 635 mm³) were stored in formalin until use. The tissue blocks were washed with PBS for 6 h before clearing. Then, the tissue blocks were immersed in CUBIC-L [T3740 (mixture of 10 wt% N-butyldiethanolamine and 10 wt% Triton X-100), Tokyo Chemical Industry] with shaking at 45°C for 6 days. During delipidation, the CUBIC-L was refreshed once. After the samples were washed with PBS for several hours, the tissue blocks were placed in 1 ml of immunostaining buffer (mixture of PBS, 0.5% Triton X-100, 0.25% casein, and 0.01% NaN3) containing 1:100 diluted Alexa 555-conjugated CK7 antibody (ab203434, Abcam) for 14 days at room temperature with gentle shaking. After washing again with PBS for several hours, the samples were subjected to postfixation with 1% PFA in 0.1 M PB at room temperature for 5 h with gentle shaking. The tissue samples were immersed in 1:1 diluted CUBIC-R+ [T3741 (mixture of 45 wt% 2,3-dimethyl-1-phenyl-5-pyrazolone, 30 wt% nicotinamide and 5 wt% N-butyldiethanolamine), Tokyo Chemical Industry] with gentle shaking at room temperature for 1 day. The tissue samples were then immersed in CUBIC-R+ with gentle shaking at room temperature for 2 days.

Microscopy. Macroscopic whole-mount images were acquired with a light-sheet fluorescence (LSF) microscope (MVX10-LS, Olympus) using a ×0.63 objective lens [numerical aperture = 0.15, working distance = 87 mm] with digital zoom ×3.2. Voxel resolution was set as follows: x = 3.27 µm, y = 3.27 µm, and z = 5.0 µm. The LSF microscope was equipped with lasers emitting at 488 nm and 532 nm. When the stage was moved in the axial direction, the detection objective lens was synchronously moved in the axial direction to prevent defocusing. The Alexa 555 signals of CK7-expressing endometrial epithelial cells were measured by excitation with the 532 nm laser. Autofluorescence was measured by excitation at 488 nm.

3D image analysis. All raw image data were collected in lossless 16-bit TIFF format. All CK7 fluorescence images were obtained by subtracting the background and applying an unsharp mask using Fiji software 94. Three-dimensionally rendered images were visualized, captured and analyzed with Imaris software (Version 9.5.1, Bitplane). The image analysis by Imaris software was performed as described in our previous study 31. Briefly, TIFF files were imported in the Surpass mode of Imaris. The reconstituted 3D images were cropped to a region of interest using the 3D Crop function. Using the channel arithmetic function, the CK7 signal was removed from the autofluorescence signal to create a channel with only endometrial epithelium and gland signals. To identify glandular continuity, we observed the shapes of glands using continuous tomographic images from the XY, XZ, and YZ planes. Then, we selected one unit of the glands sharing the rhizome. The glands sharing the rhizome were traced manually, and their 3D structures were reconstructed by the Surface module. The 3D surface object was pseudocolored and separated into a new channel. Thus, the glands sharing the rhizome were visualized. To calculate the surface area of the endometrium occupied by the openings of the glands sharing a rhizome, the XYZ coordinates of the tips of the glands of the 3D surface objects were measured using the Measurement Point module. We formed triangles by selecting three points of the tips of the glands so that the resulting triangles did not overlap with each other. Then, the surface area was calculated as the sum of the areas of the triangles. Among the surface areas for all possible combinations of the triangles, the smallest one was selected. Because subject No. 40 had a large number of glands sharing the rhizome, six glands located on the outside edge were selected. The snapshot and animation functions were used to capture images and videos, respectively.

3D modeling of laser-microdissected glands. To create the 3D model of laser-microdissected glands, 62 serial cryosection images taken before laser microdissection were imported into Photoshop 2020 (Adobe). The first image in the image stack was selected as a reference image and used to align subsequent images. Thus, all images were aligned to their neighbors. The shapes of endometrial glands were drawn manually in new layers with a transparent background on each 2D serial cryosection image, and these layers were exported individually as TIFF images. Three-dimensionally rendered images were visualized and captured with Imaris software (Version 9.5.1, Bitplane) as mentioned above. Briefly, TIFF files were imported in the Surpass mode of Imaris. Using the channel arithmetic function, the signals of the depicted glands were distinguished from the background and used for 3D reconstruction by the Surface module. The snapshot and animation functions were used to capture images and videos, respectively.
A review: iron and nutrient supply in the subarctic Pacific and its impact on phytoplankton production

One of the most important breakthroughs in oceanography in the last 30 years was the discovery that iron (Fe) controls biological production as a micronutrient, and our understanding of Fe and nutrient biogeochemical dynamics in the ocean has significantly advanced. In this review, we look back on both previous and updated knowledge of the natural Fe supply processes and nutrient dynamics in the subarctic Pacific and their impact on biological production. Although atmospheric dust has been considered to be the most important source of Fe affecting biological production in the subarctic Pacific, other oceanic sources of Fe have been discovered. We propose a coherent explanation for the biological response in subarctic Pacific high nutrient low chlorophyll (HNLC) waters that incorporates knowledge of both the atmospheric and the oceanic Fe supplies. Finally, we extract future directions for Fe oceanographic research in the subarctic Pacific and summarize the uncertain issues identified thus far.

Iron study in the subarctic Pacific

The subarctic Pacific (Fig. 1) is located at the end of the global ocean conveyor belt circulation. A correct understanding of the biogeochemical dynamics in the region is critical for clarifying the global carbon cycle. It has been well known since the 1980s that there are significant differences in the seasonal cycles of the production processes at the lower trophic level between the subarctic Pacific and the subarctic Atlantic. Parsons and Lalli (1988) indicated that there are clear spring phytoplankton blooms in the subarctic Atlantic, and the seasonal cycles of primary productivity and phytoplankton growth in this area are limited by the depth of mixing (light availability) in spring and nitrate depletion in late summer. In contrast, it was believed that the seasonal cycle of primary productivity in the subarctic Pacific is controlled by low temperature and macro- and microzooplankton grazing in the spring and summer and possibly by an unidentified nutrient (Parsons and Lalli 1988). That is, in the subarctic Pacific, surface waters have much lower biomass at the lower trophic levels than would be expected based on the macronutrient (i.e., nitrate and phosphate) concentrations, similar to the Southern Ocean and the eastern equatorial Pacific Ocean. Oceanographers call these areas "high nutrient low chlorophyll (HNLC) regions". This unexpected result was one of the main issues in oceanography, and oceanographers debated why very large areas with high nutrients appeared in the ocean throughout the year. In the last three decades, studies to understand the reason for the formation of HNLC water in the global ocean have significantly progressed, and the key to this advancement was understanding "iron (Fe) as a micronutrient" (Martin et al. 1991). Many scientific programs have studied Fe in the ocean and atmosphere since the 1980s (Fig. 2). The scientific improvements gained from these Fe studies were recently well reviewed in Stoll (2020) and Coale et al. (2015), while Takeda (2011) reviewed the research from the subarctic Pacific. Here, we briefly explain the progress of Fe studies in the ocean. First, Hart (1934, 1942) proposed the idea that phytoplankton growth is limited by Fe availability in the modern ocean.
The Geochemical Ocean Sections Study (GEOSECS) project was conducted from 1976 to 1979, and we obtained the first comprehensive global dataset on the distribution of chemical parameters in the ocean (Craig and Turekian 1980; Moore 1984). However, oceanographers could not correctly measure key trace metals, including Fe, in seawater samples because of contamination problems (Moore 1984). Therefore, at that time, the importance of Fe as a limiting micronutrient had not been well described. In the 1980s, John H. Martin, Director of the Moss Landing Marine Laboratory, and his group first determined dissolved Fe concentrations in oceanic seawater by using a rigorous clean sampling technique (Bruland et al. 1979) and reported that the dissolved Fe in the eastern subarctic Pacific had nutrient-like vertical profiles and that the surface concentrations seemed to be low enough to limit phytoplankton growth (Gordon et al. 1982; Martin et al. 1989). Martin's group conducted onboard bottle incubation experiments, controlling for extremely low Fe concentrations using a clean technique, and these experimental results demonstrated significant phytoplankton growth upon the addition of Fe to HNLC surface waters (e.g., Martin and Fitzwater 1988; Martin et al. 1989, 1990). Then, as part of his "Iron hypothesis" published in Paleoceanography (Martin 1990), he first argued that Fe is another nutrient that limits biological production at the lower trophic level, and he claimed that surface phytoplankton growth in HNLC waters, i.e., in the Southern Ocean around Antarctica, the eastern equatorial Pacific and the subarctic Pacific, is limited by Fe availability. After that, oceanographers accelerated the debate on "what controls phytoplankton production in the HNLC area" (Chisholm and Morel 1991). Some oceanographers claimed that "HNLC water is caused by natural overgrazing of algae by zooplankton" (Cullen 1991). To understand "the roles of Fe in regulating biological processes in the marine environment", it was necessary to clarify the response of the entire planktonic ecosystem community to Fe enrichment. To tackle this issue, Watson et al. (1991) proposed the "in situ mesoscale experiment", which artificially manipulated the Fe concentration in seawater to investigate the response of the whole phytoplankton ecosystem to in situ Fe additions to surface water. These experiments were called the "mesoscale iron fertilization experiment (IFE)", and they have been performed more than 13 times in HNLC waters worldwide (Martin et al. 2013). Most of these experiments demonstrated that Fe availability strongly influences phytoplankton growth and carbon and nutrient biogeochemistry in all HNLC regions (de Baar et al. 2005; Boyd et al. 2007). In the subarctic Pacific, three IFEs (SEEDS, SEEDSII, SERIES) have also been conducted (Figs. 1, 2), and the results from these experiments have clearly revealed that phytoplankton biomass is limited by Fe availability in both the western and eastern subarctic Pacific (Boyd et al. 2004).

Fig. 1 Map of the subarctic Pacific with general circulations and gyres. "WSG" indicates the western subarctic gyre, and "AG" indicates the Alaskan Gyre. The currents and circulations are drawn by referencing Nagata et al. (1992), Harrison et al. (1999), Ohshima et al. (2002), Stabeno et al. (1999) and Hunt et al. (2010). Harrison et al. (1999) indicated that the Subarctic Boundary (Read and Laird 1977) separates the subarctic Pacific region to the north
One of the IFEs also demonstrated the simultaneous occurrence of Fe limitation and grazing control on phytoplankton responses in the western subarctic Pacific. In modern oceanography, it has become common knowledge that Fe is an essential nutrient that plays an important role in the control of phytoplankton growth and oceanic biogeochemistry. Since this discovery, many oceanographers have been dedicated to investigating the biogeochemical cycles of Fe in the natural environment. In the last three decades, backed by Martin's Fe hypothesis, marine geochemists have also made significant progress in the sampling and analytical techniques used to study the biogeochemistry of trace metals in the ocean (Boyd and Ellwood 2010; Anderson 2020). Our ability to study trace metals in seawater has been largely improved by contamination-free sampling techniques (clean techniques) (e.g., Patterson 1965; Boyle and Edmond 1975; Bruland et al. 1979; de Baar et al. 2008; Measures et al. 2008; Cutter and Bruland 2012) and by the increased sensitivity, accuracy and precision of analytical methods (e.g., Landing and Bruland 1987; Obata et al. 1993; Measures et al. 1995; Bowie et al. 1998; Wu and Boyle 2002; Bruland and Rue 2001; Sohrin et al. 2008; Sohrin and Bruland 2011). These advances have made it possible to measure Fe concentrations in seawater over a broad range in the field (e.g., Bruland et al. 1994). Oceanographers called the decade of the 1990s the "Iron age in oceanography" (Coale et al. 1999), and they realized that determining the distribution of Fe in the global ocean, including the processes involved in biogeochemical cycles, was important for understanding the biological production of the ocean and its impact on the carbon cycle and climate (Coale et al. 1999). Oceanographers discussed "what controls dissolved Fe concentrations in the world ocean?" (Johnson et al. 1997). To answer this question, a comprehensive dataset of Fe distributions in the global ocean is necessary. This became one motivation for conducting basin-scale transect observations to investigate the marine biogeochemical cycles of trace metals and their isotopes. An international GEOTRACES program (GEOTRACES Planning Group 2006) was launched and is ongoing worldwide by marine geochemists (Anderson 2020) (Fig. 2). The Japanese-GEOTRACES program has conducted three extensive transect observations of trace metals, including dissolved Fe, in the Pacific Ocean, and two of these have been conducted across the subarctic Pacific from west to east (Kim et al. 2017; Zheng et al. 2017; Zheng and Sohrin 2019; Nishioka et al. 2020). These studies clearly revealed the distribution of dissolved Fe in the subarctic Pacific, and the results improved our knowledge of the biogeochemical cycle of Fe and its roles in phytoplankton ecosystems in that area.

Fig. 2 legend (fragment): ... (Duce 1989), GEOSECS (Craig and Turekian 1980), Fe hypothesis (Martin 1990), GEOTRACES (Anderson 2020), OPES (Takeda et al. 1999), SEEDS, SERIES (Boyd et al. 2004), SEEDSII, W-PASS (Uematsu et al. 2014), Amur-Okhotsk project (Shiraiwa et al. 2012), and OMIX project (this study)

Physical water structure and phytoplankton production in the subarctic Pacific

At the depth of the wind-driven circulation, the subarctic Pacific consists of the western subarctic gyre (WSG) and the Alaskan gyre (AG) (Nagata et al. 1992; Harrison et al. 1999) (Fig. 1). The WSG is formed by the East Kamchatka Current (EKC), the Oyashio Current and the Subarctic Current (Nagata et al. 1992).
The Oyashio Current and the EKC, which are western boundary currents, are strongly influenced by water discharged from the Bering Sea and the Sea of Okhotsk (Nagata et al. 1992), and this discharged water has cold, fresh, nutrient-rich characteristics. The Kuroshio, as the western boundary current of the subtropical North Pacific, transports warm, saline, nutrient-poor water into the midlatitudes of the North Pacific. On the other hand, the AG is a large cyclonic upwelling gyre that is formed by the eastward Subarctic Current and the westward Alaskan Stream (AS), which recirculates near 170°W into the Subarctic Current (Bograd et al. 1999) (Fig. 1). To the east, the gyre is bounded by the Alaska Current (AC) along the continental slope of Alaska (Favorite et al. 1976; Bograd et al. 1999). These two gyres are connected by the eastward Subarctic Current and the westward AS (Fig. 1). Comparisons of biogeochemical properties between the two subarctic gyres have been made in previous studies and have been reviewed by Harrison et al. (1999, 2004). They compared the biology and biogeochemistry of the two gyres and indicated that despite their general similarities, such as Fe deficiency, there were clear differences between the two gyres, especially at the lower trophic level. In the WSG, the seasonal amplitudes of biogeochemical parameters, e.g., nutrient concentrations and pCO2, were greater than those in the AG (Shiomoto et al. 1998; Shiomoto and Asami 1999; Harrison et al. 1999, 2004), even though both gyres have HNLC characteristics. Time series studies in both gyres (Whitney and Freeland 1999; Tsurushima et al. 2002) and basin-scale data-mapping analyses (Whitney 2011; Yasunaka et al. 2014, 2020) have demonstrated that higher nutrient concentrations accumulate in the surface water of the WSG than in that of the AG in winter. Yasunaka et al. (2014) and Yasunaka et al. (2020) indicated that higher seasonal drawdowns of nitrate, phosphate, silicate and dissolved inorganic carbon concentrations occurred in the Oyashio and Oyashio-Kuroshio transition zones than in the AG. Moreover, the effect of biological drawdown on the seasonal amplitude of pCO2 in the surface water was higher in the WSG than in the AG (Takahashi et al. 2002; Chierici et al. 2006). A higher photochemical quantum yield (Fv/Fm), which responds to the Fe supply (Fujiki et al. 2014), and higher chlorophyll a concentrations (1-2 mg/m³) (e.g., Shiomoto et al. 1998; Imai et al. 2002) were observed in the WSG waters than in the AG waters. In addition, a massive spring phytoplankton bloom was observed every year in the Oyashio waters at the southwestern edge of the WSG, and this massive bloom had significantly higher chlorophyll a concentrations (over 5 mg/m³) than those in the WSG waters (Yoshie et al. 2010; Okamoto et al. 2010; Sugie et al. 2010; Hattori-Saito et al. 2010; Shiozaki et al. 2014; Isada et al. 2010, 2019; Kuroda et al. 2019). The increase in phytoplankton biomass was mainly caused by diatoms (Fujiki et al. 2014), and this increase led to an effective biological pump for the extensive transport of organic carbon to the deep ocean (Buesseler et al. 2007; Honda 2003; Kawakami et al. 2015). Therefore, the WSG is an important region in the global carbon cycle (Longhurst et al. 1995; Honda 2003; Schlitzer 2004; Buesseler et al. 2007; Boyd et al. 2008).
Similar to the Oyashio region, massive spring phytoplankton blooms were also observed around the coastal boundary currents, the AS and the AC, at the edge of the AG (Whitney et al. 2005; Henson 2007). In contrast to the WSG, blooms in the AS and AC are basically confined to the nearshore area because the boundary current passes along the eastern side of the subarctic cyclonic gyre along the west coasts of Canada and Alaska. Differences in the Fe supply between the WSG and AG are needed to explain the eastward decrease in the seasonal amplitude of biogeochemical parameters in the subarctic Pacific. The high primary productivity of the western subarctic region should be fueled by supplies of Fe into the euphotic zone. The wealthy fisheries (Sakurai 2007) in the subarctic region should be sustained by this higher primary productivity, which is caused by the Fe supply.

The OMIX project and biogeochemical study in the project

In July 2015, the Ocean Mixing Processes (OMIX, PI: I. Yasuda) project was launched as a Japanese oceanographic scientific program and continued until March 2020. The aim of the project was to develop an efficient system to observe ocean diapycnal mixing together with biogeochemical parameters, which can quantify the maintenance mechanism of deep water circulation and biogeochemical cycles in the North Pacific. Diapycnal mixing is a fundamental physical process that regulates the ocean's vertical circulation of water, nutrients, carbon and heat. One of the goals of the program was to understand "why the western North Pacific has one of the largest seasonal amplitudes of biogeochemical parameters, such as the biological CO2 drawdown (Takahashi et al. 2002, 2009), among the world oceans". To achieve this goal, we addressed "the natural Fe supply processes and nutrient dynamics in the subarctic Pacific". Our understanding of Fe and nutrient biogeochemical dynamics in the subarctic Pacific advanced significantly in this decade, which included the OMIX project era. In addition, the OMIX project promoted collaborative research with global international research programs, such as GEOTRACES (https://www.geotraces.org) and SOLAS (https://www.solas-int.org). The following points (1) and (2) were the remaining major issues related to our understanding of Fe supply processes and nutrient dynamics relevant to biological production in the subarctic Pacific when we started the OMIX project.

1. What are the Fe sources by which seasonal biogeochemical variability is controlled in the subarctic Pacific? Atmospheric dust, which had been well studied prior to interior oceanic Fe (e.g., the SEAREX project (Duce 1989), Fig. 2), was once thought to be the major source of Fe in many oceanic regions (e.g., Duce and Tindale 1991; Jickells et al. 2005). In addition to atmospheric dust deposition, recent trace metal measurements have highlighted the importance of other sources of external Fe, such as river discharge, shelf sediment load, hydrothermal input and sea ice melting (e.g., Johnson et al. 1999; Elrod et al. 2004; Boyd and Ellwood 2010; Conway and John 2014; Tagliabue et al. 2014, 2017; Resing et al. 2015; Lam et al. 2006, 2012; Nishioka et al. 2007; Lam and Bishop 2008; Lannuzel et al. 2007; Kanna et al. 2014). In the North Pacific, the sedimentary Fe supplied from continental shelves, which is transported by intermediate water circulations, has been highlighted in recent studies (e.g., Lam et al. 2006; Nishioka et al. 2007).
Knowledge of the Fe supply processes in the subarctic Pacific was partly compiled and reviewed by Takeda (2011); however, the progress over the last 10 years in our understanding of the Fe supply processes that control biological production in nutrient-rich waters has been remarkable.

2. Why are there high-nutrient waters at the surface of the subarctic Pacific?

In the subarctic Pacific, as described above, high-nutrient surface water exists, and nutrients also accumulate in old deep water (Broecker et al. 1982; Matsumoto 2007; Matsumoto and Key 2004). In previous 14C observations in the North Pacific, the oldest water was clearly observed at depths of approximately 2000-2500 m, and the deep water returned southward below the intermediate water (Broecker et al. 1982; Kawabe and Fujio 2010). These facts indicate that high-nutrient deep water does not directly affect the surface layer in the subarctic Pacific. Previous studies implied that the path by which nutrients return to the surface exists in the northwest corner of the Pacific around the Kuril Island chain (Sarmiento and Gruber 2006; Nishioka et al. 2013). However, the processes by which high-nutrient waters are maintained at the subarctic Pacific surface are not understood due to a lack of knowledge of the whole and detailed mechanisms by which nutrients return to the surface layer. In this review, we look back on both previous and updated knowledge, including new findings from the OMIX project, and describe reasonable explanations of the natural Fe supply processes and macronutrient dynamics in the subarctic Pacific. Then, we discuss the impact of the natural Fe supply on biological production.

Definition of Fe terminology in seawater in this review

As described in Sect. 1, the development of trace metal clean techniques and sensitive analytical methods has led to increasing amounts of Fe data in seawater over the past three decades. Bruland and Rue (2001) reviewed the detailed steps for the determination of Fe and the operational definition of Fe in seawater. In most recent measurements of Fe in seawater, conventional filtration to define the traditional categories of "dissolved" (< 0.2 μm) and "particulate" (> 0.2 μm) fractions is employed as the first physical separation step performed prior to any further chemical analyses. The unfiltered sample and the filtrate are acidified (pH ~ 2) by adding ultraclean acid before analysis, and the samples are then measured by various analytical methods (e.g., Landing and Bruland 1987; Obata et al. 1993; Measures et al. 1995; Bowie et al. 1998; Wu and Boyle 2002; Bruland and Rue 2001; Sohrin et al. 2008; Sohrin and Bruland 2011). In this review, we use the terms "dissolved Fe", "total dissolvable Fe", "labile particulate Fe" and "bioavailable Fe". The definition of "dissolved Fe" in this review is leachable and detectable Fe in the "dissolved" fraction (in filtrate < 0.2 μm) at pH ~ 2 (Fig. 3). The definition of "total dissolvable Fe" in this review is leachable and detectable Fe in unfiltered seawater at pH ~ 2, which includes "dissolved Fe" and "labile particulate Fe" (Fig. 3). The "labile particulate Fe" is leachable Fe in the "particulate" fraction (> 0.2 μm) at pH ~ 2 (Fig. 3). Some studies conducted additional ultrafiltration to determine the "soluble Fe" and "colloidal Fe" fractions by using smaller pore-size filters (0.02 μm, 0.03 μm, 200 kDa, 1000 kDa; e.g., Kuma et al. 1996; Nishioka et al. 2001; Wu and Boyle 2001; Fitzsimmons and Boyle 2014; Fitzsimmons et al. 2015) before acidifying the sample, and the size-fractionated Fe in the "dissolved" fraction revealed that there are significant portions of "colloidal Fe" [0.02 (0.03) μm-0.2 μm, or 200 (1000) kDa-0.2 μm], which can be separated from "soluble Fe" [< 0.02 (0.03) μm, or < 200 (1000) kDa]. We used this definition of "colloidal Fe" in this review (Fig. 3). We also use the term "bioavailable Fe" for Fe that can be taken up by phytoplankton in seawater.
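To make these operational definitions concrete, the following minimal Python sketch (not part of the original studies; all concentration values are hypothetical) shows how the fractions defined above relate to one another by simple difference.

# Minimal bookkeeping sketch of the operational Fe fractions defined above.
# All concentrations are hypothetical illustrative values in nM; the pore
# sizes follow the conventional 0.2 um and 0.02 um (ultrafiltration) cuts.

total_dissolvable_fe = 1.20  # leachable Fe in unfiltered seawater at pH ~ 2
dissolved_fe = 0.45          # leachable Fe in the < 0.2 um filtrate at pH ~ 2
soluble_fe = 0.15            # leachable Fe in the < 0.02 um ultrafiltrate

# The remaining fractions follow by difference from the definitions above.
labile_particulate_fe = total_dissolvable_fe - dissolved_fe  # > 0.2 um fraction
colloidal_fe = dissolved_fe - soluble_fe                     # 0.02-0.2 um fraction

print(f"labile particulate Fe: {labile_particulate_fe:.2f} nM")
print(f"colloidal Fe: {colloidal_fe:.2f} nM")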
Fe distribution in the North Pacific

Data on Fe from the North Pacific collected during the last decade, mostly in the summer season, clearly show significant progress in our knowledge of Fe sources and distribution. Anderson (2020) reviewed and highlighted the findings of the first decade of the international GEOTRACES program, indicating that "an unexpected finding is the widespread plumes of elevated dissolved Fe concentrations spreading seaward from continental slopes in the marginal seas in the North Pacific". In the last decade, the Japanese GEOTRACES program conducted extensive transect observations of dissolved Fe in the Pacific Ocean, and two of these (conducted from August-October 2012 and June-August 2017 by the R/V Hakuho Maru, KH-12-4 and KH-17-3 cruises) crossed the subarctic Pacific, covering the full west-east section of the subarctic Pacific along 47°N (GEOTRACES line ID: GP02). The results of the basin-scale zonal high-resolution transect profile of trace metals along 47°N have been reported in several published papers (Kim et al. 2017; Nishioka and Obata 2017; Zheng and Sohrin 2019; Nishioka et al. 2020) and provide a comprehensive picture of the dissolved Fe distribution (Zheng and Sohrin 2019; Nishioka et al. 2020) (Fig. 4a). The observations showed very low dissolved Fe concentrations ranging from 0.05 to 0.12 nM in surface water (Fig. 4b), with a significant amount of macronutrients, throughout the transect. These studies confirmed that the surface waters in both gyres were typical HNLC waters, as indicated by previous studies in the AG (Martin and Gordon 1988; Martin et al. 1989). Prior to the GEOTRACES observations, Lam et al. (2006) first reported that the eastern side of the subarctic Pacific has a continental shelf source of dissolved Fe along the AS, and water with high concentrations of dissolved Fe was observed along the eastern and northern edges of the AG in several studies (Cullen et al. 2009; Wu et al. 2009; Zheng and Sohrin 2019; Nishioka and Obata 2017; Nishioka et al. 2020). This water with high concentrations of dissolved Fe in the AC and the AS was confined to the nearshore area on the west coasts of Canada and Alaska (Fig. 4c) because the boundary currents (the AC and AS) pass along the coast. Several previous studies also indicated that at the western edge of the AG, the Fe-rich waters were transported by the AS, which detached from the Alaskan margin toward the center of the AG at approximately 165°-170°W (Martin et al. 1989; Lam et al. 2006; Takata et al. 2006; Lippiatt et al. 2010; Nishioka and Obata 2017; Zheng and Sohrin 2019; Nishioka et al. 2020). On the other hand, at the western edge of the WSG, the dissolved Fe concentration was well studied in the Oyashio water and reported by Nakayama et al. (2010) and Nishioka et al. (2011). Both studies indicated that surface dissolved Fe concentrations were controlled by the Coastal Oyashio water (COW) distribution.
The COW had higher dissolved Fe concentrations (up to ~ 3 nM) than the other Oyashio waters, which had moderate dissolved Fe concentrations (0.3-0.5 nM) (Nishioka et al. 2011). In the WSG, the most prominent feature of the dissolved Fe distribution is the existence of water masses with high dissolved Fe concentrations in the subsurface to intermediate layers, as reported by Zheng and Sohrin (2019) and Nishioka et al. (2020) (Fig. 4c, d). From these observational data, it was clear that water masses with high dissolved Fe concentrations existed from the bottom of the surface mixed layer to the intermediate layer across the WSG over 2000 km (Fig. 4c, d). Judging from the horizontal dissolved Fe distributions at the isopycnal surface, the Fe-rich intermediate water likely reflected the location of the Fe source region, the direction of the intermediate water circulation, the strong western boundary current, and long-distance lateral Fe transport within the North Pacific (Fig. 7a, b) (we discuss the "long-distance lateral Fe transport" in Sect. 2.3.3). Zheng and Sohrin (2019) determined the importance of lithogenic particulate Fe for understanding the distribution and budget of Fe in the North Pacific Ocean. Based on their measurements of Fe, Al and Mn, they demonstrated that waters near the continental shelf and slope had high labile particulate Fe concentrations. They inferred that boundary scavenging occurred for labile particulate Fe within 500 km off the Aleutian shelf. They also reported that dissolved Fe was enhanced in a depth range of 400-2000 m at the center of the WSG. They further suggested, with their Mn data, that continental shelf and slope sediments produced the observed unique distribution of dissolved Fe, and they claimed that the maxima in the intermediate water were a general feature of dissolved Fe because the feature was also observed at the eastern edge of the AG along the west coast of Canada. Zheng et al. (2017) also observed continental margin and hydrothermal sources of Fe around the Juan de Fuca Ridge on the eastern side of the Pacific near the west coast of Canada. The influence of Fe from the continental margin appeared from the surface to a depth of ~ 2000 m. Iron from a high-temperature hydrothermal plume was observed at a depth of 2300 m off the west coast of Vancouver Island. In the GEOTRACES program, other trace metals have also been measured, and these results helped advance our understanding of the Fe biogeochemical cycle in the subarctic Pacific (Kim et al. 2017).

Fe transport from subpolar marginal seas

The study of the subpolar marginal seas (the Sea of Okhotsk and the Bering Sea) is indispensable for understanding the intermediate water circulation in the North Pacific. Compared with open ocean areas, the marginal seas have high productivity and active biogeochemical cycling, which are controlled by separate local processes, such as freshwater discharge, interior current systems, tidal mixing, local upwelling, interactions with the continental shelf, and sea ice production and melting. Water exchange occurs between the North Pacific and the marginal seas through straits. These marginal seas have been shown to have a strong influence on physical and biogeochemical processes in the North Pacific. Therefore, it is important to describe the role of the marginal seas in linking with the outer oceanic regions to clarify the whole North Pacific biogeochemical system. Following previous projects, such as CREST (1996-2001) and the Amur-Okhotsk project (2004-2009), the OMIX project (Fig. 2)
continued collaborative research with the Russian scientist Dr. Y. N. Volkov, Director of the Far Eastern Regional Hydrometeorological Research Institute (FERHRI), and conducted joint observational research expeditions in the Sea of Okhotsk, around the EKC, and in the western part of the Bering Sea. The dataset from the collaborative cruises, together with the data from the Japanese research cruises, allowed us to collectively create a borderless dataset for the North Pacific, including the marginal seas.

The Sea of Okhotsk

The Sea of Okhotsk is a marginal sea located on the northwest rim of the Pacific Ocean. The water circulation and ventilation system in this region was well studied by physical oceanographers in 1997-2002 during the CREST project (PI: M. Wakatsuchi; e.g., Ohshima and Martin 2004). After that, building on the physical knowledge obtained, biogeochemical studies were conducted and continued from the Amur-Okhotsk project (2005-2010) (PI: T. Shiraiwa; e.g., Shiraiwa et al. 2012; Nishioka et al. 2014a) to the OMIX project. We provide an overview of the findings relevant to Fe and nutrient dynamics in this review. The Sea of Okhotsk is the seasonal sea ice area at the lowest latitude in the world (Alfultis and Martin 1987; Kimura and Wakatsuchi 2000). Every winter, large amounts of sea ice are produced along the Siberian coast on the northwestern continental shelf of the Sea of Okhotsk as a result of the cold winter winds that blow from East Siberia coupled with the freshwater discharge from the Amur River. Nagao et al. (2007) reported that water from the Amur River in summer contained approximately 5.3 μM dissolved Fe, which was four to five orders of magnitude greater than that in coastal seawater. Nishioka et al. (2014b) found that a large amount of the dissolved Fe in Amur River waters was lost (more than 99%) through estuarine processes, such as flocculation and precipitation. However, dissolved Fe, which was probably bound to organic ligands, remained at the surface and was transported by the southward boundary current. Ohshima et al. (2002) conducted near-surface observations with ARGOS drifters and revealed the existence of a southward boundary current, the East Sakhalin Current (ESC) (Suzuki et al. 2014). During the winter, the formation of sea ice produces a large volume of cold brine. The brine subsequently settles on the bottom of the northwestern continental shelf and forms dense shelf water (DSW; 26.8-27.0 σθ) (Kitani 1973; Nagata et al. 1992; Martin et al. 1998; Gladyshev et al. 2000). Nakatsuka et al. (2002, 2004) found that the DSW had an extremely high turbidity caused by sediment resuspension from the bottom. After that, Nishioka et al. (2013, 2014b) showed vertical profiles of dissolved Fe and total dissolvable Fe (including labile particulate Fe) concentrations in the northwestern continental shelf area in late summer and reported that extremely high Fe concentrations were observed in the DSW layer with high turbidity and negative-temperature waters. By using the N* index (calculated from the nitrate and phosphate concentrations; Gruber and Sarmiento 1997; Yoshikawa et al. 2006), which behaves as a suitable tracer for the influence of hypoxic bottom sediment, Nishioka et al. (2014b) concluded that Fe was introduced to the DSW by sediment resuspension (Fig. 8a). The DSW-sourced sedimentary Fe was then transported by the intermediate water ventilation system in the Sea of Okhotsk and the North Pacific (Fig. 7a).
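As an illustration of how the N* tracer flags sediment-influenced water, the short sketch below uses the commonly quoted linear form N* ≈ NO3 - 16·PO4 + 2.9 μmol/kg (after Gruber and Sarmiento 1997); the exact coefficients differ slightly among studies, and the sample values here are hypothetical.

def n_star(nitrate, phosphate):
    """N* tracer (umol/kg), a commonly used linearization after
    Gruber and Sarmiento (1997). Negative anomalies flag nitrate loss,
    e.g., denitrification in hypoxic shelf sediments as discussed above."""
    return nitrate - 16.0 * phosphate + 2.9

# Hypothetical DSW sample influenced by reducing shelf sediments:
print(n_star(nitrate=20.0, phosphate=1.8))  # -5.9 -> sediment-influenced signal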
Fukamachi et al. (2004) reported that the region off the east coast of Sakhalin is an important pathway of the DSW from its production area in the northwestern shelf region to the southern Sea of Okhotsk. Observations of chlorofluorocarbons as passive tracers for water masses have also indicated the paths of the ventilated DSW water masses (Yamamoto-Kawai et al. 2004). The DSW tends to penetrate the upper layer of the Okhotsk Sea Intermediate Water (OSIW) (Fukamachi et al. 2004; Itoh et al. 2003) at a water density of approximately 27.0 σθ. The OSIW flows southward along the East Sakhalin coast. Nakatsuka et al. (2002, 2004) also noted that an efficient system of sediment material transport exists from the northwestern continental shelf to the open sea via OSIW transport. Nishioka et al. (2014b), Yamashita et al. (2020) and Shigemitsu et al. (2013) traced this sediment-derived material in the OSIW, which is discharged to the North Pacific through the Kuril straits (Itoh et al. 2010, 2011; Yagi and Yasuda 2012; Ono et al. 2013). Then, the discharged intermediate waters from the Sea of Okhotsk contribute to the formation of the NPIW (Talley 1991; Yasuda 1997; Nakamura and Awaji 2004; Nakamura et al. 2006). The water properties of the Oyashio region are strongly influenced by the intermediate water that originates in the Sea of Okhotsk (Yasuda 1997; Yasuda et al. 2001), and high dissolved Fe concentrations have been observed in the intermediate water in the Oyashio region and in the NPIW (Figs. 4d, 7a). Other studies have indicated that injections of large amounts of dissolved organic matter (DOM) from the Sea of Okhotsk lead to increased DOM concentrations in the NPIW (Hansell et al. 2002; Hernes and Benner 2002; Yamashita and Tanoue 2008). Nishioka et al. (2007) hypothesized that the intermediate water circulation processes in this region control the transport of Fe, and this hypothesis was validated by observations in the region (Nishioka et al. 2013, 2014b). Yasuda et al. (2001) used potential vorticity to identify water masses coming from the Sea of Okhotsk or the WSG in the Oyashio region, indicating that the upper intermediate water density range (26.6-27.0 σθ) is strongly influenced by the OSIW and that the lower intermediate water density range (27.0-27.5 σθ) is influenced mainly by the WSG and the EKC. This physical knowledge is consistent with the explanation of the dissolved Fe distribution in the WSG and its surrounding waters. Nishioka et al. (2020) reported the results of an isopycnal analysis of dissolved Fe using the compiled Fe dataset. The dissolved Fe-rich water in the upper intermediate water exists mainly west of 155°E (Fig. 7a; 27.0 σθ isopycnal surface) with higher dissolved oxygen (DO) than that in the surrounding water, indicating that these waters are derived from the OSIW and propagate along the upper intermediate layer isopycnal surface to the western North Pacific.

The Bering Sea

In the OMIX project era, from June to July 2014 and August to September 2018, observational studies using a Russian research vessel were conducted around the EKC and in the western Bering Sea, including the Bering Sea Basin and the Gulf of Anadyr (the cruise in 2018 was a collaborative expedition with the Arctic Challenge for Sustainability (ArCS) project). In the OMIX project, the aim of our observational study was to investigate the biogeochemistry of the source waters upstream of the Oyashio Current and the EKC to understand how the chemical properties of the intermediate water in the North Pacific are set. For that purpose, observations of the EKC and the western Bering Sea were essential (Fig. 1).
The physical background of the Bering Sea has been well documented by Nagata et al. (1992) and Stabeno et al. (1999) and is briefly summarized from their studies as follows. The Bering Sea is a semi-enclosed, high-latitude sea that is bounded on the north and west by Russia, on the east by Alaska, USA, and on the south by the Aleutian Islands. The Bering Sea is divided between a deep basin (maximum depth 3500 m) and continental shelves (< 200 m). A cyclonic gyre circulation exists in the basin, with the southward-flowing western boundary current, the EKC, and the northward-flowing eastern boundary current, the Bering Slope Current. Even though only three passes, Amchitka Pass, Near Strait, and Kamchatka Strait, extend deeper than 700 m, the AS enters the Bering Sea through the many passes in the Aleutian Island chain, and the inflow is balanced by outflow through the Kamchatka Strait. Therefore, the circulation in the Bering Sea basin may be more aptly described as a continuation of the North Pacific subarctic gyre. The flow into the Bering Sea through the major passes in the Aleutian Island chain strongly influences the chemical water properties of the basin, and the basin intermediate water flows out to the Pacific side through the Kamchatka Strait and Near Strait, the two westernmost straits along the Aleutian Island chain (Stabeno et al. 1999). Aguilar-Islas et al. (2007) reported surface Fe concentrations in the southeastern region of the Bering Sea. They observed Fe-limited HNLC surface waters in the deep basin of the Bering Sea and nitrate-limited, Fe-replete surface waters over the shelf. They also observed high biomass at the shelf break "Green Belt", where the dominant diatoms were stressed by low Fe availability. Hurst and Bruland (2007) and Hurst et al. (2010) reported that particulate Fe in resuspended sediment on the Bering Sea shelf contains a high proportion of the labile particulate Fe fraction, and Buck and Bruland (2007) reported that the concentration of dissolved Fe in the southeastern Bering Sea was strongly correlated with the organic ligand concentrations. Cid et al. (2011) also reported dissolved Fe and labile particulate Fe concentrations on the Bering Sea shelf and indicated that Yukon River input, sedimentary reduction and biogeochemical cycles influence the dissolved Fe distribution. Uchida et al. (2013) reported vertical profiles of nutrient and Fe concentrations in the basin area. Sea ice formation occurs in the northern Bering Sea shelf area. In contrast to the Sea of Okhotsk, however, there is no clear evidence that the sedimentary Fe on the shelf is transported to the basin water via a water ventilation process. The lower intermediate water density range (27.0-27.5 σθ) in the WSG is influenced mainly by the EKC and the western Bering Sea (Yasuda et al. 2001). In the lower intermediate water density range (27.0-27.5 σθ), dissolved Fe is high across a wide area of the WSG, including the basin of the Bering Sea and the region around the eastern Aleutian Islands (Nishioka et al. 2020) (Fig. 7b). The depth of the high-Fe water in the lower intermediate water was consistent with that of the DO minimum and nutrient maximum water in the WSG (Fig. 8b, c). Therefore, the high-Fe properties of the lower intermediate water are probably distributed throughout the circulation of the lower intermediate water in the WSG, including the Bering Sea basin (Fig. 7b). The source of the high dissolved Fe in the lower intermediate water is still under debate.
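The density ranges quoted above can be computed from standard hydrographic variables; the sketch below, which assumes the TEOS-10 gsw Python package and uses hypothetical sample values, classifies a sample into the upper or lower intermediate water density range used in this review.

import gsw  # TEOS-10 toolbox (assumed available; pip install gsw)

def intermediate_layer(sp, t, p, lon, lat):
    """Classify a sample into the intermediate-water density ranges used in
    the text: 26.6-27.0 sigma-theta (upper, OSIW-influenced) and
    27.0-27.5 sigma-theta (lower, EKC/western Bering Sea-influenced)."""
    sa = gsw.SA_from_SP(sp, p, lon, lat)   # absolute salinity
    ct = gsw.CT_from_t(sa, t, p)           # conservative temperature
    sigma_theta = gsw.sigma0(sa, ct)       # potential density anomaly (kg/m^3)
    if 26.6 <= sigma_theta < 27.0:
        return "upper intermediate water"
    if 27.0 <= sigma_theta <= 27.5:
        return "lower intermediate water"
    return "outside intermediate density range"

# Hypothetical WSG sample: practical salinity 33.9, 3.0 degC at 400 dbar, 47N 160E
print(intermediate_layer(33.9, 3.0, 400.0, 160.0, 47.0))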
Long-distance lateral Fe transport

To evaluate the Fe supply by oceanic processes in the North Pacific, the long-distance transport mechanisms of sedimentary Fe have become key to understanding its impact on biological production in remote oceanic areas. In the AG, the role of eddies in transporting Fe has been well studied. Coastal water and the Fe trapped within eddies during their formation, which have high labile particulate and dissolved Fe contents, were shown to be transported to the open ocean in the AG (e.g., Johnson et al. 2005; Brown et al. 2012). The Fe content of the eddies decreased rapidly during the first year after eddy formation; however, the high total dissolvable Fe signature in the eddy could be tracked until 16 months after its formation. Xiu et al. (2011) calculated an average upwelled dissolved Fe flux based on observations and models, indicating that the eddy-derived Fe supply rate was comparable to the pulsed Fe input from volcanic ash deposition. In the OMIX study, the role of an eddy in the WSG, south of the western Aleutian Islands (the Aleutian eddy), was investigated by Dobashi et al. (2021), indicating that the observed eddy did not have high Fe concentrations in either the dissolved or the particulate fraction, probably because the eddy was old enough for the Fe to have been removed. In the WSG, as described above, the water ventilation in the Sea of Okhotsk and the intermediate water formation processes transported dissolved Fe at the basin scale. Similar Fe transport was reported in the western Arctic Ocean by Hioki et al. (2014) and Kondo et al. (2016); however, the Fe transport by water ventilation from the Sea of Okhotsk observed in the WSG occurred over a significantly larger scale (more than a few thousand kilometers) than that in the western Arctic Ocean. In addition, the waters with persistently high Fe concentrations, extremely low DO and high regenerated nutrients in the lower intermediate water of the whole WSG, including the Bering Sea basin, probably have longer residence times. Additional evidence of the long-distance transport of sedimentary Fe has been provided by Conway and John (2015). They presented seawater dissolved Fe isotope ratio profiles in the subtropical Northeast Pacific, indicating that water enriched in sedimentary Fe is likely transported across the Pacific via the NPIW. Inorganic Fe solubility is extremely low in oxic seawater (Byrne and Kester 1976; Kuma et al. 1996). Because the inorganic form of Fe reacts with oxygen to form oxyhydroxide particles and is adsorbed onto particles in seawater, Fe tends to be scavenged from seawater (Lohan and Bruland 2008). However, it is well known that most dissolved Fe exists in complexed form with natural dissolved organic ligands in seawater (van den Berg 1995; Rue and Bruland 1995), and Fe solubility has been found to be controlled by organic ligands (Kuma et al. 1996; Tani et al. 2003; Kitayama et al. 2009). Therefore, the mechanisms by which organic ligands keep Fe soluble in seawater and the dynamics of the organic ligands in the subarctic Pacific are critical for understanding the external Fe input relevant to biological production. Siderophores, humic substances, exopolymeric substances and ligands released by phytoplankton cell breakdown are expected to act as natural organic ligands in seawater; each ligand has a different character and chemical structure (e.g., Hassler et al. 2011, 2017).
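To illustrate why ligand concentrations and conditional stability constants control how much Fe remains dissolved, the following sketch solves an idealized one-ligand equilibrium; real seawater contains ligand mixtures (hence the titration methods discussed below), and the constants used here are hypothetical but within commonly reported ranges.

import math

def speciate_fe(fe_t, l_t, log_k):
    """Solve the one-ligand equilibrium Fe' + L' <-> FeL for [FeL], given
    total dissolved Fe (fe_t) and total ligand (l_t), both in mol/L, and the
    conditional stability constant log10(K'FeL). Returns (FeL, inorganic Fe')."""
    k = 10.0 ** log_k
    a = k
    b = -(k * fe_t + k * l_t + 1.0)
    c = k * fe_t * l_t
    x = (-b - math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)  # physically valid root
    return x, fe_t - x

# Hypothetical values: 0.6 nM Fe, 1.2 nM ligand, log K' = 12 (typical strong-ligand range)
fel, fe_prime = speciate_fe(0.6e-9, 1.2e-9, 12.0)
print(f"FeL = {fel * 1e9:.3f} nM, Fe' = {fe_prime * 1e9:.4f} nM")
# With excess strong ligand, nearly all dissolved Fe is organically complexed,
# consistent with the observations cited above.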
Since the sources and sinks of organic ligands vary depending on area and depth, a competitive ligand equilibration-adsorptive cathodic stripping voltammetry (CLE-ACSV) method (van den Berg 1995) has been commonly used to determine the concentrations and conditional stability constants of mixtures of natural organic ligands in seawater samples (e.g., Gledhill and van den Berg 1994; Rue and Bruland 1995). Although little is known about the distribution of these natural organic ligands in the subarctic Pacific, the accumulation of organic ligands in the biogeochemical cycle of deep waters (below 3000-m depth) has been observed (Kondo et al. 2012). The concentration of dissolved Fe at depths ranging from 1000 to 2000 m exceeded that of the organic ligands in this area, suggesting that the excess dissolved Fe existed as organic/inorganic colloidal Fe and/or inorganic complexes that were not detectable by the CLE-ACSV method described in Kondo et al. (2012). The existence of colloidal Fe has also been suggested from measurements of Fe solubility (Kuma et al. 1996; Nakabayashi et al. 2002; Tani et al. 2003; Kitayama et al. 2009). Yamashita et al. (2020) measured humic-like fluorescent dissolved organic matter (FDOMH), a quantitative parameter for humic substances, together with the dissolved Fe concentration. They distinguished shelf-derived humic substances (allochthonous FDOMH) from in situ produced humic substances (autochthonous FDOMH) using apparent oxygen utilization, and they separated the chemical species of dissolved Fe into allochthonous FDOMH-Fe complexes, autochthonous FDOMH-Fe complexes, and colloidal Fe. The spatial distribution of the individual chemical species of dissolved Fe from the northwestern shelf of the Sea of Okhotsk to the subtropical Pacific through the WSG clearly shows that the shelf humic substances bind Fe to form complexes and carry dissolved Fe at least 4000 km via the circulation of the upper intermediate water. On the other hand, they also found that in the lower intermediate water of the WSG, Fe bound to autochthonous humic-like substances and colloidal Fe dominated. They concluded that humic substances are probably among the key organic ligands and control the dissolved Fe distribution in the ocean interior, as reported by other previous studies (Kitayama et al. 2009; Laglera and van den Berg 2009; Laglera et al. 2011). Misumi et al. (2013) used numerical modeling that represented humic substances as the organic ligands controlling the dissolved Fe distribution in seawater. Colloidal Fe supplied from the shelf of the Sea of Okhotsk also appears to be transported long distances (at least 3000 km) through the WSG by the circulation of the upper and lower intermediate waters. In addition, Nishioka et al. (2003) conducted size-fractionated Fe measurements and indicated that colloidal Fe exists in significant amounts in the intermediate water of the western North Pacific and that this Fe form may also have a longer residence time in seawater. Kondo et al. (2021) investigated size-fractionated dissolved Fe and ligand distributions in the WSG (47°N, 160°E) and the AG (50°N, 145°W) and indicated that the concentration of dissolved organic ligands around the lower intermediate water in the WSG was higher than that in the eastern intermediate water and that organic and inorganic colloidal forms of Fe are potentially essential to the Fe transport mechanisms in the subarctic Pacific.
Fitzsimmons et al. (2017) measured the chemical speciation and isotopic composition of Fe in a hydrothermal plume and revealed that the hydrothermally derived dissolved Fe in the plume is best explained by reversible exchange onto slowly sinking particles, a process that transports Fe up to 4000 km from the vent source. A similar process involving the interaction between colloidal Fe and slowly sinking particles is important for controlling the long-distance transport of sedimentary Fe from the Sea of Okhotsk to the North Pacific (Misumi et al. 2021). To fully understand the mechanism of the long-distance transport of Fe, it is necessary to reveal the chemical and physical speciation of Fe and its behavior, including the dynamics of organic ligands and the roles of colloidal Fe and particulate Fe, in each source and in the water column. The bioavailability of each Fe form, including particulate Fe (e.g., Sugie et al. 2013; Kanna and Nishioka 2016), is also important for understanding the impact of transported Fe on biological production. Although several numerical modeling studies that represent Fe in the North Pacific, including sedimentary Fe sources from the continental shelf, have been conducted (Misumi et al. 2011; Nakanowatari et al. 2017), further collaborative studies with numerical modeling that incorporate the behavior of each chemical form of Fe into ocean biogeochemical cycles are essential. These subjects still require future research.

Subarctic intermediate nutrient pool

Phytoplankton (a part of the biogenic particle pool) take up nutrients in the surface layer, and the biogenic particles eventually sink toward deep water. The sinking biogenic particles are remineralized and release inorganic nutrients into intermediate/deep water. Because these processes export nutrients (NO3, PO4) from the surface to intermediate/deep water, nutrient supply mechanisms that compensate for this export are required to maintain high nutrient concentrations in the surface water of the subarctic region. Broecker (1991) proposed the concept of global deep ocean thermohaline circulation and the great ocean conveyor belt. To date, the North Pacific has been recognized as the area where the deep ocean conveyor belt circulation terminates and has been perceived vaguely as the location where deep high-nutrient water rises and where a high-nutrient area is established at the surface. Recent physical oceanographic research has revealed more details about the global water circulation in the North Pacific (Fig. 9). Kawabe and Fujio (2010) indicated that most of the lower circumpolar deep water (Antarctic Bottom Water; AABW) upwells to the upper deep layer in the North Pacific and transforms into Pacific Deep Water (PDW), which shifts southward in the upper deep layer at 2000-2500 m and is modified by mixing with upper circumpolar deep water (Fig. 9). Talley (2013) indicated that the intermediate water meridional overturning cell subsystem in the North Pacific (which is constructed by the NPIW formation and the surface water circulation from the subtropical to the subarctic region, including the Bering Sea and the Sea of Okhotsk; see Fig. 9) is mostly not directly connected to the AABW cells. Broecker and Peng (1982), Matsumoto and Key (2004) and Matsumoto (2007), using chemical measurement datasets of the global 14C distribution in seawater, clearly indicated that the water with the oldest 14C age is distributed at the depths of the PDW, below the intermediate water meridional overturning cell subsystem (below the NPIW) (Fig. 9).
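For orientation, the conventional 14C age used to identify this oldest water can be obtained from measured Δ14C; the sketch below applies the standard Stuiver-Polach convention (Libby mean life of 8033 years), ignores reservoir and other corrections, and uses a hypothetical Δ14C value typical of North Pacific deep water.

import math

def radiocarbon_age(delta14c_permil):
    """Conventional 14C age (years) from Delta14C (permil), following the
    Stuiver-Polach convention: age = -8033 * ln(F), with the fraction
    modern F = 1 + Delta14C/1000. Corrections are ignored in this sketch."""
    fraction_modern = 1.0 + delta14c_permil / 1000.0
    return -8033.0 * math.log(fraction_modern)

# Hypothetical value near the North Pacific deep-water 14C minimum:
print(f"{radiocarbon_age(-240.0):.0f} years")  # roughly 2200 years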
Thus, present physical and chemical oceanographic research indicates that deep nutrient-rich water does not directly outcrop at the surface of the subarctic Pacific. Therefore, the dynamics of the nutrients and the mechanisms by which the nutrients in deep water return to the surface layer to establish the high-nutrient surface waters of the subarctic Pacific are not clearly described at the end of the conveyor belt circulation. One of the water masses with the most persistently low dissolved oxygen concentrations and the richest nutrient repository in the world ocean is observed in the intermediate water of the North Pacific (Keeling 2010; Whitney et al. 2013). As described in Sect. 2.3.2, water that is extremely rich in nitrate and phosphate but low in DO was observed in a wide density range of the intermediate layer (26.8-27.6 σθ) in the WSG, including the Aleutian Basin, and this water also propagated into the AG (Fig. 8b, c). Nishioka et al. (2020) proposed the formation of a subarctic intermediate water nutrient pool (SINP) and indicated that the high-macronutrient water pool formed in the intermediate water is key to understanding the high-nutrient surface water in the subarctic Pacific. They estimated the total mass of nitrate accumulated in the whole SINP to be 4.2 ± 0.4 × 10^14 mol (calculated as the volume of the intermediate water × the average nitrate concentration). They also calculated the regenerated (reg-) phosphate (PO4) in the SINP. First, the oxygen solubility was calculated, and the apparent oxygen utilization (AOU) was then calculated as the difference between the solubility and the measured DO concentration. Then, they calculated reg-PO4 as "AOU × R_PO4:O2", using the O2:PO4 remineralization ratio of 170 (Anderson and Sarmiento 1994). Preformed PO4 is defined as "preformed PO4 = observed PO4 - reg-PO4". The percentage of reg-PO4 out of the total PO4 in the intermediate water indicated that more than half of the total PO4 in the intermediate water masses of the WSG and the Bering Sea basin was reg-PO4 (Fig. 8d). As described in the section below, upward turbulent fluxes of nutrients were reported around the marginal sea island chains, indicating that nutrients were uplifted from the SINP to the surface, fueled surface organisms, and were returned to the SINP through the decomposition of sinking particulate organic matter during intermediate water transport. In other words, a large proportion of the nutrients are repeatedly recycled between the SINP and the surface water within the intermediate water meridional overturning cell subsystem in the North Pacific (Fig. 9). In contrast, the SINP formation cannot be explained by upwelled PDW, which contains higher dissolved oxygen and lower NO3 and PO4 concentrations. Nishioka et al. (2020) also indicated that the Sea of Okhotsk ventilation transports newly formed water, which has relatively low PO4 and high DO (with a higher preformed-PO4 percentage), into the upper intermediate water of the North Pacific (a low percentage of reg-PO4 water is distributed in the Sea of Okhotsk in Fig. 8d). These aspects of the nutrient dynamics in the North Pacific are consistent with the meridional overturning cell subsystem of the NPIW being mostly unconnected to the AABW cells, as Talley (2013) suggested. At the same time, however, the SINP needs to be supplied with nutrients to compensate for the amounts that are laterally exported from the intermediate water to low latitudes.
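Before turning to the candidate supply processes, the following sketch illustrates the reg-PO4 bookkeeping described above; it assumes that the gsw (TEOS-10) Python package provides the oxygen solubility function O2sol_SP_pt, and the sample values are hypothetical but representative of lower intermediate water in the WSG.

import gsw  # TEOS-10 toolbox; O2sol_SP_pt assumed to return O2 solubility in umol/kg

def phosphate_partition(sp, pt, o2_measured, po4_total):
    """Split observed PO4 into regenerated and preformed parts following the
    procedure quoted above: AOU = O2 solubility - measured O2, then
    reg-PO4 = AOU / 170 (O2:PO4 remineralization ratio of Anderson and
    Sarmiento 1994) and preformed PO4 = observed PO4 - reg-PO4.
    Inputs: practical salinity, potential temperature (degC), umol/kg units."""
    o2_sat = gsw.O2sol_SP_pt(sp, pt)  # solubility at surface equilibrium
    aou = o2_sat - o2_measured
    reg_po4 = aou / 170.0
    return reg_po4, po4_total - reg_po4

# Hypothetical low-DO, nutrient-rich intermediate-water sample:
reg, pre = phosphate_partition(sp=34.0, pt=3.0, o2_measured=40.0, po4_total=2.8)
print(f"reg-PO4 = {reg:.2f}, preformed PO4 = {pre:.2f} umol/kg")
# Here more than half of the PO4 is regenerated, consistent with Fig. 8d.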
The most likely candidate process for compensating for these amounts is a nutrient supply from the nutrient-rich deep water in the marginal seas. Long et al. (2019) estimated the amount of nitrate lost from the SINP by advection of the NPIW as 3.5 × 10^12 mol/year, and the amount of nitrate supplied from the deep layer to the intermediate layer should be complementary to this loss. It also remains unclear how the silicate cycle, which is decoupled from the NO3 and PO4 cycles, maintains high concentrations in the meridional overturning cell subsystem of the NPIW.

Nutrient return process from the intermediate layer nutrient pool to the surface

Vertical mixing is a key process for understanding the nutrient return paths from the intermediate water to the surface. Several candidate processes that can control nutrient return have been reported around the subarctic Pacific, such as strong mixing around the Kuril and Aleutian Island chains and along the continental shelf slope in the Bering Sea. First, we review previous studies on vertical mixing around the Kuril and Aleutian Island chains. The diapycnal mixing caused by interactions of tidal currents with the rough topography around the straits strongly affects the temperature, salinity, and dissolved oxygen properties from the surface to the deep layer. A previous observational study by Yamamoto-Kawai et al. (2004) showed that strong vertical tidal mixing occurs around the Bussol' Strait, the deepest strait along the Kuril Island chain, and reaches the OSIW. Sarmiento et al. (2004) used the combined concentration of silicic acid and nitrate as a tracer, Si*, for intermediate water nutrient transport, and they indicated that the NPIW needs to be the main nutrient return path from the deep water to above the thermocline to maintain high-Si* water. They implied that tidal mixing at the Kuril Island chain is one of the candidate physical processes that uplift nutrients from deep water. Over the last two decades, direct observations around the island chain of the Sea of Okhotsk (the Kuril Island chain) and that of the Bering Sea (the Aleutian Island chain) were conducted, and turbulent mixing parameters were measured in several studies. Itoh et al. (2010, 2011) and Yagi and Yasuda (2012) conducted direct observations of the turbulence parameter Kρ (= 0.2ε/N^2, where ε is the turbulent kinetic energy dissipation rate and N^2 is the squared buoyancy frequency) using a free-fall vertical microstructure profiler (VMP2000, Rockland Scientific International Co.). Goto et al. (2016, 2018) also measured Kρ by using CTD-attached fast-response thermistors (AFP07, Rockland Scientific International Co.) in the open water of the North Pacific. Goto et al. (2018) conducted a comparison study of both measurement methods and confirmed that the ε values from the two methods were comparable within a factor of 3 (valid for 10^-10 < ε < 10^-8 W/kg). Figure 7c, d show the vertical section distributions of the dissolved Fe gradient (positive values: dissolved Fe increasing with depth) and the vertical diffusivity Kρ (logarithmic) along GP02. Interestingly, the vertical flux of dissolved Fe from the subsurface to the surface calculated from these two parameters was clearly higher in the WSG than in the AG.
Previous studies (Itoh et al. 2010, 2011; Yagi and Yasuda 2012; Goto et al. 2016, 2018) indicated that the Kρ values observed in the Kuril and Aleutian Island chains were two to four orders of magnitude higher than those observed in the open ocean of the subarctic Pacific. Nishioka et al. (2020) also measured the nitrate and dissolved Fe concentrations at stations where Kρ was measured and calculated the upward fluxes of dissolved Fe and nitrate from the intermediate water to the surface layer (Fig. 10a, b), indicating that the vertical fluxes of dissolved Fe and nitrate returning from the intermediate layer to the surface layer by turbulent diapycnal mixing around the island chains were several orders of magnitude greater than those in the open ocean. They concluded that the Kuril and Aleutian Island chains are hot spots that return nutrients from the SINP to the surface layer and are probably very important areas for maintaining the surface HNLC water in the subarctic Pacific. Another possible hot spot for returning nutrients from the SINP to the subarctic Pacific surface is along the shelf break in the southeastern Bering Sea. Tanaka et al. (2012a, b, 2017) reported the results of direct measurements of turbulent mixing parameters, and Tanaka et al. (2012a, b, 2013) conducted a numerical modeling study of tidal mixing along the shelf break in the southeastern Bering Sea, indicating that the strong vertical mixing along the shelf break induced by diurnal and semidiurnal tides plays an important role in maintaining the Fe and nutrient supply to the highly productive area at the shelf break surface, called the "Green Belt". Large vertical diffusivities were observed over the shelf break (Kρ = 10^-4 to 10^-2 m^2/s), and the strong vertical mixing contributed to the formation of a thick mixed layer over the outer shelf, which induced high vertical fluxes of nitrate and Fe to the surface. These turbulent mixing processes in the subarctic Pacific are very important for setting the chemical properties from just below the surface mixed layer down to the intermediate layer and, together with surface winter mixing processes, are critical for maintaining the high-nutrient surface water. To understand the nutrient dynamics and biological production in the intermediate water meridional overturning cell subsystem in the North Pacific, we need more quantitative studies to evaluate the whole nutrient budget of the SINP.
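The flux estimates above combine two simple relations, Kρ = 0.2ε/N^2 and an upward diffusive flux of Kρ × dC/dz; the sketch below contrasts hypothetical island-chain and open-ocean dissipation rates to show why the island chains emerge as hot spots. All numerical values are illustrative only.

def k_rho(epsilon, n_squared, gamma=0.2):
    """Osborn-type diapycnal diffusivity K_rho = gamma * epsilon / N^2 (m^2/s)."""
    return gamma * epsilon / n_squared

def upward_flux(k, dc_dz):
    """Diffusive flux (positive upward) = K_rho * dC/dz, with dC/dz the
    downward increase in concentration (mol/m^4 -> flux in mol/m^2/s)."""
    return k * dc_dz

n2 = 1.0e-5          # s^-2, hypothetical upper-ocean stratification
grad_no3 = 0.02e-3   # mol/m^4 (roughly 0.02 umol/kg per meter)

# Hypothetical dissipation rates: island-chain strait vs open subarctic gyre
for label, eps in [("island-chain strait", 1.0e-6), ("open ocean", 1.0e-10)]:
    k = k_rho(eps, n2)
    print(f"{label}: K_rho = {k:.1e} m^2/s, "
          f"NO3 flux = {upward_flux(k, grad_no3):.1e} mol m^-2 s^-1")
# The four-orders-of-magnitude contrast in the resulting fluxes mirrors the
# difference between the island chains and the open ocean described above.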
Impact on the phytoplankton ecosystem

Previously, atmospheric dust was considered to be the most important source of Fe affecting biological production in the North Pacific. Studies by Uematsu et al. (1983), Duce and Tindale (1991) and Mahowald et al. (2005) indicated that there is a longitudinal gradient in the dust supply across the North Pacific and that the flux of dust Fe over the WSG is an order of magnitude higher than that over the AG. This difference is due to the close proximity of the Gobi Desert, and it has been believed to be the leading cause of the longitudinal differences in biological production between the WSP and the AG. In addition, aeolian dust from Alaska is important for the eastern subarctic Pacific (Boyd et al. 1998). However, now that other oceanic sources of Fe have been discovered, as explained in this review, a coherent explanation of the biological response in HNLC waters must incorporate knowledge of both the atmospheric and the oceanic Fe supplies. Since the advent of satellite observations, several different patterns and phenologies of phytoplankton increases have been observed in the subarctic Pacific. These different patterns and phenologies are probably caused by different Fe sources. For instance, Shiozaki et al. (2014) analyzed satellite imagery and oceanographic data collected from 2003 to 2009, and they suggested that the factor that determined the onset of the spring bloom (phytoplankton increase) varied among subregions of the subarctic Pacific and that the magnitude of the bloom was probably controlled by Fe availability. From the present knowledge of Fe supply processes, we can categorize the biological response into three cases. In the first case, as mentioned in Sect. 1, phytoplankton growth causes the seasonal amplification of biogeochemical parameters in the open oceanic waters of the two subarctic gyres and occurs every year (e.g., Fujiki et al. 2014; Matsumoto et al. 2014; Shiozaki et al. 2014). In the second case, massive spring phytoplankton blooms occur around the coastal boundary currents, such as the Oyashio at the edge of the WSG (Okamoto et al. 2010; Shiozaki et al. 2014; Isada et al. 2019; Kuroda et al. 2019) and the AS and AC at the edge of the AG (Whitney et al. 2005; Henson 2007). In the third case, there is evidence that sporadic and patchy phytoplankton production is occasionally observed in both spring and summer (e.g., Bishop et al. 2002; Hamme et al. 2010). Each biological phenomenon in subarctic waters has a different timing, magnitude and phenology, is probably driven by different sources and supply processes of Fe, and is accompanied by different physical conditions, such as light and temperature. The Fe supply processes related to the third case have been well discussed in many previous reports and were reviewed by Takeda (2011). Previous studies conclude that this third case of biological production is mainly due to the atmospheric dust Fe supply. Briefly, the first clear evidence of a sporadic phytoplankton response was reported by Bishop et al. (2002), who observed an increase in carbon biomass after a dust storm from the Gobi Desert in the surface mixed layer of the AG using robotic profiling float observations. Additional evidence of phytoplankton growth stimulated by a volcanic ash supply was reported by Hamme et al. (2010). They captured the spread of volcanic ash from a volcanic eruption in the Aleutian Islands in August 2008 and compared it with the biological production obtained from satellite images and shipboard observations, indicating that the volcanic ash induced a large-scale diatom bloom in the AG, with decreasing pCO2 and nutrients (silicate) in the surface waters. These examples clearly demonstrate that atmospheric Fe input episodically induces phytoplankton production in the surface water of the subarctic Pacific. Many other studies have observed atmospheric Fe deposition to the surface ocean or estimated it with numerical models. For instance, Uematsu et al. (2003) reported that a numerical model simulation successfully reproduced the variation in mineral aerosol concentrations and the total deposition flux over the western North Pacific. Measures et al. (2005) estimated the dust fluxes based on dissolved Al concentrations in surface water. Iwamoto et al. (2011) observed the Fe deposition of Asian dust in the semi-pelagic region of the western North Pacific, and they estimated the amount of bioavailable Fe deposited by the dust event.
Ito et al. (2016) developed numerical modeling that represented the delivery of bioavailable Fe from mineral dust and combustion aerosols to the ocean, indicating that anthropogenic aerosols play significant roles in stimulating biological production at the ocean surface. Overall, however, a re-examination of evidence from ocean observations indicated spatial and temporal mismatches between dust inputs and biological activities, noting that dust-mediated phytoplankton blooms are rare in the modern ocean. Jickells et al. (2005) also indicated that the role of the Fe dust supply in stimulating biological production has not been quantitatively well evaluated due to a lack of information on the fraction of atmospheric Fe that is bioavailable. Aerosol chemistry is also important for controlling the bioavailability of Fe from atmospheric sources (Landing and Paytan 2010). To estimate the impact of the supply of atmospheric Fe on phytoplankton growth, the scale, frequency and deposition area of dust storms and aerosol transport, the residence time and dissolution rate of atmospheric Fe, and the fraction of bioavailable Fe under natural conditions remain uncertain issues. Fe isotope measurements and isotope mass balance analyses are powerful tools for tackling this issue. Kurisu et al. (2016) reported very low isotope ratios in fine aerosol particles, which probably signify the evaporation of Fe at high temperatures, indicating the contribution of anthropogenic Fe to the surface of the North Pacific. Therefore, Fe isotope measurements, together with other trace metal isotopes, will be very important for a quantitative understanding of the relative importance of the different sources of Fe to the ocean (Conway and John 2014, 2015; Kurisu et al. 2016; Pinedo-González et al. 2020). Future research is needed to understand the atmospheric Fe supply. To explain the second case, the massive spring phytoplankton blooms that cause nutrient depletion, the Fe input into the surface coastal boundary currents is a key process. As mentioned in Sect. 2.2, the AC and AS along the Alaskan coast contain large amounts of Fe originating from coastal shelf areas (Cullen et al. 2009). In addition, the AS receives freshwater from the continent, precipitation and glacier meltwater, which contain high Fe (Wu et al. 2009). The Fe-rich coastal water hosts massive spring phytoplankton blooms (Whitney et al. 2005; Henson et al. 2007). These coastal waters fuel not only coastal biological production but also offshore open ocean production because the coastal water detaches from the northern coast of Alaska toward the open ocean. On the western side, the massive spring blooms in the Oyashio area have been well studied (Sugie et al. 2010), and Kuroda et al. (2019) and Isada et al. (2019) indicated that the COW is a key factor affecting the magnitude of the phytoplankton blooms (Fig. 11a). The COW clearly has lower temperatures (Fig. 11b), lower salinities (Kuroda et al. 2019) and higher Fe concentrations (Fig. 11c) than the surrounding waters (Nishioka et al. 2011), which are probably influenced by the winter surface ESC (including the influence of the Amur River discharge) and by sea ice melt water from the Sea of Okhotsk, which contains high amounts of Fe (Kanna et al. 2014, 2018) (Fig. 11c). These Fe sources possibly fuel the massive phytoplankton blooms in the COW (Fig. 11a).
[Fig. 11 caption: a Satellite chlorophyll data obtained by MODIS/Terra on 31 March 2020; massive phytoplankton blooms were observed in the COW. b Satellite sea surface temperature data obtained on 31 March 2020; the COW, which has very low temperatures, was observed in the coastal area of eastern Hokkaido. c Fe supply from the southern Sea of Okhotsk to the coastal Oyashio water (COW); the black arrow indicates high-Fe water discharge from the Sea of Okhotsk to the coastal Oyashio region.]

However, we need clearer evidence to connect the water and biogeochemical material circulation between the COW and the Sea of Okhotsk (Mizuno et al. 2018). As described above, Fe that enters the surface layer laterally with fresh water (including sea ice melt water) directly from terrestrial sources is probably important for stimulating the second case of biological production in the subarctic Pacific. Finally, we would like to discuss the first case of biological production. This biological production is not like the massive spring blooms that occur in the coastal currents, such as the AC, AS, and COW, in the subarctic area. The first case, which we focus on here, is the biological production that causes the seasonal drawdown of the pCO2 and nutrient concentrations in the open oceanic HNLC waters, which is especially large in the WSG (e.g., Fujiki et al. 2014; Matsumoto et al. 2014; Shiozaki et al. 2014; Yasunaka et al. 2020). This phenomenon is a regular biological production process that occurs every year at the basin scale in the western to central subarctic Pacific. This basin-scale steady biological production, caused by the steady growth of phytoplankton in the surface layer, starts in mid-April in spring and produces organic carbon while consuming nutrients until August in the summer. Fujiki et al. (2014) observed that the potential photosynthetic activity, quantified in terms of Fv/Fm, is controlled by Fe availability and that the termination of the phytoplankton increase is caused by Fe limitation in the WSG. Matsumoto et al. (2014) revealed that the mixed layer depth and light availability exert an important control on the seasonal variability of primary production and phytoplankton biomass in the WSG. Shiozaki et al. (2014) indicated that in the formation regions of dense central mode water (D-CMW) and transition region mode water (TRMW) (area 1 in Fig. 12a), bloom onset coincided with possible turbulence weakening but not with MLD shoaling. The observed peak of chlorophyll a in the mode water formation regions was 0.44-0.58 mg/m^3, values ca. five times lower than those in the Oyashio, although nitrate remained and the MLDs became shallow enough at the bloom peak in these regions. They also reported that the "bloom magnitude west of 150°E and in the north Kuroshio extension was increased relative to that in the eastern region, suggesting a chemical property in the water delivered from the Sea of Okhotsk that would influence the western bloom".

[Fig. 12 caption: a Satellite-observed annual integrated primary production in the subarctic Pacific and marginal seas; climatological data for the period between 1998 and 2012; the formation regions of the transition region mode water (TRMW), dense central mode water (D-CMW) and shallow central mode water (S-CMW) are described in Shiozaki et al. 2014. "Area 1", "area 2", and "area 3" are defined as the black circles in panels a, b, and d.]
These observational results probably indicate that, through several physical processes such as eddy diffusion, turbulent mixing, and winter surface mixing, the mixed layer is enriched with macronutrients and Fe (Fig. 12d) brought up from below the surface (area 1 in Fig. 12a), and that either Fe and light availability or Fe availability alone controlled the primary productivity in the mixed layer. Therefore, the Fe and nutrient supply from the intermediate water driven by ocean mixing (winter mixing and eddy diffusive mixing) possibly contributes to maintaining biological production and dissolved inorganic carbon drawdowns in area 1 in Fig. 12b (Yasunaka et al. 2014, 2020) from spring to summer in the western subarctic Pacific. Interestingly, higher continuous primary production was observed around the Aleutian and Kuril Island chains (areas 2 and 3 in Fig. 12a). Although Fe and light co-limitation was observed for micro-sized diatoms in the vicinity of these island chains (Yoshimura et al. 2010; Suzuki et al. 2014), the continuous higher primary productivity is explained by the continuous Fe and nutrient supply from the intermediate water driven by the strong diapycnal water mixing around the straits of the island chains, as indicated in Sect. 4 (Fig. 10a, b; Nishioka et al. 2020). In addition, a higher dissolved Fe-to-nitrate ratio was observed in the uplifted waters of the Kuril Island chain than in those of the Aleutian Island chain (Fig. 12c); therefore, biological production around the Kuril Island chain and the downstream region in the WSG (area 1 in Fig. 12a, b) might be higher than that around the Aleutian Island chain (Fig. 12a). Nutrient consumption and organic carbon production in the WSG are approximately twice as large as those in the AG. To explain this quantitatively, the Fe flux in the WSP should be a "moderate" value that is higher than that on the eastern side but not sufficient to cause nutrient depletion. Therefore, there is considerable interest in the sources and processes of Fe input that lead to the higher amplitude of biogeochemical parameters in the WSG. The intermediate water Fe transport processes (Fig. 13) are able to explain the significant, twice-as-high and steady increases in phytoplankton biomass in the WSP as a result of the several mixing processes that increase the surface Fe concentrations. As described above, the sedimentary Fe supplied from the continental margin circulates laterally through the intermediate layer and upwells to the surface through several mixing processes, and this Fe is important for controlling the first case of biological production in the subarctic Pacific. Figure 14 shows a schematic diagram of a coherent explanation of the external Fe supply to the western subarctic Pacific and the relevant biological response, incorporating the knowledge of both the atmospheric and the oceanic Fe supplies from this review.

Conclusion and future directions

As we described in Sect. 1, Parsons and Lalli (1988) previously indicated that there are significant differences in the seasonal cycles and production processes at the lower trophic levels between the subarctic Pacific and the subarctic Atlantic. Over the last 30 years, led by John H. Martin's "Iron hypothesis", it has become apparent that Fe, previously an unidentified nutrient, is the limiting factor for phytoplankton in the subarctic Pacific and that this nutrient causes the observed differences.
The scientific oceanographic projects carried out in the last 10 years, including OMIX, revealed an overview of the different Fe supply processes in the HNLC waters of the subarctic Pacific and their relevance to biological production. Based on all of the knowledge in this review, we have made significant progress in understanding Fe and nutrient biogeochemistry and phytoplankton ecosystems in the subarctic Pacific. To explain the higher biogeochemical amplitude on the western side of the subarctic Pacific basin, it is important to consider the existence of the subpolar marginal seas, which are located upstream of the western boundary current; the ventilation and intermediate water circulation system; the formation of the SINP; the western boundary current (the Oyashio Current) that is released toward the open ocean; the strong mixing hotspots caused by complicated bathymetric topography; and the westerly winds. For these reasons, it is understandable that, when looking at the entire subarctic Pacific, the effects of land are concentrated on the western side of the subarctic Pacific through Fe transport. The future directions of Fe research extracted from this review are as follows.

1. To quantitatively understand the biogeochemical Fe and nutrient cycle in the intermediate water meridional overturning cell subsystem in the North Pacific, further collaborative research between chemical and physical oceanographers is essential. Additional measurements of turbulent mixing parameters together with measurements of Fe and nutrients are needed to estimate the fluxes at several hotspots.

2. We still need a quantitative understanding, with a coherent explanation, of the biological response in HNLC waters that incorporates knowledge of both the atmospheric and the oceanic Fe supplies.

3. Fe isotope measurements and isotope mass balance analyses are powerful tools that could provide a quantitative understanding of the relative importance of different sources of Fe to the ocean. We need more data on the Fe isotopic composition at the source regions in the North Pacific.

4. To estimate the impact of the supply of atmospheric Fe (including mineral dust and anthropogenic aerosols) on phytoplankton growth, data on the scale, frequency and deposition area of mineral dust storms and aerosol transport, the residence time in surface water and the dissolution rate, and the fraction of bioavailable Fe under natural conditions are still needed.

5. To fully understand the mechanism of the long-distance oceanic transport of Fe, it is necessary to understand the chemical and physical speciation of Fe and its behavior, including the dynamics of organic ligands and the roles of colloidal Fe and particulate Fe, in each source and in the water column. Fe isotope measurements will be a powerful tool for revealing the transport processes.

6. The bioavailability of each Fe form, including particulate Fe, is still uncertain in terms of understanding the impact of transported Fe on biological production.

7. We need seasonal time series data for surface Fe concentrations in the WSG and AG to understand the details of how Fe controls biogeochemistry.

8. We need clearer evidence connecting fresh water (the Amur River and sea ice meltwater) and biogeochemical material in the surface circulation between the COW and Sea of Okhotsk systems.

9. Further collaborative research with numerical modeling that incorporates the behavior of the chemical and physical species of Fe into ocean biogeochemical cycles is essential.
Additionally, numerical modeling that represents the biogeochemical Fe and nutrient cycle in the intermediate water meridional overturning cell subsystem in the North Pacific is needed.
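As an illustration of the isotope mass-balance approach mentioned in point 3 above, a two-endmember mixing calculation can apportion dissolved Fe between, for example, a sedimentary and a dust-derived source. The sketch below is only a minimal illustration of the principle; the δ56Fe endmember values and function name are hypothetical placeholders, not measured values from this review.

```python
# Two-endmember Fe isotope mass balance (illustrative only).
# A mixture's isotopic composition is the concentration-weighted mean of
# its sources: delta_mix = f * delta_a + (1 - f) * delta_b,
# so the fraction from source "a" is f = (delta_mix - delta_b) / (delta_a - delta_b).

def source_fraction(delta_mix: float, delta_a: float, delta_b: float) -> float:
    """Return the fraction of Fe contributed by source 'a' (between 0 and 1)."""
    if delta_a == delta_b:
        raise ValueError("Endmembers must be isotopically distinct.")
    f = (delta_mix - delta_b) / (delta_a - delta_b)
    if not 0.0 <= f <= 1.0:
        raise ValueError("Mixture lies outside the two endmembers.")
    return f

# Hypothetical delta56Fe values (per mil): sedimentary Fe is often isotopically
# light, while dust-derived Fe is near crustal composition.
f_sed = source_fraction(delta_mix=-0.5, delta_a=-1.2, delta_b=0.1)
print(f"Sedimentary fraction: {f_sed:.2f}")  # ~0.46
```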
Structural and Functional Interactions of Transcription Factor (TF) IIA with TFIIE and TFIIF in Transcription Initiation by RNA Polymerase II* A topological model for transcription initiation by RNA polymerase II (RNAPII) has recently been proposed. This model stipulates that wrapping of the promoter DNA around RNAPII and the general initiation factors TBP, TFIIB, TFIIE, TFIIF and TFIIH induces a torsional strain in the DNA double helix that facilitates strand separation and open complex formation. In this report, we show that TFIIA, a factor previously shown to both stimulate basal transcription and have co-activator functions, is located near the cross-point of the DNA loop where it can interact with TBP, TFIIE56, TFIIE34, and the RNAPII-associated protein (RAP) 74. In addition, we demonstrate that TFIIA can stimulate basal transcription by stimulating the functions of both TFIIE34 and RAP74 during the initiation step of the transcription reaction. These results provide novel insights into mechanisms of TFIIA function.

Initiation of transcription by RNA polymerase II (RNAPII) proceeds through the formation of a preinitiation complex containing RNAPII and the general transcription factors (TF) TBP (the TATA box-binding protein of TFIID), TFIIB, TFIIE, TFIIF, and TFIIH on promoter DNA (reviewed in Refs. 1 and 2). The first step in preinitiation complex assembly is the recognition of the TATA element of the promoter by TBP. The binding of TBP to the TATA box induces a DNA bend of ~90° (3, 4). TFIIB can associate with the TBP-promoter complex (5). Mammalian TFIIF, which is composed of the subunits RAP74 and RAP30, directly binds to RNAPII and has been shown to participate in recruitment of the enzyme to the preinitiation complex (6, 7). TFIIE, which is also composed of two subunits, called TFIIE56 and TFIIE34, is involved in the melting of promoter DNA at the transcription initiation site through a mechanism that is ATP-independent (8, 9). Finally, TFIIH, which has kinase and helicase activities, mediates the ATP-dependent melting of the promoter DNA in the region of the initiation site and is involved in the transition between the initiation and elongation states of the complex (10-15). Recent results describing both the structure of the basal transcription machinery and the topological organization of the preinitiation complex have considerably improved our understanding of transcription initiation mechanisms. Determination of the atomic structure of yeast RNAPII at 2.8 angstroms resolution and that of elongating yeast RNAPII at 3.3 angstroms by Kornberg and co-workers (16, 17) has revealed key features of both the interaction between the enzyme and template DNA and the basis of its catalytic activity. Analysis of the molecular organization of the preinitiation complex using site-specific protein-DNA photo-cross-linking has provided insights on the topology of the preinitiation complex containing RNAPII and the general transcription factors (18-26). Recently, we have proposed a topological model, the DNA wrapping model, which describes transcription initiation by RNAPII (23, 26, 27). This model accounts for our photo-cross-linking data and several additional data obtained in various laboratories.
The DNA wrapping model stipulates that a role for the general transcription factors is to help wrap the promoter DNA in the preinitiation complex in such a way that a torsional strain is progressively developed upstream of the transcription start site and results in the partial unwinding of the DNA helix. This region of unwound DNA is used as a substrate by the single-stranded DNA helicases of TFIIH that catalyze open complex formation (26). First isolated as a general transcription factor, TFIIA has a rather controversial role in transcription initiation. Human TFIIA is composed of three subunits: α (35 kDa), β (19 kDa), and γ (12 kDa) (28-31). The α and β subunits are encoded by the same gene and are produced by posttranslational cleavage of a precursor (29, 30). In yeast, TFIIA is composed of only two subunits encoded by two different genes, TOA1 and TOA2 (32). The N-terminal part of the polypeptide produced from the TOA1 gene is homologous to the human α subunit, and the C-terminal part is homologous to the β subunit (29, 30). TOA2 encodes a polypeptide homologous to γ (28, 31). The posttranslational cleavage of human α/β has been demonstrated to be non-essential because wild-type activity can be recovered with uncleaved recombinant α/β and γ renatured together (33). TFIIA is not essential for basal transcription in vitro, but it has been shown to stimulate basal transcription in a variety of systems (28, 33-37). TFIIA binds TBP and increases the affinity of TBP for the TATA box (36-38). TFIIA can displace certain repressors, including Dr1-DRAP1/NC2, topoisomerase 1, HMG1, and Mot-1, from the TFIID complex, indicating that TFIIA is involved in antirepression (39-43). Human TFIIA also plays a role in activated transcription, being required for the functioning of some activators (28, 29, 31, 33, 34, 44-47). For example, TFIIA binds to the activator Zta and mediates its stimulation of TFIID binding to the TATA box (28). Similarly, TFIIA enhances the activation of transcription by the activators Sp1, VP16, and NTF1 (34). The activators VP16 and Zta, which bind TFIIA, stimulate the assembly of a TFIIA-TFIID-promoter complex, consistent with the roles in vivo of these factors in activated transcription (46). The functions of TFIIA in transcriptional antirepression and activation have been separated and are associated with distinct subunits of the factor (35). Subunits β and γ are essential for antirepression, whereas α is not. Conversely, all three subunits are required for activation. Previous photo-cross-linking experiments performed with a TBP-TFIIA-promoter complex have revealed that TFIIA makes promoter contacts both in the region of the TATA box and upstream of it (18, 20). We have now determined the position of TFIIA in a preinitiation complex assembled in the presence of TBP, TFIIB, TFIIE, TFIIF, and RNAPII. Our results indicate that TFIIA makes promoter contacts not only on the TATA box and upstream of it in the −40 region, as is observed in the TBP-TFIIA-promoter complex, but also in the +26 region. Given the two extreme promoter positions approached by TFIIA and the small size of TFIIA (67 kDa), these results suggest that TFIIA is located near the cross-point of the wrapped DNA structure, where it can simultaneously contact nucleotides −40 and +26. TFIIF, TFIIE, and RNAPII also cross-link to the −40 and +26 positions (23, 26), suggesting that TFIIA may directly interact with these factors.
We report here that TFIIA directly interacts with RAP74, TFIIE56, and TFIIE34, in addition to the previously determined interaction with TBP. Furthermore, we use an abortive initiation assay to provide evidence that the stimulatory effect of TFIIA on basal transcription is exerted through a stimulation of the activity of RAP74 and TFIIE34 at the initiation stage of transcript formation.

Protein-DNA Photo-cross-linking-The synthesis of the photoreactive nucleotide N3R-dUMP, the preparation of the probes, and the conditions for binding reactions were as described (55). Two photoprobes containing the modified nucleotide at positions −39/−40 and +26 were used. For each probe, the concentration of poly(dI-dC) in the binding reactions was optimized to favor specific over nonspecific binding. A typical reaction with all the factors contained 200 ng each of TBP, TFIIB, RAP30, RAP74, TFIIE56, TFIIE34, rTFIIA, and RNAPII. UV irradiation, nuclease treatment, and SDS-PAGE analysis of radiolabeled photo-cross-linking products were performed as described previously (55).

Protein-Protein Interactions-Protein-protein interactions were analyzed essentially as we previously described (22). RAP74 wt, RAP74 deletion mutants, RAP30, TFIIE34, TFIIE56, RNAPII, TFIIB, TBP, and bovine serum albumin (BSA) were immobilized on Affi-gel 10 (Bio-Rad) at a concentration of 1 to 5 mg/ml resin. Microcolumns were made with approximately 20 μl of this resin. A volume of 50 μl of nTFIIA (Fig. 2), recombinant TFIIAα/β, recombinant TFIIAγ (Fig. 3), or rTFIIA (Fig. 4), containing 100 ng of nTFIIA or 200 ng of TFIIAα/β, TFIIAγ, or rTFIIA, was then loaded on the different columns. The flowthrough was collected and the columns were successively eluted with 50 μl of ACB buffer (10 mM Hepes, pH 7.9, 0.2 mM EDTA, 20% glycerol and 1 mM dithiothreitol) containing 0.1 M, 0.3 M, and 0.5 M NaCl. Aliquots of the input, the flowthrough, and the various salt elutions were analyzed by SDS-PAGE and revealed by silver (Fig. 2 and Fig. 4) or zinc staining (Fig. 3). The intensity of the bands was evaluated using the UN-SCAN-IT software. TFIIA, or one of its subunits, was considered to bind a particular column when the intensity of the 0.3 M salt band was higher than the intensity of both the 0.1 M band and the flowthrough band; in contrast, when the intensity of the 0.3 M band was lower than that of the 0.1 M or the flowthrough band, we considered that TFIIA was not binding to the column (this criterion is sketched below). The specificity of TFIIA binding was assessed by comparing the binding of the TFIIA subunits to the binding of a contaminant polypeptide of the nTFIIA preparation (Fig. 2).

Gel Mobility Shift Assay-Plasmid DNA containing the adenovirus major late promoter (AdMLP) was digested with the restriction enzymes BamHI and DraI, and the 110-base pair fragment containing the promoter was filled in using the Klenow fragment of DNA polymerase in the presence of [α-32P]dGTP. Gel mobility shift assays were performed as described previously (56). Complexes were assembled using highly purified TBP (20 ng) and TFIIA (120 ng).

Transcription Assay-Transcription assays were performed as described previously (57). TBP (120 ng), TFIIB (120 ng), RAP30 (120 ng), RAP74 (260 ng), TFIIE34 (160 ng), TFIIE56 (240 ng), RNAPII (660 ng), and various amounts of TFIIA were incubated with 500 ng of the supercoiled DNA template containing the AdMLP from nucleotides −50 to +10 fused to a G-less cassette. Under these conditions a 391-nt run-off transcript is produced.
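The binding call described in the protein-protein interaction section is a simple comparison of band intensities. The following sketch formalizes that rule; the function name and the densitometry values are illustrative assumptions, not part of the published analysis pipeline.

```python
# Binding call from affinity-column elution profiles (illustrative sketch).
# TFIIA is scored as "bound" when the 0.3 M NaCl elution band is more
# intense than both the 0.1 M elution band and the flowthrough band.

def is_bound(flowthrough: float, elution_01m: float, elution_03m: float) -> bool:
    """Apply the band-intensity criterion used for the affinity columns."""
    return elution_03m > elution_01m and elution_03m > flowthrough

# Hypothetical densitometry values (arbitrary units) from scanned gels.
print(is_bound(flowthrough=120, elution_01m=90, elution_03m=310))  # True -> bound
print(is_bound(flowthrough=400, elution_01m=150, elution_03m=80))  # False -> not bound
```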
Abortive Initiation Assay-Templates were prepared by annealing two 80-base pair DNA oligonucleotides carrying the strands of the AdMLP.

TFIIA Is Located Near the Cross-point of the Wrapped DNA Structure in the Initiation Complex-We have previously shown that TFIIA assembled with TBP on the AdMLP (e.g., a TBP-TFIIA-promoter complex) cross-linked to positions −31/−29, −25/−30, −39/−40, and −42 (18). We have now analyzed the position of TFIIA in a preinitiation complex composed of TBP, TFIIB, RAP74, RAP30, TFIIE56, TFIIE34, and RNAPII using site-specific protein-DNA photo-cross-linking. Both TFIIAα/β and TFIIAγ cross-linked in the region of the TATA box to photoprobe −31/−29 (data not shown), upstream of it to photoprobe −39/−40 (Fig. 1A, left panel), and downstream of it to photoprobe +26 (Fig. 1A, right panel). The cross-linking of TFIIA to positions −39/−40 required the presence of TBP but not that of RAP30 (Fig. 1A), TFIIB, or RNAPII (data not shown). The cross-linking of TFIIA to position +26, however, required the presence of TBP and RAP30 (Fig. 1A, right panel), as well as that of TFIIB and RNAPII (data not shown). These results indicate that promoter contacts by TFIIA in the −39/−40 region do not necessitate assembly of a preinitiation complex containing TFIIB, TFIIF, and RNAPII (e.g., a TBP-TFIIA-promoter complex is sufficient), whereas the promoter contact by TFIIA in the +26 region requires the assembly of a preinitiation complex (e.g., a TBP-TFIIA-TFIIB-TFIIF-RNAPII-TFIIE-promoter complex) in which promoter DNA adopts a wrapped structure (see Fig. 1B for a schematic representation). The cross-linking of TFIIA to promoter regions downstream of the transcription initiation site (+10 to +30) was not unexpected. According to the DNA wrapping model, the DNA helices upstream of TATA (−40/−60 region) and downstream of the initiation site (+10/+30 region) are juxtaposed in space. Our results provide additional support for the notion that promoter DNA is wrapped in the initiation complex and indicate that TFIIA is localized near the cross-point of the wrapped DNA structure.

TFIIA Directly Interacts with TBP, RAP74, TFIIE56, and TFIIE34-In the context of a preinitiation complex assembled with TFIIA, TBP, TFIIB, TFIIE, TFIIF, and RNAPII, we obtained cross-linking of TFIIA to photoprobes that are also cross-linked by other components of the complex. More specifically, TFIIE34, RAP74, and RAP30 cross-link to photoprobe +26, while Rpb2, TFIIE34, RAP74, and RAP30 cross-link to photoprobe −39/−40 (23, 26). These observations indicate that TFIIA is in close proximity to these factors in the preinitiation complex, suggesting that TFIIA could directly interact with TFIIE, TFIIF, and RNAPII. To test this hypothesis, nTFIIA was chromatographed over different affinity columns containing immobilized RAP74, RAP30, TFIIE56, TFIIE34, TFIIB, and RNAPII. Columns containing TBP and BSA were used as positive and negative controls, respectively, because TBP has been shown to interact with TFIIA (36, 37). The flowthrough was collected in each case and the columns were successively eluted with buffer containing 0.1, 0.3, and 0.5 M NaCl. TFIIA (α, β, and γ subunits) was retained on the TFIIE56, TFIIE34, RAP74, and TBP columns but not on the TFIIB, RAP30, RNAPII, and BSA columns (Fig. 2). Contaminant bands of the TFIIA fraction were visible in the flowthrough of all the columns. The binding of TFIIA to the affinity columns is most easily visualized by examining the elution of the β and γ subunits.
To further characterize these interactions, we next used the individual subunits of TFIIA in our affinity chromatography experiments. TFIIAα/β and TFIIAγ were individually chromatographed on affinity columns containing immobilized RAP74, TFIIE34, and TFIIE56. BSA was added to the input as an internal negative control. Fig. 3 shows that TFIIAα/β, but not TFIIAγ, interacts with RAP74 and TFIIE34. TFIIAα/β did not bind to the TFIIE56 column, whereas TFIIAγ was retarded on the TFIIE56 column, suggesting a weak interaction. This finding is surprising in view of the observation that nTFIIA binds strongly to the TFIIE56 column. Perhaps this is due to the association of TFIIAα/β and TFIIAγ resulting in a conformational change that favors binding to TFIIE56. Two groups have previously reported interactions between TFIIA and TFIIE; both used recombinant TFIIA, not the natural protein. In agreement with our results, Yamamoto et al. also detected an interaction between TFIIA and TFIIE.

RAP74 Contains Two Distinct TFIIA-binding Domains-To determine the domain or domains of RAP74 responsible for the interaction with TFIIA, a series of RAP74 deletion mutants were immobilized on different affinity columns and rTFIIA (α/β and γ renatured together) was chromatographed through each column. BSA was added to the input as an internal control, and a BSA column served as a negative control. Fig. 4A shows some representative data, and a summary is presented in Fig. 4B. All the N-terminal fragments of RAP74, except for RAP74-(1-75), which contains only the first 75 amino acids of the polypeptide, were bound by rTFIIA (Fig. 4A). Because rTFIIA did bind to RAP74-(1-136) but not to RAP74-(1-75), our results define a first domain of interaction between these two proteins that encompasses amino acids 76-136. We next used the C-terminal fragments of RAP74 in our affinity chromatography experiments. RAP74-(207-517), RAP74-(358-517), and RAP74-(407-517) were all bound by rTFIIA, indicating the existence of a second interacting domain. To delineate this domain more precisely, two additional mutants, RAP74-(363-444) and RAP74-(363-409), were used. rTFIIA bound well to RAP74-(363-444) but not to RAP74-(363-409) (Fig. 4A). These results define a second TFIIA-interacting domain of RAP74 located between amino acids 410 and 444. The existence of an additional putative TFIIA-binding domain in the central RAP74 region was ruled out by passing rTFIIA over columns containing RAP74-(136-258) and RAP74-(258-356); in each case, rTFIIA was not retained on the column. The two TFIIA-interacting domains of RAP74 that we identified are both localized in conserved regions of the protein, domain 76-136 in conserved region I and domain 410-444 in conserved region III (see Fig. 4B for a schematic representation).

TFIIA Stimulates the Activity of TFIIE34 and RAP74 in Transcription Initiation-Several reports have shown that TFIIA can stimulate basal transcription by RNAPII in vitro but is not essential for the basal transcription reaction (28, 33-37). For example, recombinant TFIIA increased the formation of a 391-nt run-off transcript from a supercoiled template carrying the AdMLP fused to a G-less cassette in the presence of TBP, TFIIB, TFIIE, TFIIF, and core RNAPII (Fig. 6A).

FIG. 6. Stimulation of basal transcription by TFIIA. A, run-off transcription assays were performed on a supercoiled template carrying the AdMLP using TBP, TFIIB, TFIIE, TFIIF, and RNAPII in either the absence or the presence of increasing amounts of TFIIA (100, 200, and 400 ng). The position of the accurately initiated transcript (391 nt) is indicated. B, abortive initiation assays were performed on a synthetic double-stranded oligonucleotide carrying the AdMLP using TBP, TFIIB, RAP30, and RNAPII in either the absence or the presence of RAP74 alone, RAP74 and TFIIE34, or RAP74 and TFIIE56. Increasing amounts of TFIIA were added to the reactions. The positions of the abortive transcripts (4-10 nt) are indicated. C, quantification of the stimulatory effect of TFIIA on abortive initiation. The intensities of the bands corresponding to the abortive transcripts from 4-6 experiments were quantitated using a PhosphorImager. The measured intensities were used to calculate, in each case, the ratio of the amount of transcript produced in the presence of TFIIA to that produced in its absence (fold stimulation).

To test whether or not TFIIA acts on the initiation step of the transcription reaction, we developed an abortive initiation assay in which an 80-base pair double-stranded oligonucleotide carrying the AdMLP was used to drive transcription initiation in the presence of RNAPII and the general transcription factors. In this assay, only abortive transcripts of 2 to 10 nt in length can be synthesized because the transcription reaction is performed in the absence of GTP on a template that harbors a G at position +11 (see Fig. 6B). Abortive initiation was shown to be promoter-specific because DNA templates with mutations in the AdMLP that impair transcription in run-off assays did not support synthesis of abortive transcripts. Under our reaction conditions, abortive initiation minimally requires the presence of TBP, TFIIB, RAP30, and RNAPII (Fig. 6). With this minimal set of factors, the addition of TFIIA does not stimulate the formation of abortive transcripts (Fig. 6, B and C). When either RAP74 alone, or RAP74 and TFIIE34, are added to the system, the formation of abortive transcripts is increased, indicating that these two factors are involved in transcriptional initiation (Fig. 6, data not shown, and Refs. 58, 61, 62). The addition of TFIIA to reactions containing either RAP74 alone, or RAP74 and TFIIE34, in addition to TBP, TFIIB, RAP30, and RNAPII, had a stimulatory effect (Fig. 6, B and C), indicating that TFIIA can stimulate the initiation stage of the transcription reaction. Because TFIIA stimulates transcription initiation only when RAP74 and TFIIE34 are present in the reaction, our results suggest that TFIIA enhances the functions of TFIIF and TFIIE to stimulate the basal transcription reaction. The role of TFIIA in RNAPII transcription is not completely resolved. However, several reports suggest that an important function of this factor is to act as a co-factor in transcriptional activation (28, 29, 33, 34, 44-47). In this paper, we establish the existence of structural and functional interactions between TFIIA and both TFIIE and TFIIF within the initiation complex. TFIIE and TFIIF have been shown to be involved in the melting of promoter DNA near the initiation site during open complex formation (9, 58). Considering that an activator such as Gal4-VP16, whose full activity requires the presence of TFIIA, can stimulate promoter melting near the initiation site (63), it is possible to envision an activation mechanism in which TFIIA functions as a bridge between the activator proteins and the general transcription factors TFIIE and TFIIF.
This connection may help to explain how upstream activators influence and stimulate the biochemical events occurring at the transcription start site.
Symmetry of "Twins" The idea of construction of twin buildings is as old as architecture itself, and yet there is hardly any study emphasizing their specificity. Most frequently there are two objects or elements in an architectural composition of "twins", among which there may be various symmetry relations, mostly bilateral symmetries. The classification of "twins" symmetry in this paper is based on the existence of bilateral symmetry, in terms of the perception of an observer. The classification includes both 2D and 3D perception analyses. We start analyzing a pair of twin buildings with a projection of the architectural composition elements in the 2D picture plane (plane of the composition), and we distinguish four 2D keyframe cases based on the relation between the bilateral symmetry of the twin composition and the bilateral symmetry of each element. In 3D perception, for each 2D keyframe case there are two sub-variants, with and without a symmetry plane parallel to the picture plane. The bilateral symmetry is dominant if the corresponding symmetry plane is orthogonal to the picture plane. The essence of the complete classification is the relation between the bilateral (dominant) symmetry of the architectural composition and the bilateral symmetry of each element of that composition.

Introduction Symmetry is one of the oldest visual, aesthetic, stylistic and functional characteristics of architecture. When talking about symmetry in architecture, it primarily refers to geometric symmetry. Until modern and contemporary architecture, symmetry was a dominant and universal principle in the architecture of all civilizations. Modern architects have been trying to create distance from all previous historical styles, and asymmetry has become acceptable as a new and unexplored field of design. Through postmodern and contemporary architecture, symmetry has received new innovative interpretations and visualizations (see [1-3]). Symmetry is one of the fundamental principles of the universe and nature and can be found everywhere [4,5]. Symmetry from the universe and nature is reflected in all kinds of human creativity and science [6]. Writing about a bridge, Benjamin said that symmetry evoked a form of affective neutrality [7].
The study of symmetries, patterns, and repetitions is the central topic of the book On Growth and Form [8]. Symmetry of an object exists if, after all geometrical transformations are applied, there is at least one unchanged feature or property of that object [8]. Asymmetry is the complete absence of any type of symmetry (see also [9-11]). Kim Williams distinguishes the seven types of symmetry most often used in architecture: bilateral symmetry, rotation and reflection, cylindrical symmetry, chiral symmetry, similarity symmetry, spiral or helical symmetry, and translational symmetry. To achieve different types of symmetry, a variety of spatial transformations may be used, as well as combinations thereof. Mitra and Pauly analyzed some of those transformations and their application in architectural design, Figure 1 [12]. Bilateral symmetry is one of the most common types of symmetry of architectural and urban form, where the halves of a composition mirror each other [13]. Bilateral symmetry in the plane can be defined as an organizing system that reflects either a plan or an elevation around a central line or axis [14]. Often, instead of bilateral symmetry, it is called mirror symmetry [15]. "The popularity of bilateral symmetry is probably an expression of our experience of nature, and in particular with our experience of our own bodies. As many cultures believe that God created man in His own image, architecture has in turn probably been created in the image of man" [13].

Almost all living creatures are essentially symmetrical, and identical individuals (twins) are "symmetrical" to each other. Fascination with the phenomenon of "twins" was one of the main ideas in the architecture of all civilizations. In architectural buildings, "twins" are usually characterized by bilateral (mirror) symmetry. We also consider the chirality of a building object and the chirality of the twin composition itself (Figure 2). Chirality is found in two objects that are each other's mirror image and cannot be superimposed, such as our hands [13,16,17]. Such objects usually have "left" and "right" forms. A chiral object cannot possess bilateral symmetry itself, nor any other symmetry transformation that changes the orientation of the figure. In two dimensions, every figure that possesses an axis of symmetry is achiral. In three dimensions, every object that possesses a plane of symmetry or a center of symmetry is achiral [18]. People have the ability to detect symmetry very fast, and this ability has been a topic of interest to psychologists and philosophers (e.g., [19]). More about human symmetry perception can be found in Tyler's works (e.g., [20,21]). Humans are able to easily recognize bilateral symmetry (e.g., [22]), and they usually connect the symmetry of an object with the existence of bilateral symmetry. About the perception of chirality, Branndt said: "Studies in psychology of aesthetics have indeed addressed the issue of differences in the perception of left and right objects, and found that such differences do exist" [23].
Symmetry of Twins The notion of symmetry has changed through history [9], but the essence of the idea and the geometric settings of twin buildings have remained the same. In this paper, the symmetry of "twins" is analyzed in terms of contemporary interpretations of symmetry in architecture. Twin buildings, or towers, form an architectural composition of at least two similar elements (building objects), arranged in some geometric relation to each other. This composition can be seen as one complex object. Symmetry relations among twin buildings are usually multiple or composite, within which some dominant type of symmetry can be noticed. We can distinguish the symmetry of each building element and the symmetry of the architectural twin composition. We analyzed the twin composition having in mind that "twins" are usually characterized by bilateral symmetry. The classification is created from the point of view of an observer who perceives the architectural composition while standing in the imaginary symmetry plane of that composition. The dominant (or primary) bilateral symmetry is the symmetry of the composition orthogonal to the imaginary "plane of the twin composition", while the bilateral self-symmetry of each building object is secondary. For defining and analyzing the different cases, the following relation can be established: bilateral/non-bilateral symmetry of the composition versus bilateral/non-bilateral self-symmetry of the building elements.

Architecture is the image we see [24], and an architect's ideas address the observer through that image, since the perception of objects in space is based on viewing 2D images. An observer is not able to see the complete architectural composition at once. The observer should stand near the imaginary symmetry plane [25] of the composition, in a position that allows the perception of symmetry. This position can be considered the twins symmetry keyframe of the observer's visual perception. Such an approach implies the existence of mutually parallel planes of symmetry of the architectural twin composition and of the building objects (the intersection of the symmetry planes is at infinity).

Analyzing a pair of twin buildings, which is the most common case in architectural practice, we start with an image view-projection of the architectural composition elements in the (frontal) 2D view. These cases correspond to the projection of parallel planes of symmetry, which are orthogonal to the frontal view. In a 2D image, the observer can notice mirror symmetry axes instead of a plane of symmetry, and we distinguish four 2D keyframe cases: case A. The composition is bilaterally symmetric and the objects in the composition are bilaterally symmetric too (Figure 3A). case B. The composition is not bilaterally symmetric but each object in the composition is bilaterally symmetric (Figure 3B). case C. The composition is bilaterally symmetric, but none of the objects in the composition is bilaterally symmetric (Figure 3C). case D.
The composition is not bilaterally symmetric and none of the objects in the composition is bilaterally symmetric either (Figure 3D). Having only these 2D images in mind, we are not yet sure about the symmetry of the whole composition, but from the architectural point of view we should have some impression. Very often, this view will match the dominant symmetry of the composition, not only in terms of symmetry planes but also in terms of chirality. For example, an observer sees case D as a chiral composition of chiral objects. As we will see later, this does not have to be true, since there may be some other planes of symmetry that are not visible to the observer or not dominant in the composition or in a single object. There are many examples of visual chirality in architecture [16,26] that are not chiral from the mathematical point of view.

In terms of geometry, the human body and the bodies of most living creatures are bilaterally symmetrical. Mobility does not interfere with the bilateral symmetry of the human body. Symmetry cases A and B in architecture are comparable with the geometrical symmetry of identical and fraternal twins at rest. Symmetry case C is comparable with the geometrical symmetry of identical twins in motion, and symmetry case D with the symmetry of identical or fraternal twins in motion (Figure 4). Identical twins, standing next to one another, can be bilaterally symmetrical at rest and chiral in motion. About this aspect of symmetry, Hargittai said: "Different shapes have different symmetries, and the shapes that develop in nature and appear in human-made objects are closely related to motion. Humans and most, though not all, animals have a left side and a right side. Their bilateral symmetry is a consequence of their mode of motion" [15]. More about symmetries of non-rigid shapes (like human bodies) can be found in Raviv's work in this field [27,28]. In contemporary architecture, the concept of dynamic architecture has been developed, where each floor of a tower rotates around a vertical axis independently. Those buildings can be defined as non-rigid shapes in architecture. A 3D model of David Fisher's Dynamic Tower in Dubai was used for the illustration shown in Figure 5. We can analyze the previously defined four cases of "twins" symmetry, shown through the motion of the floors. As we said, the classification of "twins" symmetry in this paper is based on the existence of bilateral symmetry, in terms of the perception of an observer, who is not always able to see a complete architectural composition, or at least not at once. The observer should be in a position to have the possibility of perceiving the symmetries necessary for the proposed classification. Sometimes, this approach results in a wrong perception. For example, in the complex twin composition in Figure 6, it is not easy to get the whole picture of the twin composition due to the diversity of the objects themselves, their symmetries, and a well-chosen position of the objects in the composition. If the observer moves around the building, he perceives other geometric characteristics of the architectural composition and its elements as well, and forms a complete picture of the existing symmetries. A complete classification of twins' symmetry must include both 2D and 3D perception behavior analyses, to avoid illusory perception. We will consider the concept of twin composition based on bilateral symmetry and chirality of the composition and of the building elements themselves. All objects that are bilaterally symmetric are achiral, but there are achiral objects with other symmetries that render an object achiral (such as inversion symmetry), although these are less relevant to architecture [16].
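Since each 2D keyframe case is fully determined by two yes/no observations (is the composition bilaterally symmetric? is each element bilaterally symmetric?), the classification can be restated as a simple lookup. The sketch below is only a compact restatement for illustration; the function name and usage are not part of the original paper.

```python
# The four 2D keyframe cases as a function of two observations:
# (1) whether the whole twin composition is bilaterally symmetric, and
# (2) whether each building element is bilaterally symmetric.

def twin_case(composition_bilateral: bool, elements_bilateral: bool) -> str:
    """Map the two bilateral-symmetry predicates to cases A-D."""
    table = {
        (True,  True):  "A",  # bilaterally-bilateral
        (False, True):  "B",  # non-bilaterally-bilateral
        (True,  False): "C",  # bilaterally-non-bilateral
        (False, False): "D",  # non-bilaterally-non-bilateral
    }
    return table[(composition_bilateral, elements_bilateral)]

print(twin_case(True, False))  # "C", e.g. mirror-paired chiral towers
```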
In Figure 7 the previous four 2D cases are shown in isometric view, together with the corresponding sub-variants. Here we represent only two sub-variants for each case, based on the existence of mutually orthogonal planes of symmetry. In each sub-variant in row number 1, a common symmetry plane of the architectural composition and of the building objects exists. That plane is parallel with the observer's starting picture plane. All these compositions are achiral. In all these cases the building units are achiral too, since all of them possess a symmetry plane. But notice that this symmetry plane is not the dominant symmetry of the twin composition, since it lies in the plane of the twin composition. An observer is not able to notice the whole twin composition and this plane of symmetry at the same time. Similarly, cases B1 and D1 are more likely to be seen as chiral compositions than achiral ones, since the view in which one can recognize the left and the right half of the twin composition is parallel with the symmetry plane of the composition. Sub-variants in row number 2 do not possess this (frontal) symmetry plane. The A2 and C2 twin compositions are achiral but mutually different, since in A2 the building objects are achiral and in C2 the building objects are chiral. Cases B2 and D2 represent chiral twin compositions, but in case B2 the building objects are achiral and in case D2 they are chiral.

Case A In this case there is an architectural twin composition that is bilaterally symmetric, and each of the building objects is bilaterally symmetric too. Therefore, the composition is achiral, as are its building elements. The symmetry of the architectural composition is bilateral (S_b), as is the symmetry of the building objects (s_b).

In 3D perception behavior there are two sub-variants, with (A1) and without (A2) a symmetry plane parallel to the picture plane of the observer's twins symmetry keyframe position. That plane is orthogonal to the dominant plane of symmetry of the whole (Figure 8). From the aspect of the dominant symmetry, case A in the geometry of the composition and its building parts can be described as bilaterally-bilateral symmetry in both sub-variants (and also as achiral-achiral). In addition to the bilateral symmetry, some other types of symmetry may be present, considering the multiple-symmetry characteristics of architecture. The light beams of the WTC memorial are bilaterally symmetrical to each other, and each of them is bilaterally self-symmetrical (Figure 9). There is also cylindrical symmetry of each of the light beams. Similar examples are the Petronas Towers, shown in the same figure. Most historical examples of the architectural composition of twin buildings are based on case A. These examples can be seen in all civilizations and through all styles of architecture. Figure 10 shows the twin elements of the gate of Kalemegdan fortress in Serbia, the Twin Pagoda temple in Taiyuan (China) and Trier Saint Peter's cathedral in Germany.
The towers of the Pagoda temple are identical twin buildings, Figure 10. Saint Peter's Cathedral is an example of a complex symmetry case, which includes case A and case B, explained above. The two radial tower elements (in the corners of the Cathedral) are identical, with bilaterally-bilateral symmetry, but the composition of the two prismatic steeples is not bilaterally symmetrical. In that case, there is no bilaterally-bilateral symmetry of the two twin elements of the building.

Case B In this case there is an architectural composition of two buildings, each of which is bilaterally self-symmetric with mutually parallel planes of symmetry, but the architectural composition does not have a plane of symmetry parallel with those planes. Some other types of symmetry may be present. Their context and their purpose make them twin objects.

In 3D perception behavior there are two sub-variants, with and without a symmetry plane parallel to the picture plane of the observer's twins symmetry keyframe position, Figure 11. In B1, that plane of symmetry is parallel to the plane of symmetry of the whole composition and does not have the role of the dominant symmetry. Case B in the geometry of the twin buildings can be described, from the aspect of dominant symmetry, as non-bilaterally-bilateral symmetry. From a strictly geometric point of view, B1 is bilaterally-bilateral symmetry, and B2 is non-bilaterally-bilateral symmetry. B1 is an achiral composition, but for the observer it is rather visually chiral (because of the position of the mirror plane, which is not easily visible), while each building object is achiral.

The Cathedral of Chartres in France has two bilaterally self-symmetrical towers, which are not mutually bilaterally symmetrical. The West Gate of Belgrade in Serbia consists of two towers of similar structure, one of which is higher (Figure 12). In both examples, bilateral self-symmetry of each part of the building exists in their spatial structure. The symmetry of their architectural composition is not bilateral, but because of their structure, function and context they can be considered "twins". Sometimes, this case of symmetry of "twins" is part of complex symmetry relations, as in the Cathedral shown in Figure 12. Within this case we also recognize objects that are similar or simply scaled in order to produce more dynamic structures. Sometimes such objects are connected with elements that do not fit any symmetry and very often are asymmetric, but this does not prevent the observer from seeing the composition as strongly symmetric.

One of the specific examples of case B symmetry of "twins" is the Sydney Opera House. The shells are segments of a sphere, thus similar in shape while differing in size and inclination [13]. The extracted symmetries include reflections, as well as general similarities that involve uniform scaling, rotation, and translation. Each of the two "twin" elements of the Sydney Opera House is bilaterally self-symmetrical, but in the geometrical composition of these elements bilateral similarity symmetry does not exist (Figure 13). Instead, rotational symmetry composed with scaling is present. There is a third element of this structure, much smaller and in the background. It can be defined as a third twin element, but an observer at the previously defined position cannot see all three of them at once.
Case C In this case there is an architectural composition of two buildings, or two parts of the same building, which are mutually mirror-symmetrical, but the building objects do not have a mirror plane parallel with the dominant mirror plane of the composition. The whole composition is mirror-symmetric and achiral.

In 3D perception behavior there are two sub-variants, with and without a symmetry plane parallel to the picture plane of the observer's twins symmetry keyframe position, Figure 14. In case C1 each building object is achiral in 3D, but in case C2 these building objects are chiral in 3D (Figure 14). In both sub-variants, the twin composition is achiral, since there is at least one symmetry plane of the twin composition. Case C in the geometry of the twin buildings, from the aspect of dominant symmetry, can be described as bilaterally-non-bilateral symmetry. From a strictly geometric point of view, C1 is bilaterally-bilateral symmetry, and C2 is bilaterally-non-bilateral symmetry. Asymmetrical objects and symmetry breaking can be of high interest to architects and viewers, introducing a new dynamic component. However, since in this paper we are interested only in twin compositions, there must be some noticeable symmetries or similarities among them. Very often, chiral objects in architecture are based on rotation combined with translation or scaling, which produces spiral and helicoidal structures that have orientation; twin chiral objects can then be mirror-reflected, so that the observer can easily notice two different orientations.

Since mirror reflection changes the orientation of an object, a chiral object does not possess bilateral (mirror) symmetry. In some cases building objects have bilateral self-symmetries that the observer cannot perceive from the observation position lying in the reflection plane of the whole composition (case C1). An example is the Bahrain WTC towers, where each of the towers has a bilateral self-symmetry hidden from the observer standing in the symmetry plane of the architectural composition. This is therefore an example of case C (C1) of symmetry of "twins". The Almaty Twin Towers in Kazakhstan are a typical example of the same case of "twins" symmetry (Figure 15).

Case D In this case there is an architectural composition of two buildings, but there is no dominant symmetry plane of the composition or of the building objects, so there is no symmetry plane orthogonal to the plane of the composition. Some other elements of symmetry (translation, rotation) may be present. This case of symmetry is characteristic of modern and especially contemporary architecture. In 3D perception behavior there are two sub-variants, with and without a symmetry plane parallel to the picture plane of the observer's twins symmetry keyframe position, Figure 16. Case D in the geometry of twin buildings can be described, from the aspect of dominant symmetry, as non-bilaterally-non-bilateral symmetry. From a strictly geometric point of view, D1 is bilaterally-bilateral symmetry, and D2 is non-bilaterally-non-bilateral symmetry. D1 is an achiral composition, but for the observer it is rather visually chiral (because of the position of the mirror plane, which is not easily visible), while each building object is achiral.
Figure 17 illustrates one more possibility for a twin composition. There is a horizontal symmetry plane, which can be found in the example shown in Figure 6, the Azrieli Center in Tel Aviv, Israel. A horizontal symmetry plane may be mutual for all elements in an architectural composition. The role of such a plane of symmetry is to connect different building objects in a twin composition. Such a plane of symmetry is rarely used separately from the described cases. It can be considered a special case of D2, where there are no other planes of symmetry except this one. Figure 18 shows contemporary examples of this case. There is the characteristic absence of the buildings' bilateral self-symmetry and of the bilateral symmetry of the architectural composition, and yet these are twins that we call emphatic. The Velo Towers are maybe the best example of this assertion. The "Kyssen" Towers in Gothenburg are a competition entry for Scandinavia's tallest skyscraper in Gothenburg. The two asymmetrical buildings are unequivocally "twins". Hang Lung Plaza, designed for Shenyang in China, is a large commercial complex. Depending on the position of the observer, its buildings can be observed as four pairs of "twins", all of which can be classified as case D. In many of these examples a similarity between the twin objects can be noticed. In this case the objects have only similarity symmetry, without any kind of bilateral symmetry of the architectural composition or bilateral self-symmetry. This concept of architectural design is an aesthetic characteristic of the largest number of contemporary architectural compositions of "twins". It can be seen as the contemporary, and probably the future, trend in architectural design.

Conclusions The idea of construction of twin buildings is as old as architecture itself. Twin objects can exist as a complex unit with sub-units (buildings), or as elements of the same object. Frequently there are two objects or elements in an architectural composition of "twins" that can be analyzed as a complex unit. In relation to the above, and considering the existence of bilateral symmetry (as the most frequent type of symmetry in architecture) and chirality, the authors have suggested a classification of "twins" symmetry. This classification is created from the point of view of an observer who perceives the architectural composition while standing in an imaginary symmetry plane of that composition, exactly as the designers planned. That position can be defined as the observer's twins symmetry keyframe, with a 2D perception behavior aspect. Some other symmetry relations may be present in a twin-building architectural composition, which the observer can perceive when walking around the buildings. That is the 3D perception behavior aspect. A complete classification of twins symmetry includes both 2D and 3D perception behavior analyses, to avoid illusory perception. Such an analysis distinguishes four cases of the symmetry of "twins", which are marked as cases A, B, C and D, each with two sub-cases presented in this paper. The essence of these cases is the relation between the bilateral symmetry of an architectural composition and the bilateral symmetry of each element. That relation is based on the existence or non-existence of the dominant bilateral symmetry, identified as: case A. bilaterally-bilateral symmetry; case B. non-bilaterally-bilateral symmetry; case C. bilaterally-non-bilateral symmetry; case D. non-bilaterally-non-bilateral symmetry.

Figure 1. Examples of regular structures created by geometrical transformations.
Figure 2. Mirror copies of achiral and chiral objects.
Figure 5. Dynamic twin towers in cases (A), (B), (C) and (D) (3D model of Dubai Dynamic tower was used for the illustration).
Figure 8. 3D perception behavior of sub-cases of symmetry case A.
Figure 11. 3D perception behavior sub-cases of symmetry case B.
Figure 14. 3D perception behavior sub-variants of symmetry case C.
Figure 16. 3D perception behavior sub-variants of symmetry case D.
Figure 17. 3D perception behavior special sub-variant of symmetry case D.
Genome-wide identification and gene expression pattern of ABC transporter gene family in Capsicum spp. ATP-binding cassette (ABC) transporter genes act as transporters for different molecules across biological membranes and are involved in a diverse range of biological processes. In this study, we performed a genome-wide identification and expression analysis of genes encoding ABC transporter proteins in three Capsicum species, i.e., Capsicum annuum, Capsicum baccatum and Capsicum chinense. Capsicum is a valuable horticultural crop worldwide as an important constituent of many foods while containing several medicinal compounds including capsaicin and dihydrocapsaicin. Our results identified the presence of a total of 200, 185 and 187 ABC transporter genes in the C. annuum, C. baccatum and C. chinense genomes, respectively. Capsaicin and dihydrocapsaicin contents were determined in green pepper fruits (16 dpa). Additionally, we conducted different bioinformatics analyses including ABC gene classification, gene chromosomal location, cis-element analysis, conserved motif identification and gene ontology classification, as well as expression profiling of selected genes. Based on phylogenetic analysis and domain organization, the Capsicum ABC gene family was grouped into eight subfamilies. Among them, members within the ABCG, ABCB and ABCC subfamilies were the most abundant, while the ABCD and ABCE subfamilies were less abundant throughout all species. ABC members within the same subfamily showed similar motif composition. Furthermore, common cis-elements involved in transcriptional regulation were also identified in the promoter regions of all Capsicum ABC genes. Gene expression data from RNA-seq and reverse transcription semi-quantitative PCR analysis revealed development-stage-specific expression profiles in placenta tissues. This suggests that ABC transporters, specifically the ABCC and ABCG subfamilies, may play important roles in the transport of secondary metabolites such as capsaicin and dihydrocapsaicin to the placenta vacuoles, affecting their content in pepper fruits. Our results provide a more comprehensive understanding of the ABC transporter gene family in different Capsicum species while allowing the identification of important candidate genes related to capsaicin content for subsequent functional validation.

Introduction In the current study, we report a genome-wide identification and characterization of ABC transporter genes in three Capsicum species (i.e., C. annuum, C. baccatum and C. chinense) including sequence alignment, phylogenetic analysis, chromosomal location and expression profiles of C. annuum and C. chinense. Our results lay a foundation for further functional characterization of each ABC transporter gene among Capsicum species and provide useful information for better understanding the role and evolution of this gene family in higher plants.

Plant material C. annuum cv. CM334, C. baccatum cv. PBC81 and two varieties of C. chinense (Pimenta da neyde and Naga morich) were grown in triplicate samples in an experimental field at West Virginia State University. Fruits at 6, 16 and 25 days post-anthesis (dpa) were collected from all cultivars and stored at -80˚C. Capsaicin and dihydrocapsaicin contents in green pepper fruits (16 dpa) were quantified with the 1200 series HPLC system (Agilent Technologies, Santa Clara, CA) [5].
Identification of the ABC transporter genes in pepper To identify all members of the ABC transporter gene family in the pepper genomes, the proteomes for the three Capsicum species were downloaded from the pepper genome platform (PGP) (http://passport.pepper.snu.ac.kr/?t=PGENOME) [2]. A local BLASTP search was used to query the full-length amino acid sequences of ABC transporter proteins from Arabidopsis (https://phytozome.jgi.doe.gov/pz/portal.html) [29]. All output genes were collected and confirmed by using the software HMMER3.0 [30]. Capsicum genes were searched with the PF00005 ABC transporter domain, PF01061 ABC-2 transporter domain and PF00664 ABC transporter transmembrane region domain; the ABC transporter domains were confirmed using the Pfam web server (http://Pfam.sanger.ac.uk/) [31]. Genes with E-value > 1E-05 and redundant genes were excluded. Candidate genes were analyzed in the SMART database (http://smart.embl-heidelberg.de/smart/set_mode.cgi?NORMAL=1) [32] to verify the presence of the NBD and TMD domains. Genes with NBD and TMD domains were considered members of the ABC transporter family in pepper, and the coding sequences (CDS) were downloaded from the PGP database. The Jackhmmer tool (https://www.ebi.ac.uk/Tools/hmmer/search/jackhmmer) [33] was used to classify the ABC transporter gene family into subfamilies by using the UniProt reference proteome database with E-value = 0.01 for sequence matches and 0.03 for hit matches. The protein size, molecular weight (MW) and theoretical isoelectric point (pI) of each ABC transporter were computed by using the proteome database and sequence analysis tools on the ExPASy Proteomics Server (http://expasy.org/) [37]. For cis-element analysis, all promoter sequences (1,500 bp upstream of the initiation codon "ATG") of ABCs were extracted from the pepper genome. Then, the cis-regulatory elements of the promoters for each gene were identified by using PLACE: A database of plant cis-acting regulatory DNA elements (http://www.dna.affrc.go.jp/PLACE/) [38]. Protein sequence motifs were identified by using Multiple Em for Motif Elicitation (MEME) (http://meme-suite.org/tools/meme) [39]. The analysis was performed with a maximum number of motifs of 10 and an optimum motif width of ≤50. Discovered MEME motifs were searched in the Expasy-Prosite database with the ScanProsite server (https://prosite.expasy.org/scanprosite/) [40].

Gene ontology (GO) annotation and modeling of ABC proteins The functional annotation of ABC transporters was performed using Blast2GO software (http://www.blast2go.com). The amino acid sequences of ABC genes were imported into the Blast2GO program to execute three steps: 1) BLASTp against the NCBI non-redundant protein database, 2) mapping and retrieval of GO terms associated with the BLAST results, and 3) annotation of GO terms associated with each query to relate the sequences to known protein function.

Identification of syntenic ABC paralog pairs and gene synteny analysis The syntenic ABC transporter paralog pairs were identified by searching for gene duplications across all the species with the following criteria: 1) genes with >70% coverage of the alignment length; 2) genes with >70% identity in the aligned region; and 3) a minimum of two duplication events considered for strongly connected genes [41]. For each paralog pair, the non-synonymous substitution rate (Ka), the synonymous substitution rate (Ks) and the ω (= Ka/Ks) were estimated by using KaKs_Calculator 2.0 [42].
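A minimal sketch of the first two paralog-pair criteria just described is given below, assuming alignment records that carry coverage and identity percentages; the record layout, gene names and function name are hypothetical illustrations, not the actual pipeline code (the duplication-event criterion would be checked in a separate, downstream step).

```python
# Filter candidate syntenic paralog pairs by the criteria in the text:
# >70% coverage of the alignment length and >70% identity in the aligned region.

def passes_paralog_filter(coverage: float, identity: float,
                          min_coverage: float = 70.0,
                          min_identity: float = 70.0) -> bool:
    """Return True if an aligned gene pair satisfies both thresholds."""
    return coverage > min_coverage and identity > min_identity

# Example alignment records: (gene_a, gene_b, coverage %, identity %).
pairs = [("CaABC12", "CaABC47", 85.2, 91.0),
         ("CaABC03", "CaABC88", 65.0, 95.5)]
kept = [(a, b) for a, b, cov, idt in pairs if passes_paralog_filter(cov, idt)]
print(kept)  # [('CaABC12', 'CaABC47')]
```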
The duplication date of the paralog pairs was estimated by the formula T = Ks/(2λ), assuming a clock-like rate (λ) of 6.96 × 10⁻⁹ synonymous substitutions per site per year [43].

Transcriptome sequencing of C. chinense green fruits Green fruits (16 dpa) from two different cultivars of C. chinense were used for whole-transcriptome sequencing. Total RNA was isolated from the pooled tissues of three biological replicates for each cultivar with the Plant RNA mini spin kit (Macherey-Nagel). The quantity and quality of the total RNA were analyzed with the Agilent 2100 Bioanalyzer and the Qubit 4 Fluorometer (Invitrogen), respectively. The RNA sequencing libraries were prepared by using the NEBNext Ultra II RNA Library Prep Kit according to the manufacturer's protocol. The mRNAs were enriched by using magnetic beads with Oligo (dT), then fragmented into shorter fragments with a fragmentation buffer. The first-strand cDNA was synthesized from the fragmented mRNA with a random hexamer primer. Sequencing adapters were added to the resulting cDNAs, and sequencing primers were used for library amplification. The insert size of the library was analyzed with the Agilent 2100 Bioanalyzer (Invitrogen), and the Qubit 4 Fluorometer (Invitrogen) was used for library quantification. The RNA sequencing library from each sample was sequenced on the Illumina NextSeq 500 platform with paired-end sequencing. The resulting image files were converted to FASTQ with 2x75-bp reads. The Illumina reads were deposited in the Sequence Read Archive (NCBI) under accession number PRJNA526219.

Analysis of the C. chinense transcriptome to study ABC transporter genes The sequencing adapters and low-quality reads (Phred score QV<30) were removed by using cutadapt (https://cutadapt.readthedocs.io/en/stable/guide.html) [44] and sickle (https://github.com/najoshi/sickle) [45], respectively. The quality-filtered reads were mapped to the C. chinense reference genome [2] by using the mem algorithm of the BWA tool [46] to generate SAM alignments. The read count table for C. chinense genes was created for all the samples by using the SAM alignments and the HTSeq package [47]. Gene expression was quantified from the read counts as reads per kilobase of transcript per million mapped reads (RPKM). The RPKM value for each gene was calculated from the read count table, the total number of reads and the gene length (kb) (the duplication-date and RPKM formulas are sketched in code below). The ABC transporters in C. chinense (CcABCs) were identified by homology search against the CDS sequences from C. annuum by using a BLASTN algorithm (identity ≥ 98% and coverage ≥ 70%). The gene annotation of the ABC transporter genes identified from C. chinense was confirmed by using the BLASTx algorithm against the NCBI non-redundant protein database.

Expression pattern of ABC transporters in C. annuum and C. chinense The RNA-seq gene expression data for placenta tissues (6 dpa, 16 dpa, 25 dpa) from C. annuum cv. CM334 were retrieved from the RNA-seq data published by [2]. A BLASTN search was performed (identity ≥ 98% and coverage ≥ 70%) to identify the ortholog genes between the C. annuum ABC (CaABC) and C. chinense (CcABC) transporters. The RPKM expression values for the identified CaABC protein genes were extracted from the dataset, and a gene expression heatmap was generated for the C. annuum and C. chinense orthologs by using the ClustVis web tool (https://biit.cs.ut.ee/clustvis/) [48].

RNA isolation and quantitative real-time PCR (qRT-PCR) Total RNA was isolated from pepper fruits (6, 16 and 25 dpa) by using the Plant RNA mini spin kit (Macherey-Nagel).
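The two quantitative steps described in this section reduce to short formulas; the sketch below implements them under the stated assumptions (λ = 6.96 × 10⁻⁹ substitutions per synonymous site per year; RPKM computed from raw counts, total mapped reads and gene length). The example numbers are placeholders, not values from the study.

```python
# Duplication age from synonymous divergence: T = Ks / (2 * lambda),
# with lambda the clock-like synonymous substitution rate per site per year.
LAMBDA = 6.96e-9

def duplication_age_mya(ks: float, rate: float = LAMBDA) -> float:
    """Convert a Ks value to an estimated duplication date in million years ago."""
    return ks / (2.0 * rate) / 1e6

# RPKM = reads mapped to the gene, normalized by sequencing depth
# (per million mapped reads) and by gene length (per kilobase).
def rpkm(read_count: int, total_mapped_reads: int, gene_length_bp: int) -> float:
    """Reads per kilobase of transcript per million mapped reads."""
    return read_count / (total_mapped_reads / 1e6) / (gene_length_bp / 1e3)

print(f"{duplication_age_mya(0.35):.1f} Mya")      # ~25.1 Mya
print(f"{rpkm(1500, 20_000_000, 2400):.2f} RPKM")  # 31.25
```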
RNA isolation and quantitative real-time PCR (qRT-PCR)

Total RNA was isolated from pepper fruits (6, 16 and 25 dpa) by using the Plant RNA mini spin kit (Macherey-Nagel). First-strand cDNA was synthesized with 1 μg total RNA per sample by using the SuperScript First-Strand Synthesis system (Invitrogen). To identify, in the three Capsicum genomes, the orthologs of the markers previously reported by [5] for the ABC transporter family, the CDS sequences of the CA06g14430 and CA11g09150 genes were downloaded from the Sol Genomics database (https://solgenomics.net/) [49] and a BLASTN search was performed (identity ≥ 98% and coverage ≥ 70%) across the three pepper genomes. Gene-specific primers for the selected Capsicum ABC transporter orthologs were designed by using Primer3Plus (http://www.primer3plus.com/). The qRT-PCR analysis involved a StepOnePlus Real-Time PCR System (Applied Biosystems, Foster City, CA, USA) with a total volume of 20 μL containing 1 μL cDNA template, 2 μL forward and reverse primers (10 μM), 10 μL SYBR Green PCR Master (ROX) (Roche, Shanghai) and 7 μL sterile distilled water. For each sample, three replicates were run to compute the average Ct values. The data were analyzed by the 2^-ΔΔCt method [50]. Relative gene expression was normalized against that of the endogenous control β-tubulin [51].
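A worked sketch of the 2^-ΔΔCt calculation described above (all Ct values are hypothetical placeholders; the reference gene stands in for the β-tubulin control):

# Sketch: relative expression by the 2^-ddCt method. Ct values are hypothetical.

def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    dct_sample = ct_target - ct_ref              # normalize to the reference gene
    dct_calibrator = ct_target_cal - ct_ref_cal  # same normalization for the calibrator
    ddct = dct_sample - dct_calibrator
    return 2 ** (-ddct)

# Target Ct 24.0 vs reference Ct 20.0 in the sample; 26.0 vs 20.0 in the calibrator:
print(relative_expression(24.0, 20.0, 26.0, 20.0))  # 4.0 -> 4-fold higher expression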
Capsaicin and dihydrocapsaicin content in pepper

Capsaicinoids are responsible for the hot or burning sensation of chili; pungency and flavor are the primary properties of pepper fruits [52]. About 80% to 90% of the capsaicinoids in chili fruit are represented by capsaicin and dihydrocapsaicin, and their accumulation occurs over a relatively short period during the latter stages of fruit development [53]. C. chinense is one of the hottest chili peppers in the world; in general, chili species and varieties contain about 1% capsaicin, but this content can range from 2% to 4% [54]. In this study, the highest capsaicin and dihydrocapsaicin contents were found for C. chinense cv. Naga morich, with 14.67 mg g-1 and 5.54 mg g-1 dry weight (DW) tissue, respectively. On the other hand, 4.62 mg g-1 and 1.08 mg g-1 DW tissue were recorded for C. chinense cv. Pimienta de neyde, and 0.823 mg g-1 and 0.393 mg g-1 DW tissue in C. annuum cv. CM334. The lowest values across all the species were for C. baccatum, with contents of 0.55 and 0.15 mg g-1 for capsaicin and dihydrocapsaicin, respectively (Fig 1). Capsaicinoid biosynthesis is carried out principally in the placental tissues of pepper fruits by the action of several enzymes [55,56]. Recently, NGS approaches, including genotyping-by-sequencing (GBS)-based GWAS and RNA-seq analysis of placenta tissues, have been used for the identification of novel genes involved in the capsaicinoid biosynthesis pathway. Moreover, these approaches have allowed the study of the mechanisms involved in pungency modulation in pepper. Liu et al. [57] predicted the function of three novel genes, i.e., dihydroxyacid dehydratase (DHAD), threonine deaminase (TD) and prephenate aminotransferase (PAT), which play key roles in the capsaicinoid biosynthetic pathway. In a recent association mapping study carried out by Nimmakayala et al. [5], significant SNPs associated with capsaicin content and fruit weight were identified. This study revealed that genes such as an Ankyrin-like protein, an IKI3 family protein, a pentatricopeptide repeat protein and ABC transporters of the G and C subfamilies are important players regulating capsaicin content. The SNPs associated with the ABC transporter gene family were S6_203416571 and S11_83592400 in the loci CA06g14430 and CA11g09150, respectively (Fig 2). In particular, the SNP S6_203416571, located on chromosome 6, showed a high allelic effect (Fig 2A).

Genome-wide identification of ABC proteins in pepper

To identify the ABC protein family in pepper, we performed a BLASTP search of the three pepper genomes from the PGP database. A total of 572 genes potentially encoding ABC proteins were identified: 200 from C. annuum (CaABC), 185 from C. baccatum (CbABC), and 187 from C. chinense (CcABC) (Table 1). To investigate the evolutionary relationship between Capsicum species and Arabidopsis ABC transporter proteins (AtABC), we performed phylogenetic analysis of the pepper and Arabidopsis ABC proteins. The protein sequences of the Capsicum ABC genes and the AtABC proteins (119 protein sequences containing the ABC transporter domain) were aligned by using MEGAX, and an unrooted phylogenetic tree was constructed by the neighbor-joining (NJ) method with 1000 bootstrap replications (Fig 3). Extensive research on ABC transporters has resulted in several naming schemes. In most cases, the transporters were named on the basis of mutant characteristics. Thus, different names were assigned to the same subfamily or to selected members with common characteristics. To conform to the plant and animal ABC communities, the Human Genome Organization (HUGO) nomenclature system [10] was adopted, designating the ABC transporter subfamilies of all putative ABC proteins as ABCA-G and ABCI. Overall, Capsicum ABC proteins followed the same pattern as Arabidopsis (Fig 3). Based on phylogenetic association with the AtABCs and using the jackHmmer tool, Capsicum ABCs were classified into the eight subfamilies previously mentioned. The number of ABC members within each subfamily in Capsicum was similar to that in other plants such as Arabidopsis [13], B. rapa [20] and tomato [59]. In order of abundance, the ABCG, ABCB and ABCC subfamilies were the most prevalent groups throughout all species, whereas the smallest numbers of members were in the ABCD and ABCE subfamilies; for this last subfamily, only one member was identified in all three Capsicum species analyzed. For convenience, the ABC transporters were named CaABC1 to CaABCn for C. annuum based on their subfamily group and were named similarly for the other species. The Capsicum ABC proteins vary substantially in size and in the sequences of their encoded regions, as well as in their physicochemical properties, across all species. The locations of the ABC domains within the proteins also differ. The physical locations, coding sequence lengths, protein characteristics and topologies of the ABC transporters identified for each species are in S1-S3 Tables. The domain organizations of ABC transporters are almost as varied as their functions: proteins of the ABCA-ABCD subfamilies have a forward domain organization (TMD-NBD), whereas proteins of the ABCG and ABCH subfamilies contain the reverse domain organization (NBD-TMD). ABCE and ABCF proteins contain only two NBDs and were characterized as soluble proteins. ABCI proteins generally possess only one domain, mainly an NBD or a TMD. Topological diversity is one of the unique characteristics of ABC proteins. The ABC transporters are divided into three common arrangements: full-sized transporters, half-sized transporters and a third type that has no TMDs but two NBD domains [10]. A typical full-sized ABC protein consists of ≥1,200 amino acid residues [14]. The 200 CaABC proteins ranged from 52 to 1831 amino acid residues, the CbABCs from 89 to 1864 residues and the CcABCs from 86 to 1965 residues.
Nevertheless, it is important to mention that all of them possess at least one NBD; thus, they can be classified as ABC transporters and were included in this study. Some of the pepper ABC proteins with shorter sequences might be pseudogenes or unannotated genes. Such shorter sequences were also found in the genome-wide analyses of ABC transporters in tomato, B. rapa and pineapple [20,58,59]. Among the 572 ABC transporters, 212 lack a TMD and were considered soluble ABC proteins. The remaining 360 members possess TMDs and were considered ABC transporters across all species. Overall, 134 Capsicum ABC proteins are full-sized proteins possessing (TMD-NBD)x2 domains: 46, 40 and 48 for C. annuum, C. baccatum and C. chinense, respectively. Among these members, 22, 24 and 28, respectively, exhibit a forward topology (TMD-NBD), whereas 24, 26 and 20 have a reverse topology (NBD-TMD). In total, 135 ABC transporters were classified as half-sized, having forward (TMD-NBD) or reverse (NBD-TMD) orientations. Among the half-sized Capsicum ABC proteins, 26 exhibit a forward and 109 a reverse domain orientation. A total of 233 ABC transporters were considered quarter-sized or single-structure proteins: 184 have an NBD domain, and 49 a TMD domain. Capsicum ABC proteins were also classified under an ABC2 (NBD-NBD) structure: 26 have the NBD-NBD structure and 3 the TMD-TMD structure. In total, 37 ABCs were uniquely characterized, with NBD-TMD-NBD, TMD-NBD-TMD and TMD-TMD-NBD-TMD-TMD structures. The differences in the topology domain orientations might have resulted from gene duplication during evolution or evolved to render specific physiological functions under biotic or abiotic stress [60].
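To make these size classes concrete, the sketch below classifies a protein by its ordered domain architecture; the rules mirror the classes described above, and the example architectures are illustrative, not records from S1-S3 Tables.

# Sketch: classifying ABC proteins by ordered domain architecture.
# Example architectures are hypothetical.

def classify_abc(domains):
    """Return the size class for an ordered list of 'TMD'/'NBD' domains."""
    arch = "-".join(domains)
    if arch in ("TMD-NBD-TMD-NBD", "NBD-TMD-NBD-TMD"):
        orient = "forward" if arch.startswith("TMD") else "reverse"
        return f"full-sized ({orient})"
    if arch in ("TMD-NBD", "NBD-TMD"):
        orient = "forward" if arch == "TMD-NBD" else "reverse"
        return f"half-sized ({orient})"
    if arch in ("NBD", "TMD"):
        return "quarter-sized / single-structure"
    if arch == "NBD-NBD":
        return "soluble ABC2 (NBD-NBD)"
    return "atypical"

for d in (["TMD", "NBD", "TMD", "NBD"], ["NBD", "TMD"], ["NBD"], ["NBD", "NBD"]):
    print("-".join(d), "->", classify_abc(d))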
Chromosomal locations and syntenic Capsicum ABC paralog pairs

A total of 544 (95.1%) ABC transporters were physically mapped on all 12 chromosomes of pepper, and the other 28 genes were located on unanchored scaffolds (Fig 4). The ABCG, ABCB and ABCC subfamilies are unevenly distributed across all chromosomes. The ABCD (on chromosomes 2 and 12) and ABCE (on chromosome 1) subfamilies are the most conserved across all the Capsicum species. Among all chromosomes, chromosome 3 of C. annuum contains the highest number of ABCs, 32 (16%), followed by chromosome 6 (14.5%). Among all species, chromosomes 3, 6 and 12 contain the highest numbers of ABCs, with the minimum on chromosome 10. The distribution pattern of ABC transporters on individual chromosomes also indicated certain physical regions with a relatively high accumulation of multiple ABC gene clusters, such as the lower ends of the arms of chromosomes 3 and 6 for all species. The distribution of ABC transporters differs among the three genomes. Some ABC gene clusters occur in one species but not in the other genomes; for example, on chromosome 2, ABCs were present in the upper chromosome part in C. annuum and C. baccatum but were absent in C. chinense. On the other hand, ABCG and ABCB family members were found at the lower part of chromosome 4 in C. chinense and C. baccatum but not in C. annuum. A clear example is at the upper end of chromosome 8, where a cluster of genes corresponding to ABCB and ABCG members is present in C. chinense, and only one gene appears in C. baccatum, with no presence reported in C. annuum. Syntenic paralogs are genes that are located in syntenic fragments. The syntenic paralog pairs were identified between and within the three Capsicum genomes; in total, 14 syntenic ABC paralog pairs were identified (Table 2). Six paralog pairs were between species, and the remaining eight were intra-species. Among the eight intra-species duplications, three segmental duplication gene pairs were intra-chromosomal, located on chromosomes 5 and 3 for C. baccatum and chromosome 6 for C. annuum. Only one segmental duplication, CaABCF8-CaABCF2 in C. annuum, involved two different chromosomes. Moreover, the duplicated paralog ABC transporter pairs belong to the same subfamily. The Ka/Ks (ω) ratios for segmental duplications ranged from 0.06 to 1.57, with a mean of 0.81. In total, 11 of the 14 paralog pairs were under purifying selection, with ω ratios < 1. The ω ratios for 3 syntenic paralogs (21.42%) were >1, which indicates a positive selection on these paralogs. The CaABCF8-CaABCF2 pair had the highest ω ratio, 1.57. The duplication time of Capsicum ABC paralog pairs was estimated by using a relative Ks measure as a proxy for time, and it spanned from 1 to 84 million years ago (MYA), with an average duplication time of ~26 MYA. Multiple copies of genes in a gene family could have evolved due to the flexibility provided by events of whole-genome, tandem and segmental duplication. Gene duplication, segmental or tandem, has been documented in several plant gene families, such as NAC, MYB, F-box, bZIP and ABC transporters [20,61]. The ω ratios for 3 pairs of paralogs were > 1, representing positive selection and fast evolutionary rates in these ABC paralogs at the protein level. This finding differs from other gene families in plants, such as BURP in Medicago and ACD in tomato, which contain a few or even no paralog pairs undergoing positive selection [62,63]. In our study, a relatively large percentage (~21%) of ABC paralog pairs underwent positive selection. We assume that these paralog gene pairs might have evolved in order to acquire new functions and adjust to their living environment. Expression correlation analysis of syntenic ABC paralog pairs across different tissues and under stress treatments could help to reveal their functional roles and evolutionary fates.

Motif composition and cis-elements of Capsicum ABC genes

MEME analysis according to the domain composition of pepper ABC transporter proteins revealed 10 conserved motifs in the ABCA-G and ABCI families (Fig 5 and S4 Table). The lengths of the conserved motifs ranged from 15 to 50 amino acids. Additionally, the number of conserved motifs in each Capsicum ABC transporter ranged from 1 to 8. The information obtained from ScanProsite analysis revealed that the function of most of the motifs was pleiotropic drug resistance, related to the ABCG subfamily. All predicted conserved motifs have properties characteristic of ABC transporters, and the signature motif (LSGGQ) was found in most of the Capsicum ABC transporters. In order to identify putative cis-elements in the Capsicum ABC promoters, 1500 bp of DNA sequence upstream of the start codon (ATG) of the ABC transporters for each species was analyzed by using the Plant Cis-acting Regulatory DNA Elements (PLACE) website. The analysis identified 124 different cis-elements in all Capsicum ABC transporters. A total of 23 common cis-regulatory elements were present across all the promoter regions of the ABC transporters and were highly conserved among all Capsicum species (Table 3).
Four common cis-regulatory elements, CATATGGMSAUR, ASF1MOTIFCAMV, NTBBF1ARROLB and ARFAT, were found to be related to plant hormones, including auxin, auxin response factor (ARF) and Small Auxin-Up RNAs (SAUR), which suggests that these plant hormones could affect the expression of Capsicum ABC transporters and thereby plant growth and development. The WRKY71OS cis-regulatory element is responsive to stresses caused by pathogens. Out of the 23 common cis-regulatory elements, TBOXATGAPB, BOXIIPCCHS, INRNTPSADB and GT1CONSENSUS are thought to be required for transcriptional regulation by light. Two common cis-elements, CCAATBOX1 and LTRECOREATCOR15, were identified as responsive to low temperature, cold, drought and heat shock, which suggests that Capsicum ABC transporters might be involved in the response to abiotic stress.

GO annotation of ABC transporter genes

GO analysis performed with Blast2GO suggested the putative participation of ABC genes in multiple biological processes, molecular functions and cellular components (Fig 6). The GO results indicated the putative participation of Capsicum ABC transporters in transmembrane transport as a principal biological process, as well as in drug transmembrane transport, xenobiotic transport and DNA integration. ATP binding and ATPase activity coupled to the transmembrane movement of substances were the main activities for molecular function. Most of the ABC transporters were classified in the integral component of the membrane for cellular localization, followed by the plasma membrane. Across all species, 18 ABC transporters from C. chinense, 12 from C. annuum and 12 from C. baccatum were localized in the vacuolar membrane and plant-type vacuole. In pepper fruit, capsaicinoids are synthesized exclusively in placental tissue and accumulate in vacuoles of placental epidermal cells [64], so ABC transporters might participate in vacuolar capsaicinoid uptake and transport, affecting the capsaicinoid content of pepper fruits. Capsaicinoid levels are highly dynamic during fruit development. Their levels appear to be influenced by the ontogenetic trajectory of the fruit. Capsaicinoids begin to accumulate from the early stages (10 dpa) of fruit development, peak at about 40 dpa, and then decrease sharply [65]. The late decrease in capsaicinoid content appears to result from high peroxidase activity, which oxidizes capsaicinoids in the presence of hydrogen peroxide (H2O2) [66,67]. A gene, CcABCC12 from C. chinense, was found to have an H2O2 catabolic process as a biological process, resulting in the breakdown of H2O2 (S5 Table), which suggests a detoxification process for H2O2 exclusive to C. chinense and a subsequently high content of capsaicinoids. Another factor that can affect the metabolism of capsaicinoids is mineral nutrition. Nitrogen (N) and potassium (K) are the main mineral players. Nitrogen availability in soil directly affects capsaicin accumulation, since the synthesis of a single capsaicin molecule involves three amino acids: phenylalanine, valine and leucine [68]. By contrast, potassium does not participate in capsaicinoid metabolism; however, it has been reported that an increase in potassium concentration significantly decreases capsaicin levels and leaf nitrogen content in C. chinense [69]. Thus, the level of potassium might indirectly affect capsaicin accumulation via its effects on fruit development [70]. CcABCC1 in C. chinense showed cellular potassium ion homeostasis as a biological process.
The principal function of this biological process involves maintenance of an internal steady state of potassium ions at the level of a cell. In fact, C. chinense was found to have the highest values for capsaicin and dihydrocapsaicin, suggesting that cellular potassium ion homeostasis may indirectly affect capsaicinoid levels in pepper fruits.

Expression profile of ABC transporters in C. annuum and C. chinense

A BLASTN strategy was used to identify, in the three genomes, the orthologs of the capsaicinoid markers previously identified by [5], starting from the CA06g14430 gene from the SGN database. The resulting orthologs were CaABCG28, CbABCG26 and CcABCG37, corresponding to each of the species. For CA11g09150, the orthologs were CaABCC9, CbABCC5 and CcABCC20. The main purpose of gene expression profiling is to determine the genes that are differentially expressed within the organism being studied. In the same way, we used a BLASTN search to identify the orthologs between C. annuum and C. chinense to correlate their expression in placental tissues. In order to characterize the expression patterns of individual Capsicum ABC transporters at different stages (6, 16 and 25 dpa), we used publicly available RNA-seq data for C. annuum cv. CM334 [2]. The RPKM values for green fruits (16 dpa) from two varieties of C. chinense (Naga morich and Pimienta de neyde) and C. annuum cv. CM334 were plotted in a hierarchical heatmap (Fig 7). The C. annuum CM334 variety showed a similar pattern of expression in all placenta tissue stages (Fig 7A). CaABCG11 was expressed at 6 and 16 dpa, with a higher expression at 6 dpa. On the other hand, CaABCB36, CaABCG83 and CaABCG87 were highly expressed at 16 dpa. Most Capsicum ABC transporters presented different expression patterns, whereas a few were similar. Some exhibited stage- and species-specific expression, which suggests that these genes may play specific roles in the relevant stages and Capsicum species. Among 74 genes, 32 were expressed across all placenta tissues at different stages (Fig 7B). The ABC transporters previously described as major markers for capsaicin and dihydrocapsaicin content were mostly expressed in C. annuum cv. CM334. CaABCC9 and CcABCC20 (CA11g09150) were found to be greatly expressed at 16 and 25 dpa, and CaABCG28 (CA06g14430) was found in 25-dpa tissue. By contrast, CcABCG37 (CA06g14430) was greatly expressed only in C. chinense cv. Naga morich at 16 dpa. Mainly, the ABCC and ABCG subfamilies were distributed across different stages; however, only ABCA, ABCB, ABCE, ABCF and ABCI members were expressed in C. annuum cv. CM334, and ABCD members were expressed in C. chinense cultivars. C. chinense varieties at 16 dpa shared the expression of eight genes (CcABCC16, CcABCC21, CcABCG45, CcABCG46, CcABCG51, CcABCG68, CcABCG74, CcABCG84). CcABCG54 was exclusively expressed in Pimienta de neyde, whereas CcABCG12, CcABCG16, CcABCG46, CcABCG51, CcABCG59, CcABCG84 and CcABCD4 were highly expressed in Naga morich. Most of the genes expressed in the C. chinense varieties belonged to the ABCC, ABCG and ABCD families. The ABCA subfamily is not yet fully functionally characterized in plants; it has been reported to be related to pollen and seed germination and maturation [71]. The presence of one full-sized ABCA transporter was exclusive to dicots, including pepper, Arabidopsis [13], tomato [59], B. rapa [20] and B. napus [21], but so far it has not been identified in monocots, such as rice [16] and maize [18]. However, Chen et al. [58] reported one full-sized ABCA transporter in pineapple.
The ABCB subfamily is composed of full-sized or multidrug resistance (MDR) proteins and half-sized proteins, with names such as transporters associated with antigen processing (TAP) and ABC transporter of mitochondria (ATM) [10]. In plants, ABCB is the second largest subfamily. For instance, in Arabidopsis, the ABCB subfamily participates in different processes such as auxin bidirectional transport, phospholipid translocation, stomatal regulation, berberine transport, Fe/S biogenesis and metal stress (Cd and Al) tolerance [72]. AtABCB1, a member of the AtABCB subfamily, has been proposed to participate in auxin transportation, and AtABCB1-overexpressing plants show long hypocotyls [73,74]. ABCE family members are soluble ABC proteins and are also called RNase L inhibitors (RLI). They possess an N-terminal Fe-S domain, which interacts with nucleic acids [75]. Their main functions have been reported to be related to the control of translation and ribosome biogenesis [76]. Similarly, ABCE and ABCF family members are soluble proteins and contain an NBD-NBD domain structure. In Arabidopsis, ABCF (AtABCF3) proteins have been reported to play a role in root growth [77]. The ABCG subfamily, also called pleiotropic drug resistance or white-brown complex proteins, is the largest subfamily in plants. It has been reported that ABCGs transport various phytohormones, including abscisic acid, cytokinin, strigolactone and auxin derivatives [78]. The subcellular localization of full-sized ABCGs is the plasma membrane [79], whereas half-sized ABCGs are complex proteins and have been localized in the plasma membrane, mitochondrial membrane, chloroplast membrane and cytoplasm [18]. The full-sized ABCGs of Arabidopsis, AtABCG32 [80], and of rice, OsABCG31 [81], are involved principally in cuticle formation, while half-sized ABCGs play important physiological roles in cuticle formation, kanamycin resistance, abscisic acid export and pollen development [82-84]. In cotton, GhWBC1, a half-sized white-brown complex member, has been reported to be involved in fiber cell elongation [85]. Shibata et al. [86] reported that the ABCG subfamily may play a key role in the export of antimicrobial terpenoids, such as the sesquiterpenoid phytoalexin capsidiol, for resistance to the potato late blight pathogen Phytophthora infestans in Nicotiana benthamiana. The ABCC subfamily is also called MDR-associated proteins (MRP) because of their function in transporting glutathione- and glucuronide-conjugates in drug resistance (Verrier et al., 2008). Pang et al. [18] reported that most plant ABCCs are characterized as vacuole-localized proteins, and a few of them have been reported to reside on the plasma membrane. Functions of different ABCC members have been characterized in diverse plants; for example, Arabidopsis AtABCC5 [87], maize ZmMRP4 [88] and rice OsABCC13 [89] are implicated in phytate transport. AtABCC1 and AtABCC4 are involved in folate transport, while maize ZmMRP3 and grape VvABCC1 play an important role in anthocyanin accumulation in vacuoles [90,91]. The high expression of the ABCC subfamily in the pepper placental tissue of two species, and its principal function reported in other plants, suggests that a function of the ABCC subfamily in Capsicum spp. is the transport and accumulation of capsaicin in vacuoles of the placental tissue.

Gene expression analysis

We selected CaABCG28, CbABCG26 and CcABCG37 (orthologs of CA06g14430), as well as CaABCC9, CbABCC5 and CcABCC20 (orthologs of CA11g09150), for gene expression analysis by RT-qPCR (Fig 8).
Gene expression was detected throughout all placenta stages and species analyzed. The orthologs of CA06g14430, corresponding to the ABCG family, showed a similar expression pattern in the C. chinense varieties at different stages, but at 16 dpa, the highest relative expression was found for the Naga morich variety and the lowest for C. annuum cv. CM334 (Fig 8A). At 6 dpa, the expression was similar across all cultivars, but it was lower in C. baccatum and C. chinense cv. Pimienta de neyde at 16 dpa. The remaining varieties showed an expression pattern close to that of the ABCC family (Fig 8B). At 25 dpa, the highest expression was found for C. annuum cv. CM334, followed by C. baccatum cv. PBC81, and the lowest for the C. chinense varieties. The different expression patterns between the orthologs of the capsaicinoid markers mentioned above suggest that the expression may be species-specific for each of the ABC subfamilies in Capsicum.

Conclusion

Although the ABC transporter gene superfamily has been widely studied among extant organisms, including plants, the present study is the first to report the presence of 572 putative ABC transporter proteins in the entire pepper genome sequences of three different Capsicum species. Our results provide fundamental and exhaustive information about the pepper ABC transporters through a comprehensive genome-wide identification and expression analysis of this protein family. Based on their evolutionary origin, phylogenetic analysis classified the ABC proteins into 8 main subfamilies (designated A to G, and I). Chromosomal mapping revealed that members of the ABCG, ABCB and ABCC subfamilies were the most abundant, whereas the ABCD and ABCE subfamilies were present in lower abundance. Our results suggest that the ABC transporters, specifically the ABCC and ABCG subfamilies, influence capsaicin and dihydrocapsaicin content in pepper. Indirectly, these two subfamilies may be involved in the transportation of secondary metabolites such as capsaicinoids to the placenta vacuoles for their storage. Moreover, we suggest that the ABCC and ABCG subfamilies play a role in the H2O2 detoxification process to reduce capsaicin degradation, specifically in C. chinense fruits. Our study will provide clues for further research on the evolution of the ABC transporter gene family and its influence on specific biological functions of Capsicum fruits, including plant growth, development and capsaicinoid content in pepper.

Organization of ABC transporters in Capsicum spp.
2019-05-02T13:03:00.947Z
2019-04-30T00:00:00.000
{ "year": 2019, "sha1": "9e1dd6399af84ac58555f16eb51d50a81ff176aa", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1371/journal.pone.0215901", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0aaa210c8b9106fef0fdbe7fa7a4060a1a17f79d", "s2fieldsofstudy": [ "Biology", "Environmental Science", "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
207847638
pes2o/s2orc
v3-fos-license
Beyond low-inertia systems: Massive integration of grid-forming power converters in transmission grids

As renewable sources increasingly replace existing conventional generation, the dynamics of the grid change drastically, posing new challenges for transmission system operations, but also opening up new opportunities, as converter-based generation is highly controllable on faster timescales. This paper investigates grid stability under the massive integration of grid-forming converters. We utilize detailed converter and synchronous machine models and describe frequency behavior under different penetration levels. First, we show that the transition from 0% to 100% can be achieved from a frequency stability point of view. This is achieved by re-tuning power system stabilizers at high penetration values. Second, we explore the evolution of the nadir and RoCoF for each generator as a function of the amount of inverter-based generation in the grid. This work sheds some light on two major challenges in low- and no-inertia systems: defining novel performance metrics that better characterize grid behaviour, and adapting present paradigms in PSS design.

I. INTRODUCTION

The demand for the reduction of the carbon footprint has led to an increasing integration of renewable sources. The replacement of conventional power plants, interfacing the grid via synchronous machines (SMs), with wind and solar generation results in significant changes in power system dynamics. Specifically, as these new converter-based sources replace SMs, the amount of rotational inertia in power systems decreases, accompanied by the loss of the stabilizing control mechanisms that are present in SMs. As a result of this transition, low-inertia power systems encounter critical stability challenges [1]; EirGrid & SONI, for instance, limited the instantaneous penetration of variable renewable energy sources to 55% [2] and recently increased the limit to 67% and set a goal of 75% of fuel-free generation [3]. As of now, certain grids need to preserve a minimum amount of inertia, which implies higher cost and hinders the penetration of renewable generation. New converter control strategies can potentially address these low-inertia system stability issues. These approaches can be split into two categories [1], [4]: grid-following control, where the converter follows the measured frequency and voltage magnitude in the grid (via a synchronizing mechanism such as a phase-locked loop), and grid-forming control, where the converter defines the voltage magnitude and frequency. Given that the first strategy relies on the existence of a well-defined voltage waveform, it cannot fully replace the functionality of the SMs. In this work, we focus on grid-forming converters (GFCs) and their critical role in the transition towards a 100% converter-based grid. Different GFC control strategies have been proposed, such as droop control [5], the virtual synchronous machine [6], dispatchable virtual oscillator control [7] and matching control [8], among others. To the best of our knowledge, various aspects of the integration of GFCs (e.g., the consequent gradual inertia reduction) in a realistic transmission grid model and in an electromagnetic transient (EMT) simulation environment have not been thoroughly explored. In [9] and [10], different grid-forming and grid-following techniques have been tested in simple network models.
However, these studies rely on the IEEE 9-bus system, which lacks sufficient granularity and complexity to fully analyze the transition scenario to GFCs. The objective of this work is to explore the limits of GFC integration at the transmission level, using an EMT simulation of a realistic grid model that fully reflects the existing dynamics. Previous studies suggest that systems exhibit instability [3], [10], [11] when the penetration of non-synchronous generation increases to roughly 70%. The study in [11] only considers grid-following converters, and the work in [10] shows that instability can be caused by adverse interactions of GFCs with the power system stabilizer (PSS) and automatic voltage regulator (AVR). Our work suggests that, given the right control strategies for converter-based generation, a minimum amount of inertia might not be required for grid operation, from a frequency stability perspective. Nonetheless, some controllers can no longer be agnostic to the amount of converter-based generation, as the grid dynamics vary drastically depending on the generation mix. Moreover, we question the suitability of standard frequency metrics, such as nadir and rate-of-change-of-frequency (RoCoF), for converter-dominated grids. This paper does not analyze other critical aspects of low-inertia systems, such as voltage control, responsiveness to faults, etc. The main contributions of this paper are: first, to show that, from a frequency stability perspective and for a particular grid, it is possible to transition from 0 to 100% converter-based generation. Second, we highlight the need for PSS re-tuning based on the continuously changing amount of non-synchronous penetration. Third, we explore how the nadir and RoCoF, measured over different time windows, evolve as a function of the penetration of converter-based generation. These results expose new challenges that have so far been overlooked in the mainstream literature and call for further research to address many open points, such as: are nadir and RoCoF still good descriptors of grid stability? How relevant are fast transients in frequency? Can decentralized PSS structures provide adequate damping under different converter-dominated scenarios?

II. MODEL DESCRIPTION

We start with the description of the grid, SM and GFC models.

A. Transmission grid model

In this work we adopt the transmission grid model from [12], representing a simplified model of the Quebec region, with a total generation capacity of 26 GW. Most of the generation can be found in the North and most of the load in the southern part. Even though the generation mainly consists of hydro-power plants, the model has been selected for this study as the relevant information and the EMT simulation model are publicly available, as provided by Hydro-Quebec. Moreover, it has the right degree of complexity, i.e., it is complex enough to explore the interactions of GFCs and SMs at different levels of inertia, and simple enough to understand the system behaviour. Specifically, the model contains 7 SMs of different sizes, ranging from 5.5 GW to 200 MW. Each SM is represented by an 8th-order, 3-phase dynamical model coupled with a hydraulic turbine, governor, AVR, and multi-band PSS (type 4B). Note that only primary frequency control is implemented in the SM model, and the droop constant of each SM is set to 5%.
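To make the 5% droop setting concrete, the following minimal sketch evaluates the steady-state primary frequency response it implies; the formula is the standard per-unit droop law, and the 0.3 Hz event is an illustrative number, not a result from the model.

# Sketch: steady-state governor droop response in per unit, delta_p = -delta_f / R.
# Values below are illustrative only.

R = 0.05          # 5% droop: 1 p.u. frequency change <-> 20 p.u. power change
F_NOM = 60.0      # Hz, nominal frequency of the modeled (North American) grid

def droop_power_pu(delta_f_hz):
    """Per-unit change in active power for a frequency deviation given in Hz."""
    return -(delta_f_hz / F_NOM) / R

# A 0.3 Hz under-frequency deviation calls for +10% of rated power:
print(droop_power_pu(-0.3))  # 0.1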
As explained later in Section II-C, we extend the model with an HVDC link of 2 GW (existing in the original Hydro-Quebec grid but not in [12]), modeling a contingency that is independent of the penetration level of GFCs. For simplicity, we only consider constant-impedance loads in our model. A simplified version of the grid model is depicted in Figure 1.

B. Grid-forming converter model

The converter-based generation is implemented by means of two-level voltage source converters, stacked in parallel to form large-scale generation units [9, Rem. 1]. The converter DC energy source is a controllable current source, connected in parallel with a resistance (which models the DC losses) and the DC-link capacitance. The switching stage is modelled using a full-bridge 3-phase average model with an AC output filter (see Figure 2), and is coupled to the medium-voltage level via an LV/MV transformer. Each converter is controlled as a grid-forming unit defining the angle, frequency and voltage. For simplicity, in this work we focus on grid-forming droop control (see [5], [9, Sec. III-C]). It is noteworthy that, under a realistic tuning and for a wide range of contingencies, other techniques such as the virtual synchronous machine (VSM) [6], matching-controlled GFCs [8] and dispatchable virtual oscillator control (dVOC) [7] exhibit similar behavior to that of the converters controlled by droop control [9, Sec. IV]. The control block diagrams of the droop strategy appear in Figure 3. For the sake of compactness, we refer the reader to [9, Sec. III] and [13] for further details on the converter control design.

C. Modeling the SM-GFC generation transition

Given that the original model is an aggregated model, the generation transition in this study is carried out in a uniform, gradual way. Each SM is replaced by a collocated combination of SM and GFC, where the rating of each generation unit is defined according to the penetration level. Formally speaking, the ratio of converter-based generation η ∈ [0, 1] is defined as

η = (Σ_i S_gfc,i) / (Σ_i S_SM,i + Σ_i S_gfc,i),   (1)

where S_SM,i denotes the rating of the i-th SM in the combined model and S_gfc,i denotes the rating of the i-th converter. The individual ratings of the combined SM-GFC model replacing the original SM are then adjusted as a function of η:

S_SM,i = (1 − η) S0_SM,i,   S_gfc,i = η S0_SM,i,

with S0_SM,i being the rating of a given SM in the original model (with no GFCs, i.e., η = 0). For η = 0 (resp. η = 1), the GFCs (resp. the SMs) are disconnected from the model (a numerical sketch of this adjustment is given below).

III. RESULTS

We start by defining the contingencies that will be considered. It is expected that, as SMs are being replaced by GFCs, the size of the worst contingency (typically the rated power of the largest SM in the grid) will become smaller, as generation becomes less coarse and more distributed. In our particular case study, this implies that the worst contingency is the loss of the largest SM for low penetration levels (SM 1 or 6), and the HVDC link trip for high penetration levels. Nonetheless, for a fair comparison across different integration levels, we always consider the same contingency size for all values of η. Therefore, the worst contingency is chosen to be the simultaneous loss of the combined generation unit SM-GFC 1 (namely, the loss of 5.5 GW of generation). For completeness, the disconnection of the HVDC link in the model will be considered as well in Sections III-C and III-E.
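The following minimal sketch illustrates the rating adjustment of Section II-C; only the 5.5 GW rating of SM 1 is taken from the text, and the other ratings are placeholders used to show the mechanics.

# Sketch: splitting each original SM rating S0 between an SM and a collocated
# GFC according to the penetration ratio eta, as in Section II-C.

def split_ratings(s0_sm, eta):
    """Return (SM rating, GFC rating) for a penetration ratio eta in [0, 1]."""
    return (1 - eta) * s0_sm, eta * s0_sm

original_ratings_gw = [5.5, 2.0, 1.2]  # SM 1 from the text; the others are hypothetical
eta = 0.8
for s0 in original_ratings_gw:
    s_sm, s_gfc = split_ratings(s0, eta)
    print(f"S0 = {s0} GW -> SM {s_sm:.2f} GW, GFC {s_gfc:.2f} GW")

# The aggregate ratio of converter-based generation recovers eta, as in Eq. (1):
print(sum(eta * s0 for s0 in original_ratings_gw) / sum(original_ratings_gw))  # 0.8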
A. PSS retuning for high penetration levels

It has been conjectured that the generation transition from a SM-dominated grid to a GFC-dominated one is challenging [10], [15], [16]. Indeed, we have observed in our initial results that, starting at 80% GFC penetration, stability is lost. However, we found that re-tuning of the PSSs renders the system stable, at least from a frequency perspective. For η ≤ 0.7, the system is stable under the original PSS structure (multiband PSS4B) and parameters, where all PSS blocks have the same parameters for all units. Roughly speaking, this type of PSS structure defines 3 different frequency bands and their corresponding lead-lag compensators. For the original PSS, these 3 frequency bands are set around 0.2 Hz, 0.9 Hz, and 12 Hz, aimed at global, inter-area and local modes, respectively. For 0.8 ≤ η ≤ 0.9, the PSSs have been modified as follows: the second frequency range has been shifted to 1.2 Hz, and the high-frequency branch has been completely removed, to avoid having the corresponding lead-lag compensator act on the existing GFC fast dynamics. Likewise, the gains of each branch have been reduced by a factor of 5. Based on this successful retuning, it can then be conjectured that, under the massive presence of GFCs, two aspects need to be considered: the PSS action might need to be reduced accordingly (but not fully removed), and the PSS effect at high frequencies, where the response of GFCs is significant, can destabilize the system. Further analysis is needed to derive a more formal conclusion. We emphasize that there should be other re-tuning strategies that successfully stabilize the system, including the more natural choice of different PSS parameters for each SM. Previous works [10], [11] already pointed at the AVR and PSS regulators as the possible cause of instability at high values of η. Note that the modified PSS tuning does not stabilize the system for η ≤ 0.7. Finding a unique set of PSS parameters stabilizing the system for all penetration levels is a challenging task. Indeed, it is unclear whether such settings exist, as the system dynamics and oscillation modes vary drastically depending on the amount of GFCs present in the grid. In practice, it is undesirable to continuously retune the existing PSS controllers in a grid depending on the penetration level. Moreover, the real-time ratio of converter-based generation (and its location) is not accurately known at the plant level, unless the transmission system operator discloses this information. Therefore, either novel robust, adaptive or more centralized PSS structures would be required to guarantee stability independently of the amount of converter-based generation present in the grid.

B. Frequency performance under the worst contingency

Figure 4 illustrates the frequency time series of SM 2 (the closest unit to the event) for the loss of the largest unit, i.e., SM-GFC 1 (see Figure 1). The increasing integration of GFCs significantly improves the frequency nadir, but it degrades the RoCoF when computed over a short time window (more on this topic in the next section). Moreover, the time at which the nadir occurs is also shortened. Although converters do not possess any significant inertia, their fast response curbs the impact of the generator trip on the grid frequency. The behaviour for 80% and 90% is qualitatively different from the rest, due to the PSS retuning. The case of a pure converter-based grid is covered later in Section III-E. Note that a similar behaviour has been observed under the other aforementioned grid-forming techniques.
Remark 1. By enforcing a slow frequency response for the GFCs, mimicking the slow turbine dynamics, GFCs can be made fully compatible with the time-scales of the SMs and their corresponding PSSs (i.e., reducing the time-scale separation of different generation units [10, Fig. 4]). However, fully mimicking the response of a SM would require artificially slowing down the GFC frequency response as well as significantly oversizing the GFCs. A much more viable solution is to adapt the PSS parameters according to the penetration level.

The time series in Figure 4 correspond to the mechanical frequencies of SM 2. For low values of η, these signals are expected to be representative of the bus frequencies across the grid. However, for a GFC-dominated grid, the GFC internal frequencies, being well-defined also in transients, might be more descriptive of the frequencies across the grid. Figure 5 illustrates the post-contingency frequency time series of SMs and GFCs. Interestingly, at low penetration levels, large oscillations appear at the GFCs before they synchronize with the SMs. At high integration levels we observe larger oscillations at the SMs. Further analyses are needed to conclude which set of signals is more relevant to describe the frequency behaviour for different integration levels. In any case, regardless of the integration level, the SM mechanical frequencies are still needed to evaluate potential RoCoF-related issues associated with conventional generation.

C. Evolution of the frequency metrics

While appropriate retuning of the PSS stabilizes the system, the results presented in the previous section suggest that the system dynamics drastically change depending on the GFC integration level. To analyze and characterize this effect, we resort to the standard frequency stability metrics, e.g., the frequency nadir or maximum frequency deviation ||Δω||_∞ and the RoCoF(T), formally defined for generation unit i as

||Δω_i^η||_∞ := max_{t ≥ t0} |ω_i^η(t) − ω_i^η(t0)|,   RoCoF(T) := max_{t ≥ t0} |ω_i^η(t + T) − ω_i^η(t)| / T,   (2)

where t0 is the time when the event occurs, ω_i^η the mechanical frequency at unit i under penetration ratio η, and T is the RoCoF calculation window (a sketch of how these metrics are evaluated on a sampled trace is given below). For ease of exposition, we consider in this subsection the HVDC link trip (see Figure 1), and evaluate these metrics based on the SM frequencies across the grid. Figure 6 depicts the evolution of the frequency metrics for SM 1, 5 and 6 (representatives of each area in the grid) following the loss of 2 GW of generation caused by the HVDC link trip, for different values of η. We consider two RoCoF computation windows, namely T1 = 0.1 s and T2 = 0.5 s (i.e., computing RoCoF using different time windows), denoted as RoCoF(0.1) and RoCoF(0.5). Furthermore, the nadir and RoCoF values corresponding to a particular choice of (η, T1,2) are normalized with respect to the metrics of the all-SM system with the same RoCoF windows (i.e., η = 0 and T1,2). This removes the effect that RoCoF decreases when computed over a longer horizon. From Figure 6, the following conclusions can be drawn:

• For the units SM 1 and 6 (the units far from the event), RoCoF(0.1) deteriorates as η increases, but RoCoF(0.5) improves with respect to the all-SM system.
• For SM 5, adjacent to the event, the RoCoF is less sensitive to the integration level, since the collocated GFC reacts fast enough to be comparable to the SM in the short term.
• In terms of absolute RoCoF values, i.e., not normalized against the all-SM system's RoCoF, SM 5 is the one experiencing the largest RoCoF(0.1) values, as expected (not shown here for space reasons).
• Similar observations were obtained in the previous subsection for the loss of SM-GFC 1, where the SMs in the same region (SM 2, 3 and 4) exhibit the largest RoCoF(0.1) values (see Figure 5).

In other words, as inertia homogeneously decreases across the network, frequency decays faster right after a contingency, leading to larger RoCoF(0.1) values. The GFCs respond more slowly than the instantaneous inertial response of SMs, but fast enough to arrest the frequency decay rate before T = 0.5 s, leading to smaller RoCoF(0.5) values.

Remark 2. A similar analysis can be carried out using the GFC frequencies. There is no clear pattern in the evolution of RoCoF(0.1) for low values of η, since there are large oscillations within this time scale. In this case, the RoCoF(0.1) metric is no longer insightful, and low values might hide large swings. It has been observed that RoCoF(0.5) clearly decreases as η increases.

D. Discussion on the frequency metrics

The presented results emphasize the relevance of the choice of the RoCoF window T, typically chosen to properly reflect the frequency evolution, filter out noise and ignore fast transients, according to the characteristics of a grid [17]. The presence of GFCs leads to new, fast dynamics, and therefore the value of T has to be reconsidered for low-inertia systems. A natural reaction is to reduce the current choices for T (typically between 500 ms and 1 s) to accommodate the fast response of GFCs, but, as explained before, this can lead to misleading conclusions. On the other hand, large values of T might be ineffective for protection devices, as dynamics are much faster under high values of η. High RoCoF values represent a challenge for existing settings of RoCoF relays, some load-shedding schemes, and conventional generation, which in general is not able to withstand sudden changes in speed and might disconnect to avoid damage. Nonetheless, fast transients vanishing in less than 200 ms are not expected to be meaningful for the SMs or RoCoF relays. However, their influence on grid-following converters can be significant, depending on the PLL implementations. Notice as well how the nadir is no longer uniform for all SMs under high penetration levels (e.g., see the time series corresponding to η = 0.9 in Figure 5), caused by fast oscillations appearing adjacent to the event location and prior to GFC synchronization. For such a system, it might be necessary to redefine the nadir metric to filter out these oscillations and obtain a meaningful metric which effectively reflects the severity of the grid contingency. Whether these fast dynamics need to be fully captured, ignored or just partially encapsulated in the metrics requires further in-depth investigation. This would depend on the effect of those fast dynamics across different components in the grid (grid-following devices, conventional generation, industrial loads, etc.).

E. All-GFC grid

We also explore a possible 100% GFC scenario, without the presence of any SM. The controllers are tuned as in the previous section; that is, no modification has been carried out to stabilize the system. We compare in this case the trip of the HVDC link and the disconnection of generator 1. As shown in Figure 7, after a very quick transient all converters synchronize under both contingencies, reaching a steady state before 300 ms. Once again, similar results have been observed under other grid-forming techniques, and combinations thereof.
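A minimal sketch of how the metrics in (2) can be evaluated on a sampled frequency trace; the trajectory below is a synthetic first-order response used purely for illustration, not output of the EMT model.

# Sketch: nadir and RoCoF(T) evaluated on a sampled frequency trajectory.
import numpy as np

dt = 0.01                                   # sampling step, s
t = np.arange(0.0, 10.0, dt)
omega = 60.0 - 0.3 * (1 - np.exp(-t / 2))   # Hz, synthetic post-event trace

def nadir(omega, omega0):
    return np.max(np.abs(omega - omega0))   # maximum frequency deviation

def rocof(omega, dt, T):
    n = int(round(T / dt))                  # samples per RoCoF window
    return np.max(np.abs(omega[n:] - omega[:-n])) / T

print(nadir(omega, 60.0))                   # ~0.3 Hz
print(rocof(omega, dt, 0.1), rocof(omega, dt, 0.5))  # RoCoF(0.1) > RoCoF(0.5)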
Nadir is largely reduced in comparison to the all-SM grid, as the GFCs are orders of magnitude faster than the hydropower plants. For the disconnection of generation unit 1, all generators in that area (2, 3 and 4) experience the largest values of short time-window RoCoF. On the other hand, for the other generators the response is nearly overdamped, and the nadir is equal to the steady-state frequency deviation. This implies that the nadir, as defined in (2), is much larger for those units close to the event. Unlike in SM-dominated grids, in all-GFC grids the nadir can be reached before the generation units synchronize. Therefore, values are not uniform across all units in the grid, and depend largely on the location of the event. Similar conclusions can be reached for the disconnection of the HVDC link, as the RoCoF and nadir values for generator 5 are much larger than for the rest of the generators. These results question again the adequacy of the metrics in (2) for converter-dominated grids. On one hand, large values of T can render RoCoF useless as a metric, since the system might have reached a steady state, and hence RoCoF would just be proportional to the droop coefficient of the grid. On the other hand, small values of T that capture the first swing (around 50 ms for both events) are very impractical and sensitive to noise. Overall, it is unclear whether a metric is required to characterize these fast dynamics, whose effect in the grid might be questionable.

IV. CONCLUSIONS AND OUTLOOK

While GFCs have already been used at a microgrid scale, there exist serious doubts on the stability of large systems as GFCs replace SMs, especially at high penetration levels. This paper has explored the massive deployment of grid-forming converters and its effects on frequency behaviour. The presented results suggest that, under proper controller tuning, it is possible to guarantee frequency stability. Nonetheless, the grid dynamics change drastically, reaching steady state in the sub-second time range, orders of magnitude faster than the original pure-SM system. This has clear implications in terms of nadir and RoCoF, which might imply rethinking the tuning of protection devices and load-shedding schemes. There is also a need for PSS structures that can deal with a time-varying amount of inverter-based generation. To the best of our knowledge, no guidelines can be found for PSS tuning under high-penetration scenarios. Although in this work we have only covered the penetration of converters controlled as grid-forming units, it is expected that a large number of devices will be operated as grid-following units. Large values of short time-window RoCoF might not be meaningful for frequency ride-through schemes in conventional generators or for RoCoF relays. However, grid-following devices will try to synchronize to those fast transients, potentially creating large power transients.
2019-11-07T12:10:53.000Z
2019-11-07T00:00:00.000
{ "year": 2019, "sha1": "7624cfa52d698ec01e09d99fa16942a1905bb3fa", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1911.02870", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "7624cfa52d698ec01e09d99fa16942a1905bb3fa", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Computer Science", "Engineering", "Mathematics" ] }
55911682
pes2o/s2orc
v3-fos-license
The London Workshop on the Biogeography and Connectivity of the Clarion-Clipperton Zone

Adrian G Glover, Thomas G Dahlgren, Sergio Taboada, Gordon Paterson, Helena Wiklund, Andrea Waeschenbach, Amber Cobley, Pedro Martínez, Stefanie Kaiser, Sarah Schnurr, Sahar Khodami, Uwe Raschka, Daniel Kersken, Heiko Stuckas, Lenaick Menot, Paulo Bonifacio, Ann Vanreusel, Lara Macheriotou, Marina Cunha, Ana Hilário, Clara Rodrigues, Ana Colaço, Pedro Ribeiro, Magdalena Błażewicz, Andrew J Gooday, Daniel OB Jones, David SM Billett, Aurélie Goineau, Diva J Amon, Craig R Smith, Tasnim Patel, Kirsty McQuaid, Ralph Spickermann, Stefan Brager

Introduction

Recent years have seen a rapid increase in survey and sampling expeditions to the Clarion-Clipperton Zone (CCZ) abyssal plain, a vast area of the central Pacific that is currently being actively explored for deep-sea minerals (ISA 2016). For signatory nations to the United Nations Convention on the Law of the Sea (UNCLOS), the commercial exploration or exploitation of areas of the seafloor beyond national jurisdiction is regulated in combination by the International Seabed Authority (ISA), established under UNCLOS, and national governments that act as the Sponsoring State to commercial or other organisations that enter into contract with the ISA. There are now 15 contracts signed with the ISA for polymetallic nodule exploration, 8 of these signed in the last 10 years and the most recent by UK Seabed Resources Ltd (UKSRL) for its second contract in March 2016 (ISA 2016). The 6 million km² CCZ is the most active area worldwide for deep-sea mining exploration, and the ISA is currently developing a new environmental regulatory framework for mineral exploitation to be published in 'zero-draft' form in 2016 (S Brager, ISA, pers. comm.).

Critical to the development of evidence-based environmental policy in the CCZ are data on the biogeography and connectivity of species at a CCZ-regional level. With this in mind, the London Workshop on the Biogeography and Connectivity of the CCZ was convened to support the integration and synthesis of data from European Union (EU) CCZ projects, supported by the EU Managing Impacts of Deep-Sea Resource Exploitation (MIDAS) and EU Joint Programming Initiative Healthy and Productive Seas and Oceans (JPI Oceans) projects, individual EU-based contractors and the Natural History Museum in London, a leading centre for marine biodiversity research.

The challenge of biogeography and connectivity in the CCZ

The Clarion-Clipperton Zone is so called as it lies between the Clarion and Clipperton Fracture Zones, topographical highs that extend longitudinally across almost the entire Pacific (Fig. 1). There is no strict definition of the region, but it has come to be regarded as the area between these fracture zones that lies within international waters and encompasses the main areas of commercial interest for polymetallic-nodule mining. Exploration licenses issued by the ISA extend from 115° W (the easternmost extent of the UK-1 claim) to approximately 158° W (the westernmost extent of the Chinese COMRA claim). We therefore adopt here a working definition of the CCZ as the box: 13° N 158° W; 18° N 118° W; 10° N 112° W; 2° N 155° W.
Conducting regional-level studies of the biogeography and connectivity of the CCZ is an immense challenge for several reasons, e.g.: (1) the great depth and long distances to home ports; (2) the great physical heterogeneity of the region (the CCZ is not a homogeneous abyssal plain of mud; it is a region of 6 million km² with a bathymetric variation of at least 1200 m, punctuated by many thousands of seamounts); and (3) there are almost no data on habitat characteristics such as food availability or prevailing oceanographic currents. However, these are only minor issues compared with the greatest problem: the almost complete lack of taxonomic synthesis or standardisation across the region, based either on traditional morphological data or modern DNA (Glover et al. 2015). An example is given through a simple search of the Ocean Biogeographic Information System (OBIS 2016) centred on a 300,000 km² (5°) area of ocean between the Netherlands and the United Kingdom, which provides 182,939 records of polychaetes, a common benthic animal in all oceans. In comparison, the same size box centred on the UK-1 exploration claim area provides only four polychaete records, none of which are benthic (Fig. 2). The lack of taxonomic synthesis has resulted in a situation where biologists are unable to identify the animals collected, and successive research cruises, both contractor- and academic-led, are unable to provide identified species occurrence data to global database systems such as OBIS or the Global Biodiversity Information Facility (GBIF). In turn, this has prevented a synthesis of data to produce, for example, a biogeographic map of the CCZ.

Despite these problems, new data are now emerging from both academia and contractor-led CCZ programs that utilise genetic data in the form of short DNA sequences from typical invertebrate markers such as the cytochrome oxidase I mitochondrial gene (COI), 16S mitochondrial ribosomal RNA coding genes and 18S or 28S nuclear genes. Recent publications from the BGR-led and UKSRL-led projects have made these data available on the open data repository NCBI GenBank (Janssen et al. 2015, Glover et al. 2016b, Dahlgren et al. 2016) (Fig. 3). Although full taxonomic descriptions of these fauna are still lacking in most instances, the availability of genetic data does permit a regional-level analysis of putative species as these data become available. In addition, a large number of DNA sequences and morphological data have been obtained from a range of cruises that are currently unpublished.

The London Workshop was expressly designed to try and overcome these challenges by bringing together researchers from a range of EU projects working on molecular data from the CCZ to share findings and plan future publications and synthetic activities.

Fig. 2. There is an almost complete absence of published, databased benthic species records from the Clarion-Clipperton Zone (CCZ), despite over 30 years of intensive oceanographic and geological research in the area. Here illustrated as an example are the current species records for a 300,000 km² (5°) box centred on the North Sea and Eastern Channel in Europe, where the Ocean Biogeographic Information System (OBIS 2016) provides 182,939 records of polychaetes, a common benthic animal in all oceans. By contrast, the same size box centered on the eastern CCZ provides just 4 polychaete records, none of which are benthic.

Goals

The London Workshop had three clear goals:
Goals

The London Workshop had three clear goals:

1. To explore, review and synthesise the latest molecular biogeography and connectivity data from across recent CCZ cruises from both contractor- and academia-funded projects
2. To develop complementary and collaborative institutional and program-based academic publication plans to avoid duplication of effort and ensure maximum collaborative impact
3. To plan a joint synthetic data publication highlighting key results from a range of planned molecular biogeography/connectivity publications

During discussions regarding the workshop it also became apparent that it was important to review historical knowledge of CCZ biogeography based on morphological data, new data from visual survey tools and taxonomic descriptions alongside the new molecular data; this is discussed in more detail in the section "Workshop Recommendations" below.

London Workshop: The Agenda

The agenda is presented here as planned in order to provide transparency of discussion and a useful guide to those planning future similar workshops. Summaries of talks and discussion are provided in the following section, "Key outcomes and discussions".

Participant composition and overview of workshop goals

The workshop opened with participant introductions and a discussion of the proposed agenda. 32 participants attended the workshop, based on invitations sent to the MIDAS, JPI Oceans and EU-led contractor programs. The meeting was also advertised via the MIDAS news feed and the INDEEP email alert. The workshop was over-subscribed, with an initial capacity of 25, but the venue was expanded to accommodate the 32 accepted invitees.

The meeting was kept within the EU-led projects in order to achieve a manageable size and facilitate discussion, although it was noted that the potential number of attendees could have been much higher if the workshop had been expanded to include non-EU contractors and Sponsoring State research programs. The workshop participants agreed that this could be the subject of a future workshop or specialist session at an international meeting (see Workshop Recommendations).

23 of the 32 attendees had specialist taxonomic knowledge of particular phyla (Fig. 4), with Annelida and Crustacea each contributing 26% of the expertise, and the remaining groups (Cnidaria, Echinodermata, Foraminifera, Mollusca, Nematoda, Porifera and Bryozoa) forming the remainder relatively evenly. It was noted that there were no experts on fish, microbial communities or pelagic taxa present at the meeting.

In terms of professional status, the meeting was dominated by Principal Investigators (56%), followed by post-graduate students (25%) and post-docs (19%) (Fig. 4). Nations represented, in order of the number of attendees, were the UK, Germany, Poland, Portugal, Belgium, France and Sweden, with 2 attendees representing the United Nations. The gender balance was 50:50.

Following participant introductions, the workshop commenced with an overview from Adrian Glover of the workshop goals. Glover pointed out the rapid growth in exploration activity in the CCZ (e.g. the number of exploration contracts has more than doubled in the last 10 years; ISA 2016) and the need to start to bring together contractor- and academia-led programs to help provide regional synthesis in our biodiversity knowledge of the region.

The need for regional-level understanding has been highlighted in a review of the CCZ Environmental Management Plan (CCZ-EMP) (Seascape Consultants 2014) and in a recent methodological overview of DNA taxonomy methods for the CCZ (Glover et al. 2015).
It has also been highlighted in three ISA workshops on taxonomic standardisation in the meiofauna, macrofauna and megafauna of the CCZ (ISA 2013, ISA 2014, ISA 2015). Despite all these past reviews and recommendations, it was noted that the funding available for regional-level taxonomic work is still lacking, as it falls outside the remit of individual contractor or Sponsoring State environmental programs; indeed, the London Workshop itself had no funding, with individual attendees finding travel funding from their own institutional program budgets in order to attend (see Workshop Recommendations).

There was agreement on the overarching goals of the workshop:

1. To explore, review and synthesise the latest molecular biogeography and connectivity data from across recent CCZ cruises from both contractor- and academia-funded projects
2. To develop complementary and collaborative institutional and program-based academic publication plans to avoid duplication of effort and ensure maximum collaborative impact
3. To plan a joint synthetic data publication highlighting key results from a range of planned molecular biogeography/connectivity publications

There was also general agreement on the need for a synthesis paper, although the delivery mechanism for this third goal remained unclear at the start of the workshop, and is discussed further below.

Session 1: An overview of recent CCZ cruises and projects of relevance to molecular biogeography and connectivity

Extensive collection for DNA taxonomy and biogeographic analysis first started in the CCZ in 2003-2004 as part of the Kaplan project (Smith et al. 2008) and has continued with the BGR, IFREMER, UKSRL and Belgian (GSR) exploration activities. DNA success rates (successful extraction, amplification and sequencing) in the Kaplan project were very low owing to technological and funding restrictions at the time, and coupled with low sample numbers this resulted in few useful genetic sequences. Nevertheless, these data were important in advising the ISA during the development of the first marine protected area (MPA) regional management plan for the CCZ, the APEI network (Smith et al. 2008, Fig. 1). The Kaplan DNA success rates were in the region of 11-29% for polychaetes (Smith et al. 2008), in comparison with the UKSRL ABYSSLINE project, where the rate is now 96% (Glover et al. 2015). The majority of CCZ projects are now achieving good DNA success rates, particularly following careful preservation protocols (Glover et al. 2015) and the use of new sampling devices such as the epibenthic sledge (Brenke 2005).

At the London Workshop, Pedro Martinez provided an overview of the recent German (BGR) environmental work in their exploration claim area. Martinez explained that BGR have two claim areas, one in the eastern CCZ and one in the western CCZ, and the site in the eastern CCZ has been much more intensively explored (Fig. 5).
The first cruise was in 2010 and they have continued since then on a more-or-less annual basis using the R/V Kilo Moana or R/V Sonne. The initial focus of activities was to try and survey the whole area, but the emphasis has now switched to two sites: one likely to be important for mining (the Impact Reference Area) and another that may form the basis of a Preservation Reference Area. They now have three years of samples and data from these sites, which are likely to be useful in informing future environmental management of the region and the development of guidelines for environmental management.

Pedro Martinez also outlined the JPI Oceans program aboard the R/V Sonne (Martinez Arbizu 2015). This is not a BGR project, but a collaborative EU-wide project utilising funding from a range of national instruments, with the sea-time funded by the German government. A major goal of the JPI Oceans program was to re-visit a site in the east Peru Basin (not the CCZ) to investigate the impacts of the Disturbance and Recolonisation (DISCOL) experiment conducted in 1989 (Borowski 2001), with two of the JPI Oceans cruises dedicated to this (SO-242-1 and SO-242-2). A third cruise (SO-239) was dedicated to sampling the BGR, Interoceanmetal (IOM), French (IFREMER), Belgian (GSR) and APEI-3 areas in the CCZ (Fig. 6), with a portion of the molecular connectivity work being undertaken and funded by the Swedish Research Council FORMAS through Thomas Dahlgren at the University of Gothenburg. As well as sampling the exploration areas for fauna through collection and imagery, additional goals were to re-visit tracks (assumed to be made by a dredge) in the former USA Ocean Minerals Corporation (OMCO) area that is now part of the French area, and to investigate the seamount fauna of the CCZ. Martinez outlined some of the preliminary results from the sampling of the tracks and the seamount fauna, and highlighted the importance of determining baseline conditions in the APEIs, in particular ensuring that the APEIs are similar to potentially mined regions (see Workshop Recommendations).

Environmental sampling from the Belgian (GSR) contracted area was reviewed by Ann Vanreusel. Sampling has taken place in the Belgian area in the eastern CCZ (between the German and French areas) during GSR cruises in 2014 and 2015, and as part of SO-239 in 2015. Molecular samples were obtained from 'multi core' and 'box core' samples for meiofauna (DESS-preserved) and macrofauna (ethanol-preserved) from three areas within the Belgian claim (Fig. 7). These samples came only from the 2015 GSR cruise and the SO-239 cruise (the earlier 2014 cruise did not collect molecular samples).
Research has focussed on the nematodes. As some other researchers have found, success rates with COI were quite low (34%), but some sequences were obtained, with a success rate for 18S of 61%. There have been very few attempts to sequence nematodes, so these data are of potentially great significance. Vanreusel highlighted some interesting trends that were the subject of discussion, for example the prevalence of rare species (see Workshop Recommendations) and the high numbers of congeneric species in nematodes. The concept that mining activities and subsequent plumes might actually enhance connectivity by releasing animals into the water column was reviewed briefly.

Lenaick Menot reviewed the IFREMER (France) environmental studies. Results of the BIONOD cruise allowed publication in 2015 of the first biodiversity assessment of polychaetes and isopods in the CCZ based on a barcoding approach (Janssen et al. 2015). Menot's presentation finished with a discussion of some of the key findings of that publication, including the large number of rare species, the increase in diversity when based on molecular operational taxonomic units (MOTUs) and the challenge of finding locations where rare species might be abundant (see Workshop Recommendations).

Daniel Jones presented an overview of the recent MIDAS cruise aboard RRS James Cook (JC120), supported by Natural Environment Research Council (NERC) National Capability funding. The cruise program was dedicated to the characterisation of the APEI to the north-east of the CCZ region, formerly known as 'APEI-4'. It is important to note that this has recently been re-labelled as APEI-6, and the new APEI numbering system is illustrated in Fig. 1 of this Workshop Report (ISA 2016). The JC120 cruise primarily visited the south-east corner of APEI-6, but also made a brief visit to the UK-1 area (Fig. 8). The cruise program utilised a wide variety of sampling methodologies; most relevant to the London Workshop were the macrofaunal and megafaunal specimen collections from box core and trawl, the AUV photography collection (see Workshop Recommendations), the collections from the scavenging amphipod traps and eDNA analysis based on sediments.

There was some discussion of the issue of standardisation of megafaunal imagery, which is becoming a significant issue across a range of CCZ projects (see the Megafaunal Imagery section and Workshop Recommendations). Jones pointed out that NOC were already actively collaborating with the University of Hawaii and the JPI Oceans team on this, and in addition developing automated systems for the counting of nodules (Schoening et al. 2016). Some discussion ensued on the importance of nodule density and cover, and the need for regulators to understand that a range of habitat types must be protected, not just single-block areas. For example, many of the CCZ sites are characterised by gentle ridges, troughs and relatively flat areas, in which nodule density, cover and faunal composition may vary.

The macrofaunal and megafaunal DNA component of the JC120 samples is being funded directly by MIDAS through Work Package 4 (Task 4.1) and is being led by Sergi Taboada, Gordon Paterson and Adrian Glover in collaboration with Daniel Jones. Taboada provided an overview of the relevant data being generated. Samples were obtained from box core, mega core, Hybis ROV, Agassiz trawl and amphipod traps. 462 samples were taken for DNA analysis, 90% of them from APEI-6 and 10% from UK-1. 81 species were determined based on barcoding analysis, 15 of them annelids (Fig. 9).
The intention is that these sequences will be pooled with samples being analysed in Glover's lab from the UKSRL ABYSSLINE project, with connectivity studies published collaboratively. In addition, Taboada outlined a major component of the MIDAS-funded project, which is to undertake a detailed microsatellite DNA-based study of one of the more common nodule-dwelling sponges, a species currently being described (Timea sp.). Further details are provided in the population genetics section below. A discussion point raised by David Billett that arose from Taboada's talk concerned the relative need for imagery versus collecting, in particular given the high cost associated with ROV cruises. The general consensus was that both sampling and imagery are needed, a point highlighted further in the Megafaunal Imagery and Workshop Recommendations sections of this report.

Adrian Glover provided the final project overview of the day with a discussion of the molecular connectivity and biogeography parts of the ABYSSLINE project, funded by UKSRL. This is a contractor baseline survey that is being undertaken through a collaborative partnership with a globally-distributed academic partner network, coordinated and led by Craig Smith at the University of Hawaii. Partners in the UK include the National Oceanography Centre and the Natural History Museum. The project has been running since mid-2013, with cruises taking place in October 2013 (R/V Melville) and February-March 2015 (R/V Thomas G Thompson). The sampling design consists of a series of 30 × 30 km boxes within which randomised sampling takes place using a wide range of equipment, the most relevant to the DNA work being the epibenthic sledge, 'box core', 'multi core' and ROV (Smith et al. 2013). To date, 3 boxes have been sampled: 2 within the UK-1 exploration area and 1 within the UK-1 reserved area, which is contracted to Ocean Minerals Singapore (OMS) under a joint venture between OMS and UKSRL (Fig. 10). The three boxes, UK-1 Stratum A, UK-1 Stratum B and OMS Stratum A, differ to some extent in their seabed habitat, with UK-1 Stratum B having a large number of seamounts (Fig. 11). Molecular (DNA) parts of the project cut across several different PI programs within ABYSSLINE (Table 1). Glover, together with Helena Wiklund and Thomas Dahlgren, is leading the work on the macrofauna and megafauna, excluding Crustacea. Pedro Martinez is leading the work on the Crustacea and meiofauna. Andrew Gooday is leading the work on the Foraminifera.

Table 1. Summary of Clarion-Clipperton Zone (CCZ) EU-contractor/academia-led projects with a significant molecular biology (DNA taxonomy or population genetics) component.

Session 2: The policy and industry perspective

Although primarily a data-discussion workshop, it was thought important to include reasonable time on policy and industry perspectives in order to direct discussion and to provide direct two-way dialogue with the development of the regulatory framework at the ISA. With this in mind, Stefan Brager (ISA) and Ralph Spickermann (UKSRL) were invited to the workshop to contribute from these perspectives.
Stefan Brager presented the policy background for contractor investments in the study of biogeography and connectivity in the CCZ. To protect the Common Heritage of Mankind, the Law of the Sea has mandated the ISA to create rules and regulations to ensure protection of the marine environment during deep-sea mining. The 'Environmental Management Plan for the Clarion-Clipperton Zone' (ISBA/17/LTC/7, ISA 2011) is such a binding document. It was adopted in 2012 and is currently up for review. Besides creating the nine existing Areas of Particular Environmental Interest (APEIs) (Fig. 1), it calls for a suite of additional measures, several of which require biogeographical input, such as the designation of the Impact and Preservation Reference Zones within each claim area. A review of the functionality of the APEIs will need to look at the size of the areas as well as at the operational and management objectives, with sufficient biological connectivity being one of the key issues.

Dr Brager's presentation resulted in several discussion points being raised at the London Workshop. There was a short discussion on how aware (or not) the academic scientific community is of the various regulatory developments, in particular important dates for reviews, milestones, adoptions of regulations and changes; examples are the need for review of the APEIs, the deadline for that, and the need for draft exploitation guidelines, due in July 2016. A policy brief for the academic community may be of use (see Workshop Recommendations). Brager challenged the scientists in the room to answer whether they felt the goals of the CCZ EMP (ISA 2011) are being met, and in particular whether the APEIs should be changed or not. Adrian Glover noted, based on anecdotal evidence only, that most scientists he had spoken to felt uncomfortable that there are no APEIs in the middle of the CCZ, and that they are only distributed around the edges of the main contracted areas. He questioned whether there could be justification for putting smaller APEIs in the middle of the CCZ, avoiding contracted areas but covering different habitat types. Glover pointed out that the APEIs are based on a range of assumptions and models following Wedding et al. (2013). It may be possible that many of these assumptions and models can be updated, but this needs funded work undertaken by the scientific community (see Workshop Recommendations). Daniel Jones pointed out that preservation areas and set-aside regions further afield may be required, and Kirsty McQuaid pointed out the work she is doing with Kerry Howell at the University of Plymouth to use a habitat-modelling approach to determine whether APEIs are representative of mining areas. Gordon Paterson noted that access to data from each claim is required by the scientific community to enable this kind of finer-scale habitat modelling, and Steffi Kaiser pointed out that sediment characteristics often differ markedly between claim areas, and that these types of environmental parameters need to be recorded.
Following this discussion, Ralph Spickermann (UKSRL) addressed the workshop from the industry perspective. His presentation outlined the path from exploration to exploitation from the contractor point of view. Spickermann pointed out that seabed minerals are essentially a capital-intensive new industry, with two key business enablers being the regulatory framework and environmental responsibility. From an industry perspective, the future exploitation regulations (currently under discussion at the ISA) must be commercially viable and environmentally sustainable, stable and predictable for an infant industry, reflective of the technical risk relative to terrestrial ventures, and with a simple royalty structure. In terms of resource certification (i.e. how much of a resource there actually is), this can follow already well-established protocols such as the Canadian NI 43-101 or equivalent. Spickermann finished with a brief overview of the ABYSSLINE consortium environmental baseline work, highlighting the cruises (mentioned by Adrian Glover earlier in his overview) and the broad range of studies being carried out, and emphasising that the value of sample richness is only realised upon analysis and publication, not just collection. This latter point is broadly supported by the science community and is particularly relevant to the CCZ, where there has been a long history of biological sample-taking without appropriate funding models to work up the samples (see Workshop Recommendations).

Session 3: Synthesis of DNA taxonomy and biogeography within CCZ exploration claim areas

The third session of the London Workshop addressed what has been the main focus of attention in recent DNA work in the CCZ: what we term in this report 'DNA taxonomy'. There is often a mixture of use of the terms DNA taxonomy and DNA barcoding in the literature. In its strictest sense, DNA barcoding refers to the identification of a species by sequencing a known marker gene, often the COI marker, and comparing this against known databases or libraries (Hebert et al. 2003). In contrast, DNA taxonomy (at least as we define it) is the creation of that database or library. This is particularly important in the deep sea, where, as we have seen in recent CCZ examples (Janssen et al. 2015; Glover et al. 2016a; Dahlgren et al. 2016), there are no identified, vouchered reference sequences whatsoever, so no 'barcoded' specimens can actually be identified using the currently available libraries. New taxa that are recovered from the CCZ and sequenced, with those sequences made available to the community, are effectively being 'described' taxonomically, formally or informally depending on whether a name is provided. This problem has led to the use of the concept of 'Molecular Operational Taxonomic Units' or MOTUs, as used in Janssen et al. (2015). In this case, MOTUs can be distinguished from one another using barcoding-gap or phylogenetic analysis, but not necessarily identified to name. The use of the term MOTU is not universal: some authors (Glover et al. 2016a; Dahlgren et al. 2016) refer to phylogenetically distinct clades based on genetic evidence as species, citing the phylogenetic species concept (see Workshop Recommendations).
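To make the barcoding-gap idea concrete, the sketch below is a deliberately simplified illustration, not the pipeline used in any of the studies discussed here: it clusters aligned COI sequences into putative MOTUs by single-linkage on uncorrected p-distances at an arbitrary 3% threshold, whereas published CCZ work uses dedicated tools (e.g. ABGD, GMYC) and model-corrected distances. The toy sequences are hypothetical.

```python
# Minimal sketch of threshold-based MOTU delimitation from aligned sequences:
# pairwise uncorrected p-distances followed by single-linkage clustering.
from itertools import combinations

def p_distance(a: str, b: str) -> float:
    """Proportion of differing sites, ignoring gaps/ambiguities ('-', 'N')."""
    pairs = [(x, y) for x, y in zip(a.upper(), b.upper())
             if x not in "-N" and y not in "-N"]
    return sum(x != y for x, y in pairs) / len(pairs)

def motus(seqs: dict[str, str], threshold: float = 0.03) -> list[set[str]]:
    """Single-linkage clusters of sequence IDs at the given distance threshold."""
    clusters = [{name} for name in seqs]
    for a, b in combinations(seqs, 2):
        if p_distance(seqs[a], seqs[b]) <= threshold:
            ca = next(c for c in clusters if a in c)
            cb = next(c for c in clusters if b in c)
            if ca is not cb:
                ca |= cb
                clusters.remove(cb)
    return clusters

# Toy aligned 40-bp fragments: two near-identical haplotypes plus one
# divergent sequence resolve into two MOTUs.
aln = {"sp1_a": "ACGTACGTACGTACGTACGTACGTACGTACGTACGTACGT",
       "sp1_b": "ACGTACGTACGTACGTACGTACGTACGTACGAACGTACGT",
       "sp2_a": "TGCATGCATGCATGCATGCATGCATGCATGCATGCATGCA"}
print(motus(aln))  # [{'sp1_a', 'sp1_b'}, {'sp2_a'}] (element order may vary)
```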
The session started with Pedro Martinez highlighting results from the joint BGR-Ifremer efforts in the German and French exploration areas. Data from the BGR cruise SO205 in 2010 and the Ifremer BIONOD cruise in 2012 have been published in Janssen et al. (2015). The paper reports COI data from 556 polychaete and 150 isopod samples. A typical finding from the study is the large number of singleton species; for example, out of 233 polychaete 'MOTUs' determined by gap analysis, 138 were represented by only a single specimen. A similar pattern was observed for the isopods. Also typical was the confirmed presence of a small number of broadly distributed species: 28 polychaete species were found in both the BGR and Ifremer areas, separated by a distance of 1300 km and a depth difference of 697 m (Janssen et al. 2015). For isopods, only two species were found to be present in both areas, which remarkably were thought to be brooding species with limited dispersal abilities (Fig. 12). Given that the vast majority of the species were present at only one site, the question arises as to how many of these species are truly restricted in their distributions and how many have simply not yet been found elsewhere owing to undersampling. This is likely to be answered only through collaborative work across the entire region.

Helena Wiklund provided an overview of the DNA taxonomy work on the UKSRL ABYSSLINE project. The project team (Wiklund, Adrian Glover and Thomas Dahlgren) are working on a collection of 3312 individually databased and photographed megafaunal and macrofaunal specimens (excluding Crustacea) from the two ABYSSLINE cruises conducted in 2013 and 2015 (Fig. 13). To date, all the sequencing from the first cruise has been completed and sequencing for the second cruise is ongoing; papers have been published on the overview methodologies (Glover et al. 2015) and the echinoderms (Glover et al. 2016b).

At the London Workshop, Wiklund outlined some of the new data from the second ABYSSLINE cruise (AB02) and in addition analysed these data alongside new unpublished data from the MIDAS JC120 cruise (see above). In general, some evidence for broad species ranges was found in a range of taxa when regional-level data were included from the UK-1, OMS, Ifremer and BGR exploration contract areas. However, it was pointed out that it is only possible to demonstrate presence, not absence, given the likelihood of undersampling. This is a common problem thread that runs through much of the regional-level analyses being undertaken at present (see Workshop Recommendations). Discussion focussed on this, and in addition Lenaick Menot pointed out that it would be very interesting to focus some efforts on these broadly distributed species to examine, for example, their functional traits. Wiklund pointed out that as new data accumulate, new 'target taxa' emerge, such as the pycnogonids, which seem to be quite broadly distributed. Adrian Glover pointed out that at some point we need to stop thinking about the CCZ in terms of the contract boxes, and start to think more broadly about the region and the drivers of heterogeneity in general.
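Separations such as the 1300 km BGR-Ifremer figure above are great-circle distances between station positions. The sketch below implements the standard haversine formula; the two positions are hypothetical, chosen only to give roughly an eastern-to-west-central CCZ separation.

```python
# Minimal sketch: great-circle (haversine) distance between two stations.
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2, r_earth_km=6371.0):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    p1, p2 = radians(lat1), radians(lat2)
    dp, dl = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dp / 2) ** 2 + cos(p1) * cos(p2) * sin(dl / 2) ** 2
    return 2 * r_earth_km * asin(sqrt(a))

# Illustrative positions only (approximate claim-area centres).
print(round(haversine_km(13.0, -118.5, 11.5, -130.5)))  # prints roughly 1300 (km)
```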
Paulo Bonifácio, together with Lenaick Menot and Lenka Neal, presented new data on the diversity, distribution and connectivity of polynoid worms across the CCZ. The study aims to describe new species of deep-sea polynoids using morphology complemented with molecular data, evaluating the monophyly of the subfamily Macellicephalinae and examining genetic connectivity for widely distributed species found among the different sampled areas (BGR, IOM, GSR, Ifremer and APEI#3). Samples were collected using an epibenthic sledge. Preliminary results suggested the presence of 40 morphotypes of Polynoidae, with the subfamily Macellicephalinae being the most abundant and species-rich. Further, Bathyfauvelia sp. A was found in 4 of the 5 studied areas, indicating no evidence for a biogeographic barrier, and connectivity appears to be high between areas at least 100 km apart from each other. Bonifácio concluded that their study exemplifies the need for a dual approach in taxonomy, phylogeny and connectivity studies, i.e. combining morphological and genetic data.

Lara Macheriotou presented the molecular barcoding work completed thus far relevant to the biodiversity and population connectivity of the free-living nematodes of the CCZ. These samples were collected during the SO-239 cruise (JPI Oceans EcoResponse, March-April 2015), which visited four contractor licences (BGR, IFREMER, IOM, GSR) and APEI#3, as well as during the GSR-led campaign (GSRNOD15A, September-October 2015) to their respective claim. Despite being very preliminary, these data are providing the first insights into the nematofauna of the CCZ based on DNA. Most prominent was the large discrepancy in the generic diversity of nematodes collected using methodologies pertaining to meiofaunal versus macrofaunal taxa, the latter being significantly lower than the former. Data specific to the most abundant and widespread genera (Halalaimus, Phanodermopsis) confirm that these are cosmopolitan and tentatively point to the possibility of an endemic deep-sea species in the genus Halalaimus. Discussion of the Macheriotou presentation focussed on a couple of issues: firstly, the reduced success rate with COI relative to 18S (noted by many other researchers across a range of taxa) and, secondly, the interesting observation of the large numbers of macrofaunal-sized nematodes found, particularly in the box core samples.

Andrew Gooday presented an overview of the UKSRL ABYSSLINE studies on benthic foraminiferal diversity, with a focus on the new data based on DNA. Samples were obtained from all the ABYSSLINE survey boxes (see above). Morphological analyses of only eight preserved core-top samples revealed very high levels of species richness, with over 500 morphospecies recognised and many others recorded from shipboard sorting (Fig. 15).
Molecular analyses conducted by Maria Holzmann (U. Geneva) have had mixed success. Sequences were obtained from 85 of the 99 xenophyophores analysed, leading to a 14-fold increase in the total number of known xenophyophore sequences and confirming that these giant protists are monothalamous foraminifera. The success rate for other foraminiferal groups ranged from 2 to 53%. Results for the calcareous rotaliids are consistent with global distributions for some species. On another front, high-throughput sequencing of DNA and RNA from 2-gram sediment samples by Franck Lejzerowicz (U. Geneva) yielded high-quality sequence data for 374 samples, and may provide an alternative to the time-consuming sorting and morphological analysis of foraminifera. Discussion focussed on the excitement of these new DNA data, which have for so long been missing from groups such as the xenophyophores; Adrian Glover pointed out the importance of the study given that only 68 xenophyophore species are described worldwide, and 35 of these are known from the ABYSSLINE study. They are also unusual and good for public engagement in understanding the marine biology of the region.

Magda Błażewicz outlined the latest data on the tanaidaceans, a quite common group of crustaceans found in abyssal sediments that live in self-constructed tubes. Błażewicz is working on samples from the eastern CCZ (JPI Oceans project) together with material from another west Pacific project, the KuramBIO program. Generally, tanaidaceans are thought to have quite restricted distributions owing to their virtually sessile lifestyle (building the tubes) and a reproductive strategy that lacks planktonic stages throughout their lifetimes. Out of 67 specimens of the genus Pseudotanais (Fig. 15), apparently the most abundant tanaidacean genus in the CCZ, DNA (COI) was obtained from 41. She emphasised that the success ratio in extracting and amplifying DNA was substantially higher with fresh material (extraction done on board). Applying automatic procedures to delimit species (e.g. ABGD, GMYC), it has been demonstrated that Pseudotanais is represented by 12 species in the CCZ and that the genus itself is not monophyletic. The molecular results are confirmed by morphological analyses. It was observed that only two species were broadly distributed in the CCZ (between IOM and BGR), while no taxon was common to both the CCZ and KuramBIO. Data are in preparation for a publication.

Andrea Waeschenbach provided an overview of deep-sea and abyssal Bryozoa, including preliminary data from some of the first DNA studies to be conducted in the CCZ as part of the UKSRL ABYSSLINE project. A general feature of deep-sea bryozoans is that they are usually attached to hard substrates, with colonies raised above the substrate (Fig. 16). Examples of this have been seen in a remarkable new species of cyclostome bryozoan in the AB01 samples from the ABYSSLINE project. Waeschenbach provided an overview of the various ongoing deep-sea bryozoan projects. With regard to the CCZ, data are so far preliminary, but the majority of samples are returning good sequences and many appear to be new species. A discussion ensued on the importance of getting funding for taxonomic work on the less numerous CCZ taxonomic groups, such as the bryozoans, pycnogonids and others (see Workshop Recommendations).
Sarah Schnurr presented an overview of DNA taxonomy and biogeography in the isopods of the CCZ, which are dominated by the Asellota. These are generally understood to be brooding species with limited powers of dispersal, with distribution and gene flow dependent on passive and active migration of adults. Schnurr presented data from the JPI Oceans SO-239 and SO-242-1 cruises that took place in 2015 to the eastern CCZ (SO-239, see above) and the east Peru Basin DISCOL area (SO-242-1, see above, Fig. 17). Several hundred COI, 16S and 18S sequences have already been obtained from these samples, and preliminary data show some shared species between different exploration areas. These data will be the subject of a future publication (see Publication Plans, below).

Tasnim Patel spoke about her work on Amphipoda (Table 2), based on samples from the JPI Oceans research cruise SO-242-1 covering both the CCZ and DISCOL regions, as part of her PhD studies. She is investigating whether habitat type, regional processes, and different types and intensities of disturbance affect species diversity, dispersal and connectivity. Approximately 60,000 specimens were collected from the CCZ and the DISCOL Experimental Area (DEA). These have been morphologically sorted at RBINS, and 27 species have been identified thus far. Patel is now analysing the COI gene in two model species, Paralicella caperesca and Abyssorchomene gerulicorbis, to test for cryptic diversity, with results expected in June 2016. After this, she will attempt restriction-site-associated DNA (RAD) next-generation sequencing to test for population connectivity at various spatial scales.

Session 4: Intra-specific population connectivity in target taxa across the CCZ

The second major data session of the London Workshop was focussed on individual target taxa in which detailed population genetic analyses have been carried out. These data are extremely new and nothing has yet been published from the CCZ. In most instances, researchers are investigating a small number of taxa for which sufficient sample numbers are available from individual locations, so that the genetic heterogeneity of the populations can be analysed using rapidly evolving markers such as COI, measured through haplotype diversity and demographic patterns. These analyses can also sometimes be used to infer patterns of speciation (Glover et al. 2005) or provide more robust evidence of conspecificity over broad spatial scales (Georgieva et al. 2015).

Thomas Dahlgren opened the session by providing a historical overview of population connectivity studies in the deep sea. In the early period of deep-sea exploration, the deep sea was considered a broadly stable and homogeneous environment that would favour low genetic diversity (e.g. Bretsky and Lorenz 1969). However, early allozyme data soon suggested there were relatively high levels of genetic diversity in some taxa (e.g. Ayala et al. 1975). Since the discovery of hydrothermal vents, research effort has focussed mainly on vents, other ephemeral habitats and hard substrates such as seamounts. Remarkably, Dahlgren pointed out that although there have been dozens of papers on genetic connectivity at these 'island-like' deep-sea habitats, there are only three published studies from soft-sediment abyssal habitats (Etter et al. 2011, Janssen et al. 2015, Gubili et al. 2016). Even more remarkably, only Etter et al. (2011) and Gubili et al. (2016) include detailed population data, and the latter publication is still in press.
Etter et al. (2011) showed that, for an abyssal bivalve mollusc, there was remarkably little genetic variation within ocean basins, but some differentiation between basins. Gubili et al. (2016) showed that although the supposedly cosmopolitan species Psychropotes longicauda was in fact a species complex, the separate lineages within the clade could have very broad distributions.

The Dahlgren presentation at the London Workshop opened up a discussion, chaired by Adrian Glover, and some key unanswered questions relevant to the CCZ were proposed (see Workshop Recommendations):

1. How do we go from indirect evidence of connectivity based on genetics to direct evidence based on functional traits and larval biology?
2. What oceanographic data (currents, models) are available to support indirect evidence of connectivity?
3. What genetic markers should we use to infer connectivity, and how does this vary between taxonomic groups?

Gordon Paterson raised the question of at what point cryptic species matter if they perform the same ecological function. Ann Vanreusel pointed out that there are studies showing cryptic species are important at a functional level, and Adrian Glover pointed out that his lab is currently examining functional differences, measured by stable isotope analysis, in cryptic polychaete species (PhD student Madeleine Brasier). David Billett raised an interesting question as to whether we should also be looking at north-south patterns across the CCZ as well as east-west, and at the effects of depth. Andrew Gooday noted that another variable that may structure populations is the calcium carbonate compensation depth (CCD).

Heiko Stuckas provided an overview of haplotype diversity methods applied to deep-sea taxa (Fig. 18). New data were presented by Stuckas on the population connectivity of polychaetes and isopods from the BGR claim area at scales of 10-100 km. The COI data have been collected from five polychaete species and five isopod species that have been intensively sampled from the BGR Preservation Reference Area and Impact Reference Area, as well as some sites further afield (100-150 km from these areas, but within the BGR exploration contract). The data show interesting diversity and demographic patterns that are the subject of a publication in preparation. Discussion focussed on whether these patterns could be seen as representative of the CCZ as a whole. Lenaick Menot pointed out that isopods, being brooders, would be expected to have lower dispersal capabilities, and that we do not yet have representation from a range of functional groups.

Figure 18. An explanation of the study of haplotype diversity (Hd). Hd is measured by calculating the frequency of different haplotypes (genetically distinct sets of genes, defined here by single nucleotide polymorphisms). The measurement of Hd and the frequency distribution of haplotypes can provide information on the genetic diversity of the population, the presence of speciation and demographic patterns. Image: Heiko Stuckas, Senckenberg Institute.
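For concreteness, Nei's commonly used estimator of the haplotype diversity described in Fig. 18 is Hd = n(1 − Σp_i²)/(n − 1), where n is the sample size and p_i the frequency of the i-th haplotype. A minimal sketch, with hypothetical haplotype labels:

```python
# Minimal sketch: Nei's unbiased haplotype (gene) diversity, Hd.
from collections import Counter

def haplotype_diversity(haplotypes: list[str]) -> float:
    """Hd = n * (1 - sum(p_i^2)) / (n - 1) for a sample of haplotype labels."""
    n = len(haplotypes)
    freqs = [count / n for count in Counter(haplotypes).values()]
    return n * (1 - sum(p * p for p in freqs)) / (n - 1)

# Ten individuals carrying three haplotypes:
sample = ["H1"] * 6 + ["H2"] * 3 + ["H3"]
print(round(haplotype_diversity(sample), 3))  # 0.6
```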
Thomas Dahlgren continued the population session with an overview of the study of 'target taxa' in the UKSRL ABYSSLINE project (Fig. 19). The new data come from the UK-1, OMS and APEI-6 sectors of the CCZ, and Dahlgren and the ABYSSLINE team have added published data from Janssen et al. (2015) into their analysis where possible. Currently, five polychaetes, two ophiuroids and one mollusc are being studied in detail, while a separate microsatellite study is ongoing for one of the nodule-dwelling sponges (see Taboada, below). The ABYSSLINE, MIDAS and JPI Oceans data are being studied together by Dahlgren to enhance spatial coverage and bring a more regional view to CCZ connectivity, and publications are currently in preparation (see Publication Plans).

Figure 19. An example of a 'target taxon' being studied in the Clarion-Clipperton Zone for detailed population genetic study, Paralacydonia sp. This is a likely new species to science, but is abundant in a range of CCZ exploration areas. Image credit: Adrian Glover, Helena Wiklund and Thomas Dahlgren.

Discussion of the Dahlgren presentation followed a similar pattern to that of Stuckas, with the main issue raised being what we will be able to say about non-target taxa, i.e. those that are under-represented in samples. Gordon Paterson raised concern about the lack of meiofaunal target taxa, and Adrian Glover raised concern about the lack of megafaunal target taxa (see Workshop Recommendations).

Pedro Martinez outlined population studies on ophiuroids being led by the Senckenberg team. Ophiuroids are extremely common in the CCZ, and are often the most recognised metazoans in video transects (Fig. 20). One of the outcomes from the ISA workshop on megafaunal taxonomy was that species ranges cannot be estimated from imagery alone, so Martinez focussed efforts on SO-239 on collecting ophiuroids to confirm identities and establish population studies. One of the common ophiuroids, normally referred to as Ophiomusium cf. glabrum (discussed in Glover et al. 2016a), is in fact two species, and further analysis since the Glover et al. paper by Tim O'Hara (Museum Victoria) has confirmed this. Adrian Glover pointed out that they left this out of the 2016 paper as O'Hara is planning a taxonomic publication on these taxa. Martinez also discussed the new data in the context of data from the DISCOL area and the KuramBIO area in the west Pacific. Publications are in preparation. Discussion focussed on one of the main questions that has arisen before: why do some taxa show evidence of both cryptic speciation and broad species ranges?

Figure 20. The species often referred to as Ophiomusium cf. glabrum, a likely cryptic species complex in CCZ samples, but present in large numbers and recognisable (at least to genus) in image surveys. This is a current target taxon for population studies. Image source: Glover et al. (2016b).

Uwe Raschka presented an overview of a detailed study of a species of harpacticoid copepod, Pseudotachidius bipartitus, first described from the Beaufort Sea, Alaska, but recorded from the CCZ in recent surveys. 90 individuals have been sampled, with COI sequences obtained from 57 individuals from across the BGR, IOM, GSR and Ifremer contract areas. Publications on the genetic diversity and cryptic speciation are in preparation.
Pedro Ribeiro presented a preliminary population genetics analysis of a vent mussel species on the Mid-Atlantic Ridge using next-generation sequencing of RAD tag libraries (RADseq). The main purpose of this presentation was to provide an overview of the advantages, as well as the methodological and analytical challenges, of using this technique to investigate population genetic structure in the deep sea. RADseq can be more time-efficient than other methods and yields large numbers of single nucleotide polymorphism (SNP) markers, thus holding the potential to uncover levels of genetic structure usually not attainable by traditional methods. However, the technique requires high DNA integrity of samples to construct good-quality RAD tag libraries, which in turn are instrumental for sequencing success. Furthermore, discovery and validation of SNP markers can be particularly challenging and may require investing some time in developing the bioinformatics skills necessary to properly explore the data and the array of bioinformatics tools available.
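As one small illustration of the SNP validation step mentioned above, the sketch below applies two routine post-calling filters: dropping loci with too much missing data and loci below a minor-allele-frequency (MAF) cutoff. It is a toy, not any project's pipeline; real RADseq workflows (e.g. the Stacks software) handle this at scale, and the genotype matrix and thresholds here are hypothetical.

```python
# Minimal sketch: filter a tiny SNP genotype matrix (0/1/2 allele counts for a
# diploid; None = missing call) by missing-data rate and minor-allele frequency.

def filter_snps(genotypes, max_missing=0.2, min_maf=0.05):
    """genotypes: dict of locus -> list of per-individual genotypes (0/1/2 or None)."""
    kept = {}
    for locus, calls in genotypes.items():
        observed = [g for g in calls if g is not None]
        if 1 - len(observed) / len(calls) > max_missing:
            continue  # too many missing genotypes at this locus
        alt_freq = sum(observed) / (2 * len(observed))  # diploid allele frequency
        if min(alt_freq, 1 - alt_freq) < min_maf:
            continue  # minor allele too rare to be informative
        kept[locus] = calls
    return kept

# Toy matrix: locus_2 fails the missing-data filter; locus_3 is monomorphic
# and fails the MAF filter.
toy = {"locus_1": [0, 1, 2, 1, 0],
       "locus_2": [0, None, None, 1, 0],
       "locus_3": [0, 0, 0, 0, 0]}
print(sorted(filter_snps(toy)))  # ['locus_1']
```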
Sergi Taboada presented preliminary data from a new MIDAS-funded study of the detailed population genetics of a relatively newly discovered and overlooked species of nodule-dwelling sponge (Fig. 21). These small but abundant sponges were first noted by the ABYSSLINE team on the R/V Melville cruise in October 2013, and remained for some time in databases as 'Porifera sp. A'. Since that time, the team has been collaborating with colleagues at the National University of Singapore (Swee Cheng Lim), who are working on a subset of samples from the OMS exploration area (part of the reserved area of the UK-1 contract, also sampled as part of ABYSSLINE). Lim has identified the specimens as likely belonging to a new species in the genus Timea or Hemiasterella, and a species description is in preparation. Meanwhile, Taboada and colleagues at the Natural History Museum are taking a next-generation sequencing (NGS) approach to studying genetic connectivity in the animal, as preliminary data show that COI is not a variable region in Porifera. The sponge is an ideal candidate for connectivity studies as it is common, easily identifiable (once a description is published), sessile and filter-feeding, with lecithotrophic larvae, and is easily counted to establish population densities (using box cores). Taboada outlined the preliminary data currently being worked on, and publications in preparation.

Session 5: Synthesis of the broader biogeographic context based on morphology and molecules

The purpose of the final data session at the London Workshop was to integrate the new studies of molecular biogeography and connectivity with the broader biogeographic context, in particular data based on morphology. Historically these data have come from samples collected in formalin, which are unsuited to DNA work; more recently there has been a surge of interest in trying to understand broad species ranges based on imagery (ISA 2013).

Adrian Glover presented an overview of the 'express taxonomy' approach used in the ABYSSLINE project to make raw taxonomic data (species morphology, genetic data and natural history observations) available from recent cruises even before species description is possible. These data are being published by the team in open-access data journals throughout the project. Glover highlighted the importance of morphological data in allowing comparisons with historic collections and species records (see Workshop Recommendations). These comparisons have in many instances supported potentially broad 'cosmopolitan' species distributions, but genetic data are in most cases lacking from species type localities (the locations of original description). Glover explained how the team had been dealing with this by careful examination of type localities and/or material, and by use of the open nomenclature abbreviation 'cf.' to flag cosmopolitanism as an untested hypothesis in many instances. An example was presented that links the new DNA data to historical collections and knowledge: that of Nucula profundorum, a protobranch mollusc that is abundant in abyssal samples. Specimens obtained from the ABYSSLINE program were assigned to N. profundorum based on direct morphological comparison with type material held in the NHM collections. Interestingly, on sequencing the specimens, they did not match the published sequences of N. profundorum on GenBank, and it is likely that the sequences on GenBank (recorded at much shallower depths and a long distance from the N. profundorum type locality) are erroneously identified. These matters will be clarified in future publications, but the case highlights the importance of improving the quality of data on GenBank and of checking the type localities of species that are identified (see Workshop Recommendations).
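The GenBank cross-check described above can be sketched as follows. This assumes Biopython's NCBIWWW web-BLAST interface (calls as we understand them; check current Biopython documentation), and the query sequence is a placeholder. As the N. profundorum case shows, the hit titles themselves may carry misidentifications, so matches should be weighed against type localities rather than accepted at face value.

```python
# Minimal sketch: query a COI sequence against GenBank's nt database and
# inspect the top hits, using Biopython's web-BLAST interface.
from Bio.Blast import NCBIWWW, NCBIXML

def top_blast_hits(coi_sequence: str, n_hits: int = 5):
    """Return (title, e-value, percent identity) for the top BLAST hits."""
    handle = NCBIWWW.qblast("blastn", "nt", coi_sequence)
    record = NCBIXML.read(handle)
    hits = []
    for alignment in record.alignments[:n_hits]:
        hsp = alignment.hsps[0]
        hits.append((alignment.title, hsp.expect,
                     100.0 * hsp.identities / hsp.align_length))
    return hits

# Usage (my_coi_sequence is a placeholder for a real sequence string):
# for title, evalue, pct_id in top_blast_hits(my_coi_sequence):
#     print(f"{pct_id:5.1f}%  e={evalue:.2g}  {title}")
```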
Daniel Kersken presented an overview of the Senckenberg work on Porifera from the JPI Oceans cruise SO-239. The project focussed not just on DNA work but also on providing an image-based catalogue of the Porifera of the CCZ. Samples were obtained from 15 ROV stations at depths of 1700-5000 m in the eastern CCZ, from the main SO-239 sample areas. A large number of species were recovered, and these are the subject of detailed morphological taxonomy coupled with DNA barcoding. Future projects include an attempt at next-generation sequencing and radiocarbon dating of Saccocalyx pedunculatus to determine the age of the sponges, as well as video-based annotation along ROV transects.

Andrew Gooday presented an overview of the broader biogeographic context for the Foraminifera (Fig. 22). Foraminifera have a superb fossil record and are thus widely used to reconstruct knowledge of ancient oceans. In this sense, understanding modern foraminiferal biogeography can also provide a 'deep-time' perspective on deep-sea biodiversity. Shallow-water studies suggest that endemism is high in foraminifera (e.g. Culver and Buzas 1998) and that endemism decreases with increasing depth. In some cases, broad ranges are now backed up by genetic evidence, but there are now so many new deep-sea foraminiferal taxa, in many cases very rare, that making general statements on distributions within the group is not yet possible (Gooday and Jorissen 2012).

Gordon Paterson contributed to the session with a talk on the issue of rarity in the deep sea. Understanding species ranges is critical to the assessment of extinction risk, and part of this issue relates to rare species: are they particularly at risk? Paterson showed how almost all deep-sea biodiversity datasets demonstrate a pattern of rarity, that is, large numbers of singleton species. Even when modelled against sampling effort, the number of rare species does not seem to decline. Paterson noted that much better definitions of rarity are needed, and he is currently working on a meta-analysis of this problem using CCZ data. These analyses show that rare species are the best-represented category in CCZ samples (in terms of percentages of species), but that locally rare (yet widely distributed) species also make up a similarly large percentage. So species may be locally rare but actually widespread, and hence common, irrespective of their local abundance.
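Singleton-heavy samples like these are exactly the situation in which nonparametric richness estimators signal large numbers of unseen species. One standard estimator, Chao1, uses only the observed richness and the singleton (F1) and doubleton (F2) counts: S_Chao1 = S_obs + F1²/(2F2). A minimal sketch with hypothetical counts, loosely echoing the singleton-heavy polychaete figures reported earlier:

```python
# Minimal sketch: Chao1 estimated species richness from abundance counts.
from collections import Counter

def chao1(abundances: list[int]) -> float:
    """Chao1 richness estimate from a list of per-species abundance counts."""
    s_obs = len(abundances)
    f = Counter(abundances)
    f1, f2 = f[1], f[2]
    if f2 == 0:
        return s_obs + f1 * (f1 - 1) / 2  # bias-corrected form when no doubletons
    return s_obs + f1 * f1 / (2 * f2)

# Hypothetical sample: 138 singletons, 40 doubletons and 55 commoner species.
abund = [1] * 138 + [2] * 40 + [5] * 55
print(round(chao1(abund)))  # 471: roughly twice the 233 species observed
```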
Analysis of megafauna based on imagery

Although not part of the original London Workshop brief, discussions at the workshop also focussed to some extent on the information that can be gleaned from imagery surveys. These are increasingly being used by contractors and academic cruises to gain broad-scale information on species abundance and diversity. Recent ISA workshops have recommended that imagery surveys are backed up by sampling (ISA 2013). However, the increasing prevalence of these datasets, the continued lack of megafaunal sampling and the urgent need for a better understanding of megafaunal distribution patterns suggest that imagery data must be considered in broad-scale biogeographic study, where possible. At the Workshop, Pedro Ribeiro highlighted some preliminary data from the CCZ with regard to imagery surveys for corals obtained during the JPI Oceans cruise SO-239, both from nodule areas and from seamounts in their vicinity. Ribeiro highlighted the low degree of species overlap between coral faunas from different survey locations and the significant number of singleton observations as clear indications that a higher sampling effort is required to obtain a sharper picture of biogeographic/connectivity patterns in the area. Daniel Jones also highlighted the extensive AUV photographic dataset collected on JC120 for megafaunal analysis. Not present at the Workshop, but invited, were Craig Smith and Diva Amon, who are leading a survey of the megafauna based on ROV and AUV sampling associated with the ABYSSLINE project. A summary of their views follows.

The megafauna (typically organisms >2 cm in smallest dimension) constitute an important component of the biodiversity in the abyssal CCZ and play a significant role in deep-sea ecosystem function. Knowledge of biogeography and population connectivity is essential for effective environmental management of nodule mining, including the design of marine protected areas. The study of megafaunal biogeography and connectivity requires the collection of information on megafaunal biodiversity via imagery, as well as physical samples of organisms, over a range of spatial scales within and beyond the CCZ, followed by morphological and molecular comparisons. The first studies now being published based on imagery data suggest there are many species new to science in the CCZ (Amon et al. 2016), likely requiring a major effort for connectivity and biogeographic studies. The extremely limited collection of megafaunal specimens in the CCZ thus far has severely hampered reliable species identifications, and in turn the estimation of species richness and species ranges via detailed morphological and molecular analyses. Although a number of imaging surveys have been conducted to characterise megafaunal diversity and biogeography within the CCZ, varying image quality and a lack of voucher specimens make comparisons between studies and accurate assessment of species ranges very problematic. Differences in image quality have resulted from variations in the equipment used (e.g. camera resolution, energy of light sources) and survey procedures (e.g. altitude and speed), which may be inevitable given the differences in budgets and oceanographic resources across projects. This influences the resolution to which megafauna can be identified, as well as the reliable assignment of individuals to morphotypes. Critical needs for advancing our understanding of biogeography and connectivity in the CCZ and the broader Pacific include: (1) the collection of megafauna specimens to provide species vouchers, as well as material for population genetic studies, and (2) the development of standardised megafaunal image atlases to facilitate reliable identification of morphospecies across projects and regions (see Workshop Recommendations).

Summary tables and discussion

The final day of the London Workshop (Figs 23, 24) was focussed on (1) the scope and outline of a potential synthesis paper, (2) the collation of summary tables outlining future publication, cruise and proposal plans from the participants and (3) workshop recommendations. The Principal Investigators met and agreed in general on a proposed outline of a synthesis paper, to be taken forward by the lead authors and discussed elsewhere. The summary of proposed publications (Table 4) was an extremely useful exercise, outlining several opportunities for collaboration through sample and/or data sharing and in general helping to align work programs to reduce redundancy. It was felt that there was a high level of complementarity amongst the proposed work plans, and the Workshop participants were optimistic about the future knowledge that will be made available from the CCZ in the next 2-3 years.
Conclusions

Workshop Recommendations

The London Workshop on the Biogeography and Connectivity of the CCZ concluded with a short discussion on Workshop Recommendations, summarised here:

Funding and Planning

• The most critical funding priority for CCZ research on biogeography and connectivity is high-quality taxonomic work that includes species descriptions based on phylogenetic inference, specimen and data archiving to permit future molecular work, and equal weight given to all taxonomic groups and size classes
• Taxonomic funding proposals should consider international programs that could deliver taxonomic funding across a range of contractor projects, not just within single ones
• Future international projects on the CCZ must include funding to ensure integrating workshops are fully costed within those projects, as they play a vital role in developing collaborations and minimising redundancy of activities
• Future Workshops on CCZ connectivity should be expanded to include non-EU contractors and additional international programs
• Areas of Particular Environmental Interest (APEIs), the majority of which are completely unstudied, should be a priority area for research programs on biogeography and connectivity, helping to deliver an improved regional-level understanding of the CCZ

Figure 1. Exploration contract areas for polymetallic nodules in the Clarion-Clipperton Zone, central Pacific Ocean. Areas of Particular Environmental Interest (APEIs) numbered according to the latest International Seabed Authority data (Source: Stefan Brager, ISA); see the section 'Key Outcomes and Discussion' in this report. Image credit: International Seabed Authority, 2015.

Figure 3. Recent publications on the Clarion-Clipperton Zone are making imagery and genetic data available in public databases that will allow future workers access to these data and the voucher materials, for regional-level syntheses and further DNA sequencing if needed. Examples are recent taxonomic data papers on the Echinodermata (Glover et al. 2016b) (image left) and Cnidaria (Dahlgren et al. 2016) (image top right). Data associated with these papers are automatically uploaded to the Global Biodiversity Information Facility (GBIF, bottom right). See Workshop Recommendations.

Figure 4. Composition of participants in the London Workshop on the Biogeography and Connectivity of the Clarion-Clipperton Zone, based on taxon expertise (only estimated for 23 participants), professional status, representing nation and gender (from 32 participants).

Figure 5. The BGR-Germany polymetallic-nodule exploration claim area in the eastern CCZ, illustrating the two areas in which survey work has concentrated in recent years: the proposed preservation reference area (PRA) and impact reference area (IRA). Image source: Federal Institute for Geosciences and Natural Resources (BGR, Germany) and Pedro Martinez, Senckenberg Institute (Rühlemann et al. 2011).

Figure 6. Outline of the JPI Oceans 'SO-239' research cruise aboard the R/V Sonne, 10 March to 30 April 2015, to the eastern CCZ (Image from Short Cruise Report: Martinez, 2015).
Figure 7. The GSR-Belgium polymetallic-nodule exploration claim areas in the eastern CCZ, illustrating the three areas in which survey work has concentrated, and the sampling stations at which molecular data were obtained from the GSR cruise in 2015 and the JPI Oceans SO-239 cruise in 2015. Image source: Ann Vanreusel, University of Ghent.

Figure 8. The eastern CCZ, illustrating the cruise track of the RRS James Cook MIDAS JC-120 cruise to sample APEI-6 (green box) and a short sampling station in the northern sector of UK-1 (yellow box). Image source: Daniel Jones, National Oceanography Centre, UK.

Figure 9. The DNA taxonomy field pipeline for the Managing Impacts of Deep-Sea Resource Exploitation (MIDAS) cruise aboard RRS James Cook (JC120). Image source: Sergi Taboada, Natural History Museum, UK.

Figure 10. The UK Seabed Resources Ltd (UKSRL) Abyssal Baseline (ABYSSLINE) survey area in the UK-1 exploration area in the eastern CCZ, with the UK-1 and Ocean Mineral Singapore (OMS) reserved areas highlighted. The survey design consists of replicated 30 × 30 km boxes within the exploration area, within which randomised sampling using a range of equipment is undertaken (Smith et al. 2013). Image source: Craig R Smith, ABYSSLINE Project.

Figure 11. Contrasting seafloor topography in the three Abyssal Baseline (ABYSSLINE) 30 × 30 km survey boxes sampled to date; left to right, UK-1 Stratum A, UK-1 Stratum B and Ocean Mineral Singapore (OMS) Stratum A. Image source: Craig R Smith, ABYSSLINE Project.

Figure 12. Analysis reproduced from Janssen et al. (2015) highlighting the large number of 'MOTUs' unique to individual claim areas in a study of isopods from the BGR and Ifremer regions; only two taxa were found to be shared across the claim regions.

Figure 13. Selected live images of specimens recovered from the UK Seabed Resources Ltd (UKSRL) Abyssal Baseline (ABYSSLINE) research cruise to the eastern Clarion-Clipperton Zone, February-March 2015. Image credit: Adrian Glover, Thomas Dahlgren, Helena Wiklund.

Figure 14. Nematoda of the Clarion-Clipperton Zone, recovered from abyssal sediments during the JPI Oceans 'SO-239' cruise aboard the R/V Sonne in March-April 2015. Image credit: Lara Macheriotou, University of Ghent.

Figure 15. Pseudotanaids of the Clarion-Clipperton Zone, recovered from an EBS sample during the JPI Oceans 'SO-239' cruise aboard the R/V Sonne in March-April 2015. Image credit: M. Błażewicz, University of Łódź, Poland.

Figure 15. Xenophyophores of the Clarion-Clipperton Zone, recovered from abyssal sediments during the two UKSRL Abyssal Baseline (ABYSSLINE) cruises aboard R/V Melville and R/V Thomas G Thompson in 2013 and 2015. Image credit: Andrew Gooday and Aurélie Goineau, National Oceanography Centre, UK.

Figure 16. A new species of cyclostome bryozoan from the eastern CCZ UKSRL ABYSSLINE project. Image credit: Adrian Glover, Thomas Dahlgren and Helena Wiklund. Identification by Dennis Gordon via Andrea Waeschenbach.

Figure 17. Sampling stations in the eastern CCZ and east Peru Basin for the joint JPI Oceans cruises SO-239 and SO-242, utilised for phylogeographic studies of the asellote isopods being conducted by Sarah Schnurr, Senckenberg Institute. Image source: Sarah Schnurr, Senckenberg Institute.
Figure 21. A new species of sponge (white object on nodule), first noted on the R/V Melville ABYSSLINE cruise in October 2013, currently in description (Lim et al., National University of Singapore), possibly a new genus related to Hemiasteralla, that is now the subject of detailed investigation by the MIDAS and ABYSSLINE CCZ teams. The scale bar is 1 cm. Image credit: Adrian Glover, Thomas Dahlgren, Helena Wiklund.

Figure 22. Some well-known deep-water foraminifera that are considered cosmopolitan based on morphology. Images by Andrew Gooday, Aurélie Goineau, ABYSSLINE project.

Figure 23. The London Workshop on the Biogeography and Connectivity of the Clarion-Clipperton Zone discussing data in the Board Room of the Natural History Museum. Image: Adrian Glover.

Figure 24. The London Workshop on the Biogeography and Connectivity of the Clarion-Clipperton Zone on the main steps of the Natural History Museum below the bust of Charles Darwin. Workshop participants: Back row, left to right: Ralph Spickermann, Stefan Brager. Second from back row, left to right: Pedro Ribeiro, Sergi Taboada, Lara Macheriotou, Ann Vanreusel, Magda Blazewicz, Sarah Schnurr, Helena Wiklund. Third from back row, left to right: Andrew Gooday, Daniel Jones, Lenaick Menot, Pedro Martinez, Steffi Kaiser, Ana Colaço, Sahar Khodami, Marina Cunha, Ana Hilario. Fourth from back row, left to right: Adrian Glover, Paulo Bonifacio, Kirsty McQuaid, Aurélie Goineau, Heiko Stuckas, Thomas Dahlgren. Front row, left to right: Daniel Kersken, Uwe Raschka, Tasnim Patel, Clara Rodrigues, Gordon Paterson, Amber Cobley. Not present for photo: Andrea Waeschenbach and David Billett. Image credit: T Cruise.

Workshop Agenda
1300 Adrian Glover: Welcome, meeting logistics and participant introductions
1330 Adrian Glover: Overview of meeting goals and agenda, discussion
1400 Session 1: Project Overviews
The purpose of Session 1 was to introduce each relevant EU CCZ project from the point of view of molecular biogeography and connectivity data.
1400 Pedro Martinez: Overview of the German exploration claim (BGR), recent cruises and data collected
1420 Thomas Dahlgren & Pedro Martinez: Overview of the JPI-Oceans cruise program and data collected
1440 Ann Vanreusel: Overview of biological and environmental sampling in the GSR (Belgian) exploration area
1500 Lenaick Menot: Overview of the IFREMER (France) environmental studies
1520 Tea/coffee break
1540 Daniel Jones & Sergi Taboada: Overview of the NERC-MIDAS RRS James Cook JC120 cruise to APEI-6, including overview of molecular collecting
1600 Adrian Glover: Overview of the ABYSSLINE (UK Seabed Resources Ltd (UKSRL)) data on molecular biogeography and connectivity
1620 Session 2: Policy and Industry Perspectives
1620 Stefan Brager: The policy perspective: The International Seabed Authority and environmental management of the CCZ
1640 Ralph Spickermann: The industry perspective: Manganese nodules in the Pacific Ocean, path from exploration to exploitation
1730 End of Day 1: Icebreaker social at the Hereford Arms pub, South Kensington

Wednesday 11 May 2016
Session 3 reviewed DNA taxonomy, that is the formal or informal description of species using DNA barcodes, and the examination of the distribution of those species within exploration areas.
0930 Pedro Martinez: Molecular taxonomy and biogeography within the German (BGR) exploration area
0950 Helena Wiklund: Molecular taxonomy and biogeography within the UKSRL, Ocean Mineral Singapore (OMS) and Area of Particular Environmental Interest (APEI) #6
1010 Paulo Bonifacio: Diversity and distribution patterns of Polynoidae (Annelida) across the CCZ
1030 Tea/coffee break
1050 Lara Macheriotou: Deep-sea Nematoda of the CCZ - preliminary insights
1110 Andrew Gooday: Foraminifera species diversity within the UKSRL and OMS exploration areas
1130 Magdalena Błażewicz: Abyssal Tanaidacea - (not) stunning cryptic diversity
1150 Andrea Waeschenbach: Abyssal Bryozoa - first results from the UKSRL area and comments on CCZ projects
1210 Discussion session
1300 Lunch break
1400 Session 4: Intra-specific population connectivity in target taxa across the CCZ
The purpose of Session 4 was to review new data on population genetics and connectivity within populations, for which new data are starting to emerge from recent CCZ projects. These data are extremely limited but have the potential to offer statistically robust estimates of connectivity for the first time. Typically, 'target' taxa are chosen that have large enough sample sizes.
1600 Sergi Taboada: Molecular connectivity of the sponge Tethyida sp. nov. within the CCZ using microsatellites
1620 Pedro Ribeiro: Using RAD sequencing (RADseq) data to investigate population genetic structure in the deep sea
1640 Session 5: The broader biogeographic context based on morphology and molecules
The purpose of the final data session was to review data on the broader biogeographic context of the CCZ, particularly based on historical morphological data coupled to new molecular studies; in particular, what newly identified records of actual described species can tell us about ranges, and at what scales.
1640 Adrian Glover: Annelida, Mollusca, Echinodermata: Actual species identifications from the CCZ using historical morphological data: some examples
1700 Daniel Kersken: Porifera of the CCZ
1720 Andrew Gooday: Foraminifera, the broader biogeographic context
1740 Gordon Paterson: The problem and challenge of rarity in the abyss
1800 Discussion
1900 Workshop Dinner, Ognisko Restaurant, South Kensington
Thursday 12 May 2016
0800 Principal Investigator break-out session to agree on a conceptual framework for a synthesis paper
1100 Continuation of summary table production
1200 Round-table discussions on future cruises and grant proposals for report
1230 General workshop recommendations - summary table for report
1300 Workshop close

Glover et al. (2015), and Matthew Church (University of Hawaii, not present at the workshop) is leading the microbial work. All of these projects involve DNA work. More detailed outlines of the macrofauna, megafauna and foraminifera work from ABYSSLINE are provided later in this report from Glover and Gooday, and a summary of the DNA sampling techniques for the macrofauna is provided in Glover et al. (2015).

(Glover et al. 2016b), the cnidarians (Dahlgren et al. 2016) and megafauna (Amon et al. 2016). Further publications are in progress (see Publication Plans, below). With the collaboration of the Molecular Collections Facility of the NHM and the Bioinformatics team, a sample archiving and open-data pipeline has been developed to allow colleagues, scientists, contractors, regulators and future generations access to NHM-housed material collected on project ABYSSLINE. This will also permit molecular-based real taxonomy to be undertaken with adequate curation and protection of DNA extracts and voucher specimens, providing type material for new species names.

Table 2. Summary of Clarion-Clipperton Zone (CCZ) projects studying DNA taxonomy and biogeography, with key parameters and publications.

Table 3. Summary of Clarion-Clipperton Zone (CCZ) projects studying population genetics, with key parameters and publications.

Table 4. Summary table produced during the London Workshop on the Biogeography and Connectivity of the Clarion-Clipperton Zone outlining proposed publications during the next three years, designed to facilitate collaboration and complementarity of activities in the region.

The only confirmed cruise plans indicated by the group were for BGR cruises in May 2016 and May 2017. In terms of grant proposals, Adrian Glover highlighted the ongoing work to develop a Strategic Programme Area (SPA) for the Natural Environment Research Council that would focus on deep-sea resource extraction in general; potentially the CCZ could be part of this. At the time of writing, no decisions had yet been made on this proposal. Pedro Martinez indicated that there is interest in developing a second iteration of the JPI Oceans project, but the timeline is not yet known. This may involve technology and mining tests. Some discussion ensued on the idea to develop a broad cross-program proposal on taxonomy (see Workshop Recommendations).
• Assumptions in the models used for the development of APEIs should be updated as soon as possible with new data, under a funded program of research
• The CCZ academic community should be better informed of policy developments and regulatory milestones
• Any biological sampling programs that are taking place in the CCZ must include a funded program of research to work up the samples through to publication and data/specimen archiving

Research
• Connectivity (e.g. ecological, population, genetic) should be more precisely defined in CCZ projects and proposals
• Efforts should be undertaken to examine connectivity in rare species, either through NGS approaches or broader sampling, to determine whether rare species are common in under-sampled habitats or refugia
• ROV or AUV imagery of megafauna should be improved through (1) collection of voucher materials to confirm conspecificity through DNA sampling, (2) improved resolution, in particular the use of high-quality scientific ROVs able to hover and acquire quality imagery, and (3) development of regional atlases of megafauna that are available to all contractors and academic programs
• Ongoing and future efforts for the development of automated software tools for video analysis are also fundamental and must be encouraged. Given the huge amount of video data gathered by ROV and AUV surveys, automated methods are essential to produce comprehensive community analyses in a timely manner.
• Working species concepts for new CCZ fauna should be based, where possible, on phylogenetic species concepts using molecular data in combination with morphological characterization
• New approaches to connectivity study should be developed using new data on larval biology, functional traits (e.g. to establish connectivity amongst a range of trait parameters) and physical oceanography (e.g. currents and modeled currents)
• New genetic markers and methods to study connectivity should be developed that are appropriate to the fundamental questions posed

Data and Archiving
• Publications on the DNA taxonomy or phylogeography of the Clarion-Clipperton Zone should ensure that raw data in the form of genetic sequences, images, locality information and voucher material access are archived in global online repositories (GenBank, OBIS, GBIF, WoRMS)
• Morphological data (e.g. type material in museum collections, taxonomic descriptions and new morphological information) must be used, where possible, alongside molecular data to (1) improve knowledge of putative species ranges, (2) improve taxonomic descriptions that rely on DNA data and (3) check the type material of published species, and to use a precautionary approach in estimating species ranges

European Union Framework 7 'Managing Impacts of Deep Sea Resource Exploitation (MIDAS)' and Joint Programming Initiative Healthy and Productive Seas and Oceans (JPI Oceans).
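To make the archiving recommendation concrete, here is a minimal sketch of how a worker could retrieve contractor-deposited CCZ occurrence records from GBIF using the pygbif client. The taxon name and the bounding box approximating the CCZ are illustrative assumptions, not drawn from the workshop report.

```python
# Hedged sketch: query GBIF for occurrence records of a taxon within an
# approximate CCZ bounding box. Taxon and box are illustrative only.
from pygbif import occurrences


def fetch_ccz_records(taxon: str, limit: int = 50) -> list[dict]:
    """Return GBIF occurrence records for a taxon inside a rough CCZ box."""
    response = occurrences.search(
        scientificName=taxon,
        decimalLatitude="0,23.5",      # assumed latitude span of the CCZ
        decimalLongitude="-160,-110",  # assumed longitude span of the CCZ
        limit=limit,
    )
    return response.get("results", [])


if __name__ == "__main__":
    for rec in fetch_ccz_records("Ophiosphalma glabrum", limit=5):
        print(rec.get("scientificName"), rec.get("eventDate"), rec.get("datasetKey"))
```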
2018-12-07T12:46:31.677Z
2016-09-16T00:00:00.000
{ "year": 2016, "sha1": "249bd4a6d06d0ec3f323bc8f172f0a9dde2db778", "oa_license": "CCBY", "oa_url": "https://riojournal.com/article/10528/download/pdf/", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "249bd4a6d06d0ec3f323bc8f172f0a9dde2db778", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Biology" ] }
169630643
pes2o/s2orc
v3-fos-license
Consideration on Promoting the Supply-side Structural Reform of Agriculture, Taking Zaozhuang as an Example

Through an investigation of the status quo of agricultural development in Zaozhuang, this paper identifies the problems and constraints on the supply side of the city's agriculture. Based on the practical problems revealed by the investigation, it studies feasible suggestions for the supply-side structural reform of agriculture in Zaozhuang, so as to promote the sustainable and healthy development of the city's agricultural economy.

I. INTRODUCTION

"Agriculture is the root of the people as well as the root cause of turmoil in the world, and of the decline, prosperity and very existence of the country." As a large country with a population of 1.3 billion, China has always regarded agriculture as the top priority among all its tasks. Whether the "issues of agriculture, farmers and rural areas" can be handled well is of great strategic significance to the overall situation of China. The central government regards the supply-side structural reform of agriculture as an important task of agricultural and rural work at present and in the coming period. It is both an inevitable result of the forced transformation of agricultural development and an active choice to enhance the agricultural competitiveness of China in the new situation. As a local government, Zaozhuang should respond positively to the call of the Central Party Committee and comprehensively promote the supply-side structural reform of agriculture so as to achieve the accumulation from quantitative change to qualitative change.

II. DEVELOPMENT STATUS OF THE AGRICULTURAL ECONOMY IN ZAOZHUANG

Zaozhuang is accelerating the pace of development of modern agriculture, and its agricultural economy has made remarkable achievements. At the same time, problems such as the low level of agricultural industrialization and the slow increase of peasants' income still exist in Zaozhuang, which seriously restrict the further development of the agricultural economy.

In 2016, the gross output of agriculture, forestry, animal husbandry and fishery in Zaozhuang City reached 32.537 billion yuan, an increase of 4.6%. By industry, the gross output of agriculture was 21.158 billion yuan, up 4.1%; the gross output of forestry was 283 million yuan, down 1.8%; the gross output of animal husbandry was 8.490 billion yuan, up 4.9%; the gross output of fisheries was 837 million yuan, up 3.9%; and the gross output of services for agriculture, forestry, animal husbandry and fishery was 1.769 billion yuan, up 9.6%, as shown in "Fig. 1".

According to the survey, the area of grain crops in the city decreased from 4.357 million mu to 3.81 million mu over 2012-2016, total output decreased from 1.9878 million tons to 1.6325 million tons, and per unit yield fell from 456.57 kg/mu to 427.58 kg/mu. In 2016, the total annual output of grain reached 2.0284 million tons, an increase of 0.9%, and the comprehensive per unit yield was 476.1 kg/mu, an increase of 0.8%; both per unit yield and total output hit record highs. The annual vegetable planting area was 130.52 mu, a decrease of 0.8%, with an output of 4.6949 million tons, an increase of 0.6%. The oil plants area in the city was 33.01 hectares, a decrease of 1.1%, with a total output of 91,900 tons, a decrease of 4.1%.
The orchard area in the city at the end of the year was 242,000 mu, a decrease of 4.3%, and the output of garden fruits was 24,500 tons, a decrease of 7.1%. In 2016, the added value of the planting industry was 12.294 billion yuan, an increase of 4.1%, as shown in "Fig. 2".

After five years of afforestation, the forest stock has increased from 5.73 million cubic meters to 6.41 million cubic meters. In 2016, the newly increased afforestation area was 129,000 mu, the newly growing seedling area was 15,900 mu, and the newly built farmland forest network reached 47,000 mu. Newly developed economic forest amounted to 4.5 million mu, of which the constructed demonstration bases for pomegranates, long red dates, walnuts and so on covered 1.35 hectares. Two provincial-level leading enterprises were newly added, bringing the number of leading forestry enterprises at or above the provincial level to 35, and 28 forestry economy industry demonstration bases, model villages and demonstration households were newly added. In 2016, the added value of forestry was 163 million yuan, and forestry production declined slightly, as shown in "Fig. 2".

The output of major animal products both increased and decreased, while the total output of meat decreased. In 2016, the added value of animal husbandry reached 3.23 billion yuan, an increase of 4.9%, as in "Fig. 2". In 2016, the total output of meat in Zaozhuang reached 254,000 tons, a decrease of 3.8%. It is expected that, with the decline in the number of cattle and sheep being bred, the period of weak beef and mutton consumption will gradually pass, and the prices of cattle and sheep will gradually rise.

The development of fisheries has been rising steadily. In 2015, the output of aquatic products decreased slightly, but it shows an overall growth trend over the five years. Owing to frost damage and floods in the south, the situation of fishery aquaculture was good in 2016, with the prices of some fish continuing to rise. The enthusiasm of fishermen was higher, and the major economic indicators increased steadily. In 2016, the stocking area of fisheries reached 192,000 mu, an increase of 2.6% over the same period of the previous year, and the output of aquatic products was 96,000 tons, an increase of 5.5%. In 2016, the added value of fishery was 514 million yuan, an increase of 3.9%, as shown in "Fig. 2".

III. PROBLEMS ON THE AGRICULTURAL SUPPLY SIDE IN ZAOZHUANG

A. Tension in Support Factors for Agricultural Development Results in Increased Pressure on the Stable Growth of Agricultural Production

At present, farmers' costs for seeds, fertilizers and labor for food crops have been rising, which causes a year-by-year increase in the cost of growing grain and dampens farmers' enthusiasm for grain growing. In 2016, owing to the decline in grain prices, especially the price of corn, grain revenue decreased significantly. Fluctuations of market prices reduced farmers' income from grain growing, so their enthusiasm for grain growing fluctuates accordingly. Grain-growing households' enthusiasm for field management is not high, and some even give up watering certain plots, leading to a large drop in the per unit yields of those plots.

Judging from the integrated development of the primary, secondary and tertiary industries, the agricultural development of Zaozhuang has not got rid of the single concept of agricultural production and has not formed a complete industrial system. It is mainly manifested as follows: 1.
The industrial chain of agricultural products is developing slowly; the deep processing of agricultural products is insufficient, the processing and transformation ability is poor, and market competitiveness is not strong. At present, the number of leading enterprises at or above the city level in Zaozhuang accounts for only 3.5% of the province's total, with sales revenue accounting for only 2% of the province's. 2. The construction of agricultural product circulation and market coordination mechanisms lags behind; it lacks unified planning and a reasonable layout, its quantity and scale are small, its facilities are backward, and its functions are incomplete. 3. The rural logistics and distribution network is not perfect. Agricultural products are generally produced in rural areas where logistics outlets are sparse, which invisibly increases logistics costs and thus restricts the development of "Internet + agriculture" in Zaozhuang. 4. There are hidden dangers in the quality and safety of agricultural products, and market competitiveness needs to be improved.

B. The Proportion of Forestry Output Value Is Small, and Funds for Forest Culture and Management Are Insufficient

In 2016, the output value of forestry accounted for only 0.87% of the total output value of agriculture, forestry, animal husbandry and fishery. The added value of forestry increased steadily over 2012-2015 but decreased in 2016. Construction funds for forestry are the key factor directly affecting the production scale, quality and efficiency of the entire industry. Incomplete investment in forest culture and management, lagging funds for basic construction, and a lack of management of capital costs will hinder the renovation of forestry equipment and the improvement of forestry benefits.

C. Problems of Animal Husbandry Production Are Concentrated, with Relatively Large Breeding Risks

First, financing is difficult and costly. The risk of livestock breeding is relatively large, so the lending threshold of financial institutions for breeding enterprises is high. More than 70% of respondents report that borrowing is difficult and interest rates are high. Farmers mostly raise funds through personal loans and private financing; although the interest is relatively high, the procedures are simple. Second, business fluctuates and earnings are unstable. In recent years, due to the impact of domestic supply and demand, the prices of live pigs and eggs in Zaozhuang rise and fall in turn; the phenomena of the "pig cycle" and the "roller coaster" of egg prices appear frequently, which affects the sustainable and stable development of animal husbandry. Since April 2015, the price of pigs has been relatively high, at the peak of the swine cycle; however, the sluggish pig prices that lasted for nearly three years before April 2015 remain a warning. Third is the problem of livestock fecal pollution. With the rapid development of the rural economy and the popularization and application of chemical fertilizers, livestock excrement has lost the convenience and efficiency it offered as fertilizer in the past, so livestock manure is piled up arbitrarily; especially on rainy days, it causes serious pollution to the surrounding environment.

D. The Quality of Rural Labor Is Low, which Restricts the Further Development of the Economy

First, with the expansion of peasants' employment outside agriculture, a large number of young peasants with a certain cultural quality flow from rural areas to cities.
The quality of the agricultural labor force continues to weaken. Those who remain are weak in receiving external information and are not good at applying agricultural science and technology. In most areas, the level of specialization, scale, intensification and socialization of agricultural production and management is low, the degree of part-time agricultural production is high, and moderate-scale operation is still in its initial stage. Second, inadequate attention is paid to rural technology, and professional and technical personnel are few. In rural areas, there are only a few personnel with professional qualifications or professional skills. Some of the veteran cadres who have grown up in rural areas have a low cultural level, and some grassroots cadres are busy with day-to-day affairs and do not have enough energy. In some towns and villages, the number of agricultural technical personnel is small, which dampens their work enthusiasm, so it is hard to retain talent. These factors have seriously affected the development of the rural economy.

IV. COUNTERMEASURES AND SUGGESTIONS FOR THE SUPPLY-SIDE STRUCTURAL REFORM OF AGRICULTURE IN ZAOZHUANG

A. To Increase Efforts to Benefit Farmers and Strengthen Market Supervision

"Solving the people's food problem is the most important of the eight tasks of governing the country." Through the ages, food security has always been the primary task of governing the country and bringing peace to it. Zaozhuang should continue to improve the farmer-benefiting policy for grain production, increase the scale of agricultural subsidy funds, implement the subsidy policy for grain farmers, expand the scale of subsidies for agricultural machinery purchase, and gradually expand the pilot scope of subsidies for large farming households, to ensure that the city's grain acreage remains stable over the long term and that peasants' grain receipts do not decrease. The city should focus on the protection of agricultural land and stabilize the area of fertile land to ensure the safety of food production. On the basis of implementing the various subsidy policies, investment in rural infrastructure construction can be increased, the basic construction of farmland and water conservancy continuously strengthened, and the capacity for disaster reduction, disaster mitigation and natural disaster prevention enhanced, so as to comprehensively improve the capacity for sustainable development of agricultural production. Policy-oriented agricultural insurance should be vigorously promoted, and the variety of agricultural insurance premium subsidies actively expanded, to provide protection for farmers against production risks.

B. To Develop Efficient Agriculture and Increase Effective Supply

With the improvement of people's living standards, people increasingly pursue a better quality of life and pay more and more attention to food safety and environmental protection. Zaozhuang should actively develop modern, highly efficient agriculture and constantly explore new potential for agricultural development. First, optimize the industrial layout. For example, for animal husbandry, considering that livestock manure can be used as a high-quality fertilizer for farmland and that grain and its by-products are sources of livestock feed, it is better to locate farms in areas where the planting industry is developed, with enough farmland, gardens and nurseries around, so as to achieve an organic combination of agriculture, forestry and animal husbandry. Agricultural production and leisure travel can also be combined organically to vigorously develop leisure-tourism agriculture.
Second, laws and regulations on food safety and environmental protection must be improved, so that there are laws to follow. The certification of organic food, green food and pollution-free agricultural products should be brought onto a legal track, and the certification standards and legal status of the various certifications made clear. Food raw materials are produced in the environment we depend on, so the quality of the environment affects the quality of food; therefore, emphasis on environmental protection and emphasis on food safety complement each other.

C. To Speed up Industrial Integration and Improve the Industrial System

Efforts should be made to promote the extension of agriculture to both the pre-production and post-production ends and to encourage social forces to integrate extensively into the chain of industrialized operations, so as to form a development model that integrates supply and marketing as well as planting and breeding, and to accelerate the optimization and upgrading of traditional agriculture. Around the leading industries, a number of leading enterprises with a wide radiating reach and strong driving ability, as well as farmers' professional cooperatives, can be developed. The city should focus on cultivating ten leading agricultural enterprises, such as Xianghe Dairy, Longzhen Farming and Animal Husbandry and Yingge Food, and support them in becoming bigger and stronger through mergers and acquisitions, shareholding and listing. Modern logistics of agricultural products should be developed vigorously, large-scale agricultural marketing enterprises cultivated, and the construction of the logistics distribution network from county to village accelerated.

D. To Improve Product Quality and Focus on Brand Building

In 2015, the Central Rural Working Conference proposed strengthening the supply-side structural reform of agriculture, putting the "reduction of stock", "lowering of costs" and "supplementing of the short board" of agricultural products on the agenda. "Reducing stock" means changing the simple quantitative growth model of the traditional approach, which took meeting "adequate food and clothing" as its goal, and instead taking the adaptation to demand, the leading of demand and the creation of demand as objectives, guiding the supply structure of agricultural products to adapt to the resource environment and cater to the needs of consumers. Regional landmark products can lead market demand. Implementing a regional landmark brand strategy is an effective way to promote the supply-side structural reform of agriculture: it guides and supports cooperation between farm households of the same type and similar quality, specialized cooperatives and production and processing enterprises, integrates the brand resources of agricultural products, and unifies the registration or use of trademarks, packaging and logos to build regional brands of agricultural products. "Lowering costs" and "supplementing the short board" are inseparable from economies of scale, so Zaozhuang should focus on creating a number of leading enterprises and industrial clusters in large-scale agro-processing and vigorously support leading enterprises' construction projects in base development, brand building, product testing and quarantine, and technological innovation.

E. To Cultivate New Farmers and Promote Technological Innovation

Technological innovation is the core driving force for transforming the mode of agricultural production and promoting supply-side structural reform.
The innovation and application of agricultural information technology provide the information and technical support for the development of modern agriculture. On the basis of perfecting the construction of rural information infrastructure, the "Internet +" program should be implemented to effectively reconstruct the agricultural industrial chain in terms of technology, production, processing and sales, to innovate the processing and distribution of agricultural products, and to establish a new format for the development of modern agriculture. Technological innovation depends on talent. Investment in rural human resources must be increased to effectively transform the advantage of labor resources into an advantage of human capital. The first step is to attach importance to the cultivation of new types of professional farmers, adopting the methods of "going out" and "inviting in". The government should strengthen employment skills training, introduce free vocational education and training for farmers, and raise their knowledge of agricultural science and technology and their management capabilities. The second is to broaden the channels for agricultural technology promotion, strengthen cooperation with radio and television stations, and use the Zaozhuang Agricultural Information Network and the Zaozhuang Agricultural Technology Promotion Network to release technical guidance information.

F. To Adopt a Market-oriented Approach and Broaden Development Ideas

Green and efficient modern agriculture is the goal of the reform. Support for entrepreneurial incubation bases and entrepreneurial "DreamWorks" needs to be reinforced to create a number of provincial and municipal entrepreneurship demonstration parks. Agricultural industrial parks can also be founded, and rural tourism, ethnic customs tourism, leisure agriculture and traditional handicrafts developed according to market demand and local resource endowments, to promote the integrated development of the primary, secondary and tertiary industries in rural areas. The Yiyun Cupboard Tribe in Shanting, which combines pastoral landscape with container tourism, is a typical representative of creative agriculture. Zaozhuang should vigorously develop leisure agriculture and rural tourism by taking advantage of national, provincial and municipal leisure agriculture activities and rural tourism demonstration programs, actively guide leading agricultural enterprises such as Nine Summit Lotus Mountain, Shifo Mountain Village, Longzhen Animal Husbandry and Xianghe Dairy to extend their own standardized raw material bases forward, and develop eco-leisure agriculture based on sightseeing and picking. At the same time, the circulation and catering industries of rural areas should be driven forward to promote the further integration of the three major industries, so as to promote agricultural development through integrated development and to promote the supply-side structural reform of agriculture.

V. CONCLUSION

With the supply-side structural reform of agriculture becoming a research hot spot, theoretical research results are becoming more and more abundant.
Taking "reducing inventory, lowering costs and supplementing the short board" as the goal, and in view of the existing problems on the agricultural supply side in Zaozhuang City, this article puts forward a feasible path for the supply-side reform of agriculture in Zaozhuang, as in "Fig. 3", guided by the development concepts of innovation, coordination, greenness and openness. The feasibility of the specific countermeasures and suggestions needs to be tested in practice and continuously improved through practice. In addition, the peasants are the main force in the development of the rural economy, and their enthusiasm and initiative to participate will directly affect the anticipated effect of the reform. How to effectively increase the participation of peasants in the reform, so that they can better share its fruits, is another important topic for our practical research.
2019-05-30T23:44:21.738Z
2018-03-01T00:00:00.000
{ "year": 2018, "sha1": "b17e68cfa49fdccc95dc8f31eb4d6273eb26fa09", "oa_license": "CCBYNC", "oa_url": "https://download.atlantis-press.com/article/25894041.pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "916ab34ff16ec39acd87426cda2c9b83abf9424e", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Business" ] }
11292441
pes2o/s2orc
v3-fos-license
Mapping carcass and meat quality QTL on Sus scrofa chromosome 2 in commercial finishing pigs

Quantitative trait loci (QTL) affecting carcass and meat quality located on SSC2 were identified using variance component methods. A large number of meat and carcass quality traits were recorded in a commercial crossbred population: 1855 pigs sired by 17 boars from a synthetic line, which were homozygous (A/A) for IGF2. Using combined linkage and linkage disequilibrium mapping (LDLA), several QTL significantly affecting loin muscle mass, ham weight, ham muscles (outer ham and knuckle ham) and meat quality traits, such as Minolta L* and b*, ultimate pH and Japanese colour score, were detected. These results agree well with previous QTL studies involving SSC2. Since our study was carried out on crossbreds, different QTL may be segregating in the parental lines. To address this question, we compared models with a single QTL variance component with models allowing for separate sire and dam QTL variance components. The same QTL were identified using a single QTL variance component model as with a model allowing for separate variances, with only minor differences with respect to QTL location. However, the variance component method made it possible to detect QTL segregating in the paternal line (e.g. HAMB), the maternal lines (e.g. HAM) or in both (e.g. pHu). Combining association and linkage information among haplotypes slightly improved the significance of the QTL compared to an analysis using linkage information only.

Introduction

Pig breeding programs aim at improving pigs for economically important traits. Carcass quality has been successfully improved in most selection programs because phenotypes are easy to obtain on live animals via ultrasonic measurements of backfat and because these traits show a relatively high heritability. However, although breeding for meat quality has received much attention over the past two decades, it has not been the priority in most selection programs [1-4] because meat quality traits can only be measured on the relatives of selection candidates and late in life. Successful improvement of meat quality may be possible by combining molecular information with traditional measurements, because marker data can be obtained on all animals at an early age [5]. Molecular information, i.e. genes and QTL, has rapidly become available via genome scans of experimental crossbred populations (see the review by Bidanel and Rothschild [6] and PigQTLdb [7]). In many cases, favourable QTL cannot be exploited due to the poor performance of these exotic breeds with respect to commercially relevant traits. However, the number of QTL studies using commercial populations is increasing [8-22]. Identification of QTL in commercial lines requires a large number of families because fewer heterozygous founders are expected, especially for traits under selection such as carcass quality traits. Most of the studies mentioned above use 'paternal half-sib regression' as the statistical method to associate genotypes with phenotypes, which models the segregation of paternal QTL [23]. Variance component methods, based on the theory developed by Fernando and Grossman [24], are currently becoming the method of choice in association studies because they allow for much greater flexibility in modelling QTL in arbitrary pedigrees while simultaneously adjusting for systematic environmental effects [13,25].
A preliminary analysis using eight half-sib families detected putative QTL on SSC2 [15]. Based on these results, nine additional families were genotyped and analysed to increase the marker density in regions of interest. The goal of this paper is to map QTL affecting meat and carcass quality of commercial finishers located on SSC2, using variance component methods.

Population and phenotypes

The 1855 commercial finishers were the progeny of 17 boars of a synthetic sire line (Large White/Pietrain, TOPIGS, The Netherlands) and 239 unregistered hybrid sows. The piglets were born during a two-month period in 2002. Piglets were individually tagged at birth, and males were castrated three to five days after farrowing. Pigs were weaned on average at 17 days of age and raised to an average weight of 22.7 kg before being moved to the finishing barns. Diets comprised commercially available feeds with free access to water. Pigs were loaded in three batches per compartment at an average live weight of 118 kg and kept overnight in a lairage at the slaughterhouse. The average age (AGE) of each batch was 164, 172 and 185 days, respectively. During a 70-day period, pigs were slaughtered on 17 different days. Measurements were recorded on one half of the carcass. Backfat (BF) and loin depth (LD) were measured at the 10th rib using the Hennessy grading probe (HGP Systems Ltd, Auckland, NZ). Lean percentage (PLEAN) was calculated as: PLEAN = 58.86 - (0.61 x BF) + (0.12 x LD). Cold carcass weight (CCW) was recorded after temperature equalization. Primal cuts of ham (HAM) and loin (LOIN) were weighed and further dissected into boneless subprimals and individual muscles. Skin and fat were removed from the hams, and four subprimals were weighed: inside ham (IHAM), outer ham (OHAM), knuckle ham (KHAM) and the lite butt ham (LBHAM, i.e. part of the gluteus medius muscle). Together they summed to boneless ham muscle weight (BHAM). Loins were processed to a boneless loin without the fat cover (DLOIN). Meat quality measurements were taken both on the loin and the ham. Ultimate pH (pHu) was measured in the boneless loin 24-28 h post mortem. Loin Minolta L*, a* and b* (LOINL, LOINA and LOINB) were taken on the fresh cut surface of a 2.5-cm chop removed from the sirloin end, using a Minolta CR 300 (Minolta, Osaka, Japan). The same chop was used for a subjective colour score (score 1 to 6, with 1 = pale and 6 = very dark) using the Japanese colour scale (JCScut). The side view of the loin was also scored using this scale (JCSrib). A subjective marbling score (LMARB; 1 to 5, with 1 = devoid and 5 = overly abundant) was given to the chop based on marbling standards of the National Pork Producers Council [26]. Cores were taken from a second 2.5-cm chop using a 25-mm coring device to determine drip loss percentage (DRIP). Samples were weighed, put in pre-weighed tubes and stored in a cooler. After 24 h, samples were reweighed and drip loss was calculated [27]. Purge loss (PURGE, %) was determined by weighing a 7.5- to 10-cm piece of the remainder of the boneless loin, cooling it for 5 days in plastic bags and reweighing. Subjective firmness scores (FIRM; 1 to 3, with 1 = soft and exudative and 3 = firm) were evaluated using NPPC standards [28]. Meat quality measurements taken on the ham included Minolta L*, a* and b* values on the fresh cut surface of the inside ham muscle (HAML, HAMA and HAMB). A subjective marbling score (HMARB; 1 to 4, with 1 = devoid and 4 = abundant) was assigned to the outside ham muscle.
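The two explicit calculations in this section, the lean percentage formula and the drip loss from the reweighed cores, can be written out directly. The following is a minimal Python sketch; the function names and the example inputs are ours, only the formulas come from the text.

```python
# Minimal sketch of the lean percentage and drip loss calculations described
# above. Function names are illustrative; the formulas follow the text.

def lean_percentage(backfat_mm: float, loin_depth_mm: float) -> float:
    """PLEAN = 58.86 - 0.61*BF + 0.12*LD, with BF and LD from the Hennessy probe."""
    return 58.86 - 0.61 * backfat_mm + 0.12 * loin_depth_mm


def drip_loss_pct(core_weight_g: float, weight_after_24h_g: float) -> float:
    """Drip loss (%) as the relative weight lost by a loin core after 24 h storage."""
    return 100.0 * (core_weight_g - weight_after_24h_g) / core_weight_g


if __name__ == "__main__":
    print(f"PLEAN: {lean_percentage(15.0, 55.0):.2f} %")    # hypothetical probe readings
    print(f"Drip loss: {drip_loss_pct(25.0, 24.3):.2f} %")  # hypothetical core weights
```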
General statistics regarding the data are given in van Wijk et al. [29].

Genotyping and linkage map

DNA was extracted from ear or loin tissue samples using the Puregene DNA Isolation kit (D-70KA, Gentra Systems, Minneapolis, USA). Isolated DNA was tested on a 1.2% agarose gel for quality and adjusted in NaCl-Tris-EDTA (STE) buffer to a final concentration of 15 ng/uL. Genotyping was performed in two batches. First, eight half-sib families were typed for 10 microsatellite markers on SSC2 [15]. Next, nine additional families were genotyped for eight markers (out of the 10 markers previously used). Subsequently, 16 microsatellite markers were added to fine-map regions on SSC2 based on preliminary analyses. All boars were genotyped for IGF2 and were homozygous (A/A). The markers included in the statistical analysis are shown in Table 1. Genotypes were scored in duplicate and checked against pedigree information. Crimap 2.4 [30] was used to construct a sex-average linkage map. The resulting recombination fractions/cM distances were used in Simwalk version 2.89 [31] to reconstruct haplotypes, which were used in the QTL analyses. Distances calculated with the Haldane linkage function were used in the QTL analyses, while distances calculated with the Kosambi linkage function are reported for comparison with QTL locations given in the literature [7].

Statistical analysis

QTL were mapped based on a combined linkage disequilibrium and segregation analysis using the variance component method, because this method uses the segregation from both the sires and the dams, uses linkage disequilibrium among haplotypes in the founders, allows for simultaneous estimation of polygenic, QTL, litter and fixed effects, and allows for complex pedigrees (half- and full-sib structure). Identity-by-descent (IBD) probabilities of haplotypes, using the reconstructed haplotypes, were calculated with the LDLA package [32], which is based on the theory developed by Meuwissen and Goddard [33]. IBD probability matrices were calculated at the midpoint of each bracket of flanking markers. The likelihood at each evaluation point was determined using ASREML [34]. For comparison, models were also fitted ignoring the linkage disequilibrium (LA-only). Phenotypes were analysed according to the following model; since the pedigree of the sows was not available, a sire-dam model was used (one-component model):

y = Xb + Zs + Sc + Wv + e,    (1)

where y is a vector containing phenotypic values, b is a vector containing non-genetic effects, s is a vector containing polygenic sire effects, c is a vector containing common litter and dam effects, v is a vector containing haplotype effects due to a putative QTL, and e contains the residual effects. Non-genetic effects considered were barn-group-batch and sex as class variables, and 'cold carcass weight' and 'days in the finishing barn' as linear covariables. The random effects s, c, v and e were assumed to be normally distributed with zero mean and variances A sigma^2_s, I sigma^2_c, G_p sigma^2_v and I sigma^2_e, respectively, where A is the genetic relationship matrix among the sires, including five generations of known pedigree, G_p is the IBD matrix among the haplotypes at evaluation point p, and I is an identity matrix. X, Z, S and W are incidence matrices relating effects to phenotypes. To relax the assumption of equal variance among the paternal and maternal haplotypes in model 1, the following model (2), with paternal and maternal haplotype effects entering through analogous incidence matrices, was applied:

y = Xb + Zs + Sc + W_s v_s + W_d v_d + e.    (2)

In model 2, a separate variance component is fitted for the paternal (v_s) and maternal (v_d) haplotypes (two-component model). Since the sires and the anonymous hybrid dams originated from different populations, different QTL alleles may be segregating at the QTL.
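The covariance structure implied by model (1), and the likelihood ratio test described in the next section, can be sketched numerically. The authors fitted these models with ASREML, so the numpy fragment below is only an illustration of the algebra, with all matrices assumed to be supplied by the caller.

```python
# Illustrative numpy sketch of the algebra behind model (1) and the LRT; not
# the authors' implementation (they used ASREML). Incidence matrices X, Z, S,
# W, relationship matrix A and IBD matrix Gp are assumed given.
import numpy as np


def loglik(y, X, b, Z, S, W, A, Gp, s2_s, s2_c, s2_v, s2_e):
    """Gaussian log-likelihood with V = ZAZ's2_s + SS's2_c + WGpW's2_v + I*s2_e."""
    V = (s2_s * Z @ A @ Z.T + s2_c * S @ S.T
         + s2_v * W @ Gp @ W.T + s2_e * np.eye(len(y)))
    r = y - X @ b
    _, logdet = np.linalg.slogdet(V)
    return -0.5 * (len(y) * np.log(2 * np.pi) + logdet + r @ np.linalg.solve(V, r))


def lrt(loglik_qtl: float, loglik_null: float) -> float:
    """Likelihood ratio test statistic: twice the log-likelihood difference."""
    return 2.0 * (loglik_qtl - loglik_null)
```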
To relax the assumption of equal variance among the paternal and maternal haplotypes in model 1 the following model (2) was applied: In model 2, a separate variance component is fitted for the paternal (v s ) and maternal (v d ) haplotypes (two-component model). Since the sires and the anonymous hybrid dams originated from different populations different QTL-alleles may be segregating at the QTL. Test statistic and significance threshold To test the hypothesis of the presence of a QTL (H 1 ) versus no QTL (H 0 ) the likelihood ratio test (LRT) was applied. The LRT statistic at each midpoint between adjacent markers was calculated as twice the difference between the log likelihood of model 1 (or 2) minus the log likelihood of a model without a QTL effect. The test statistic plotted along the chromosome gave a LRT-profile. Given this profile, thresholds were calculated which take multiple testing across the chromosome into account using the method described by Piepho [35]. Since different likelihood profiles were obtained for each model and trait specific threshold values were obtained for each combination, significance was tested using this specific threshold. Map construction Genetic linkage maps are presented in Table 1. The order of the markers and the distance among markers is in close agreement with the USDA-MARC.2 genetic linkage map [36] except for marker pair SWR1910-SWR783, which is reversed and separated by 14 cM instead of 1 cM. The average distance among the markers is 6 cM. QTL The LRT statistics for traits that exceeded a Piepho-corrected threshold value of 0.05 and the position of their maximum value are given in Table 2. Depending on the trait analysed, the 0.05 threshold obtained corresponded with a nominal p-value of around 0.005. Few false positive QTL will be found at the expense of false negatives using these strict thresholds. Use of a commercial population that has been under selection for several decades might be another reason for the number of QTL observed in this study. Results are shown for the model applying a single variance component as well as for the two-component model, i.e. allowing for different variances among paternal and among maternal haplotypes. The LRT statistics and position of the QTL were very similar for both models. LRTprofiles for meat quality traits with significant QTL are shown in Figure 1 using LRT-values from the two-component model. In Figure 2 similar profiles are shown for carcass quality traits. Applying an analysis using linkage information only (LA-only) showed fewer and less significant QTL (Table 2). Especially for ham-related traits linkage disequilibrium information seems to be of added value. Colour A significant QTL was observed for HAML. The estimated location differed slightly for the two models: 17 cM for the one-component model and 26 cM for the two-component model. Malek et al. [11] also found a QTL for this trait on SSC2 but they were located at 72 and 116 cM. (The locations of QTL from other studies are taken from the pigQTLdb [7] where all the map distances are converted to the USDA-MARC map). For HAMB no QTL have previously been reported on SSC2. The JCSrib QTL is in accordance with the QTL for a similar subjective colour score observed by Malek et al. [11] although their score was on the cut surface of the loin instead of the side-rib-view. The QTL for HAMB and JCSrib were found at almost the same position, which might indicate that it is the same QTL affecting both traits. 
pH

The QTL for pHu on SSC2 was observed between markers Sw1686 and Sw2167 (65 cM). Lee et al. [37] observed two QTL for pHu on SSC2, at 42 and 64 cM, in an F2 cross between Meishan and Pietrain. Su et al. [38] observed a QTL for pHu at 67 cM. The ultimate pH is usually a good predictor of water holding capacity. Malek et al. [11] showed that two QTL are segregating for this trait on SSC2 (around 75 and 114 cM). However, in this study no significant QTL were found for drip or purge loss.

Loin muscle

Figure 2 suggests that more than one QTL on SSC2 affects the amount of loin muscle (DLOIN). The most significant QTL for DLOIN, at 73 cM on SSC2, has not previously been reported. Several studies have reported a QTL for the amount of loin at the beginning of SSC2, which is most likely associated with the IGF2 gene [22,39,40,37,17]. However, Varona et al. [41] and also Lee et al. [37] have reported a QTL for loin depth and percentage of lean cuts around 65 cM.

Carcass traits

Total weight of ham (HAM) as well as part of this ham (OHAM) showed a significant QTL at 103 cM, but the significant QTL for knuckle ham (KHAM) was situated at the end of SSC2. Duthie et al. [17] detected a QTL for ham weight on SSC2 at 15 cM, as did Vidal et al. [14], although the latter study does not give the position.

Variance components

In Table 3, the proportions of total variance due to polygenic (h^2), litter (c^2) and QTL (v^2) effects, as well as the residual and total variance, are given for the traits mentioned in Table 2, at the evaluation point where the LRT for the QTL was at its maximum. Given the hybrid origin of the population used in this study, i.e. a single-strain sire line crossed with a three-way cross sow, the two-component model is probably more appropriate than the one-component model, because in the two-component model the segregation of the paternal and maternal haplotypes is modelled as independent effects. This is illustrated in Table 3, where the contributions of the paternal and maternal components, v^2_s and v^2_d, are given. In general, the proportions of variance due to polygenic and litter effects are in close agreement with van Wijk et al. [29], in which the data were analysed before marker data were available, i.e. with a model without QTL effects. The biggest disagreement was observed when comparing the h^2 estimates for pHu: the h^2 for pHu dropped from 0.11 to 0.02. In both models, the QTL variance (v^2) is relatively high, indicating that the genetic variance has shifted from polygenic to QTL variance. This might be the result of the specific data analysed. Since it is unlikely that a single QTL explains most of the genetic variance, the QTL variance is most likely overestimated. The different variance components for sire and dam haplotypes for HAMB and HAM indicate that the underlying QTL are not segregating in the dams and sires, respectively. Preferably, a two-component model should be applied for crossbred data, where different QTL alleles could be segregating in the different populations involved in the hybrid offspring.

LDLA

In this study, linkage disequilibrium (LD) information was included when calculating the IBD matrices. However, it is not clear how IBD due to LD should be calculated for crossbred populations. The theory developed by Meuwissen and Goddard [33] assumes a single population 100 generations ago, which is not very likely for very different pig breeds. Uleberg et al. [42] applied an IBD value of zero due to LD between base haplotypes of different breeds.
Given that all pigs originate from a domesticated wild boar population, this seems too extreme, because haplotypes could be identical by descent due to this single origin. Biodiversity studies, e.g. Eding and Meuwissen [43], which provide estimates of genetic distance among breeds, could be used to determine IBD within and between breeds simultaneously. Compared to Meuwissen et al. [44] and Olsen et al. [45], the LRT profiles (Figures 1 and 2) are less peaked. This might be due to the lower marker density used in this study, or to the use of crossbred data instead of single-population data as in the other studies, which has a positive effect on linkage disequilibrium information because IBD among founder haplotypes can be better estimated. In particular, the linkage disequilibrium information decreases the width of the peaks because it takes historic recombination into account [44].

Figure 1. LRT profiles for meat quality traits with QTL. Thresholds are corrected for multiple testing and averaged over traits; triangles on the X-axes indicate the locations of the markers.

Figure 2. LRT profiles for carcass quality traits with QTL. Thresholds are corrected for multiple testing and averaged over traits; triangles on the X-axes indicate the locations of the markers.

Conclusion

QTL affecting meat and carcass quality were found on SSC2 in this large, commercially produced population. QTL effects were significant even after correction for multiple testing. The variance component method used to detect QTL made it possible to detect QTL segregating in the paternal line (e.g. HAMB), the maternal lines (e.g. HAM) or in both (e.g. pHu). Combining association and linkage information among haplotypes slightly improved the significance of the QTL compared to an analysis using linkage information only.
2014-10-01T00:00:00.000Z
2009-01-05T00:00:00.000
{ "year": 2009, "sha1": "88cb6c25eef75fb995c2d2ded0e81c0114b42728", "oa_license": "CCBY", "oa_url": "https://gsejournal.biomedcentral.com/track/pdf/10.1186/1297-9686-41-4", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8a8bfee7118264784de442c6f578a1027272fd4f", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
233597694
pes2o/s2orc
v3-fos-license
Improved earthquake aftershocks forecasting model based on long-term memory

A prominent feature of earthquakes is their empirical laws, including memory (clustering) in time and space. Several earthquake forecasting models, such as the epidemic-type aftershock sequence (ETAS) model, were developed based on these empirical laws. Yet, a recent study [1] showed that the ETAS model fails to reproduce the significant long-term memory characteristics found in real earthquake catalogs. Here we modify and generalize the ETAS model to include short- and long-term triggering mechanisms, to account for the short- and long-time memory (exponents) discovered in the data. Our generalized ETAS model accurately reproduces the short- and long-term/distance memory observed in the Italian and Southern Californian earthquake catalogs. The revised ETAS model is also found to improve earthquake forecasting after large shocks.

Introduction

Earthquakes pose a serious threat to human life and property and, as such, have attracted the attention of many scientists. Yet, the understanding and forecasting of earthquakes are limited. Predicting earthquakes by using a diagnostic precursor via some observable signal has not produced a reliable prediction scheme [2-4]. Indeed, current earthquake predictability is based on the known seismic laws. The distribution of earthquake magnitudes is exponential and follows the Gutenberg-Richter law (N(m) is proportional to 10^(-bm), where N is the number of earthquakes of magnitude m and b is approximately 1) [5]. The number of earthquakes triggered by a mainshock increases exponentially with its magnitude (Utsu law) [6]. In addition, the rate of triggered events decays as a power law with time (Omori law) [7]. An operational earthquake forecasting scheme has been developed and applied to forecast earthquake sequences based on the empirical laws, including the clustering of earthquakes in space and time [3]; space-time earthquake clustering can be attributed to the triggering of earthquakes [8]. This clustering stimulated the development of a series of earthquake forecasting models based on a branching process, such as the epidemic-type aftershock sequence (ETAS) model [9,10] and the short-term earthquake probability (STEP) model [11]. The ETAS model combines the Gutenberg-Richter, Utsu and Omori laws into a Hawkes (point) process in such a way that every past earthquake (above a certain magnitude) triggers other earthquakes according to the same laws. Previous studies and many retrospective analyses [12,13] have shown that clustering models, such as the ETAS model, provide better forecasts than other models; still, these models cannot explain some central earthquake features [14-17], and many statistical physics features of seismic activity have not yet been fully understood [18]. Temporal and spatial memory (correlations) exist widely in many natural systems [19-21], including in earthquake activity. For example, Livina et al. [22] identified the short-term memory of successive inter-event times in real earthquake catalogs using a conditional probability method. They found a strong short-term memory in which a short (long) inter-event time tends to follow a short (long) inter-event time. Other correlation detection methods, such as detrended fluctuation analysis [23], have also been applied to detect the memory of inter-event times [24].
The empirical short-term memory between successive inter-event times in real catalogs has been found to be reproduced by the ETAS model only for a narrow range of model parameters [25]. Recently, a new measure, called the 'lagged' conditional probability, has been introduced [1] to explore long-term memory in both successive and non-successive inter-event times and distances. This analysis has resulted in a memory measure versus (time or distance) lag for which a crossover between two distinct behaviors has been found: a slowly decaying power law at short scales (time or distance) and a significantly faster decay (that may be exponential) at long scales [1]. This behavior, discovered in real catalogs, could not be reproduced by the ETAS model. More specifically, the model's analysis resulted in memory without the crossover that was observed in the real catalogs [1]; the model's memory is weaker (stronger) on short (long) time scales than that of the real catalogs. The value of the power-law exponent depends on the productivity parameter alpha, which is associated, in the model, with the Utsu law. Earthquakes can trigger more correlated events with a larger alpha, resulting in enhanced earthquake memory. Therefore, based on the empirical finding [1] of a crossover in the memory behavior, here we introduce into the ETAS model two productivity parameters, a large and a small one, alpha_1 and alpha_2, for short and long time scales, respectively. We show here that this revised ETAS model reproduces the observed double power-law behavior of the memory, as well as the crossover observed in the real data. Moreover, we show that the revised model significantly improves the forecasting performance of earthquake events.

Data

The Italian earthquake catalog is complete [26] for events with magnitudes above 3 and includes 8854 events from 1981 to 2017; the earthquake rate is 0.67 events per day. A catalog is 'complete' when all events above the specified magnitude are included in it. The Southern California catalog is complete [27] for events with magnitudes above 3 and includes 13,586 events from 1981 to 2018; the earthquake rate is 0.98 events per day.

The revised ETAS model

In the ETAS model, seismic events are assumed to follow a space-time stochastic point process [9,10]. The magnitude of each event above a threshold M_0 is drawn independently from the Gutenberg-Richter distribution (with b = 1). The conditional rate lambda at location (x, y) at time t is given by

lambda(x, y, t | H_t) = mu(x, y) + sum over {i: t_i < t} of kappa(M_i) g(t - t_i) f(x - x_i, y - y_i | M_i),    (1)

where H_t is the history of the process prior to t, t_i are the times of the past events, and M_i are their magnitudes. mu(x, y) = mu_0 u(x, y) is the background intensity at location (x, y), where u is the spatial probability density function (PDF) of background events, estimated by the method proposed by Zhuang [28,29]; mu_0 is the background rate of the entire region. We denote the total number of past events as n - 1. The dependence of the triggering ability on magnitude is given by the Utsu law as

kappa(M_i) = A exp[alpha (M_i - M_0)], with alpha = alpha_1 if n - i <= h * 10^(-b*M_0) and alpha = alpha_2 otherwise,    (2)

where A is the occurrence rate of earthquakes at zero lag. In equation (2), we introduce two productivity parameters, alpha_1 and alpha_2 (alpha_1 >= alpha_2). When alpha_1 = alpha_2, equation (2) reduces to the original ETAS model. h * 10^(-b*M_0) is the crossover number of events at the magnitude threshold M_0; h is a parameter that can be estimated from the data. The ith historical event has a larger rate of triggering the nth event (due to the larger alpha_1) when the number of events between the ith and nth events is smaller than h * 10^(-b*M_0).
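The two-regime productivity term of equation (2) is simple enough to write out directly. The following Python sketch is our reading of the equation, not the authors' code; the parameter values in the example are those quoted later in the text for the Italian ETAS2 simulations.

```python
# Hedged sketch of the revised (two-alpha) productivity term, equation (2):
# past event i triggers with the larger exponent alpha1 while fewer than
# h*10^(-b*M0) events separate it from the current event n, else alpha2.
import math


def productivity(A: float, M_i: float, M0: float, b: float,
                 alpha1: float, alpha2: float, h: float,
                 n: int, i: int) -> float:
    """kappa(M_i) = A * exp(alpha * (M_i - M0)), with alpha switching at the crossover."""
    crossover = h * 10.0 ** (-b * M0)      # crossover number of events (200 here)
    alpha = alpha1 if (n - i) <= crossover else alpha2
    return A * math.exp(alpha * (M_i - M0))


if __name__ == "__main__":
    # ETAS2 values quoted in the text: A = 3.35, alpha1 = 2.0, alpha2 = 1.5, h = 2e5.
    print(productivity(A=3.35, M_i=5.0, M0=3.0, b=1.0,
                       alpha1=2.0, alpha2=1.5, h=2e5, n=100, i=50))
```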
Note that we consider here a regional-size (like Italy) model where the spatiotemporal clustering sequences below the crossover are mostly unmixed with other remote sequences. The parameter h may be different when dealing with a much larger area. The function g(t − t_i) follows the Omori law,

g(t − t_i) = (t − t_i + c)^(−p),  (3)

where c and p are the Omori law parameters. The spatial clustering of aftershocks is implemented by introducing a spatial kernel function of the form

f(x − x_i, y − y_i; M_i) ∝ [1 + r_i² / (D e^{γ_m (M_i − M0)})]^(−q), with r_i² = (x − x_i)² + (y − y_i)²,  (4)

which indicates that the distances between triggering and triggered events depend on the magnitudes of the triggering events; q, D, and γ_m are estimated parameters. We chose the parameters of the original ETAS model (ETAS1) for Italy based on refs [25,30], such that the critical branching parameter n is smaller than 1 (β = ln(10)). The spatial parameters were chosen to be q = 2.0, D = 0.03 (in units of degrees), and γ_m = 0.48, and were estimated based on the method described in references [28,29]. The parameters of the revised ETAS model (ETAS2) were chosen to be A = 3.35, α1 = 2.0, α2 = 1.5, and h = 2 × 10^5 (so the crossover h·10^(−b M0) = 200 for the magnitude threshold M0 = 3.0); the other parameters are the same as for the ETAS1 model.

N-test

The N-test [31] compares the total number of observed earthquakes (N_obs), within the spatial region, magnitude range, and time interval, to the expected total number of target earthquakes predicted by the forecasting model (N_forecast).

Results

We first define the earthquake inter-event time interval as τ_i = t_{i+1} − t_i (in days); this is the time interval between two consecutive earthquake events above a certain magnitude threshold. Similarly, an inter-event distance r_i is defined as the distance (in km) between the epicenters (i.e., the projections of hypocenters on the surface) of events i + 1 and i, where both are above a certain magnitude threshold. We calculate the inter-event times and distances with the magnitude threshold M0 = 3.0 (Mw) for Italy's seismic catalog, which is known to be complete for earthquake magnitudes above 3.0 [26] (see materials and methods); this catalog spans 37 years, from 1981 to 2017. We then propose the 'lagged' conditional cumulative distribution function (CDF) method based on the 'lagged' conditional PDF [1], as follows. First, all inter-event times (distances) are sorted into ascending order and then divided into three equal quantiles. The first quantile, Q1, contains the smallest 1/3 of inter-event times (distances), and the third quantile, Q3, contains the largest 1/3. The conditional CDF of inter-event times (distances) is defined as C(τ_k | τ_0) (C(r_k | r_0)), where τ_0 (r_0) belongs to Q1 or Q3, and τ_k (r_k) is the lagged kth inter-event time (distance) that follows τ_0 (r_0). This method generalizes previous studies that used lag 1 (k = 1) [22], thus revealing the long-range memory empirical laws of earthquake catalogs. To demonstrate the empirical long-term memory, we show, in figures 1(a) and (b), the lagged conditional CDFs C(τ_50 | τ_0) and C(r_50 | r_0) using a high lag number k = 50, for the first and third quantiles, Q1 and Q3, for the Italian catalog. The lagged conditional CDFs of Q1 are significantly different from those of Q3 (see figures 1(a) and (b)). This implies the existence of memory (correlations) even for a large number of lags.
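The quantile-conditioned, lagged CDFs just described can be computed directly from a sequence of inter-event times; here is a minimal sketch (function and variable names are ours). Applied to memoryless synthetic data, it yields nearly overlapping CDFs, mirroring the shuffled-catalog control discussed below.

```python
import numpy as np

def lagged_conditional_cdfs(tau: np.ndarray, k: int = 50):
    """Return (grid, CDF of tau_k given tau_0 in Q1, CDF given tau_0 in Q3).

    Q1/Q3 are the lower/upper thirds of all inter-event times; tau_k is the
    inter-event time k lags after the conditioning value tau_0.
    """
    q1, q3 = np.quantile(tau, [1 / 3, 2 / 3])
    followers_q1 = tau[k:][tau[:-k] <= q1]     # tau_k with tau_0 in Q1
    followers_q3 = tau[k:][tau[:-k] >= q3]     # tau_k with tau_0 in Q3
    grid = np.sort(tau)
    cdf = lambda x: np.searchsorted(np.sort(x), grid, side="right") / len(x)
    return grid, cdf(followers_q1), cdf(followers_q3)

# Control: i.i.d. (memoryless) inter-event times give overlapping CDFs.
rng = np.random.default_rng(0)
tau = rng.exponential(size=10_000)
grid, c1, c3 = lagged_conditional_cdfs(tau, k=50)
print(f"max CDF gap for memoryless data: {np.max(np.abs(c1 - c3)):.3f}")
```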
For randomly shuffled catalogs, which do not contain memory, the lagged conditional CDFs of both Q1 and Q3 are identical to the unconditional CDF (indicated by the dashed black curves in figures 1(a) and (b)). Thus, the real catalogs exhibit memory, even for large lags k, as found earlier [1]. We next test the memory in the ETAS model by simulating earthquake records for the region of Italy (34°N-48°N, 6°E-20°E), based on the thinning method [10,32]. The original and our revised ETAS models are called here ETAS1 and ETAS2, respectively. In ETAS2, we introduce the productivity parameter α1 for small lag index k (short time scales) and the smaller productivity parameter α2 for large lag index k above a crossover value (long time scales). See the data and methods section for more details regarding the model. We generate 50 realizations of synthetic events greater than or equal to magnitude 3 for a time window of 13 500 days (the same as the time length of the real catalog) for the synthetic catalogs of ETAS1 and ETAS2. The earthquake rates are 1.11 ± 0.06 and 0.74 ± 0.09 events per day for ETAS1 and ETAS2, respectively. The higher rate indicates that more aftershocks are generated by ETAS1 than by ETAS2 over the entire period. The rate of the real catalog is 0.67 events per day. Figure S1 (https://stacks.iop.org/NJP/23/042001/mmedia) shows the epicenter map and longitude-time plot of the catalogs for the time window. Figures 1(c) and (d) show the lagged conditional CDFs C(τ_50 | τ_0) and C(r_50 | r_0) for the catalogs simulated by ETAS2. Note that there are substantial differences between the CDFs of Q1 and Q3, and these differences are common to both the real (figures 1(a) and (b)) and the ETAS2 simulated catalogs (figures 1(c) and (d)). In contrast, the CDFs of Q1 and Q3 of the original ETAS1 model almost completely overlap, for both inter-event times and distances (figures 1(e) and (f)), in contrast to the observed CDFs. This demonstrates the existence of stronger memory in ETAS2 and, in contrast, much weaker memory in ETAS1. Figure S2 shows that the memory of ETAS1 is even weaker when the rate of aftershocks is similar to that of ETAS2. To quantify the memory based on the lagged conditional CDF, the maximum gap S (indicated by a double arrow in figure 1) between the lagged conditional CDFs of the first and third quantiles is calculated. S is theoretically bounded between 0 (complete overlap between the CDFs of Q1 and Q3) and 1 (complete separation), where a larger S indicates stronger memory. We next calculate the memory measure for the real Italian catalog as a function of the lag index k for inter-event times and distances (figure S3). We also consider the larger completeness magnitude thresholds M0 = 3.5 and 4.0. We rescale the memory measure S(τ_k | τ_0) by a factor 10^(aM0), which represents the dependence of memory on the magnitude threshold [1] (i.e., F(x) = S(x)·10^(aM0)), where the lag index k is rescaled by x = k·10^(bM0) to account for the Gutenberg-Richter law. The curves of all cases collapse onto a single curve F(x) after the rescaling (figure 2). The rescaled memory measure F(x) of the Italian catalog decays slowly for small x (lags) and faster for large x (lags), both for inter-event times and inter-event distances, as seen in figures 2(a) and (b). Figures 2(c) and (d) depict the corresponding scaling functions for the ETAS2 model (similar to figures 2(a) and (b)) and the associated crossover of the scaling curves.
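The memory measure S defined above, the maximum vertical gap between the two lagged conditional CDFs, coincides with a two-sample Kolmogorov-Smirnov statistic between the follower sets, which gives a compact implementation; the rescaling helper encodes F(x) = S·10^(aM0) with x = k·10^(bM0). This is our sketch, not the paper's code, and the exponent a must be fitted to the catalog.

```python
import numpy as np
from scipy.stats import ks_2samp

def memory_measure(tau: np.ndarray, k: int) -> float:
    """S(k): maximum gap between the lagged conditional CDFs of Q1 and Q3,
    i.e. the two-sample Kolmogorov-Smirnov statistic of the follower sets."""
    q1, q3 = np.quantile(tau, [1 / 3, 2 / 3])
    followers_q1 = tau[k:][tau[:-k] <= q1]
    followers_q3 = tau[k:][tau[:-k] >= q3]
    return float(ks_2samp(followers_q1, followers_q3).statistic)

def rescale(k, S, M0, a, b=1.0):
    """Collapse S(k) curves from different thresholds M0 onto F(x):
    x = k * 10**(b*M0) accounts for Gutenberg-Richter thinning of the
    catalog; F = S * 10**(a*M0) removes the threshold dependence of S."""
    return np.asarray(k) * 10 ** (b * M0), np.asarray(S) * 10 ** (a * M0)
```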
Table 1. Estimated parameters (mean ± std): the exponent a in the scaling factor 10^(aM0) and the power-law exponents γ1 and γ2 of the scaling function, for inter-event times and distances, for the real catalog of Italy and for the ETAS1 and ETAS2 models. For the models, the error bars show the standard deviations over 50 realizations. (Columns: Parameter, Real, ETAS1, ETAS2.)

Moreover, the crossovers are close to x_c ≈ 10^5 for both the real catalog and our developed ETAS2 model. Immediately after large shocks, the crossover point (lag) corresponds to a time of the order of 40 days (see figures S4 and S5). The scaling function behaves as F(x) ∼ x^(−γ1) for x < x_c and F(x) ∼ x^(−γ2) for x ≫ x_c. These two exponents (γ1, γ2) and the parameter a are summarized in table 1, showing that the new ETAS2 model reproduces quite accurately the scaling exponents of the real catalog. In contrast, the original ETAS1 model exhibits a power-law behavior with only a single exponent, without a crossover, and thus fails to reproduce the observed memory characteristics of the data (see figures 2(e) and (f)). The flat region at large k in figure 2(f) indicates that the inter-event distances are completely uncorrelated there. We also calculate the memory measure for the real and simulated catalogs of Southern California (see figures S6 and S7 and table S2). The results indicate that the new ETAS2 model reproduces quite well the scaling function of the real catalogs, in contrast to the ETAS1 model. An earthquake catalog could, in principle, be incomplete after a large shock as a result of (a) coda waves, (b) the inefficiency of the seismic network in detecting weak and frequent events, and (c) the overlapping of aftershock seismograms [33-36]. To test whether the crossover could be due to incompleteness, we produced incomplete catalogs based on refs [34-36]. Figure S8 shows that a synthetic incomplete catalog based on the ETAS1 model cannot reproduce the crossover observed in the real data and in the ETAS2 model. This suggests that our results are not due to possible incompleteness of earthquake catalogs. Moreover, the crossover observed in the real data is not reproduced by ETAS1 even in an extensive set of simulations covering a wide range of ETAS1 model parameters (figures S11-S13); in all of these, the resulting crossover is significantly weaker than the observed one. Next we test and compare the forecasting performance of the ETAS1 and ETAS2 models. We perform the N-test [31] (see materials and methods), which compares the total number of earthquakes forecast by the model with the total observed number of earthquakes over the entire region; we apply this test to the L'Aquila (Italy) earthquake (magnitude 6.3) that occurred on April 6, 2009 [37]. Figure 3(a) presents the locations of earthquakes above magnitude threshold 3 that occurred within one month after the L'Aquila mainshock. The cumulative number of earthquakes increased immediately after the L'Aquila mainshock (red circles in figure 3(b)). Figure S9 shows the non-cumulative number of earthquakes as a function of time. Notably, the original ETAS1 model forecasts many fewer events than appear in the real catalog, while the forecast of the revised ETAS2 is very similar to the real events. Indeed, the ETAS1 model is known to severely underestimate the number of earthquakes immediately after large shocks [37], while overestimating the number of earthquakes over long-term periods.
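The N-test comparison reported here amounts to checking whether the observed count falls inside the central spread of counts from many model realizations; a minimal sketch with illustrative numbers only (the Poisson toy forecast stands in for real ETAS realizations):

```python
import numpy as np

def n_test(n_obs: int, forecast_counts: np.ndarray, level: float = 0.95):
    """Pass if the observed event count lies within the central `level`
    interval of the counts produced by independent model realizations."""
    lo, hi = np.percentile(forecast_counts,
                           [(1 - level) / 2 * 100, (1 + level) / 2 * 100])
    return lo <= n_obs <= hi, (lo, hi)

# Illustrative toy forecast: 500 realizations with a mean of 240 events.
rng = np.random.default_rng(1)
counts = rng.poisson(lam=240, size=500)
print(n_test(n_obs=260, forecast_counts=counts))
```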
The suggested remedy has been to increase the α-value after large shocks [37-42]. However, such an increase of the α-value is not consistent with the α-value evaluated from the entire Italian catalog: an ETAS1 model with a large α-value overestimates the cumulative number of earthquakes over long time periods (see figure S10) and, furthermore, cannot reproduce the observed crossover of memory in the Italian catalog (see figures S11-S13). We also test the quality of the forecast in space (figure 3(c)) and find that our revised ETAS2 model outperforms the original ETAS1, producing an increased number of events that brings its performance closer to the real observations. The spatial clustering of aftershocks can place a large fraction of events within a short distance; aftershocks generally occur within a 100 km radius of the mainshock's epicenter. We find that 98% of the events in Italy within 30 days after the L'Aquila mainshock were within a radius of R = 100 km (figure 3(c)). We find that 94% and 86% of the events (from 500 independent realizations) of the ETAS1 and ETAS2 models, respectively, occur within a 100 km radius within one month after the mainshock; the fractions in the real data and in ETAS2 are similar to each other. We also test the forecasts after the six largest shocks (Mw ≥ 6) that occurred in Italy from 1981 to 2017; the aftershock counts after the L5 event deviate from the typical pattern. This is because an even larger earthquake (L6, of magnitude 6.6, in comparison to magnitude 6.1 for the L5 event) hit Norcia later, leading to more aftershocks four days after L5. We also show, in figures 4(d)-(f), the fraction of earthquakes within a 100 km radius around the mainshock's epicenter and within different numbers of days from the mainshock. The results indicate that the observed fraction of earthquakes (red dots) falls within the narrower error bars of the proposed ETAS2 model, while the observations fall outside the error bars of the ETAS1 model, despite the larger error bars of that model (figures 4(d)-(f)). We thus conclude that the new ETAS2 model's forecasting performance is significantly better than that of the ETAS1 model. Similarly improved forecasting performance of ETAS2 after large shocks in Southern California is shown in figure S14.

Conclusions

Here we studied long-term memory by considering the lagged conditional probabilities of inter-event times and distances in real and synthetic earthquake catalogs. We suggest that the spatiotemporal clustering of aftershock sequences dominates the memory in inter-event times and distances below the crossover, while above the crossover the clustering and memory behavior substantially decreases. This is what we incorporated into the ETAS model. Our results suggest that the memory of the ETAS model depends on the aftershock productivity parameter α. Following the observed catalogs [1], we revised the ETAS model to include two productivity parameters, α1 and α2, for short and long time scales. We showed that the revised ETAS model not only reproduces the memory characteristics (scaling function) of the real catalog but also exhibits significantly better forecasting skill than the original ETAS model. According to the Utsu law, the aftershock rate depends on the magnitude of the mainshock; the two α-values of the revised ETAS model indicate that the Utsu relation for long-term triggering is not the same as for short-term triggering.
For lag indices smaller than the crossover lag, a large earthquake tends to trigger many more events; the triggering ability is substantially reduced for lags above the crossover lag. This may imply a possible earthquake-generating mechanism that depends on the stress conditions established by historical events [43,44]. Our results imply that the stress conditions depend more on recent events (below the crossover number) and less on distant ones (above the crossover number). A plausible reason for the crossover's dependence on the number of earthquake events is that this number is proportional to the energy released from the faults. Our methods may also be relevant for other fields, such as studies of sleep disorders and epileptic seizures [45,46].
Use of Pulsed Arc Discharge Exposure to Impede Expansion of the Invasive Vine Pueraria montana

The invasive kudzu vine Pueraria montana var. lobata is an agricultural nuisance that disturbs the field cultivation of crop plants. We developed a simple electrostatic method of suppressing the invasive growth of kudzu vines as an alternative to the use of herbicides for weed control. Exposure of the vine apex to a high-voltage arc discharge was the focal point of the study. To achieve this, we constructed a ladder-shaped apparatus by arranging several parallel copper rods at specific intervals in an insulating frame. The top rod was linked to a direct-current voltage generator and pulse-charged at −10 kV, and the remaining rods were linked to a grounded line. Because of the conductive nature of the grounded vine body, a vine climbing along the grounded rods was subjected to a pulsed arc discharge from the charged rod when its apex entered the electric field produced around the charged rod. The part of the vine exposed to the discharge was heated, which promoted vaporisation of body water. This destroyed the tip growing point and prevented vine elongation. A simplified weed-control apparatus was developed, which can be fabricated for practical use from inexpensive, ready-made materials.

Introduction

Tomato plants have been farmed organically in our laboratory under both greenhouse and field cultivation conditions. For the greenhouse tomatoes, we have developed physical (electrostatic) methods of controlling airborne fungal pathogens [1-3] and flying insect pests [3-5] that pass through a conventional insect-proof net, as an alternative to the use of pesticides. In the field, we have grown numerous wild tomato species under constant exposure to natural infection by pathogens and/or attacks by insect pests, in order to breed tomato lines resistant to pathogens and insects. Unfortunately, our field cultivation has frequently been disturbed by agricultural nuisances such as wild animals and invasive plant creepers (kudzu), because our experimental fields were created by clearing part of a forest. We have successfully repelled wild animals using an electric fence, but we have had no effective means of impeding the invasive expansion of plant creepers under our herbicide-independent organic farming conditions. Kudzu, or Japanese arrowroot (Pueraria montana var. lobata [Willd.] Ohwi), was originally cultivated to harvest starch [6] and various medicinal phytochemicals [7], to supply livestock feed [8-11], as a source of biofuel [12], and to control land greening or land erosion in desert areas [13]. However, the use of kudzu has declined in recent years, and the plants have been abandoned without being controlled properly. As a consequence, they have become a major weed species that flourishes vigorously across Japan. Kudzu is an interesting plant that shows specific growth and differentiation patterns in response to seasonal changes in the environment [14]. In warm and hot seasons, the plant spreads vigorously through vegetative reproduction via runners (stolons) that form new plants and roots at the nodes. The plant invasively trails perennial vines that cover trees or shrubs, which are then killed by the heavy shading [15,16]. In addition to disturbing the ecological balance, these noxious weeds cause environmental and social problems [17].
These problems include vines coiling around and climbing electric poles to overhang aerial power lines, covering traffic signs, and trailing over fences and slope faces along electric railway tracks and motorways. It has, therefore, become important to control the kudzu plant for landscape preservation. In late autumn, morphological changes induced in the vines of the plant ensure its perennial nature. These changes are vital to the plant's survival under cold conditions. If they are effective, the plant will live through the winter and generate new plantlets and runners on its perennial vine nodes in the next growing season [18]. Eventually, this convertible differentiation system of the kudzu plant perpetuates the spoilage of natural ecosystems through the invasive growth of creepers. The main method of controlling kudzu plants is to apply herbicides [19-21]. Some systemic herbicides can be applied directly to cut kudzu stems to be transported into the plant's extensive root system. Herbicides are more effective after other methods (e.g., mowing, grazing, and burning) have been used to weaken the plants [22]. The use of bio-herbicides is an additional option for controlling kudzu plants, and these can be applied in combination with other control measures [21,23-25]. After an initial herbicidal treatment, follow-up treatments and monitoring are usually necessary, depending on how long the kudzu plants have been growing in a particular area. Unfortunately, this is not always practical because of the massive cost and labour required to control kudzu plants growing over vast areas [26,27]. Herbicides are the predominant method of controlling weeds in modern agriculture. However, their overuse has led to the rapid evolution of herbicide-resistant weeds [28]. Given the problem of herbicide resistance and the long-standing public concern to reduce overall pesticide use, alternatives to herbicides and truly integrated weed management strategies are urgently required. Therefore, we developed a new physical (electrostatic) method as an alternative to the use of herbicides. The electrostatic method devised for weed control was based on the conductive nature of grounded plants. In a previous study [29], we found that a discharge-generating electric field formed between a grounded plant and a negatively charged metal needle brought close to the plant. This implies that the negative electricity (free electrons) accumulated on the pointed tip of the needle pole positively polarised the plant by electrostatic induction (i.e., these opposite charges formed the electric field between them). In a high-voltage electric field, the electricity on a needle pole is transferred to the ground via a ground-linked plant body. This transfer of electricity occurs through an arc discharge from the needle pole [30]. Thus, free electrons that are accelerated by a high voltage and pass through the plant body in the electric field are expected to have detrimental effects on the survival of the plant. In this study, we designed a simple electric-field-based apparatus to impede invasive expansion of the kudzu vine. For this purpose, we developed a plant-mediated arc discharge between a negatively charged metal conductor and a plant in contact with a grounded conductor. Young saplings that were developing vines were used to examine the generation of the arc discharge between the apical tip of the grounded vine and the conductor wire that was pulse-charged with a negative voltage.
In this practical application, we used a pulse-charging-type voltage generator, which is commonly used in electric fences to repel wild animals, for safety reasons. Based on the results we obtained, we propose a simple and unique physical tool for selectively exposing vines of the kudzu plant to a high-energy electric current from a charged conductor, immediately destroying the vines. This method is a promising tool for impeding the invasive expansion of kudzu vines.

Plant Material

Saplings of kudzu (P. montana var. lobata) were propagated by cuttings because of the low seed germination rate [31]. Vine stems (8-10 cm long) containing single hibernating nodes were detached from kudzu plants growing naturally on a university campus (Nara Prefecture, Japan). The nodes were potted and grown in a sunny greenhouse to generate a new plantlet and vine out of the node (Figure 1A). These saplings were used in the following experiments.

Figure 1. Determination of the arc discharge distance required to cause mechanical discharge (C1) and the non-mechanical-discharge-causing distance (C2) between the charged iron rod (CIR) and grounded iron rod GIR1. Exposure of a vine in contact with GIR1 (D1) and GIR2 (D2) to the arc discharge. The solid arrow represents the direction of electricity movement with the arc discharge (red arrow), and the dotted arrow shows the change in the GIRs connected to the grounded line. Abbreviations: VN, vine (runner); ND, node; SP, sapling; RT, root; ST, stem cutting; VG, voltage generator; CIR, negatively charged iron rod; GIR1-GIR10, grounded iron rods 1-10; GL, grounded line; PF, polypropylene frame; GM, galvanometer; D-MD, distance causing mechanical discharge; D-NMD, distance causing no mechanical discharge; D-AD, distance causing arc discharge; D-CF, distance of current flow on the vine.

Measurement of the Conductivity of Kudzu Vines

In the first experiment, undetached vines developed from the saplings mentioned above were examined for their conductivity with a digital surface resistance meter (Satotech, Kanagawa, Japan). Two points (the apical tip and a designated stem point) of the vine were touched with the electrodes of the resistance meter (point-to-point resistance measurement), and the conductivity (S/m) was calculated from the resistance values obtained. In the second experiment, we examined the relationship between electrical conductivity and evaporation of body water in the vine. We measured body water content using the loss-on-drying (LOD) method [32]. The vine was cut at a point 50 cm from the apex, weighed, and then placed in a thermostat-controlled convection oven set to 35 °C. Once a constant weight was attained, the difference between the initial and final weights was calculated to determine the amount of moisture (body water) vaporised. Vines that had lost different percentages of body water according to the weight-loss curve obtained from the LOD were collected after different durations of desiccation and examined for their electrical conductivity using the method mentioned above. Data are expressed as the magnitude of the electric current, which was calculated from the resistance value.
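The paper reports conductivity in S/m derived from point-to-point resistance; one common reduction, which we assume here for illustration, treats the vine segment as a uniform cylindrical conductor, and the LOD moisture fraction follows directly from the weights. The values below are illustrative, not measured.

```python
import math

def conductivity_S_per_m(resistance_ohm: float, length_m: float,
                         diameter_m: float) -> float:
    """sigma = L / (R * A), modelling the vine as a uniform cylinder."""
    area = math.pi * (diameter_m / 2) ** 2
    return length_m / (resistance_ohm * area)

def moisture_fraction(initial_g: float, dried_g: float) -> float:
    """Loss-on-drying: fraction of body weight lost as vaporised water."""
    return (initial_g - dried_g) / initial_g

# Illustrative numbers: a 0.5 m vine of 4 mm diameter measuring 4e9 ohm
# gives a conductivity on the order of 1e-5 S/m, the order reported here.
print(f"{conductivity_S_per_m(4e9, 0.5, 0.004):.2e} S/m")
print(f"{moisture_fraction(12.0, 1.5):.0%} of body weight lost as water")
```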
Construction of the Ladder-Shaped Exposer of the Pulsed Arc Discharge (LE-PAD) and an Assay of Arc Discharge Generation

Figure 1B shows the configuration of the LE-PAD, which consisted of 11 identical iron rods (6 mm diameter, 30 cm length) arrayed in parallel and fixed in a polypropylene frame (insulator), and a direct-current voltage generator (pulse-charging type, 1-minute pulse interval, −10 kV usable voltage; Suematsu Denshi, Kumamoto, Japan). Ten grounded iron rods (GIR1-GIR10) were arranged in parallel at 50-mm intervals, and a grounded line was linked to one or all of the GIRs. In the first experiment, the interval between the negatively charged iron rod (CIR) and GIR1 was varied, and the CIR was charged with a voltage of −10 kV to determine the distance at which a mechanical discharge (arc discharge) occurred between them. In the subsequent experiments, the distance was fixed at 15 mm (a randomly selected distance at which there was no mechanical discharge; Figure 1C1,C2). In the second experiment, an undetached vine of a kudzu sapling was placed on the LE-PAD and brought close to the CIR in a stepwise manner, which was then charged with a voltage of −10 kV to determine the distance from the CIR at which an arc discharge to the vine tip is created. In the subsequent arc-discharge exposure experiments, we positioned the vine tip at specified sites (5, 7, and 9 mm from the CIR) and changed the GIR linked to the grounded line (Figure 1D1,D2). The exposures of the vines to the arc discharge were video-recorded. The electric current was recorded with a galvanometer (Sanwa Electric Instrument, Tokyo, Japan) integrated into the grounded line, and its magnitude was measured with a current detector (detection range, 0.01 µA to 10 A) integrated into the galvanometer. Simultaneously, the sound produced by the arc discharge was measured in decibels with a sound-level meter (Sato Tech, Kanagawa, Japan). The sound profile was recorded with a spectrum analyser integrated into the sound-level meter. We photographed discharge-exposed vines with a thermographic camera (Flir One, FLIR Systems, Wilsonville, OR, USA) to compare heat-zone images between discharge-exposed and non-exposed samples. The temperature of the apical areas of the vine was determined with the camera's multiple spot-temperature meters for selectable onscreen temperature-tracking regions. The subsequent growth of discharge-exposed vines was recorded for one week to determine the degree of wounding of the vine apical regions. Damage was assessed by wilting and/or drying of the electrified region of the vine.

A Simplified Version of the LE-PAD (SE-PAD) for Practical Use

We fabricated an SE-PAD for practical use from copper or aluminium wires and a ready-made polypropylene net (insulator; Figure 2A). Four metal wires (2 mm diameter, 30-100 cm length) were attached to the net in parallel at specific intervals (50 mm). The wire at the highest position was linked to a solar-cell-driven pulse-type voltage generator (Suematsu Denshi), which is commonly used in electric fences to repel wild animals from crop fields, and charged with a voltage of −10 kV at a 1-s interval. The remaining three wires were linked to a grounded line. In the first experiment, the SE-PAD was attached to a fence to control the vines that were climbing along it. This experiment lasted for one month. In the second experiment, to control vines creeping along the ground, we attached an SE-PAD (15 cm height, 100 cm length) to a flat aluminium board (40 mm width, 100 cm length) at a slope of 95-100° (Figure 2B). A block of land was partitioned into squares with the boards and the inclined SE-PAD. This apparatus was set up at 20 sites, and its functionality was surveyed over three months (June to August, the most suitable period for kudzu vine growth). Figure S1 shows photographs of the apparatus.
Statistical Analysis

All experiments were repeated five times, and data are presented as means and standard deviations. Analyses were performed to identify significant differences among conditions as well as correlations between factors, as noted in the figure and table legends.

The Conductive Nature of the Kudzu Vine Is Essential for the Arc Discharge Exposure Treatment

The purpose of this study was to expose the apical region of the kudzu vine to an arc discharge to impede its subsequent growth. The conductive nature of the kudzu vine was a prerequisite for receiving electricity from the charged conductor and discharging it to the ground via a GIR. The apparatus designed in this study produced an electric circuit. Electricity was pumped upward from a grounded conductor by the voltage produced by a voltage generator. It then accumulated on the surface of the iron rod connected to the voltage generator and was sent back to the grounded conductor through the vines of the plant. However, the release of electricity from the charged conductor was impeded by the air between the conductor and the plant. Therefore, a relatively high force (i.e., a high potential difference) was needed to break down the resistance of the air (i.e., dielectric breakdown of the gas) and successfully transfer the electricity [30]. This could be achieved by applying a high voltage to the iron rod and/or reducing the distance between the charged rod and the plant. The conductivity of the kudzu vine was an additional impediment to current flow. In this study, the voltage used for the arc discharge exposure was fixed (−10 kV); we therefore focused on the conductivity of kudzu vines and on the distances between the charged conductor and the vine apical tip (arc discharge distance) and between the apical tip and a designated position on the vine stem (current-flow distance). In the first experiment, we measured the electrical conductivity of the vine by a point-to-point resistance measurement, deriving the conductivity from the resistance to current flow produced by the potential difference between the points that were touched. Figure 3A shows the change in resistance of a kudzu vine as the point-to-point distance for current flow was changed. As the distance became longer, the resistance became larger, with a linear relationship between current-flow distance and electrical resistance. The conductivity of the kudzu vine was calculated to be approximately 10^(-5) S/m. The vine thus appeared to be a suitable conductor for an electric discharge exposure treatment. Nevertheless, the electric current that flowed along the kudzu vine decreased in direct proportion to the increase in distance. This implies that electricity could be transmitted through only a limited region from the vine apex. Figure 3B shows the temporal change in body weight of the test vines. The technique used in this study effectively dehydrated the vines to a desired level by changing the duration of desiccation, and the data exhibited a high degree of reproducibility. There were two distinct phases in the loss of body water: an initial rapid loss followed by a slower phase. In both phases, a linear relationship was observed between the duration of body desiccation and the extent of body water loss. The total water content constituted 85-90% of the body weight.
Water may also become locked in molecular structures as bound moisture, with the result that greater amounts of heat energy are needed to release the tightly bound moisture. The LOD treatment likely promoted the vaporisation of free water in the plant body. Using this method, we collected vine samples with different degrees of water loss and examined their electrical conductivity. We found that conductivity did not change until 80% of the water content had been vaporised, and then decreased substantially at greater degrees of water loss (Figure 3C). These results indicate that, for our technique to be effective, water loss from the vine should be kept below about 85% of total water to ensure sufficient electrical conductivity of the vine.

Exposing a Kudzu Vine to an Arc Discharge Can Destroy the Apical Growing Point of the Vine, Inhibiting Its Subsequent Growth

Discharge is defined as the generation of an electric current between opposite poles due to the dielectric breakdown of gases in the electric field, according to the potential difference between the opposite poles [30]. If the grounded conductor is one of the poles (i.e., the recipient of electricity), the discharge occurs more easily because this conductor receives electricity without any restriction (in this experiment, up to the 10 mA maximum current of the voltage generator). In an electric field, a corona discharge occurs first, which then changes from a glow discharge (or surface discharge) to a brush-like discharge as the applied voltage increases and/or the distance between the poles decreases. The discharge breaks down into an arc discharge between the two poles [33]. Plants are conductive, and, therefore, when they receive electricity resulting from the discharge of a charged conductor, an electric current flows through their bodies [29]. If a continuous voltage is applied to the conductor, a continuous discharge from the charged conductor occurs. A continuous arc discharge can produce a continuous electric current that damages targets through heating, based on the Joule effect [34]. In addition, it produces a strong force that can destroy small organisms through a high-voltage-mediated transient electric current flow [35-37]. Our preliminary investigation indicated that, in the LE-PAD, the pulsed arc discharge from the charged conductor was generated when the apical tip of the vine reached the test position 9 mm from the charged conductor. The focus of the present experiment was to clarify whether the pulsed voltage caused damage to the vines. When the arc discharge occurred between the charged conductor and the vine placed on the LE-PAD, electricity moved through the ground-to-ground circuit that included the air and the plant body (Figure 1D1,D2). The experiment examined the effects of changes in the arc discharge distance (D-AD) and the distance of current flow on the vine (D-CF in Figure 1D1,D2) on the generation of current flow and arc-discharge sound, in order to verify the inhibitory effects of arc discharge exposure. The arc-discharge sound was a sonic boom caused by the shock wave from the high-speed electrons moving in the electric field, and its intensity was an indicator of the impact strength of the shock wave produced by the arc discharge exposure. Figure 4A shows a kudzu vine exposed to an arc discharge from the CIR.
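For intuition about the discharge distances involved (arc onset at roughly 9 mm from a −10 kV rod, as reported above), a back-of-envelope comparison with the textbook breakdown field of dry air in a uniform gap (about 3 kV/mm) is instructive; the strongly non-uniform field around the thin rod and the grounded vine apex explains arcing well beyond the uniform-field estimate. This estimate is ours, not the paper's analysis.

```python
# Uniform-field estimate of the largest air gap an arc can bridge.
E_BREAKDOWN_KV_PER_MM = 3.0     # textbook value for dry air, uniform field

def max_uniform_gap_mm(voltage_kv: float) -> float:
    return voltage_kv / E_BREAKDOWN_KV_PER_MM

print(f"uniform-field gap at 10 kV: {max_uniform_gap_mm(10.0):.1f} mm")
# -> ~3.3 mm; the observed ~9 mm arc onset reflects field concentration at
#    the thin rod and the grounded vine apex, which lowers the effective
#    breakdown threshold relative to the uniform-field case.
```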
Figure S2 shows the magnitude of the electric current that flowed through the vine following the arc discharge exposure (A), and the intensity (B) and number (C) of arc-discharge sounds. With the application of a pulsed voltage, the electric current and arc-discharge sound were generated simultaneously as the voltage was pulsed. As D-AD became longer, the current magnitude and sound intensity became lower, and the number of arc-discharge sounds increased. Eventually, the vines were wounded by the arc discharge exposure treatment. Figure 4B,C show the electrified region of the arc-discharge-exposed vines. It appeared that water was not supplied to this region after it was exposed to the arc discharge; the exposure likely destroyed the vein system and prevented water movement in this region. More importantly, the electric current and arc-discharge sound stopped automatically, likely because of the lower conductivity of the electrified vine region. We believe that the electrified region was heated by the current flow through the arc discharge exposure, causing rapid vaporisation of the plant body water. Figure 4D-F show thermographic data indicating that the temperature of the electrified region was increased by repeated pulsed arc-discharge exposure. It is likely that this increase in temperature (75-80 °C) caused water vaporisation in the plant body, which led to lower conductivity. As D-AD or D-CF became longer, the number of arc-discharge sounds (i.e., the duration of the pulsed arc-discharge exposure) increased, and sufficient damage was caused to the vine. In future studies, light and electron microscopic analyses of the discharge-exposed vine will be conducted to determine the arc-discharge-mediated plant disintegration mechanism. Table 1 shows the inhibitory function of the LE-PAD under different D-AD and D-CF conditions. At all D-AD distances, the inhibitory function became weaker as a more distant GIR was linked to the grounded line (i.e., as the path of current flow on the vine became longer), due to the increase in electrical resistance. At a D-AD of 5 mm, all of the discharge-exposed vines were prevented from elongating even when GIR8 was grounded (approximately 37 cm from the vine apex). As the D-AD became longer (7 and 9 mm), the effective D-CF became shorter (approximately 32 and 26 cm from the apex, respectively). In fact, 100% elongation inhibition at D-AD of 7 and 9 mm was detected when GIR7 and GIR6 were grounded, respectively. The present results therefore indicate that, if we link the grounded line to GIR6, all vines can be inhibited regardless of the D-AD. In a real situation, the creeping vine is subjected to the arc discharge exposure when it reaches 9 mm from the CIR.

Figure 4. (A) Exposure of a kudzu vine, in contact with grounded iron rod GIR1, to a pulsed arc discharge (AD) from a negatively charged iron rod (CIR). (B, C) Arc-discharge-exposed vines showing the dryness of the electrified region (five days after the arc discharge exposure treatment). The arrows in B and C show the points of the vines that came into contact with the second and third grounded iron rods (GIR2 and GIR3), respectively. (D-F) Thermographic demonstration of the increase in temperature in a kudzu vine exposed to a pulsed arc discharge at −10 kV. In D, E, and F, GIR1, GIR2, and GIR3 were linked to a grounded line, respectively. Subfigures D-F show the temperature (°C) measured with the camera's spot temperature meter.
The arrow indicates the site of the temperature measurement. The interval between the CIR and the vine apex was fixed at 5 mm.

Table 1. Percentage of damaged Pueraria montana var. lobata vines subjected to an arc discharge from the charged iron rod (CIR) of the ladder-shaped arc discharge exposer (LE-PAD). (Rows: GIR linked to a grounded line; columns: interval (mm) between the CIR and the vine apex.)

Based on the results obtained in the present study, we constructed a simple version (SE-PAD) of the LE-PAD (Figure 2), in which one CIR and three GIRs were arranged in parallel at an interval of 10 cm. The CIR was linked to a pulse-type voltage generator, and the three GIRs were linked to a grounded line. In this apparatus, the vine apex was subjected to an arc discharge at a position 9 mm from the CIR, and the 10-cm apex of the vine could be destroyed. Simultaneous grounding of the three GIRs was useful for grounding the creeping vine securely.

The SE-PAD Is a Promising Tool for Impeding the Invasive Growth of Creeping Kudzu Vines

The first requirement for practical use was to confirm the ability of the SE-PAD to inhibit the elongation of kudzu vines. For this purpose, we hung the SE-PAD on a fence and examined its ability to inhibit the growth of vines climbing along the fence (Figure 5A). In all tests, the SE-PAD was functional, with the arc discharge exposure damaging the vines to the extent that their invasive growth was impeded; the drying up of their apical region caused their elongation to cease. Figure 5A (lower photograph) shows two vines concurrently approaching the CIR. In theory, the nearest target (the right vine in the photograph) is preferentially exposed to the arc discharge until the discharge stops automatically. The other vine is allowed to grow continuously and is then subjected to a stronger arc discharge after the first discharge stops, because the distance between the second vine and the CIR has shortened. In this manner, the invasive growth of all vines approaching the CIR could be prevented. These results indicated the feasibility of the SE-PAD, and it was therefore used to control vines creeping along the ground. To achieve this, we used an inclined SE-PAD attached to a flat board (Figure 2B and Figure S1B) to prevent creeping vines from entering a square block of land. A creeping vine climbed the ladders (GIR1-GIR3) along the slope of the inclined SE-PAD and then received an electric current through the arc discharge from the CIR when its apex came close to the CIR (Figure 2B). To ensure exposure to the electric discharge, it was essential that the vine body was properly grounded (i.e., in contact with the GIR(s)). The incline was effective because it enabled the vine's weight to create close contact between the grounded ladder and the vine. Figure 5B (upper photograph) shows that the apparatus was extremely effective at preventing the entry of creeping vines into the guarded area of land. Complete prevention of all vine growth was attained at all sites tested. These results indicate that the SE-PAD could be a promising physical tool for impeding the invasive growth of kudzu vines. The effects of rainfall on the conductivity of the vine are important with regard to the functionality of the SE-PAD. Because rain-wet vines become more conductive, they were subjected to an arc discharge with a larger electric current and stronger sound. The vines were, therefore, more extensively damaged by the arc discharge exposure on rainy days (data not shown). The rainfall caused no mechanical damage to the SE-PAD.
It is interesting that there was a clear difference in conductivity between young slender vines and older thick vines. Both types of vine were impeded by the present discharge exposure treatment, but the older vines appeared to receive larger amounts of electricity and exhibited more severe damage than the young slender vines (data not shown). Although we obtained no data to verify this difference, further investigation will be undertaken to clarify the change in bioelectric properties associated with vine growth and differentiation. The framework of the apparatus is simple and easy to fabricate from materials that are widely available. In addition, the voltage generator of an electric fence is suitable for use in the system. Electric fences are ubiquitous and essential in modern agriculture, and accidents associated with agricultural electric fences are very rare [38]. Although unintentional human contact with electric fences occurs regularly, it causes little more than temporary discomfort [38]. In the present system, which targets both wild animals and creeping vines, it is possible that a wild animal may come into contact with an electric wire on the fence while a vine is being exposed to the arc discharge. In this case, the discharge will move to the animal; however, because the animal will immediately move away, the discharge point will return to the original vine. Thus, the use of a pulse-type voltage generator is an effective and economical approach for preventing both wild animals and weeds from invading crop fields.

Conclusions

An electric-field-based phenomenon was effective when used in a unique weed control application. In this study, a creeping plant (kudzu) invading a crop field was targeted for eradication by means of a non-agrochemical method during organic farming. The electrostatic phenomenon used for weed control was an electric discharge in a dynamic electric field, to which creeping kudzu vines were exposed in an attempt to destroy the growing point at the apical end. The discharge exposure was sufficiently effective that the targets were destroyed immediately. The apparatus remained functional during long-term operation under field conditions. Thus, we developed a practical physical method of controlling weeds invading an agricultural field, which could be useful for organic farmers.

Supplementary Materials: The following are available online at http://www.mdpi.com/2077-0472/10/12/600/s1. Figure S1: A simplified pulsed arc discharge exposer (SE-PAD) and a solar-cell-driven pulse-type voltage generator. Figure S2: Effects of change in D-AD on the generation of an electric current on a vine (A), and the intensity (B) and frequency (C) of the arc-discharge sound.

Conflicts of Interest: The authors declare no conflict of interest.
Edge Enhancement Optimization in Flexible Endoscopic Images to the Perception of Ear, Nose and Throat Professionals

Digital endoscopes are connected to a video processor that applies various operations to process the image. One of those operations is edge enhancement, which sharpens the image. The purpose of this study was to (1) quantify the level of edge enhancement, (2) measure the effect on sharpness and image noise, and (3) study the influence of edge enhancement on image quality as perceived by ENT professionals.

INTRODUCTION

Digital endoscopes are connected to a video processor that applies various operations to enhance the image without perceivable delay for the observer. One of those operations is edge enhancement, and its effect on an in vivo image of the larynx is illustrated in Figure 1. This operation makes the image sharper, and sharpness is strongly correlated with the image quality perceived by Ear, Nose and Throat (ENT) professionals [7]. Although edge enhancement is applied by all vendors, the literature on edge enhancement in ENT is limited to two articles. Kawaida et al reported that, in their experience, image quality was improved when structure enhancement, a form of edge enhancement, was applied [8]. Kawaida et al later showed that edge enhancement also seems to improve diagnostic accuracy: applying structure enhancement refined the diagnosis in 2 out of 15 patients [9]. In fact, this operation does not introduce new information or increase the resolution of the image, but increases the step in brightness at edges [11-13]. The operation probably works so well because it mimics the biological process of retinal lateral inhibition in the visual system [14]. A major drawback of edge enhancement is that the operation cannot discern edges from noise and therefore enhances both. Edge enhancement can be applied using various methods that determine the resulting image, and it can be optimized for different purposes, such as aesthetics and fidelity (i.e., the degree of exactness with which reality is reproduced) or diagnostics [11,15]. Edge enhancement is commonly applied in digital ENT endoscopes, but the specific methods and parameters are not disclosed, and the units used to express the level of edge enhancement are arbitrary and differ between vendors [13]. Literature substantiating the default settings has not been found and could not be provided by the manufacturers. Because we do not know the methods and parameters applied by the vendors, we can only measure the effects on images that are processed by the video processors. The purpose of this study was to (1) objectively quantify the level of edge enhancement uniformly from test images, (2) measure the effect of edge enhancement on sharpness and noise, and (3) study the influence of edge enhancement on image quality as perceived by ENT professionals.

MATERIALS AND METHODS

To study the effect of edge enhancement, we used three endoscopes. Because the purpose was to study image quality metrics and not to compare the types of endoscopes, we refer to endoscopes A, B, and C throughout the manuscript and disclose the specific types of endoscope and video processor here once, for the sake of reproducibility: (A) Olympus ENF-V4 connected to a CV-170, (B) Pentax VNL9-CP connected to a VIVIDEO CP-1000, and (C) Xion HD connected to a Matrix P Spectar. These systems were selected because they were operational in our outpatient clinic and readily available.
The user interfaces of the video processors have different names for the option that adjusts the level of edge enhancement. Systems A and C have a numerical value, but system B has a wedge without a numerical value; we therefore used a ruler on the display to systematically vary the level of edge enhancement from 0% to 100% in nine steps of 12.5%. The exact levels of edge enhancement used in this study for the in vitro and in vivo measurements are listed in Table I. To genuinely capture images, the endoscope was connected to a video processor, and the DVI-D video output to the display was split using a Blackbox 1x2 DVI-D splitter. One output was connected to the surgical display for the ENT specialist, and the other output was connected to an Epiphan DVI2USB3 frame grabber. Images were stored as uncompressed 24-bit BMP files.

In vitro measurements

Direct comparison of edge-enhancement settings between video processors is impossible due to the arbitrary units provided by the manufacturers. Therefore, we measured the levels of edge enhancement in vitro by capturing images of the Rez Checker Target Nano Matte at 3.0 cm distance. Image Science Associates developed this test chart specifically for narrow illumination geometries, notably those used in endoscopic imaging [13,16]. The endoscope was positioned and fixated in a setup, and images were captured at the levels of edge enhancement listed in Table I. Measurements were performed using slanted edges and gray patches. The level of edge enhancement was determined by subtracting the step response of the image without edge enhancement from each of the other eight images with edge enhancement and measuring the resulting peak-to-peak differences. These differences were then normalized by dividing by the step size; that is, a value of 1 indicates that the edge height/step is doubled by the processing. Sharpness was characterized by observing the normalized modulation transfer function (MTF) and computing the spatial frequency at 50% MTF. The standard deviation of the image noise was measured on the gray patches and computed as the square root of the weighted sum of the variances of the luminance (Y) and the chrominance channels red (R), green (G), and blue (B) of the pixel values [17], where the luminance is computed as a weighted sum of the R, G, and B channels.
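Both in vitro metrics reduce to a few lines of array arithmetic; this sketch assumes a 1-D edge profile (step response) has already been extracted from the slanted-edge patch and pixel samples from a gray patch. The Rec. 709 luminance coefficients and the equal weighting of the four variances are our assumptions, since the paper does not list them.

```python
import numpy as np

def edge_enhancement_level(step_plain: np.ndarray,
                           step_enhanced: np.ndarray) -> float:
    """Peak-to-peak of (enhanced - plain) edge profiles, normalized by the
    raw step size, so 1.0 means the processing doubled the edge height."""
    overshoot = np.ptp(step_enhanced - step_plain)
    return float(overshoot / np.ptp(step_plain))

def patch_noise(rgb: np.ndarray) -> float:
    """Noise std as the square root of a weighted sum of the variances of
    Y, R, G and B over a gray patch (weights assumed equal here)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b   # assumed Rec. 709 luminance
    variances = np.array([np.var(y), np.var(r), np.var(g), np.var(b)])
    return float(np.sqrt(np.mean(variances)))  # equal weights of 1/4
```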
Pairwise Comparison

In Vivo Image Acquisition. To study the effect of edge enhancement in vivo, the three different types of endoscopes were used to image the larynx of a single healthy volunteer. The endoscopes were introduced through the nasal cavities, as is common practice for this medical examination. The ENT specialist pointed the endoscope at the larynx and steadied the view. For each studied level of edge enhancement (nine levels per type of endoscope), at least two images were captured. The acquired images were immediately quality-checked by two ENT professionals, and the examination was repeated for levels with unsatisfactory images. The best image was selected with respect to positioning, anatomy, and lack of motion blur, and the selected images were included in the series for pairwise comparison. The protocol was reviewed by the accredited Medical Ethical Review Committee of Erasmus MC.

In Vivo Image Pairwise Comparison. One image was selected per studied level of edge enhancement (n = 9) and used for a forced pairwise comparison, resulting in a series of (n² − n)/2 = (9² − 9)/2 = 36 test pairs of images to be compared by ENT professionals [17]. Thirty-nine ENT professionals participated in this study: 16 ENT specialists, 16 ENT residents, 2 physician assistants, 3 speech therapists, and 2 researchers. All participants had more than 6 months of relevant experience at the outpatient clinic. The three series of images, with edge enhancement applied by the three included endoscopes, were compared separately. The images were displayed at their native resolution on a color-calibrated diagnostic display (EIZO RadiForce MX315W, 4096 x 2160) by a custom-made Matlab program. The program randomized the sequence for each observer to prevent any form of learning effect. The side on which the images were presented was also randomized between left and right to prevent any selection bias. The resolution was checked by counting the pixels using the print-screen function. The characteristics sharpness, image noise, and color fidelity were verified by comparing print screens of the in vitro test images to the original test images. The monitor was calibrated to the sRGB color space with a color temperature of 6500 K and a gamma of 2.2, and the luminance ranged from 0.5 to 300 cd/m². The observers were asked to select the image with the best image quality characteristics for diagnostic purposes and to neglect the influence of illumination, position, and viewing angle. The test series were preceded by a smaller training series (n = 4) to make the participants familiar with the assignment and the user interface.

Pairwise Comparison Analysis. Pairwise comparison is an easy task for the participants, but the analysis is more challenging [18-21]. The goal is to use the votes of the participants to determine the perceived difference in image quality on a psychometric scale. The unit of this scale is the just noticeable difference (JND). One JND is defined as the difference in image quality at which 50% of the observers perceive a difference and vote consistently, whereas the other 50% do not perceive a difference and choose randomly. Half of the second group will randomly vote the same as the first group, resulting in 75% voting for the image with better image quality versus 25% for the image with lesser image quality. Hence, +1 JND corresponds to a probability of 75%, 0 JND corresponds to 50%, and −1 JND corresponds to 25%. Using the JND as a unit provides an intuitive and meaningful measure for evaluating quality differences. We followed the guide of Pérez-Ortiz and Mantiuk [19] to find the differences on the psychometric scale between the nine levels of edge enhancement per endoscope.
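The mapping from vote proportions to JND units can be written with the inverse normal CDF, with the scale fixed so that 75% consistent voting corresponds to exactly 1 JND, as defined above. This is a simplified per-pair reduction, not the full maximum-likelihood scaling of Pérez-Ortiz and Mantiuk used in the study.

```python
from scipy.stats import norm

def votes_to_jnd(p_prefer: float) -> float:
    """Quality difference in JND for one image pair, Thurstone Case V style:
    p = 0.75 -> +1 JND, p = 0.50 -> 0 JND, p = 0.25 -> -1 JND."""
    return norm.ppf(p_prefer) / norm.ppf(0.75)

for p in (0.25, 0.50, 0.75, 0.95):
    print(f"p = {p:.2f} -> {votes_to_jnd(p):+.2f} JND")
```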
RESULTS

In Vitro Measurements

The measured levels of edge enhancement, sharpness (MTF50), and image noise from the in vitro images are shown in Figure 2A,B. The range of available edge enhancement is similar for endoscopes A (0-1.12) and B (0-1.24), but endoscope C has a smaller range (0-0.83). Sharpness and noise both increased as the level of edge enhancement was increased. Endoscope C has a higher sharpness and is able to capture smaller details than the other endoscopes (Figure 2A). For endoscopes B and C, the horizontal sharpness is approximately equal to the vertical sharpness; endoscope A, however, applies a remarkably lower amount of edge enhancement vertically than horizontally. Endoscope A has slightly higher levels of noise compared with B, and C has the lowest noise levels (Figure 2B).

Pairwise Comparison

The pairwise comparison of in vivo images was completed by 39 ENT professionals. We found no differences in votes between the specialists and the residents. The maximum likelihood estimates of the quality differences with respect to zero edge enhancement, with 95% confidence intervals (2.5 and 97.5 percentiles), are plotted versus the levels of edge enhancement measured in vitro in Figure 3. A third-order polynomial is fitted as a trend line through the quality scores of each endoscope. The trends increase steeply when edge enhancement is switched on, for all endoscopes. The trend of endoscope C stabilizes at an edge-enhancement level of 0.7, at a peak of 7 JND. According to the Thurstone Case V model, this means that more than 99.99% of ENT professionals will perceive a difference between the optimum setting and the image without edge enhancement and will vote consistently for this optimum. Endoscopes A and B reach an optimum at edge-enhancement levels of 0.75 and 0.90, with peak values of 3.8 JND (99.5%) and 4.5 JND (99.8%), respectively. The general optimal level of edge enhancement is estimated to lie between 0.7 and 0.9. This corresponds to settings A5-A6 for endoscope A, 50%-75% for endoscope B, and 15 to 20 for endoscope C. One image of endoscope B, at setting 62.5%, was excluded from the analysis, as this image had superior positioning and illumination compared with the adjacent settings, yielding a quality score 2 JND above the trend line. This indicates that the differences in image quality resulting from edge enhancement are relatively small and that other factors, like positioning and illumination, start playing a role.

DISCUSSION

In this study, we quantified the level of edge enhancement from test images, measured the effect of edge enhancement on sharpness and image noise, and found the optimal setting by pairwise comparison for three different types of flexible ENT endoscopes. Edge enhancement has a major impact on sharpness, noise, and the image quality as perceived by ENT professionals. We found optima of the trend lines at edge-enhancement levels between 0.7 and 0.9. Although the trend optima lie in a relatively small range, the peak of the quality scores is spread relatively widely (Figure 3). For example, the difference in quality scores between the best five images of endoscope C varies by less than 1 JND, meaning that fewer than 50% of ENT professionals perceive a difference and vote consistently. This relatively wide peak is due to differences between observer preferences. Looking at the data of individual participants across the different endoscopes, some observers prefer higher levels of edge enhancement, whereas others prefer more subtle levels. Even though individual preferences are present, the optimal setting is certainly not below an edge-enhancement level of 0.5, as smaller details will then be perceived as too vague. Levels above 1.0 yield objectionably large edge-enhancement artifacts and levels of noise. We think that users can select their preferred setting within the range of 0.5-1.0 without affecting their diagnostic accuracy, because the difference in image quality is within one JND.
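The trend-line optimum can be reproduced by fitting a third-order polynomial to (edge-enhancement level, JND) pairs and locating its maximum over the measured range; the data points below are placeholders shaped like the reported curves, not the study's values.

```python
import numpy as np

# Placeholder (level, JND) pairs for one endoscope -- illustrative only.
levels = np.array([0.0, 0.15, 0.30, 0.45, 0.60, 0.75, 0.90, 1.05, 1.20])
jnd = np.array([0.0, 1.8, 2.9, 3.5, 4.0, 4.4, 4.5, 4.2, 3.6])

coeffs = np.polyfit(levels, jnd, deg=3)     # third-order trend line
grid = np.linspace(levels.min(), levels.max(), 500)
optimum = grid[np.argmax(np.polyval(coeffs, grid))]
print(f"optimal edge-enhancement level ~ {optimum:.2f}")
```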
In our experience, video processors can be distributed with suboptimal default settings, and vendors cannot always justify those settings upon request. ENT professionals who are not yet using edge enhancement can improve their endoscopic image quality with this readily available feature. We encourage ENT professionals to test the effect of edge enhancement for themselves, and we recommend reading the manual or contacting the local vendor for support when adjusting the edge enhancement settings. Edge enhancement becomes objectionable when either the artifact or the image noise becomes too large. The artifact is easily identified at the vocal cords, as this is a straight, well-illuminated anatomical structure against the dark background of the trachea. The bright line along the edge of the vocal cord and the darker line along the other side of the edge are physically absent; they are image artifacts produced by edge enhancement. When edge enhancement is applied too strongly, it may obscure details, for example near the sinus of Morgagni. Blood vessel demarcation on the mucosa is facilitated by higher levels of edge enhancement. Image noise is present in the entire image and is increased by edge enhancement as well. Image noise becomes objectionable first in the darker subglottic areas, compared with well-illuminated areas such as the vocal folds and ventricular folds. When observing live images or videos, observers will notice that noise has a dynamic behavior, resulting in moving noise.

Edge enhancement should be taken into account when comparing the image quality of endoscopes for procurement. In our previous study, 30 ENT professionals compared in vivo images of one larynx captured using the settings recommended by the vendors. Twenty-eight observers preferred endoscope B (edge enhancement setting 50%) over endoscope A (structure enhancement setting A1) [7]. From the results presented above, however, we know that both systems have similar sharpness and noise characteristics. The results of the previous study would have been different if the structure enhancement of endoscope A had been set to a higher value.

Edge enhancement can improve diagnostic accuracy by enhancing surface irregularities of laryngeal lesions so that they are depicted more clearly, as described by Kawaida et al., who found differences in clinical diagnosis between edge enhancement switched off (A0) and on (A4 or A8) [8,9]. Later, Scholman et al. showed that the difference in overall sensitivity between the fiber optic endoscope and the high-definition endoscope they compared was 47.2% versus 59.7% when reviewed by experienced ENT specialists [5], or 61% versus 66.3% when reviewed by a more heterogeneous group of ENT professionals [6]. We think that the key difference between these endoscopes is sharpness, that is, the level of detail that can be recorded. In our previous studies, we measured the sharpness of a fiber optic endoscope and several high-definition endoscopes. High-definition endoscopes have better sharpness than fiber optic endoscopes, and sharpness can be improved by the edge enhancement applied in the video processor [7,13]. This may not be relevant for relatively large deviations, but it will certainly improve the visibility of pathologies with finer structures. This remains to be proven.
A limitation of this study is that we did not include images with pathology, and it is possible that participants selected images that were more appealing, although they were explicitly asked to select the image with the better image quality for diagnostic purposes. Therefore, our future work will be to study the relationship between sharpness and diagnostic accuracy.

CONCLUSION

Edge enhancement has a major impact on sharpness, image noise, and the resulting perceived image quality. This feature is readily available. We conclude that ENT professionals benefit from this video processing and should verify whether their equipment is optimally configured.

Fig. 1. (Left) Example image of a larynx recorded without edge enhancement. (Mid) Edge enhancement applied. (Right) Edge enhancement applied to a level at which the operation becomes objectionable.

Fig. 2. (A) Sharpness versus the level of edge enhancement. Sharpness increases linearly when edge enhancement is applied. (B) Image noise versus the level of edge enhancement. Noise increases linearly when edge enhancement is increased.

Fig. 3. Quality differences with respect to zero edge enhancement, determined by 39 ENT professionals in a pairwise comparison of in vivo images, plotted versus the in vitro measured level of edge enhancement for endoscopes A, B, and C. The bars depict the 95% confidence intervals (2.5 and 97.5 percentiles). The trend line is a third-order polynomial.

TABLE I. Term Used for Edge Enhancement and Studied Levels of Edge Enhancement Per Endoscope.
Neighborhood-level social determinants of health burden among adolescent and young adult cancer patients and impact on overall survival

Abstract

Background: Neighborhood socioeconomic deprivation has been linked to adverse health outcomes, yet it is unclear whether neighborhood-level social determinants of health (SDOH) measures affect overall survival in adolescent and young adult patients with cancer.

Methods: This study used a diverse cohort of adolescent and young adult patients with cancer (N = 10 261) seen at MD Anderson Cancer Center. Zip codes were linked to Area Deprivation Index (ADI) values, a validated neighborhood-level SDOH measure, with higher ADI values representing worse SDOH.

Results: ADI was statistically significantly worse (P < .050) for Black (61.7) and Hispanic (65.3) patients than for White patients (51.2). Analysis of ADI by cancer type showed statistically significant differences, mainly driven by worse ADI in patients with cervical cancer (62.3) than with other cancers. In multivariable models including sex, age at diagnosis, cancer diagnosis, and race and ethnicity, the risk of shorter survival for people residing in neighborhoods in the least favorable ADI quartile was greater than for individuals in the most favorable ADI quartile (hazard ratio = 1.09, 95% confidence interval = 1.00 to 1.19, P = .043).

Conclusion: Adolescent and young adult patients with cancer and the worst ADI values experienced a nearly 10% higher risk of dying than patients with more favorable ADI values. This effect was strongest among White adolescent and young adult survivors. Although the magnitude of the effect of ADI on survival was moderate, the presence of a relationship between neighborhood-level SDOH and survival among patients who received care at a tertiary cancer center suggests that ADI is a meaningful predictor of survival. These findings provide intriguing evidence for potential interventions aimed at supporting adolescent and young adult patients with cancer from disadvantaged neighborhoods.

The adolescent and young adult cancer population continues to grow and is expected to surpass 85 000 new cases in 2023 (1). Previous studies have identified survival disparities among adolescent and young adult cancer survivors, specifically noting poorer survival in Hispanic and Black adolescent and young adult patients with cancer than in White patients (2). Other studies have observed disparities in survival when comparing neighborhood type, insurance status, and poverty level (3-6). Although past research has highlighted these disparities in survival within the adolescent and young adult cancer population (7), the underlying sources of these disparities remain unclear.
Social determinants of health (SDOH) are the nonmedical factors that affect health outcomes (8). The adolescent and young adult community faces a unique set of SDOH challenges, including difficulty with access to health care, finances, employment, social support, and housing. Nonmedical factors can therefore greatly affect the survival of this population, implicating SDOH as an important variable to study in the context of cancer outcomes in adolescent and young adult patients. Socioeconomic status and health insurance status are among the factors that have been shown to negatively affect overall survival and other outcomes in adolescent and young adult populations with cancer (9,10). The Area Deprivation Index (ADI) is a validated composite score and neighborhood measure of SDOH that incorporates 17 factors reflecting neighborhood housing quality, household characteristics, education quality, income, and employment (8). Recent analyses of ADI as an indicator of SDOH have underscored an independent impact of the local environment on cancer outcomes in diverse populations, even when accounting for individual SDOH factors (11-13). Although there has been research assessing the impact of individual SDOH components on adolescent and young adult disparities, studies have yet to use a composite SDOH measure focused on the neighborhood environment to investigate cancer outcomes among adolescent and young adult patients.

In this study, we investigated the role of overall neighborhood-level SDOH, via the ADI measure, in observed survival disparities using a large, racially and ethnically diverse cohort of adolescent and young adult patients with cancer from a single institution. The research was designed to address 3 objectives: 1) to understand neighborhood-level socioeconomic disparities in the adolescent and young adult population, 2) to study the relationship between ADI and overall survival, and 3) to investigate the relationship between ADI and overall survival for different racial and ethnic groups. This project was established to help alleviate the significant knowledge gap regarding the association between neighborhood-level SDOH and disparities in overall survival in the adolescent and young adult patient population.

Study population

Study participants (N = 10 261) included adolescent and young adult patients with cancer diagnosed between the ages of 15 and 39 years who received treatment at MD Anderson Cancer Center between 2000 and 2016. Patient and tumor characteristics (race and ethnicity, sex, age at diagnosis, date of diagnosis, cancer diagnosis, vital status, date of last follow-up, and zip code at presentation) were obtained from our institutional tumor registry. Race and ethnicity information was collected as Black, White, Hispanic, Asian, American Indian/Alaskan Native, other, or unknown. Because of the limited numbers of individuals in the Asian (n = 33), American Indian/Alaskan Native (n = 16), other (n = 601), and unknown (n = 5) categories, these patients were grouped as "Other". Individuals with a prior diagnosis of childhood cancer were excluded from this study. This study was approved by the Institutional Review Board of MD Anderson Cancer Center.
Area Deprivation Index

The ADI from the Public Health Neighborhood Atlas is a composite of 17 SDOH measures (14). Patient zip codes at presentation to MD Anderson Cancer Center were linked to the ADI national ranking values and used for the analyses in this study. The ADI is represented as a percentile ranging from 0% to 100%, with the 50% point denoting the "national midpoint". A low ADI score suggests affluence or prosperity, while a high ADI score signifies elevated levels of deprivation. International patients and patients without zip code information were excluded from analysis.

Statistical analyses

Student t tests or analysis of variance were used to compare ADI values by patient characteristic. Survival was defined as the duration between the date of diagnosis and death from any cause or last follow-up, as recorded by the institution's tumor registry. Cox proportional hazards ratios (HRs) and 95% confidence intervals (CIs) were calculated for survival statistics by ADI mean, median, and quartile. Multivariable analyses were adjusted for sex, age at diagnosis (continuous variable), cancer diagnosis, and race and ethnicity, unless the variable was a stratification factor. Biological (additive) interaction analyses for race and ethnicity and ADI on overall survival were conducted using the method of Andersson et al. (15). Interactions were evaluated through interpretation of the synergy index, SI = (HR11 − 1) / [(HR10 − 1) + (HR01 − 1)], with values less than 1 indicative of negative (antagonistic) effects and values greater than 1 indicative of positive (synergistic) effects (15). All other analyses were conducted using Stata, version 17, software (StataCorp, College Station, TX), with a two-sided P = .050 set as the threshold of statistical significance.
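As an illustration of the pipeline just described, the sketch below performs the zip-code-to-ADI linkage, the quartile and median categorization, a covariate-adjusted Cox proportional hazards fit (using the Python lifelines package in place of Stata), and the synergy index of Andersson et al. The file names, column names, and hazard-ratio values are hypothetical placeholders, and the cancer-diagnosis indicator columns are omitted for brevity.

import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical inputs: one row per patient; ADI national percentiles by zip.
patients = pd.read_csv("aya_cohort.csv")
adi = pd.read_csv("adi_by_zip.csv")          # columns: zip, adi_natl_pct

df = patients.merge(adi, on="zip", how="left")
df = df.dropna(subset=["adi_natl_pct"])      # drop international/missing zips

# Quartiles (Q4 = most deprived) and a median split for stratified analyses.
df["adi_q"] = pd.qcut(df["adi_natl_pct"], q=4, labels=[1, 2, 3, 4]).astype(int)
df["adi_high"] = (df["adi_natl_pct"] >= df["adi_natl_pct"].median()).astype(int)

# Cox model adjusted for sex, age at diagnosis, and race and ethnicity
# (diagnosis dummies would be added the same way).
cph = CoxPHFitter()
cph.fit(df[["survival_years", "died", "adi_high", "female",
            "age_at_diagnosis", "black", "hispanic", "other_race"]],
        duration_col="survival_years", event_col="died")
cph.print_summary()                          # hazard ratios with 95% CIs

def synergy_index(hr11, hr10, hr01):
    # SI = (HR11 - 1) / [(HR10 - 1) + (HR01 - 1)]; SI < 1 suggests an
    # antagonistic additive interaction, SI > 1 a synergistic one.
    return (hr11 - 1.0) / ((hr10 - 1.0) + (hr01 - 1.0))

# Hypothetical HRs: both exposures, ADI only, race/ethnicity only.
print(synergy_index(hr11=1.05, hr10=1.15, hr01=1.10))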
Patient population

The study population of 10 261 adolescent and young adult patients with cancer was diverse in terms of race and ethnicity, with nearly 40% of the population identifying as of non-White race or ethnicity: 10.8% Black, 20.8% Hispanic, and 6.4% other races and ethnicities (Table 1). Breast cancer was the most common diagnosis, at 17.6%. The diagnoses with at least 400 patients in the population are shown in Table 1. There were slightly more female patients (55.6%) than male patients (44.4%), and a majority of individuals were diagnosed in the young adult age category, defined as individuals 26 to 39 years of age (73.3%). Over a median follow-up of 7.3 years, 4415 deaths were recorded, with more than 80% of the population surviving more than 2 years after diagnosis. Of note, the 5-year survival rate for this population was 65.8%, which is lower than the 85% reported for the adolescent and young adult cancer population nationally, likely because of the high number of patients with relapsed-refractory disease seen at MD Anderson Cancer Center.

ADI, by patient characteristics

Overall, the mean (SD) ADI value for the adolescent and young adult cancer population was 54.7 (22.4) (Figure 1, A). Mean ADI was statistically significantly different by race and ethnicity (P < .050), with Hispanic (65.3) and Black (61.7) adolescent and young adult patients living in neighborhoods with worse area deprivation. When assessing the ADI distribution by quartiles (Figure 1, B), Hispanic and Black adolescent and young adult patients with cancer had a higher proportion of patients residing in neighborhoods with the highest levels of deprivation (quartile [Q] 4: 44% and 32%, respectively) compared with White patients and patients of other races and ethnicities (19% and 10%, respectively). Statistically significant differences in mean ADI were observed by cancer diagnosis. Notably, patients diagnosed with cervical cancer had a statistically significantly worse ADI value (62.3) than patients with other cancer diagnoses (pair-wise P < .050). Of note, the largest proportion of cancer diagnoses for patients within the Q4 ADI was cervical cancer (Figure 1, C). ADI was statistically significantly different between diagnosis age groups (P < .010), with the highest mean ADI among patients in the emerging adult category, defined as those diagnosed between 19 and 25 years of age. The distribution of ADI quartiles, however, did not differ between age groups (Figure 1, D). There were no statistically significant differences in ADI by sex or by follow-up time category, defined as less than or more than 2 years.

Survival analysis

Hispanic patients experienced favorable survival, and Black patients had worse outcomes compared with the other groups (Figure 2, A). Survival also differed by ADI quartile (Figure 2, B). This statistically significant effect is evident in unadjusted and adjusted models for survival when analyzed by ADI as a continuous variable and by median (Table 2). The effect sizes for Q3 and Q4 of the ADI were similar to each other, while Q2 did not show a statistically significant increase in hazard ratio compared with Q1. Therefore, we used median ADI categories for subsequent analyses (Figure 2, C).

To investigate the potential relationships between ADI and race and ethnicity, stratified analyses were performed. Neighborhood deprivation was a statistically significant factor affecting survival only in White adolescent and young adult patients, with individuals residing in neighborhoods with high deprivation having an 11% increase in risk (HR = 1.11, 95% CI = 1.03 to 1.20, P < .010) compared with those in low-deprivation neighborhoods (Table 3). In contrast, ADI was not associated with overall survival in Hispanic patients (HR = 0.97, 95% CI = 0.84 to 1.13, P = .73). Among Black patients, the effect size of worse ADI vs favorable ADI (HR = 1.11, 95% CI = 0.93 to 1.33) with regard to survival was similar to that among White patients, but the result was not statistically significant (P = .24). This lack of association may be the result of a limited sample size in the Black and Hispanic populations when stratified by ADI categories. In analysis of survival by race and ethnicity, Black race was statistically significantly associated with poor survival in both high- and low-deprivation neighborhoods, conferring an approximately 20% increased risk in both ADI categories (Table 3). No statistically significant interactions between ADI and race and ethnicity affecting overall survival were identified (data not shown), but a suggestive antagonistic interaction between race and ADI for the Hispanic patient population was observed, with a synergy index of 0.20 (95% CI = 0.012 to 3.20), a point estimate well below 1.
Discussion

In this study, we evaluated a large (N = 10 261) and diverse cohort of adolescent and young adults with cancer from a single institution to explore the association between neighborhood-level SDOH and overall survival. Adolescent and young adult patients with cancer residing in areas with higher neighborhood-level deprivation experienced a nearly 10% higher risk of dying than those in lower-deprivation areas, implicating SDOH as a valuable prognostic factor affecting long-term survival. The potential implications of this finding are concerning because the effect was observed in adolescent and young adult patients with cancer at a single comprehensive cancer center whose population is biased toward individuals who are insured.

Past studies that have used ADI as a measure of SDOH have found inverse relationships between deprivation levels and survival that align with our findings (16,17). A study focusing on disparities among patients diagnosed with primary central nervous system lymphoma reached the same conclusion concerning the impact of worse ADI on overall survival. Primary central nervous system lymphoma differs from the cancers included in this analysis in that it is typically a disease of older adults and has a poor prognosis, yet the similarity of the relationship between ADI and survival is interesting. Another study in adult breast, prostate, lung, and colorectal cancers described similar relationships between SDOH and overall survival, even when accounting for individual-level socioeconomic status (18). That study used Surveillance, Epidemiology, and End Results data to gather patient information, indicating that the patients obtained their medical care across the country rather than at a single institution, as in this study. In addition, its population consisted of older adult patients, which differs from the population of the current study (18,19). Although past study populations have features different from the current adolescent and young adult cancer population, they provide compelling evidence that research using ADI is needed within other populations, particularly among individuals who are underinsured or uninsured. Thus, our analysis is the first to extend the effect of SDOH to adolescent and young adult patients with cancer treated in a single institution, and it further supports the importance of neighborhood-level deprivation as a potential driver of the persistent survival disparities observed in this community.

Racial disparities in overall survival among adolescent and young adult patients with cancer have been reported previously (20,21). In an analysis of more than 80 000 adolescent and young adult patients with cancer from Texas, Black men and women had poorer 5-year survival rates than White individuals diagnosed with the same cancer (7). It is thought that structural racism within health care may contribute to these disparities (22-26).
Studies have linked structural racism to poor physical and mental health outcomes, and studies have also identified differences in time to treatment and in treatments offered to Black patients that would potentially contribute to differences in survival (27-32). Black adolescent and young adult patients with cancer in our cohort also experienced worse survival than White and Hispanic patients. This effect was consistent when stratified by ADI, suggesting that the survival disadvantage among Black adolescent and young adults with cancer is independent of their neighborhood environment. It is worth noting, however, that the current study participants were all treated at a single academic cancer center, which could limit generalizability to other cancer care settings. Furthermore, the effects of structural racism on access to and receipt of care are often difficult to measure. More investigation is needed in this area, incorporating detailed individual-level data such as time to treatment, disease stage at treatment, and frequency of follow-up care, to fully explore this relationship.

Conversely, the Hispanic adolescent and young adult cancer population had a more favorable survival rate than other adolescent and young adult patients with cancer, and this effect was not affected by ADI. The Hispanic paradox, or what some call the "barrio advantage," may contribute to the improved survival observed in our study, regardless of local neighborhood disadvantage (33,34). This paradox describes improved outcomes among Hispanic vs non-Hispanic communities, thought to be a result of strong social networks and support. Although not statistically significant, we did observe a suggestive, slightly antagonistic interaction between race and ethnicity and ADI in the analysis of our Hispanic patients compared with White adolescent and young adult patients with cancer. This finding is in line with the Hispanic paradox hypothesis, whereby strong social support and other potentially beneficial cultural factors associated with Hispanic communities attenuate the adverse effect of neighborhood socioeconomic deprivation. More research on the impact of intricate social networks, possibly through assessment of neighborhood social capital and other measures of network support, in Hispanic adolescent and young adult cancer populations may provide more context for our results with respect to this phenomenon.

Worse ADI, representing a poorer neighborhood environment, was associated with a survival disadvantage among White adolescent and young adults with cancer, with an 11% increase in death among individuals living in neighborhoods with a worse ADI. Although White people make up the majority of the total number of people living in poverty within the United States, disparities between high-income and low-income White patients are far less well studied than those for racial and ethnic groups (35). Our results provide evidence that ADI may contribute to disparities within the White community, though more research is needed in this setting. This finding also suggests a multifaceted relationship among ADI, race and ethnicity, and overall survival.
Adolescent and young adult patients diagnosed with cervical cancer resided in neighborhoods with statistically significantly worse neighborhood-level deprivation compared with patients with other cancer diagnoses. Underlying disparities in cervical cancer screening and prevention may drive this result. Research evaluating human papillomavirus (HPV) vaccination by geographic measures in the adolescent and young adult population has concluded that people's local area can affect vaccination rates, suggesting an increase in cervical cancer incidence and reinforcing our findings in the present cervical cancer population (36,37). Despite the availability of HPV vaccination as a method of cervical cancer prevention since 2006 (38,39), disparities in vaccination uptake continue (40,41). Furthermore, current vaccine recommendations are for the nonavalent formulation of the HPV vaccine, which was not approved until 2014 (42). At the end of the cohort diagnosis period (2000-2016), the HPV vaccination rate in the United States was only about 43% for the completed HPV series and 60% for at least 1 vaccine dose (43). In addition, cervical cancer screening rates are affected by county-level vulnerability and area deprivation level, whether because of socioeconomic issues or geographical inaccessibility (44,45). Another study using ADI found higher deprivation levels associated with decreased cervical cancer screening, lending support to our own conclusions (45). Differences in health literacy may also be a contributing factor. In Texas, health literacy has been shown to be a barrier in the Hispanic and Black community (46,47). Interventions directed at disadvantaged groups have supported the importance of group-specific education materials and their ability to increase health-care knowledge and perceptions (48). The timeline of HPV vaccine approval, vaccination rates, general vaccine hesitancy, and health literacy may all have influenced the results observed in this cohort (49-51).
This study has several strengths, including long follow-up, which enabled survival assessment in a population characterized by a favorable prognosis overall. The large sample size and the diversity of the population in terms of race and ethnicity, ADI, and cancer diagnoses enabled us to conduct robust stratified and interaction analyses. Information regarding disease stage at presentation would be beneficial for exploring whether ADI is linked to delays in treatment and higher disease stage at presentation, both factors that would directly affect survival. Detailed treatment information would also enhance the robustness of this analysis, and efforts to collect these data are ongoing to enable cancer type-specific analyses. The single-center cohort of patients treated at MD Anderson Cancer Center is a strength in that it minimizes concerns regarding the effect of access to care: all participants included in this analysis received best practices-driven care, reducing heterogeneity in the treatments received. The preexisting barriers patients face in gaining access to care at MD Anderson Cancer Center, a tertiary cancer center (such as lack of health-care access, financial means, transportation, and geographical location), may introduce a sampling bias and reduce the transferability of our findings to other adolescent and young adult cancer populations. Given these barriers, our cohort would be expected to have better (less deprived) ADI values than the general population or than individuals receiving care at community hospitals. Patients unable to obtain care at MD Anderson Cancer Center may have less access to highly specialized cancer care and novel clinical trials. These barriers may have attenuated some of our results, because the prognostic features of ADI were identified among a population with relatively favorable SDOH. Future studies in adolescent and young adult cancer populations with lower socioeconomic status and in patients seen in other care-delivery settings would be beneficial to fully establish the prognostic ability of ADI. We would expect a larger magnitude of effect for the inverse relationship between ADI and survival when studied in the broader adolescent and young adult population.

The composite ADI values used in the analysis were generated from patient zip codes. Exact addresses would have allowed for census block-based ADI by geocoding and would have enabled investigation of other SDOH measures beyond those captured by ADI. Future studies with robust longitudinal residential information would be of interest to assess geographical changes between the time of diagnosis and the time of censoring of study participants. This consideration is important for the adolescent and young adult population because of their increased mobility during young adulthood and the potential impact of these transitions on ADI values.

In conclusion, this study demonstrated the effect of ADI on overall survival in a racially and ethnically diverse cohort, addressing survival disparities within the adolescent and young adult cancer population. Overall, our findings implicate ADI as an important prognostic factor in adolescent and young adult cancer survival. This information warrants continued investigation to better explain the impact of ADI on the adolescent and young adult population. With more investigation, ADI may guide individualized social interventions if it proves useful as a screening tool. Overall, the intriguing findings underscore the need for continued support of disadvantaged populations.
Data availability

Deidentified data may be made available upon reasonable request to the corresponding author.

Figure 1. ADI, by patient characteristics. A) Overall distribution of ADI values in the population. B) Distribution of ADI quartiles, by race and ethnicity. C) Distribution of ADI quartiles, by cancer diagnosis. D) Distribution of ADI quartiles, by diagnosis age group. ADI = Area Deprivation Index; Q = quartile.

Figure 2. Overall survival in adolescent and young adult cancer survivors. A) Race and ethnicity. B) ADI quartiles. C) ADI medians. ADI = Area Deprivation Index.

Table 2. Overall survival, by ADI value. Columns: 5-y survival rate (%); log-rank P; hazard ratio (95% confidence interval)(a); P(a). (a) Adjusted for sex, diagnosis age group, cancer type diagnosis, and race and ethnicity. ADI = Area Deprivation Index.

Table 3. Overall survival, by race and ethnicity and ADI category. Adjusted for sex, diagnosis age group, and cancer type diagnosis. ADI = Area Deprivation Index.
Influence of accounting for the distribution parameters of the fuel assembly (FA) and dynamic operating characteristics on the fuel nuclide composition of a VVER-1000 spent fuel assembly (SFA)

On the basis of the obtained results, the uncertainty of the isotopic composition associated with the uncertainty of operational characteristics and with dynamic changes in the average burnup depth in the core of the VVER-1000 reactor was studied.

Introduction

Knowledge of the isotopic composition of nuclear fuel in spent fuel assemblies (SFA) is necessary for an accurate assessment of their neutron-physical properties. On the basis of this knowledge, the multiplication factor of the SFA is calculated, and the radiation and nuclear safety procedures for handling them are justified. At present, there is no universally precise way to determine the characteristics of an SFA. Characteristics such as the energy release and the isotopic composition are determined using tables of approximated values obtained with precision programs. At the moment, there is a large number of programs for neutron-physical calculation that are actively used to simulate the nuclide composition of SF [1]. Thus, when modeling a particular nuclear installation, a question arises about choosing and correctly accounting for the optimal parameters of the calculation code, and about the need to account for these parameters during fuel assembly irradiation, so that the nuclide composition uncertainties in the SFA do not exceed the technological ones that cannot be completely excluded [2]. To obtain more accurate results when modeling the irradiation process of an FA, the consideration of physical parameters is important, and an associated uncertainty component will appear. Another component of uncertainty is associated with the irradiation of a specific fuel assembly in a specific reactor. The rearrangement of fuel assemblies in the reactor core during fuel shuffling creates different situations in which a given assembly operates in different modes. Changes in power and ambient parameters affect the neutron spectrum in the fuel assembly. The main sources of this type of uncertainty are:

1) Fuel mass
2) Average burnup depth
3) Density and temperature of the moderator

Calculation results

Let us consider each source of uncertainty and its contribution to the total uncertainty separately, based on the results of modeling a VVER-1000 fuel assembly irradiated to a burnup depth of 70 MWd/kg using the SCALE software package [3-6]. Tables 1 and 2 show the uncertainty of the nuclide composition calculations for the irradiated FA (ТВСА-12), where the fuel density varies from 10.4 to 10.7 g/cm³ (Δm/m, in %, is the relative deviation of the fuel mass corresponding to the deviation in fuel density). From the obtained results, it can be concluded that the deviation of the fuel mass from its nominal value makes a small contribution to the uncertainty of the isotopic composition calculation. It would also be erroneous to assume that every fuel rod has the maximum mass deviation, since fuel rod production is controlled by a special manufacturing quality system. Therefore, we can assume that the contribution of this source of uncertainty to the overall calculation error is minimal.
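To show how such relative deviations are typically reported, the sketch below computes the percent deviation of the end-of-irradiation nuclide concentrations between a nominal run and a perturbed run; the concentration values are hypothetical placeholders, not actual SCALE output.

import numpy as np

# Hypothetical end-of-life concentrations (atoms/barn-cm) from two
# depletion runs: nominal parameters and perturbed fuel density.
nuclides = ["U-235", "U-238", "Pu-239", "Pu-240", "Pu-241"]
n_nominal = np.array([1.90e-4, 2.020e-2, 1.25e-4, 6.1e-5, 3.9e-5])
n_perturbed = np.array([1.86e-4, 2.018e-2, 1.26e-4, 6.3e-5, 4.1e-5])

# Relative deviation of the isotopic composition, in percent.
rel_dev = 100.0 * (n_perturbed - n_nominal) / n_nominal
for name, d in zip(nuclides, rel_dev):
    print(f"{name}: {d:+.2f} %")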
Of particular interest in the modeling is accounting for the average burnup depth. It is determined, as a rule, by tracking the reactor operation history, which depends on the power load of a particular electrical grid. During a long period of operation, electricity generation is affected by many factors: fuel shuffling, maintenance problems, etc. Thus, the determination of the burnup depth is a noticeable source of error, as can be seen from the data shown in Tables 3 and 4, as well as in Figure 1 (these results were obtained for uncertainties in the burnup depth of ±1% and ±3% of the average burnup depth of 70 MWd/kg).

An equally important source of uncertainty arises when the moderator density and temperature are considered during the simulation (in this reactor, the moderator is water with boric acid). As an example, consider various 2D and 3D models with an average fuel enrichment of 4.6% UO2 and a burnup depth of 70 MWd/kg, assuming linear moderator temperature and density distributions [7,8,9]. In the first, 2D model, a simple unit cell was chosen that contains fuel, cladding, and moderator. In the second and third, 3D models, a fuel rod was modeled with the same cross-sectional dimensions as the unit cell of the first model but with a height of 3.54 m. The difference between the second and third models is that the third model is partitioned into 10 layers, each with a different mean moderator density. The main characteristics of the models are shown in Figure 2. The results of calculating the uncertainties in the isotopic composition for all three models are presented in Tables 5 and 6.

While operating a nuclear reactor, the fuel is rearranged (shuffled) in the core each cycle in order to increase burnup efficiency. Each arrangement of fuel in the core as it burns up has its own power values and ambient parameters, which differ from the mean values. Accordingly, the question arises of the need to take into account the irradiation regimes (modes) of the FA, which affect the overall uncertainty of the isotopic composition of SF. Using the SCALE program code, two fuel shuffling schemes were simulated to a burnup depth of 70 MWd/kg. The first scheme moves the FA from the center of the core (1.13 times the mean specific power) to the periphery of the core, passing through an intermediate position (0.87 times the mean specific power). The second shuffling scheme reverses the previous scheme. In both schemes, the shuffling time (30 days) after each cycle is taken into account.

Table 6. Plutonium vectors (w %) and the isotopic composition uncertainty of spent fuel for the unit cell model and the 3D models.

On the basis of the obtained results, it can be concluded that in 2D modeling the isotopic composition uncertainty has a higher value than in the other two models, where the Monte Carlo method was used [10]. It should also be noted that the behavior of the mean relative power (W/Wav, where Wav is the average power) and burnup (Bz) along the fuel rod height differs somewhat from the theoretical cosine dependence, as shown in Figure 3. The results of calculating the nuclide composition uncertainty depending on the irradiation regime for the two described schemes were compared with a model that does not take this source of uncertainty into account. The results of the calculation are presented in Tables 7 and 8. It should be noted that considering the irradiation regime leads to more realistic modeling of the FA and yields more accurate data for the nuclide composition of SF.
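A minimal sketch of the two shuffling histories is given below: each cycle multiplies the mean specific power by the position factor quoted above, so both schemes accumulate the same total burnup but with opposite power histories. The mean specific power, the cycle length, and the periphery power factor are assumptions, not the parameters of the SCALE models.

# Hypothetical sketch of burnup accumulation under the two shuffling schemes.
MEAN_SPECIFIC_POWER = 42.0   # W/g, assumed mean specific power
CYCLE_DAYS = 330.0           # effective full-power days per cycle, assumed

def burnup(relative_powers):
    # Burnup in MWd/kg: specific power (W/g) x days / 1000, summed per cycle.
    return sum(f * MEAN_SPECIFIC_POWER * CYCLE_DAYS / 1000.0
               for f in relative_powers)

# Position factors: center (1.13), intermediate (0.87), periphery (assumed).
scheme1 = [1.13, 0.87, 0.87]   # center -> intermediate -> periphery
scheme2 = [0.87, 0.87, 1.13]   # the reversed scheme
print(burnup(scheme1), burnup(scheme2))  # equal totals, different histories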
Moreover, it is worth mentioning that another important source of uncertainty in the nuclide composition of SF is the concentration of the absorber (boron) in the moderator. Since neutron absorption in boron occurs mainly in the thermal energy range, it shifts the spectrum toward fast neutrons. As a result, the transmutation of U-238 and the accumulation of plutonium are enhanced by the use of thermal neutron absorbers, and the fission of U-235 therefore decreases. Both of these factors can significantly increase the reactivity of SF and have a serious impact on its isotopic composition. It should be noted from the obtained results that modeling the dynamic change in boron concentration yields more accurate results for the nuclide composition of SF, since it more closely approaches the realistic situation of reactor operation.

Consider the modeling of a full-scale fuel assembly using the VVER-1000 fuel assembly U46G2 (ТВСA-12). The fuel in this assembly is held in a hexagonal grid. Light water, used as both moderator and coolant, is passed under very high pressure through the active zone, in a high-pressure vessel about 14 m in height and 4 m in diameter. The reactor core includes 211 assemblies in total: 163 fuel assemblies identical in design but differing in fuel enrichment, and 48 reflector assemblies. In the axial direction, the core is divided into 10 layers, each 35.5 cm high, which sums to a total active height of 355 cm. Both the upper and lower axial reflectors have a thickness of 23.6 cm. The active zone (core) is divided radially into hexagonal cells with a pitch of 23.6 cm, each of which corresponds to a single fuel assembly, plus a radial reflector of the same size. The VVER-1000 fuel assembly is hexagonal and contains 312 fuel rods; it also contains 18 guide tubes and 1 central tube and carries the fuel load. The tubes also support the assembly, guaranteeing the correct spacing between fuel rods. The head of the fuel assembly and the lower grid hold the guide tubes. The central tubes house control rods and sensors for monitoring neutrons and temperature. To improve the physical characteristics and safety, some fuel rods are filled with pellets containing gadolinium oxide (Gd2O3) at a content of 5.0 w%. The fuel assembly characteristics are shown in Table 11.

In order to show how the uncertainty behaves in different models of a single fuel assembly, five different variants were created. The first model is a full-scale fuel assembly model with three enrichments (4.7%, 4.4%, and 3.6% + Gd). The second model is a full-scale model with averaged enrichment (4.6%) that does not account for the presence of Gd. The third model is a unit cell with 4.6% enrichment, not accounting for the presence of Gd. The fourth model is a unit cell with 4.6% enrichment and a homogeneous distribution of Gd over the fuel rods within the assembly. The fifth model is a unit cell with 4.6% enrichment and a longer pitch (1.336 cm instead of the 1.275 cm used in the previous unit cell models), taking into account the presence of a larger amount of moderator in the FA. The results of calculating the nuclide composition of the SFA and the quantities of plutonium isotopes are shown in Tables 12 and 13. Based on the obtained data, it can be concluded that ignoring the influence of highly absorbing materials such as Gd in the modeling process does not introduce large uncertainties if the same geometric model is used.
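To illustrate the axial layering of the core described above, a minimal sketch is given below: assuming a linear axial profile of moderator density over the 355 cm active height, it computes the mean coolant density of each of the 10 layers. The inlet and outlet densities are hypothetical VVER-1000-like values, not those used in the models.

import numpy as np

HEIGHT = 355.0    # active core height, cm (10 layers of 35.5 cm each)
N_LAYERS = 10

# Hypothetical coolant densities at the core inlet (bottom) and outlet
# (top), in g/cm^3; a linear axial profile is assumed, as in the text.
rho_inlet, rho_outlet = 0.75, 0.66

edges = np.linspace(0.0, HEIGHT, N_LAYERS + 1)
mid = 0.5 * (edges[:-1] + edges[1:])              # layer mid-heights
rho_layer = rho_inlet + (rho_outlet - rho_inlet) * mid / HEIGHT

for i, rho in enumerate(rho_layer, start=1):
    print(f"layer {i:2d}: mean moderator density = {rho:.4f} g/cm^3")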
A more complex model for the analysis of isotopic composition requires more geometric and compositional detail in the modeling process and increases the computation time. The choice of the geometric model in the modeling process also has a significant effect on the obtained isotopic composition results.
Lung Cancer Surgery in Octogenarians: Implications and Advantages of Artificial Intelligence in the Preoperative Assessment

The general world population is aging, and patients are often diagnosed with early-stage lung cancer at an advanced age. Several studies have shown that age is not itself a contraindication for lung cancer surgery, and therefore more and more octogenarians with early-stage lung cancer are undergoing surgery with curative intent. However, octogenarians present some peculiarities that make surgical treatment more challenging, so an accurate preoperative selection is mandatory. In recent years, new artificial intelligence techniques have spread worldwide in the diagnosis, treatment, and therapy of lung cancer, with increasing clinical applications. However, there is still no evidence coming from trials specifically designed to assess the potential of artificial intelligence in the preoperative evaluation of octogenarian patients. The aim of this narrative review is to investigate, through the analysis of the available international literature, the advantages and implications that these tools may have in the preoperative assessment of this particular category of frail patients. In fact, these tools could represent an important support in the decision-making process, especially in octogenarian patients in whom the diagnostic and therapeutic options are often questionable. However, these technologies are still developing, and a strict human-led process is mandatory.

Introduction

With the aging of the world population, lung cancer is gradually becoming a disease of old people. Nowadays, the highest incidence is between 75 and 79 years for females and between 85 and 89 years for males, with more than 40% of new cases diagnosed in patients aged 75 or more [1]. Therefore, the number of octogenarians with early-stage non-small cell lung cancer (NSCLC) eligible for surgery has increased, and it is estimated that 14% of all resectable NSCLC cases involve patients aged 80 or more.

Meanwhile, the definition of "old patient" has progressively changed over the years. It was historically set at a chronological age of 65 years, while nowadays it has been shifted to 75 years [2]. However, it is now believed that the definition of elderly must be based on patient status and comorbidities, considering that a chronological cut-off is not based on biological or medical evidence. Likewise, age per se is no longer considered a contraindication to lung cancer surgery, with a series of studies reporting good results in such patients [3-6].

However, octogenarians show some peculiarities that make surgical treatment more challenging, with up to 40% of these patients presenting postoperative complications [5,6]. Therefore, an accurate preoperative selection of patients is mandatory to balance the impact of surgery against the expected outcome.
Artificial intelligence (AI) can be defined as the set of all computer systems able to perform complex tasks that normally require human cognitive functions. Overall, the aim of these techniques in the medical field is to assist the decision-making process by extracting and interpreting information from massive structured and unstructured data [7]. AI technology has recently been developed worldwide in almost all medical disciplines [8]. Its range of application in lung cancer treatment currently seems limitless and is being tested from cancer screening, diagnosis, and therapy evaluation to outcome prediction. Several trials and meta-analyses have shown improvements in patients' healthcare due to AI intervention in several fields [9-11]. In particular, AI-based oncological tools have shown great potential, with performances comparable to or even higher than human capacities [10,11]. However, concrete clinical application is still limited because of intrinsic limitations.

The aim of this paper is to present a narrative review of current applications of AI in the preoperative evaluation and surgical planning of patients undergoing lung cancer surgery, and to evaluate the implications and advantages that these tools can offer to octogenarian patients.

Materials and Methods

The international literature was searched using PubMed, Scopus, and the Cochrane Library. The search was performed by matching the Medical Subject Heading (MeSH) terms "artificial intelligence", "machine learning", "deep learning" and "radiomics" with the terms "thoracic surgery", "lung cancer", "lung surgery", "preoperative risk" and "surgical planning". Further analysis was performed by following the reference lists of all included articles. All studies published between January 2013 and November 2023 were evaluated for inclusion. A total of 312 articles were screened after duplicate removal. A total of 252 articles were removed after title and abstract reading. Eighteen papers were then excluded: unavailable full texts (5), articles not in English (3), and articles not deemed relevant after full-text reading (10). Finally, 42 articles were considered eligible for the aforementioned scope and were included. The search strategy is shown in Figure 1.
Preoperative Risk Assessment

Lung cancer surgery has achieved low mortality and morbidity rates due to the improvement of surgical and anesthetic techniques, alongside the spread of minimally invasive surgery [12]. However, there are categories of "fragile" patients in which the postoperative complication rate remains high. Octogenarians are in this category, with a postoperative complication rate that exceeds 40% in some studies [5,6,13,14]. Over the years, many studies have evaluated preoperative risk in these patients in order to select those fit for surgery [15,16]. In particular, the factors most strongly associated with lower morbidity appear to be performance status, FEV1 value, minimally invasive surgery, and limited resections [16,17]. Moreover, the use of artificial intelligence in preoperative risk stratification has become widespread, with the development of machine learning-based algorithms that can efficiently predict morbidity and mortality after general surgery [18]. Similarly, event prediction models for lung surgery have been developed with encouraging results [19-22] (Table 1).

In 2021, Salati et al. [19] created an AI-based predictor of cardiopulmonary complications after lung resection using 50 preoperative characteristics of 1360 patients undergoing lung resection. The prediction model was generated by training and testing the XGBoost ML algorithm, reaching an accuracy of 70% and a positive predictive value of 0.68. Similar results were achieved by Huang et al. in a Chinese population study, with the AUCs of different ML models ranging from 0.72 to 0.76 [20]. According to the authors, the most important predictors of postoperative complications were the percentage of predicted postoperative forced expiratory volume in one second and the ratio of forced expiratory volume in one second to forced vital capacity.

A good rate of prediction of respiratory failure after lobectomy was achieved by Bolourani et al. [21]. They built two different ML-based models, achieving 99.7% and 94.4% specificity and 75% and 83.3% sensitivity, respectively. The first model, focused on high specificity, was suited for performance evaluation, while the second model, with high sensitivity, was built for clinical decision making. However, they used an inaccurate national registry, which could lead to misleading results, and the models need validation before clinical translation [21].

Notably, there are no algorithms specifically built for elderly or octogenarian patients, even though most of these algorithms include age as a risk factor for postoperative complications [18-20,23].
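As a sketch of how such a complication-risk model is typically trained and evaluated, a minimal gradient-boosting example is given below. It uses the XGBoost library, as Salati et al. did, but the data file, feature names, and hyperparameters are hypothetical placeholders rather than the published model.

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

# Hypothetical preoperative data set: one row per patient, with a binary
# label "complication" for postoperative cardiopulmonary events.
df = pd.read_csv("preop_features.csv")
features = ["age", "fev1_pct", "ppo_fev1_pct", "fev1_fvc_ratio",
            "bmi", "coronary_disease", "smoking_history"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["complication"], test_size=0.3,
    stratify=df["complication"], random_state=42)

model = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05,
                      eval_metric="logloss")
model.fit(X_train, y_train)

pred = model.predict_proba(X_test)[:, 1]     # predicted complication risk
print(f"held-out AUC: {roc_auc_score(y_test, pred):.2f}")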
Advanced age has also been evaluated by Chang et al. [22] in their real-time artificial intelligence-assisted system for predicting weaning from the ventilator immediately after lung resection surgery. The model included the estimated postoperative lung function, exercise loading, resting oxygen saturation before the operation, major diseases, severe coronary artery disease risk factors, smoking before the operation, smoking history, and advanced age. The aim was to guide the anesthesiologist in predicting whether patients can be safely weaned after lung surgery in the operating room. This model showed good performance, allowing a shorter decision time and improved confidence, especially among young physicians. Considering that prolonged weaning is associated with worse clinical outcomes in elderly patients [24], this tool may be useful in octogenarian patients.

The model developed by Lee HA et al. [25] for the prediction of VO2max in candidates for lung resection seems of particular interest for the octogenarian category. In fact, the European Respiratory Society guidelines indicate VO2max assessment through the standard cardiopulmonary exercise test (CPET) as the gold standard for discriminating the operability of patients with impaired lung function. The authors developed an algorithm for determining VO2max in patients with limited exercise capacity or in cases where cardiopulmonary exercise testing cannot be performed. Their model predicted the VO2max values measured using a CPET more closely than existing equations. This tool could be a valid surrogate in elderly patients, who are often unable to perform a complete cardiopulmonary exercise test because of other, non-respiratory comorbidities.

Table 1 (fragment). Chang et al. [22]. Aim: predicting whether patients could be weaned from the ventilator immediately after lung resection surgery. Model: Naïve Bayes. Result: the AI model with the Naïve Bayes classifier algorithm had the best testing results, with an accuracy of 0.845, sensitivity of 0.870, and specificity of 0.838. Lee HA et al. [25]. Aim: to evaluate the usefulness of an ML model in estimating VO2max in patients requiring lung resection surgery with limited exercise capacity or when a CPET is not possible. Model: quadratic regression. Result: this model provides a closer estimation of the VO2max values measured using a CPET than other existing equations (bias: −0.
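A minimal sketch of a quadratic-regression VO2max estimator, in the spirit of the Lee HA et al. model, is shown below. The predictor variables, the toy training values, and the resulting coefficients are all hypothetical; the published equation is not reproduced here.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Hypothetical training data: age (y), FEV1 (% predicted), 6-min walk (m)
# versus VO2max measured by CPET (mL/kg/min).
X = np.array([[62, 78, 550], [71, 65, 420], [80, 55, 300],
              [58, 85, 610], [83, 48, 260], [76, 60, 380]])
y = np.array([21.0, 16.5, 12.0, 24.0, 10.5, 14.0])

model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
model.fit(X, y)

# Estimate VO2max for a hypothetical 81-year-old surgical candidate.
print(model.predict([[81, 52, 290]]))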
Overall, the accurate selection of octogenarian patients undergoing lung cancer surgery has been demonstrated to be the best way to reduce postoperative events. In this context, artificial intelligence algorithms seem promising for personalizing and optimizing preoperative risk stratification, providing effective aid in the preoperative decision-making process. Effective clinical application is still far from routine practice, and further research is needed to validate the models.

Predictors of Histological Tumor Characteristics

Being able to predict the histological characteristics of lung cancer from radiological imaging could be of crucial importance for various aspects of the treatment pathway. Computed tomography (CT) scans, as well as second-level tests such as 18-fluorodeoxyglucose positron emission tomography (18FDG-PET), have a diagnostic specificity that ranges from 72% to 84.6% [26,27], leading in some cases to surgical interventions for benign pathologies. Therefore, an accurate assessment of the malignancy of an indeterminate pulmonary nodule must take priority over proposing a primary surgical approach, especially in "fragile" patients such as octogenarians. In fact, the key point in planning lung cancer treatment is the diagnosis. Sampling sufficient tissue might be difficult or carry excessive risk. For these reasons, AI tools and radiomics predictors have been developed to discriminate between malignant and benign lung nodules on CT imaging, with good performance [28-31]. Elia et al. [28] were the only ones who specifically implemented a radiomics-based tool for elderly patients. In their study, radiomics data from 71 old patients were used to build three different machine learning algorithms for predicting malignant pulmonary nodules. These algorithms reached good predictive performance, with an accuracy of 0.83-0.90. The authors concluded that AI can be a valid alternative to invasive diagnostic procedures in the decision-making process for suspected solitary pulmonary nodules in elderly patients. This advancement could be particularly important in reducing the rate of elderly patients undergoing pulmonary resection for benign disease.

Spread through Air Spaces

Spread through air spaces (STAS) is emerging as a tumor characteristic correlated with a worse prognosis, especially in patients undergoing sublobar resections [32,33]. To date, predicting the presence of STAS has not been possible before staining at the bench. Knowing the STAS status before surgery may allow for a more tailored surgical treatment, avoiding oncologically ineffective sublobar resections or, alternatively, unnecessarily large resections in borderline patients. This is particularly important in octogenarians, in whom a higher incidence of morbidity and mortality is reported for lobectomies than for wedge resections/segmentectomies [29] and in whom a limited resection is often the best option [34].

(Table fragment) A radiomics model exhibited good performance, with an AUC of 0.63 (CI 0.55 to 0.71) in internal validation and 0.69 in external validation.

Jiw W et al. [35] developed a dual-delta deep learning and radiomics model using preoperative CT scans of 674 patients with a diagnosis of lung cancer. The model showed good power to discriminate between STAS and non-STAS, yielding AUCs of 0.94 (95% CI, 0.92-0.96), 0.84 (95% CI, 0.82-0.86), and 0.84 (95% CI, 0.83-0.85) in the internal validation cohort and two different external validation cohorts, respectively.
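As a sketch of how such radiomics-based predictors are typically built, the snippet below fits a regularized logistic-regression classifier to a precomputed radiomics feature matrix and reports a cross-validated AUC. The input files are hypothetical placeholders; the studies above used more elaborate deep learning and delta-radiomics pipelines.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical radiomics feature matrix: one row per pulmonary nodule,
# with columns such as first-order intensity statistics and texture
# features extracted from segmented CT volumes.
X = np.load("nodule_radiomics_features.npy")   # shape (n_nodules, n_features)
y = np.load("nodule_labels.npy")               # 1 = malignant (or STAS), 0 = not

clf = LogisticRegression(max_iter=1000)        # L2-regularized by default
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {auc.mean():.2f} +/- {auc.std():.2f}")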
Lin et al. [36] tested a deep learning model for STAS prediction in ground glass-predominant lung adenocarcinoma in a retrospective cohort of 581 patients. They achieved satisfactory performance, with an AUC of 0.82 and an accuracy of 74%. Similar results were achieved by other studies, with accuracy ranging from 0.66 to 0.93 [37-40]. However, the results seem strictly dependent on CT characteristics and scarcely reproducible, with only one study attempting to use radiomics tools on a heterogeneous dataset [39]. These limitations make radiomics tools hardly applicable in daily clinical practice at present.

Other studies have tried to predict an extended panel of histological characteristics using radiomics and AI. These include visceral pleural invasion [41], EGFR mutation [42], and PD-L1 expression [43]. The results are still experimental, and their utility in the preoperative evaluation of patients is currently debated.

Surgical Planning

Surgical planning is a crucial step for successful surgery. In recent years, surgeons have been assisted by a variety of computational tools for a tailored approach to surgery [44]. In lung cancer surgery, these tools encompass computed tomography vascular reconstructions, 3D models, and others [45]. New advanced technology is particularly important in minimally invasive surgery and in pulmonary segmentectomy, where anatomical variants are not infrequent [46]. In fact, the use of 3D CT-based models for preoperative planning has been demonstrated to reduce the risk of unnecessary resection of lung tissue, save operative time in finding segmental planes and vessels, lower the risk of bleeding, and decrease overall operating room costs [47,48]. More recently, artificial intelligence instruments have been proposed for better and more in-depth planning. These include automatic segmentation of the tumor area and vascular planes in 3D CT reconstructions and virtual reality tools.

3D Reconstruction Models

One of the issues with 3D pulmonary reconstruction is that manual or semi-automatic tools are time-consuming and can only be operated by experienced personnel. Thus, AI methods have been proposed to automatically identify pulmonary nodules and lung structures and create the 3D model, improving accuracy and time efficiency [49-57].

Regarding automatic pulmonary nodule detection, a great number of AI models, generally based on convolutional neural networks (CNNs), have been proposed [49-52]. Compared with traditional computer-aided diagnosis (CAD) techniques, CNN methods perform better in detection, segmentation, and classification due to their capacity to learn from verified data [49]. Specifically, they present a lower false positive rate than traditional CAD tools. However, even though the number of false positives has decreased, it remains the limiting factor for their wider clinical application.

Lancaster et al. [50] compared an automatic deep learning algorithm for nodule detection and segmentation against human readers in 283 participants from the Moscow lung cancer screening program. The CTs were analyzed by five experienced thoracic radiologists, and the results were compared with those of the AI model. The authors found that the AI tool had fewer negative misclassifications than most radiologists, but more positive misclassifications. Similar results were found by Li L. [51] in their analysis of 346 healthy subjects. The AI system showed a higher detection rate than double reading by two radiologists (86.2% vs. 79.2%; p < 0.001), but the false positive rate was also considerably higher than that of double reading (1.53 per CT vs. 0.13, p < 0.001).
The AI system showed a higher detection rate than double reading by two radiologists (86.2% vs. 79.2%; p < 0.001), but its false positive rate was also considerably higher (1.53 per CT vs. 0.13; p < 0.001). Zhi L. et al. [52], in their analysis of 32 open-source deep learning models, concluded that the high false positive rate of CNN models can be reduced with higher-quality CT image data.

Regarding lung structure reconstruction, Chen et al. [53] recently presented a novel, fully automated AI-based reconstruction algorithm for vessel and bronchus detection. The algorithm was used to create 3D models from non-contrast CT images of 20 retrospectively enrolled patients and was compared with a manual approach. The AI model achieved good performance, with an overall accuracy of 0.70 (vs. 0.80 for the manual approach) and accurate vessel and bronchus detection (85% for the AI model vs. 80% for the manual model). The median time consumption of the AI algorithm was only 280 s. The authors concluded that AI can achieve high identification accuracy in a short time frame. The same group developed an AI-based chest CT semantic segmentation algorithm that recognizes segmental pulmonary vessels, providing a semi-automated approach to operative planning [54].

Interestingly, automated segmentation of the lung parenchyma performed worse than vessel and bronchus recognition, yielding a correct segmentation in 72.7% of patients. Severe emphysema and fibrosis/pneumonitis were identified as the reasons for parenchyma segmentation failure [55].

Overall, automatic detection and segmentation of lung nodules is feasible, and some AI models are already commercially available. However, these tools cannot yet be used safely without human supervision, given the high number of false positive misclassifications they still produce. Even though specific studies have not yet been performed, such automatic tools will probably perform worse in elderly patients than in standard populations, given the higher prevalence of benign pulmonary nodules in this group [58]. Similarly, there are no specific studies on 3D lung models reproducing lung segments in elderly patients, but the available data suggest that these tools may be less accurate in octogenarians owing to the higher rate of senile emphysema and interstitial lung disease [59], low-quality CT imaging due to motion artifacts, or the absence of intravenous contrast [60].

Virtual Reality

Virtual reality (VR) refers to computer- and AI-based techniques that simulate reality and thus allow interaction between humans and virtual 3D interfaces. Given its fast development in all fields, the advantages of its possible application in healthcare are obvious. VR raises great expectations in surgery, where its potential for training and refining techniques seems endless. At present, however, performance is still unsatisfactory, and routine deployment in daily clinical practice remains distant.
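The 3D models that planning and VR viewers render are typically surface meshes extracted from voxel segmentations. As a hedged illustration of that step, the sketch below converts a synthetic binary segmentation (a sphere standing in for a tumor) into a triangle mesh with the marching cubes algorithm; the use of scikit-image here is our choice, not a statement about the tooling in the cited studies.

```python
# Hedged sketch: binary segmentation -> surface mesh for a 3D/VR viewer.
import numpy as np
from skimage import measure

z, y, x = np.mgrid[:64, :64, :64]
seg = ((z - 32) ** 2 + (y - 32) ** 2 + (x - 32) ** 2) < 15 ** 2  # toy "tumor"

# Marching cubes extracts the isosurface at level 0.5 of the binary volume.
verts, faces, normals, values = measure.marching_cubes(seg.astype(float), level=0.5)
print(f"mesh: {len(verts)} vertices, {len(faces)} triangles")
```

In practice, such a mesh would then be exported (e.g., to a standard triangle format) and loaded into whatever rendering or VR environment the planning workflow uses.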
In the lung cancer field, studies including VR for preoperative evaluation are limited [48,61-66] (Table 3). In 2018, Frajhof et al. [61] first evaluated VR as a preoperative tool to improve decision making and surgical planning in a challenging video-assisted thoracoscopic surgery case. Perkins et al. [62] developed a mixed-reality tool that provides 3D visualization of the lung structures and allows interaction with the model to simulate lung deflation and surgical instrument placement. The authors concluded that the tool may enable accurate and faster identification of small lung nodules, potentially avoiding the need for additional invasive preoperative nodule localization procedures. Tokuno et al. [64] transposed their dynamic simulation system (Resection Process Map) to anatomic pulmonary resection. This VR tool has the useful capacity to mimic the deformation of lung structures, including vessels and bronchi, upon manipulation of the lung, such as fissure opening. Ujiie et al. [64] developed a VR navigation system with head-mounted displays that generates virtual dynamic images based on patient-specific CT, and they evaluated its utility for the surgical planning of lung segmentectomy in a single case. Their tool did not allow lung manipulation but offered an immersive experience using the entire visual field instead of a series of digital 3D images. Sadeghi et al. [65] first performed a prospective observational pilot study in 10 patients, aimed at assessing the clinical applicability of their AI-based 3D VR platform for lung segmentectomy. In their study, the surgical strategy was adjusted according to the VR-based evaluation in 40% of cases, suggesting the potential impact that VR-guided planning may have in the preoperative phase. This trend was recently confirmed by Backius et al. [66] in a cohort of 50 patients undergoing pulmonary segmentectomy. They observed an adjustment of the surgical plan in 52% of patients after VR visualization compared with CT scan evaluation alone: the tumor was localized in a different segment in 14% of cases, a more lung-sparing resection was planned in 10%, and an extended segmentectomy (including one lobectomy) was performed in 28%.

Future Applications and Limitations

AI is becoming part of healthcare settings in several fields [8]. Its advantages include fast and accurate processing of large datasets and even the ability to learn and predict new data by identifying hidden patterns. In particular, AI technologies have shown remarkable potential to enhance the preoperative evaluation of the patient and to assist thoracic surgeons and anesthesiologists in the decision-making process [67]. This seems particularly important considering the aging of the population and the need to operate on patients aged 80 or older; an accurate selection of octogenarian patients undergoing lung surgery is mandatory to reduce perioperative morbidity and mortality.

AI models have shown great potential for predicting respiratory and cardiovascular complications after lung surgery and thus for predicting which patients will benefit from surgical treatment [19-21,23]. However, clinical studies in this field are limited and often monocentric, with restricted numbers of patients and pilot algorithms. This makes the current clinical application of AI tools in preoperative settings still hypothetical and experimental. Further and more robust research is needed before these tools can be used concretely in daily clinical practice.
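As a concrete illustration of the kind of model these risk-prediction studies describe, the sketch below fits a logistic-regression complication-risk model and reports its discrimination. Everything here is an assumption made for illustration: the variables (age, FEV1, DLCO), the coefficients generating the synthetic outcomes, and the cohort size do not come from any published algorithm in [19-21,23].

```python
# Hedged sketch: postoperative-complication risk model on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 500
age = rng.normal(80, 4, n)        # octogenarian-like cohort (assumption)
fev1 = rng.normal(75, 15, n)      # FEV1, % predicted (assumption)
dlco = rng.normal(70, 15, n)      # DLCO, % predicted (assumption)

# Synthetic ground truth: risk rises with age and falls with lung function.
logit = 0.05 * (age - 80) - 0.03 * (fev1 - 75) - 0.03 * (dlco - 70) - 1.5
complication = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([age, fev1, dlco])
Xtr, Xte, ytr, yte = train_test_split(X, complication, random_state=0,
                                      stratify=complication)
model = LogisticRegression().fit(Xtr, ytr)
print("held-out AUC:", round(roc_auc_score(yte, model.predict_proba(Xte)[:, 1]), 2))
```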
The situation is different for AI in thoracic imaging, one of the most studied AI applications [68]. Several studies have addressed both the automated detection of lung nodules and lung cancer risk assessment [28-31], with good results. In this field, clinical application of AI seems closer, and several commercial AI programs for lung nodule detection and segmentation are already available. In the context of a tailored approach, an interesting emerging field concerns the possibility of STAS prediction: being able to predict STAS may prevent unnecessary lobar resections in marginal patients or, where possible, oncologically ineffective sublobar resections [32,33]. This is particularly important in octogenarian patients, who show a higher incidence of morbidity and mortality after lobectomy than after sublobar lung resection [29]. Unfortunately, we are far from a reliable preoperative prediction of STAS, which remains a histological characteristic assessed postoperatively. Obstacles to future clinical translation lie in the heterogeneity of studies and datasets, which makes comparison and reproducibility very difficult [37-40]. Moreover, many studies are monocentric and do not include external validation, making their applicability questionable [39].

Overall, it is predictable that AI, like other technological innovations, will be part of the future of healthcare. However, several concerns need to be addressed before concrete and widespread application. First, studies are still experimental, and more validation research is needed to understand the real impact of AI in this specific surgical context; the robustness and stability of AI models remain too dependent on the input data, and the heterogeneity of databases may affect their diagnostic performance. Second, AI systems are not able to solve complex or uncommon diagnostic challenges or to personalize treatment options, so the physician remains central to the decision-making process. Third, specific laws on the legal responsibility for AI-based decisions have not yet been drawn up, limiting concrete use in daily clinical practice at the moment.

Conclusions

The traditional medical methodology of searching for detailed information to achieve a definitive diagnosis with strong scientific evidence cannot yet be replaced. The physician's role in diagnosis and treatment is not in question and should remain central. At the same time, new technologies and unexplored fields of knowledge offer opportunities to reduce human error and increase quality of care. The most recent advancements in AI invite a break with stale habits and a change of an outdated paradigm: properly used, these tools provide invaluable support to physicians in the decision-making process and could help to reduce errors and thereby raise healthcare standards. This could be particularly applicable to fragile patients, such as octogenarians, in whom the diagnostic-therapeutic pathway is often questionable. On the other hand, excessive optimism about growing technologies, without a strictly human-led process, is equally unacceptable.

Table 1. Artificial intelligence studies in lung cancer surgery preoperative risk assessment. ML = machine learning; XGBOOST = extreme gradient boosting; AUC = area under the receiver operating characteristic curve.
Table 2. Selected studies using radiomics and machine learning models to preoperatively predict STAS in lung cancer. STAS = spread through air spaces; AUC = area under the receiver operating characteristic curve; CI = confidence interval.

Table 3. Articles using AI and VR tools for intraoperative planning in lung resections. AI = artificial intelligence; 3D = three-dimensional; CT = computed tomography; VR = virtual reality; MR = mixed reality; AR = augmented reality.
2024-04-10T15:22:00.284Z
2024-04-01T00:00:00.000
{ "year": 2024, "sha1": "359a781f0721dcaaf2d6cca4fafac08fc0a00021", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2227-9032/12/7/803/pdf?version=1712485392", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0e74ff2f5474f137c2d518c0973ebce6976507b8", "s2fieldsofstudy": [ "Medicine", "Computer Science" ], "extfieldsofstudy": [] }
206849034
pes2o/s2orc
v3-fos-license
Citrobacter rodentium Subverts ATP Flux and Cholesterol Homeostasis in Intestinal Epithelial Cells In Vivo

Summary
The intestinal epithelial cells (IECs) that line the gut form a robust line of defense against ingested pathogens. We investigated the impact of infection with the enteric pathogen Citrobacter rodentium on mouse IEC metabolism using global proteomics and targeted metabolomics and lipidomics. The major signatures of the infection were upregulation of the sugar transporter Sglt4, aerobic glycolysis, and production of phosphocreatine, which mobilizes cytosolic energy. In contrast, biogenesis of mitochondrial cardiolipins, essential for ATP production, was inhibited, which coincided with increased levels of mucosal O2 and a reduction in colon-associated anaerobic commensals. In addition, IECs responded to infection by activating Srebp2 and the cholesterol biosynthetic pathway. Unexpectedly, infected IECs also upregulated the cholesterol efflux proteins AbcA1, AbcG8, and ApoA1, resulting in higher levels of fecal cholesterol and a bloom of Proteobacteria. These results suggest that C. rodentium manipulates host metabolism to evade innate immune responses and establish a favorable gut ecosystem.

Correspondence: jc4@sanger.ac.uk (J.S.C.), g.frankel@imperial.ac.uk (G.F.)

In Brief
Berger et al. reveal how C. rodentium infection manipulates host metabolism to evade innate immune responses and establish a favorable gut ecosystem. Binding of C. rodentium to the gut epithelium rewires cellular bioenergetics and cholesterol metabolism, altering the composition of the gut microbiota and processes involved in fighting infection.

INTRODUCTION
The intestinal epithelium comprises LGR5+ stem cells at the base of the crypt, proliferating transit-amplifying (TA) cells in the lower part of the crypt, and a monolayer of columnar intestinal epithelial cells (IECs) that is renewed every 5-7 days (Barker, 2014). The proliferating crypt cells are believed to rely on aerobic glycolysis (the Warburg effect), fermenting glucose to lactate (Koppenol et al., 2011). IECs play a key role in the absorption and systemic dispersion of electrolytes, nutrients, and water from the lumen of the gut (Peterson and Artis, 2014). IECs also form a robust line of host defense against ingested pathogens, acting as a physical barrier and detecting pathogen-associated molecular patterns via pattern recognition receptors, such as Toll-like receptors (TLRs) 2 and 4 (Peterson and Artis, 2014). As such, the pathogen-IEC interface constitutes the battle line between the host innate immune system and the pathogen's counteracting virulence factors. IECs have a high energy demand, and their extensive anabolic activity relies on various sources of energy. Glucose, glutamine, glutamate, and aspartate are delivered to IECs through the circulatory system (Blachier et al., 2017), while the short-chain fatty acids (SCFAs) acetate, propionate, and butyrate are absorbed directly from the gut lumen, where they are produced by the microbiota through fermentation of dietary fiber and amino acids (Neis et al., 2015). Butyrate, which in the colon is absorbed mainly via the monocarboxylate transporter 1 (Mct1), is processed by β-oxidation and feeds the tricarboxylic acid (TCA) cycle and oxidative phosphorylation in the mitochondria (Donohoe et al., 2011).
Despite the growing appreciation of the role that subversion of cellular metabolism plays during host-pathogen interactions (Fuchs et al., 2012), our understanding of how the metabolic networks of host cells, particularly IECs, change during infection is incomplete. Citrobacter rodentium is an extracellular, mouse-specific pathogen that intimately binds the apical surface of IECs and triggers effacement of the brush border (BB) microvilli, forming attaching and effacing lesions in a similar manner to enteropathogenic and enterohemorrhagic Escherichia coli (EPEC and EHEC) (Mundy et al., 2005). Moreover, by inducing extensive amplification of TA cells and inhibiting anoikis and cell detachment, C. rodentium induces colonic crypt hyperplasia (CCH) (Collins et al., 2014). Following oral inoculation, C. rodentium first resides in the cecum before colonization spreads to the entire colonic mucosa (Wiles et al., 2004). Bacterial shedding peaks around 8 days post infection (DPI), and the infection starts to clear at around 12 DPI. Injection of bacterial effector proteins into IECs via a type III secretion system (T3SS) is the key mechanism by which C. rodentium establishes infection at the epithelial surface (Mundy et al., 2005). Once in the host cytosol, these effectors take control of key cell signaling processes, including actin dynamics, endosomal trafficking, and apoptosis (Wong et al., 2011). The inflammatory nature of the infection means that C. rodentium interactions with IECs take place in an environment rich in cytokines (e.g., interleukin-18 [IL-18], IL-22, IL-6, IL-1β, tumor necrosis factor alpha [TNF-α], and interferon-γ) and infiltrating immune cells (Collins et al., 2014). Consequently, infected IECs respond to the inflammatory signals in the gut by expressing high levels of antimicrobial peptides (Collins et al., 2014) and apical inducible nitric oxide synthase (iNOS), capable of producing nitric oxide (NO), to which C. rodentium is sensitive (Lopez et al., 2016; Vallance et al., 2002). Indeed, one well-characterized function of the T3SS effectors is the subversion of innate immune responses in IECs, e.g., nuclear factor κB (NF-κB) and c-Jun N-terminal kinase signaling (Pearson et al., 2016) and the non-canonical caspase-4/11 inflammasome (Pallett et al., 2017). In addition, T3SS effectors also target mitochondrial functions. The effector Map, which acts as a guanine nucleotide exchange factor for Cdc42 (Huang et al., 2009), is targeted to the mitochondria, where it disrupts mitochondrial morphology and causes loss of mitochondrial respiratory function (Ma et al., 2006). How the pathogen benefits from altering the function of the mitochondria, and thus the production and flow of energy in infected IECs, remains unknown.

In this study we conducted the first in-depth proteomic analysis of IECs isolated from mice infected with C. rodentium and reveal extensive remodeling of metabolic pathways during infection. We subsequently confirmed these findings using targeted assays. We show that C. rodentium infection significantly dampens central carbon metabolism in IECs, in particular production of mitochondrial cardiolipins. This coincided with elevated levels of O2 above the infected IECs, confirming a previous report showing that C. rodentium favors oxidative metabolism in vivo (Lopez et al., 2016).
Uniquely, we found that infected IECs upregulate cholesterol biogenesis, unusually accompanied by upregulation of the cholesterol efflux transporters Abca1 and AbcG8, as well as ApoA1, leading to elevated levels of fecal cholesterol. Finally, the infection-induced increases in luminal O2 and cholesterol were reflected in the dysbiosis triggered by C. rodentium infection.

C. rodentium Disrupts Host Metabolic Processes and Cytoskeletal Proteins
To characterize the effect of C. rodentium infection on host metabolism in vivo, we enriched IECs from the colons of C. rodentium-infected mice at 8 DPI (five mice per group), when the pathogen is shed at ca. 10^9 per gram of stool. IECs isolated from uninfected mice were used as a control (five mice per group). Examination of IECs by microscopy and flow cytometry revealed that the IEC preparations were enriched to over 90% (Crepin et al., 2015). Immunofluorescence microscopy revealed that IECs extracted from uninfected mice exhibited a typical columnar shape and projection of actin-rich BB microvilli. IECs purified from mice at 8 DPI were round, covered with C. rodentium associated with polymerized actin, and devoid of microvilli (Figure 1A). KEGG pathway enrichment analysis of differentially regulated proteins revealed a striking tendency toward downregulation of pathways related to a broad range of cellular metabolic and energy homeostasis activities, including the TCA cycle, oxidative phosphorylation, and lipid metabolism (Figure 1C). Consistently, 53% of the downregulated proteins were mitochondrial (Figure 1D).

C. rodentium Inhibits Feeding of the Host TCA Cycle
Previous studies have demonstrated that C. rodentium infection results in extensive disruption of the mitochondria (Ma et al., 2006), where ATP is produced via the TCA cycle and oxidative phosphorylation. Key mitochondrial transporters supplying substrates for the TCA cycle were in lower abundance in infected IECs, including the pyruvate transporter (Mpc1), the carnitine/acylcarnitine carrier (Cac), the 2-oxoglutarate/malate carrier (Ogcp), the calcium-dependent exchanger of cytoplasmic glutamate with mitochondrial aspartate (Aralar1/2), and the citrate transporter (Sfxn5) (Figure 2A and quantification in Figure S1A). The TCA cycle is fed by multiple metabolites, including α-ketoglutarate, generated from glutamate by glutamate dehydrogenase, and acetyl-CoA, generated via glycolysis and β-oxidation of butyrate and other lipids (Figure 2A). Proteins involved in β-oxidation were in lower abundance in infected IECs (Figure S2). Moreover, the nuclear-encoded mitochondrial transcription factor Tfam, which regulates expression of the mitochondrial β-oxidation genes (Joseph et al., 2006), was in lower abundance (log2FC -0.8) and was predicted to be inactivated (Z score: -2.333, p value: 5.14 × 10^-5). In addition, the abundance of proteins involved in butyric acid (butanoate) metabolism was also lower in infected IECs (Figure 1D), fitting the predicted inhibition of this pathway (Z score: -3.656, p value: 5.32 × 10^-4). Butyrate is one of the main substrates fueling β-oxidation and the TCA cycle in colonic IECs; importantly, the abundance of the butyrate importer Mct1/Slc16a1 and its co-factor Bsg/CD147 was lower (log2FC -1.4 and -1.1, respectively) in infected IECs (Figure 2B).
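The log2 fold-change calls quoted throughout this paper use a fixed cut-off (|log2FC| > 0.59, i.e., ~1.5-fold; see the figure legends). A minimal sketch of that classification step on synthetic label-free intensities is shown below; the data and group sizes are placeholders, not the deposited proteome.

```python
# Hedged sketch: calling up-/downregulated proteins at |log2FC| > 0.59.
import numpy as np

rng = np.random.default_rng(3)
uninfected = rng.lognormal(mean=10, sigma=1, size=(200, 5))  # 200 proteins x 5 mice
infected = uninfected * rng.lognormal(mean=0, sigma=0.5, size=(200, 5))

log2fc = np.log2(infected.mean(axis=1) / uninfected.mean(axis=1))
up = int(np.sum(log2fc > 0.59))      # higher abundance in infected IECs
down = int(np.sum(log2fc < -0.59))   # lower abundance in infected IECs
print(f"{up} upregulated, {down} downregulated of {len(log2fc)} proteins")
```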
Considering the central role butyrate plays in energizing IECs, we tested experimentally whether C. rodentium infection inhibits butyrate uptake during in vitro infection of polarized Caco-2 cells, using [14C]sodium butyrate. This analysis revealed a 30% reduction of butyrate uptake into infected cells compared with uninfected control cells (Figure 2C). Taken together, these data suggest that C. rodentium infection inhibits the supply of substrates to the TCA cycle in IECs, which is likely to impact downstream oxidative phosphorylation and ATP production.

C. rodentium Inhibits Mitochondrial ATP Biogenesis in Infected IECs
The TCA cycle produces NADH for oxidative phosphorylation. The abundance of all the enzymes of the TCA cycle and of most proteins in the electron transfer chain was lower in C. rodentium-infected IECs than in control IECs (Figure 2D and quantification in Figure S1B). Oxidative phosphorylation depends on the inner mitochondrial membrane lipid cardiolipin (ca. 20% of the mitochondrial lipid content), which is essential for generating the electrochemical gradient used for ATP production (Paradies et al., 2014). Cardiolipins are synthesized in the mitochondrial inner membrane by conversion and modification of phosphatidic acid (PA), which is transferred from the mitochondrial outer membrane by a complex comprising Ups1/Preli and Mdm35/Triap (Miliara et al., 2015; Yu et al., 2015).

[Figure 1 legend, fragments: (B) Volcano plot summarizing the differential regulation of the mouse IEC proteome during C. rodentium infection; red, green, and gray dots represent proteins with higher, lower, or unchanged abundance, respectively. (C) KEGG pathway enrichment analysis; proteins in the whole proteome are ranked according to their log2 values (top panel) from the most downregulated (green) to the most upregulated (red); regulated proteins mapped to significantly enriched KEGG pathways are highlighted in the heatmap (bottom panel), ranked from highest to lowest statistical significance (Benjamini-Hochberg false discovery rate [FDR] < 0.05). (D) Boxplots illustrating the downregulation of mitochondrial proteins (MSigDB annotation) and of proteins involved in fatty acid, β-oxidation, and butanoate metabolism (KEGG annotation).]

We found all five enzymes generating long-chain acyl-CoA and PA in lower abundance in infected IECs (Figure S3), which may result in accumulation of low-molecular-weight phospholipids, including phosphatidylinositols (PIs), and cardiolipins. Moreover, while Crls1, which generates immature cardiolipin, was in higher abundance (Figure 3A and quantification in Figure S1C), the final maturation steps are likely impaired, owing to the lower abundance of the mitochondrial and cytosolic cardiolipin maturation enzymes Mlclat1 and Alcat1, respectively, and the unchanged abundance of the phospholipase iPla2, which digests monolysocardiolipin into dilysocardiolipin (Figure 3A and quantification in Figure S1C). Moreover, the abundance of Mdm35/Triap was lower in infected IECs (log2FC -0.67). Based on these observations we hypothesized that infected cells would either accumulate low-molecular-weight cardiolipins or decrease the cardiolipin pool. By applying a lipidomics fingerprint technique to infected IECs we were able to detect cardiolipins and PIs and to quantify their abundance (Figure 3B). As a control, total lipid extracts from C. rodentium were analyzed to confirm the specificity for eukaryotic PIs and cardiolipins (Figure S4A).
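The "relative abundance" panels derived from such fingerprint spectra (e.g., Figure 3C) typically normalize each species' peak intensity to the summed intensity of its lipid class. The sketch below illustrates that normalization on a made-up peak list; the intensities are placeholders, while the cardiolipin m/z values are those quoted in the results that follow.

```python
# Hedged sketch: relative quantification of cardiolipin species from
# MALDI-TOF peak intensities (synthetic intensities, paper's m/z values).
import numpy as np

cardiolipin_mz = np.array([1396.0, 1424.0, 1570.1, 1598.1])
intensities = np.array([1200.0, 900.0, 300.0, 150.0])  # one hypothetical spectrum

relative = intensities / intensities.sum()              # fraction of class signal
for mz, r in zip(cardiolipin_mz, relative):
    print(f"m/z {mz:7.1f}: {100 * r:5.1f}% of cardiolipin signal")
```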
As predicted from the protein abundance measurements, low-molecular-weight PIs, present at m/z 835.4 and 863.4 (Figure 3B), were in higher abundance in infected IECs (Figure S4B). In addition, the cardiolipins of highest molecular weight, present at m/z 1,570.1 and 1,598.1, were in lower quantities in infected IECs, while cardiolipins of lower molecular weight, present at m/z 1,396.0 and 1,424.0, were in higher abundance (Figure 3C), suggesting that accumulation of immature mitochondrial cardiolipins disturbs oxidative phosphorylation during C. rodentium infection.

The T3SS effector Map is responsible, at least in part, for mitochondrial disruption in colonic IECs (Ma et al., 2006). We therefore reasoned that, as oxygen consumption by oxidative phosphorylation in IECs would be more efficient following infection with a map mutant (partial disruption of the mitochondria) than following infection with wild-type (WT) C. rodentium (extensive disruption of the mitochondria), the apical surface of infected IECs would be more hypoxic following infection with the former. To test this experimentally, we infected mice with bioluminescent WT C. rodentium or a bioluminescent C. rodentium map mutant as reporters for surface oxygen concentration (as luciferase activity depends on the supply of O2 to the epithelium [Ghisla et al., 1978]). At 6 DPI both strains were shed in equal numbers (Figure 4A). Moreover, the number of tissue-associated C. rodentium (Figure 4B), the magnitude of CCH (Figure 4C), and the level of Ki-67 staining, a marker of proliferating cells (Figure 4D), were similar at 8 DPI with either the WT or the map mutant strain. However, the bioluminescence signal was significantly lower following infection with C. rodentium Δmap than with WT C. rodentium (Figure 4E). Upon complementation of the map mutant, the increased bioluminescent signal was restored (Figure 4E). These results show that infection with WT C. rodentium results in oxygenation of the apical surface of infected IECs independently of CCH.

C. rodentium Infection Triggers Biogenesis of Phosphocreatine
While the proteomic and lipidomic analyses showed that the mitochondria are dysfunctional during C. rodentium infection, cytosolic glycolysis seems to remain functional, as key enzymes throughout the glycolysis pathway were either unchanged or increased in abundance during infection (Figure 5A and quantification in Figure S1D). Moreover, the infected IECs adapted to the lack of mitochondrial ATP production by specifically upregulating the basolateral sugar importer Sglt4 (the abundance of the glucose transporter Glut1 did not change during infection) (Figure 5A and quantification in Figure S1D). Next, we analyzed how the glycolysis-generated ATP is efficiently distributed to subcellular sites of energy utilization. One such go-between is phosphocreatine (PCr), which is generated by phosphorylation of creatine (Cr) by creatine kinases (CKs) (Wallimann et al., 2011). While biogenesis of Cr (as well as of ornithine/spermidine) proceeds via degradation of L-arginine by Gatm, iNOS uses L-arginine as a substrate for the generation of NO. Importantly, the abundance of both Gatm and iNOS was significantly higher in infected IECs (Figure 5B and quantification in Figure S1E). As NO is highly bactericidal, it is possible that C. rodentium disrupts the mitochondria as a means to shift cellular utilization of L-arginine toward generation of Cr and away from production of NO.
This hypothesis is supported by the fact that, based on the fold change of its target proteins, the transcription factor Nrf2, which is activated by NO (Kvandova et al., 2016), was predicted to be strongly inhibited (Z score: -4.74, p value: 7.22 × 10^-10) in infected IECs. Moreover, C. rodentium intimately colonizes IECs expressing high levels of apical iNOS (Vallance et al., 2002). Conversion of Cr to PCr can occur in the mitochondria, by the mitochondrial CKs Cktm1 and Cktm2, or in the cytosol via CK-m and CK-b, which are directly associated with the glycolytic enzymes producing ATP (Joseph et al., 1997). The abundance of CK-m and CK-b was unchanged during infection, while the abundance of Cktm1 and Cktm2 was lower (Figure 5B and quantification in Figure S1E). Consistently, the inner membrane mitochondrial ATP exporter (Ant), which is tightly coupled to Cr phosphorylation, and the mitochondrial outer membrane voltage-dependent anion channel (Vdac), which exports PCr into the cytosol, were in lower abundance following infection (Figure 2A and quantification in Figure S1A). Taken together, this suggests that, in C. rodentium-infected IECs, L-arginine is mainly catabolized to Cr, which is converted by cytosolic CKs to PCr. We confirmed this experimentally using LC-MS-based metabolomic analysis, which revealed higher levels of the Cr precursor guanidinoacetate (6.42 FC), of Cr (2.16 FC) and PCr (2.93 FC), of the PCr breakdown product creatinine (2.37 FC; Figure 5C), and of spermidine (91.77 FC; Figure 5D) in infected IECs.

C. rodentium Triggers Simultaneous Cholesterol Biogenesis and Cholesterol Efflux
While a fall in the PCr:Cr ratio activates the AMP-activated protein kinase (Ampk), a crucial cellular energy sensor (Hardie et al., 2012), IECs infected with C. rodentium exhibit an increased PCr:Cr ratio, suggesting that Ampk is not activated. Indeed, while the abundance of Ampk-α was similar in uninfected and infected IECs, the abundance of the Ampk-β and -γ subunits was lower in infected cells (Figure 6A and quantification in Figure S1F). Moreover, the abundance of Lkb1, the main kinase that phosphorylates and activates Ampk-α (Hardie, 2014), was lower in infected IECs (Figure 6A and quantification in Figure S1F). Consistently, western blotting using anti-phospho-Ampk-α antibodies revealed lower levels of Ampk-α phosphorylation in infected cells (Figure 6B). Moreover, based on the fold change of its target proteins, the transcription factor p53, which, once activated by Ampk, inhibits cell proliferation (Jones et al., 2005), was predicted to be inhibited (Z score: -4.825, p value: 2.19 × 10^-29). In addition, we observed increased abundance of Acaca/b, which catalyzes the carboxylation of acetyl-CoA to malonyl-CoA, and decreased abundance of Mlycd, which catalyzes the conversion of malonyl-CoA back to acetyl-CoA (Figure 6A and quantification in Figure S1F), which argues against the activation of Ampk (Wolfgang and Lane, 2006).

[Figure 3 legend, fragments: (A) Quantification is shown in Figure S1C; proteins below the significance threshold (log2 fold change > 0.59 or < -0.59) are shown in gray; MLCL, monolysocardiolipin; DLCL, dilysocardiolipin. (B) MALDI-TOF negative ion mass spectra of uninfected (left) and infected (right) IECs, acquired with the DHB matrix solubilized at 10 mg/mL (mass spectra of C. rodentium are shown in Figure S4A); the absolute abundance of the ions is shown on the y axis and their masses on the x axis; m/z, mass-to-charge ratio. (C) Relative abundance of cardiolipins detected in uninfected and infected IECs (relative abundance of phosphatidylinositol is shown in Figure S4B); Mann-Whitney test with *p < 0.05; each dot represents an individual mouse, and bars show geometric means.]
Significantly, while Ampk inhibits cholesterol synthesis, the proteomic analysis suggested that cholesterol biosynthesis was upregulated (Figure 6C and quantification in Figure S1G), with most of the enzymes driving cholesterol biogenesis found in higher abundance, although DHCR7, which catalyzes the conversion of 7-dehydrocholesterol to cholesterol, was in lower abundance (Figure 6C and quantification in Figure S1G). In agreement with upregulation of cholesterol biosynthesis, the abundance of the Ldl receptor (Ldl-R, log2FC 1.31) and of Pcsk9 (log2FC 1.97), involved in cholesterol uptake and receptor recycling, was elevated in infected IECs. These responses are typical of cells suffering sterol deficiency (Spann and Glass, 2013). This interpretation was supported by the fact that, based on the fold change of its target proteins, the transcription factor Srebp2, which upregulates the expression of genes involved in cholesterol biosynthesis and uptake (Spann and Glass, 2013), was predicted to be strongly activated (Z score: 2.032, p value: 2.28 × 10^-8) in infected IECs. Under resting conditions, Srebp2 localizes to the ER; however, sterol depletion promotes ER-to-Golgi transport of the sterol regulator Scap together with Srebp2, where Srebp2 undergoes proteolytic cleavage, leading to nuclear translocation of its soluble N terminus and subsequent expression of sterol-regulated genes such as Hmgcr and LdlR (Spann and Glass, 2013). Western blotting of IECs purified from infected and uninfected mice confirmed that Srebp2 was specifically cleaved and activated during C. rodentium infection (Figure 6D).

Under Srebp2-activating conditions, processes involved in cholesterol efflux are normally inhibited (Spann and Glass, 2013). Unexpectedly, in infected IECs we found significantly higher abundance of the major basolateral cholesterol efflux transporter Abca1 (log2FC 4.92) and of the cholesterol-binding protein Apoa1 (log2FC 0.89); the apical heterodimeric cholesterol transporter Abcg5/Abcg8 was not found in the proteome. We used western blotting to validate the induction of Abca1 expression and to test whether Abcg8 is expressed in infected IECs. While Abca1 and Abcg8 were barely detectable in control IECs, they were in higher abundance in infected cells (Figure 6E). Cholesterol secreted via Abca1/Apoa1 is excreted in the feces via reverse cholesterol transport (via the liver), while trans-intestinal cholesterol excretion is mediated by Abcg5/8 (Hong and Tontonoz, 2014). We therefore investigated the consequences of the apparent increase in cholesterol efflux by measuring cholesterol levels in feces (Figure 6F). This revealed a 67% increase in fecal cholesterol at 8 DPI (p < 0.0001). We hypothesized that the combined elevated levels of fecal cholesterol and mucosal oxygen would impact the composition of the colonic mucosa-associated microbiota. Phylogenetic analyses revealed no significant changes in alpha diversity (Figure S5), while principal-components analysis of operational taxonomic units at the genus level showed a 70.3% separation between infected and uninfected microbiomes (Figure 7A).
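For readers unfamiliar with such ordinations, the sketch below shows a minimal version of the analysis behind Figure 7A: principal-components analysis of a genus-level relative-abundance table. The table here is synthetic (with an artificial Proteobacteria-like bloom in the infected group); the paper's upstream QIIME/USEARCH processing is not reproduced.

```python
# Hedged sketch: PCA ordination of genus-level relative abundances.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
uninf = rng.dirichlet(np.ones(30) * 2, size=5)                    # 5 mice x 30 genera
inf = rng.dirichlet(np.concatenate([[20.0], np.ones(29)]), size=5)  # one genus blooms

X = np.vstack([uninf, inf])                     # rows: mice; columns: genera
pca = PCA(n_components=2).fit(X)
print("variance explained by PC1:",
      round(100 * pca.explained_variance_ratio_[0], 1), "%")
```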
In particular, the abundance of butyrate-producing commensals (the Roseburia, Coprococcus, and Odoribacter genera and members of the Lachnospiraceae family), as well as of Firmicutes/Bacilli and, to a lesser extent, Firmicutes/Clostridia and Firmicutes/Erysipelotrichi, was significantly reduced in infected mice (Figures 7B and 7D), consistent with the oxygenation of the apical IEC surfaces. A decline in Bacteroidetes and Tenericutes was also observed (Figures 7B and 7E). In contrast, the facultatively aerobic Proteobacteria became the dominant phylum among mucosa-associated bacteria during infection (Figures 7B and 7C), largely due to IEC-associated C. rodentium (Figure 7C). Interestingly, genera not detected in uninfected mice expanded significantly during infection (e.g., Dickeya, Cronobacter, Erwinia, Klebsiella, Pantoea, Serratia, and Trabulsiella) (Figure 7C), suggesting that, by increasing the availability of O2 and/or cholesterol, C. rodentium infection provides a beneficial niche for these commensal bacteria. Notably, Serratia, Dickeya, and Erwinia have previously been described to thrive on and metabolize cholesterol (Caspi et al., 2016; Garcia et al., 2012). Taken together, this apparently "confused" cellular behavior in relation to cholesterol homeostasis suggests that C. rodentium and the host clash over the control of cholesterol biogenesis and efflux, which impacts the composition of the microbiota.

DISCUSSION
Enteric bacterial pathogens and the infected host battle for control of the gut ecosystem. This battle is classically thought to occur between the host's innate and acquired immune systems and counteracting bacterial virulence factors. However, the influence of infection on host cell metabolism is an underappreciated aspect of host-pathogen interactions. In this study we employed an unbiased quantitative shotgun proteomic screen, targeted metabolomics, and lipidomics to define the metabolic responses of mouse colonic IECs to C. rodentium infection, which provides a powerful and physiologically relevant infection model (Collins et al., 2014). While the fact that C. rodentium infection triggers substantive disruption of the mitochondria is well established (Ma et al., 2006), the advantage this offers the pathogen is unknown. Study of bacterial interference with the mitochondria has previously focused on apoptosis (Giogha et al., 2014). In this study we show that C. rodentium infection shuts down mitochondrial ATP production, switches the cells to aerobic glycolysis, and significantly reduces the levels of host high-molecular-weight cardiolipins, lipids essential for efficient oxidative phosphorylation and maintenance of mitochondrial integrity (Ren et al., 2014). In addition, using bioluminescent reporter strains, we show that infection with WT C. rodentium leads to increased oxygenation of the mucosal surface, likely due to disruption of mitochondrial respiration. This observation is consistent with previous work suggesting that C. rodentium performs oxidative metabolism in vivo (Lopez et al., 2016).

[Figure 4 legend, fragments: (D) Similar levels of Ki-67 staining were observed following infection with the WT and Δmap strains; scale bars, 200 μm; the graph shows the ratio of Ki-67-positive cells over total crypt length, measured in individual crypts; bars represent means; *p ≤ 0.0001. (E) Bioluminescence levels are lower in mice infected with the Δmap strain than in those infected with the WT or complemented strains; the color scale bar indicates relative signal intensity (photons s^-1 cm^-2 sr^-1); the graph shows quantification of total flux (p/s) from a defined area (white rectangular outline; 3.5 × 5 cm) of at least three mice per group; *t test with p value < 0.05; n.s., not significant; data are represented as mean ± SEM.]
[Figure 5 legend, fragments: C. rodentium Triggers Production of Phosphocreatine. (A) Schematic representation of the regulated proteins in the sugar import and glycolysis pathway; C. rodentium induces increased abundance of the sugar transporter Sglt4, feeding glycolysis, which remained functional during infection (quantification is shown in Figure S1D). (B) Schematic representation of the regulated proteins in the phosphocreatine pathway; L-arginine is diverted toward production of spermidine, creatine, and phosphocreatine (quantification is shown in Figure S1E).]

However, while Lopez et al. suggested that C. rodentium triggers CCH as a means to extract oxygen, we find that oxygenation of the mucosal surface occurs independently of CCH. The disparity between these results is likely due to the use by Lopez et al. of a triple map/espH/cesF C. rodentium mutant, which colonizes the gut inefficiently. Therefore, questions remain as to the relationship between CCH and oxygenation of the mucosal surface during C. rodentium infection. Of note, oxygen availability has been shown to be a key environmental cue for expression or activation of the T3SS in EHEC (Carlson-Banning and Sperandio, 2016) and Shigella flexneri (Marteyn et al., 2010).

The proteomic analysis reveals that the abundance of many plasma membrane and mitochondrial lipid and carbohydrate transporters feeding the TCA cycle is significantly reduced in infected IECs. This results in reduced butyrate uptake by IECs infected with C. rodentium, similar to previously reported data for EPEC infection (Borthakur et al., 2006). This observation is consistent with the lower abundance of the butyrate importer Mct1 and its co-factor Bsg/CD147. Moreover, the microbiome analysis reveals a reduction in the abundance of butyrate-producing commensals, which may further impair the ability of infected IECs to derive energy from luminal SCFAs. The reduction in butyrate-producing commensals may be due to multiple factors, including C. rodentium-induced generation of antimicrobial peptides (Collins et al., 2014) and C. rodentium-induced oxygenation of the colonic mucosa, which could affect the viability of anaerobic members of the microbiota. Of note, Lopez et al. (2016) found increased abundance of anaerobic commensals (e.g., Clostridia) within the oxygenated mouse gut. The differences between the two studies are likely due to the fact that, while we quantified the abundance of mucosa-associated commensals, Lopez et al. extracted DNA for microbiome analysis from the colon content.

Importantly, while the supply of mitochondrially derived ATP seemed to be inhibited, infected IECs did not present signs of ATP starvation (no signs of Ampk activation). Instead, IECs adapt to C. rodentium infection by increasing the abundance of sugar transporters that can feed aerobic glycolysis. The transition from oxidative phosphorylation to glycolysis is reminiscent of cancer cells and of classically activated (M1) macrophages, which rely on aerobic glycolysis for energy, a phenomenon known as the "Warburg effect" (Koppenol et al., 2011). Notably, the fact that inhibition of glycolysis in transformed cells can reactivate oxidative phosphorylation suggests that the mitochondria in these cells are not damaged (Fantin et al., 2006).
In contrast, C. rodentium disrupts the structure of IEC mitochondria, likely locking infected cells in aerobic glycolysis and forcing them to produce creatine. Coupled with the aerobic glycolytic program, the abundance of enzymes involved in L-arginine degradation, which leads to biosynthesis of Cr/PCr and spermidine, was higher in infected IECs, and we confirmed experimentally the presence of elevated levels of these metabolites. Although we detected a ca. 100-fold increase in the level of spermidine in infected IECs, we cannot conclude that this is due solely to increased spermidine production by infected IECs, as spermidine can also be generated by C. rodentium and the commensal flora. Importantly, in addition to being a substrate for Cr biogenesis, L-arginine is also used by iNOS, the protein with the sixth highest fold change in response to infection. The inflammatory response to C. rodentium infection leads to robust decoration of the apical surface of IECs with iNOS (Vallance et al., 2002); yet, although sensitive to NO (Vallance et al., 2002), C. rodentium thrives while forming intimate attachments with the plasma membrane of IECs. We therefore suggest that, by triggering disruption of the mitochondria, C. rodentium forces IECs to tilt the balance away from iNOS and NO production and toward Gatm, Cr, and polyamines, which themselves inhibit iNOS (Southan et al., 1994). This rebalancing may represent an immune evasion strategy. Importantly, at 14 DPI the abundance of Gatm returned to the pre-infection level, while the abundance of iNOS remained at the level seen at 8 DPI, which could potentially contribute to C. rodentium clearance. Indeed, iNOS-deficient mice display a small but significant delay in bacterial clearance (Vallance et al., 2002). To the best of our knowledge, our study is the first to show that such an evasion strategy might operate in vivo in IECs. Previous studies have demonstrated that, while infecting the macrophage cell line RAW264.7 in vitro, Salmonella typhimurium upregulates expression of Arg2 to divert arginine away from iNOS (Lahiri et al., 2008). While we detected lower abundance of both Arg1 and Arg2 in IECs during C. rodentium infection, subversion of substrates away from iNOS might be a common mechanism of innate immune evasion by pathogenic bacteria.

While lipid biogenesis in general was downregulated in infected IECs, one of the most conspicuous consequences of C. rodentium infection was activation of Srebp2 and cholesterol biogenesis, despite the high energy cost involved. Moreover, although apparently in limited supply, the available acetyl-CoA appeared to be diverted to cholesterol biogenesis. This is the first time the cholesterol biosynthetic pathway has been shown to be induced in IECs in response to an enteric infection. Our current understanding of the function cholesterol plays in innate immunity comes mainly from studies of macrophages, where a positive feedback loop augments inflammatory responses. Macrophages containing elevated levels of cholesterol, e.g., in abca1 knockdown cells or in hypercholesterolemia, not only contain higher levels of TLR4 and TLR9 but are also hyper-responsive to lipopolysaccharide as well as to TLR2, TLR7, and TLR9 agonists (Tall and Yvan-Charvet, 2015). As TLR signaling, e.g., via IL-6 or TNF-α, triggers activation of Srebp2 (Gierens et al., 2000) and decreases cholesterol efflux, the cholesterol content of lipid rafts increases, which further amplifies activation of TLRs and NF-κB signaling (Tall and Yvan-Charvet, 2015).
Although no equivalent data are available for IECs, activation of the cholesterol biosynthetic pathway may represent an important arm of the innate immune response to bacterial infection at mucosal surfaces. Indeed, the TLR adaptor MyD88 is essential for host survival and optimal immunity following C. rodentium infection (Collins et al., 2014). Moreover, C. rodentium infection triggers rapid NF-κB nuclear translocation and robust recruitment of macrophages and neutrophils, which is diminished in TLR4-deficient mice. In addition, TLR2-/- mice succumb to C. rodentium infection (Collins et al., 2014). Consistent with this, a large proportion of the C. rodentium T3SS effectors are dedicated to dampening these innate immune processes (Pallett et al., 2017; Pearson et al., 2016).

[Figure 6 legend, fragments: C. rodentium Triggers Production and Secretion of Cholesterol. (A) Schematic representation of the Ampk-regulated proteins and downstream pathways, suggesting that Ampk is inactive; the transcription factor TP53 was predicted to be inhibited (blue), whereas Srebp2 was predicted to be activated (orange), promoting the cell cycle and cholesterol biosynthesis, respectively (quantification is shown in Figure S1F). (B) Phosphorylation of Ampk-α in control and infected IECs. (C) Schematic representation of the regulated proteins in the cholesterol biosynthetic pathway, showing a global increase in enzyme abundance (quantification is shown in Figure S1G); proteins below the significance threshold (log2 fold change > 0.59 or < -0.59) are shown in gray.]
Our data suggest that, while cholesterol biogenesis appears to be an innate immune IEC response to infection, the increased abundance of Abca1, Abcg5/8, ApoA1, and cholesterol efflux, concomitant with cholesterol production could represent yet another layer of defense C. rodentium erects while battling host immunity. Taken together our data suggest that C. rodentium subverts metabolism in IECs to evade immune responses and change the oxygen availability at the apical surface of IECs. As IECs adapt to C. rodentium-induced disruption of the mitochondria by increasing glucose uptake, feeding glycolysis, and disseminating ATP via PCr, L-arginine is diverted from iNOS and NO production. Moreover, C. rodentium infection appears to dampen TLR4 signaling by triggering cholesterol efflux. As controlling the cholesterol circuit involves the pathogen, IECs, and inflammation, this phenotype has not been observed, or could not be easily studied, in cell culture models. Indeed, infection of Figure S5). (B) Relative abundance (average) of the different phyla found in tissue-associated microbiota; *Mann-Whitney test with p value < 0.05. (C-E) Proteobacteria (C), Firmicutes (D), and Bacteroidetes and Tenericutes (E) genus abundances of tissue-associated microbiota. All data in (C-E) have a p value < 0.05 (Mann-Whitney test with FDR corrected). Each dot represents individual mouse and bars show the means. Caco-2 cells with EPEC impacts on central metabolism but does not induce the cholesterol biosynthetic pathway (Hardwidge et al., 2004). Our data suggest that subversion of the central carbon metabolism in IECs is an important infection strategy, which is likely to be shared between C. rodentium and human pathogens. Therefore, our findings could open the way for development of new intervention strategies, either directly applied to the host, or indirectly via microbiome-based metabolite treatment (Suez and Elinav, 2017). STAR+METHODS Detailed methods are provided in the online version of this paper and include the following: CONTACT FOR REAGENT AND RESOURCE SHARING Further information and requests for resources and reagents should be directed to and will be fulfilled by the Lead Contact, Gad Frankel (g.frankel@imperial.ac.uk). EXPERIMENTAL MODEL AND SUBJECT DETAILS Bacterial Strain C. rodentium strains listed in Table S1 were grown at 37 C in Luria-Bertani (LB) with necessary antibiotics as indicated in Tables S1 and S2 at the following concentrations: nalidixic acid (50 mg/ml), kanamycin (50 mg/ml), streptomycin (50 mg/ml) or gentamicin (10 mg/ml). Animals All animal experiments were performed in accordance with the Animals Scientific Procedures Act 1986 and were approved by the local Ethical Review Committee and UK Home office guidelines. Experiments were designed in agreement with the ARRIVE guidelines (Kilkenny et al., 2010), for the reporting and execution of animal experiments, including sample randomization and blinding. Pathogen-free female C57BL/6 mice (18 to 20 g) or C3H/HeNCrl mice (18 to 24 g) were purchased from Charles River, UK. All mice were housed in individually HEPA-filtered cages with sterile bedding (Processed corncobs grade 6), nesting (LBS Serving technology) and free access to sterilized food (LBS Serving technology) and water. A minimum of 4 and a maximum of 8 mice randomly assigned for each group were used per experiment. Each experiment was repeated a minimum of two times. METHOD DETAILS Generation of C. 
Generation of the C. rodentium map Mutant
All plasmids and primers used are listed in Tables S2 and S3, respectively. The map flanking regions were synthesized by GeneArt (ThermoFisher) and sub-cloned into the pSEVA612S vector. Alternatively, map and its flanking regions were PCR amplified (primers DC074 and DC075) from purified C. rodentium genomic DNA. The PCR amplicon was purified using a PCR purification kit (Qiagen), digested in CutSmart buffer at 37 °C for 2 hours with the high-fidelity enzymes SacI and SphI (New England Biolabs), and ligated into the pSEVA612S vector using T4 ligase (New England Biolabs) for 2 hours at room temperature. The pSEVA612 derivatives were then chemically transformed into E. coli CC118λpir (Herrero et al., 1990). The map gene was deleted from C. rodentium ICC180 pre-transformed with pACBSR, a plasmid containing the endonuclease I-SceI (Ruano-Gallego et al., 2015), using tri-parental conjugation. The donor strain (E. coli CC118λpir containing the pSEVA612 derivatives), the helper strain (E. coli CC1047 (Kaniga et al., 1991) containing pRK2013 (Figurski and Helinski, 1979)), and ICC180-pACBSR were combined and grown for at least 6 h prior to overnight selection on LB agar supplemented with gentamicin and spectinomycin. Selected colonies were grown in LB broth supplemented with L-arabinose (0.4%) for a minimum of 6 h to induce the I-SceI endonuclease from the pACBSR plasmid. Cultures were subsequently streaked out for overnight growth on LB spectinomycin. Colonies were screened by PCR for successful map deletion, using primers DC084 and DC085. The same method was used for re-insertion of map into the genome. C. rodentium strains were sequenced (GATC Biotech) to confirm deletion and re-insertion of map.

Oral Gavage of Mice and CFU Counts
Mice were inoculated by oral gavage with 200 μl of an overnight LB-grown C. rodentium suspension concentrated 10× in PBS (~5 × 10^9 colony-forming units (CFU)). Uninfected mice were mock treated with PBS (200 μl). The number of viable bacteria used as inoculum was determined by retrospective plating onto LB agar containing nalidixic acid. Stool samples were recovered at regular intervals after inoculation, and the number of viable bacteria per gram of stool was determined by plating onto LB agar containing nalidixic acid. To determine tissue-associated CFU, 4 cm of distal colonic tissue was harvested, opened longitudinally to allow removal of stools, washed in PBS, and homogenized in 10 ml PBS per gram of tissue using a gentleMACS automated tissue dissociator (Miltenyi Biotech). The aqueous layer was plated on LB agar containing nalidixic acid, and the CFU were quantified.

Extraction of Enterocytes
At 8 DPI, a 4-cm segment of terminal colon was cut longitudinally, placed in 4 ml enterocyte dissociation buffer (1× Hanks' balanced salt solution without Mg and Ca, containing 10 mM HEPES, 1 mM EDTA, and 5 μl/ml β-mercaptoethanol), and incubated at 37 °C with shaking for 45 min. The enterocytes were collected by centrifugation (2,000 × g for 10 min) followed by two PBS washes. Enterocyte pellets were either kept frozen for proteomic analysis and western blotting or fixed in 4% formaldehyde for immunofluorescence staining.

Immunostaining of IECs
Fixed enterocytes were permeabilized with 0.1% Triton and stained with rabbit polyclonal anti-O152 antiserum (a gift from Claire Jenkins, Public Health England) for 20 min, followed by 30 min of incubation with a standard secondary antibody as described above and with phalloidin-TRITC (Sigma) to visualize actin filaments.
Samples were analyzed with an Axio Imager M1 microscope (Carl Zeiss MicroImaging GmbH, Germany); images were acquired using an AxioCam MRm monochrome camera and processed using AxioVision (Carl Zeiss MicroImaging GmbH, Germany).

Tissue Staining and CCH Measurement
Half a centimeter of the terminal colon of each mouse was collected, flushed with PBS, and fixed in 1 ml 10% neutral buffered formalin. Formalin-fixed tissues were then processed, paraffin-embedded, and sectioned at 5 μm. Formalin-fixed, paraffin-embedded (FFPE) sections were either stained with hematoxylin and eosin (H&E) using standard techniques or treated with sodium citrate antigen de-masking solution prior to immunofluorescence. Primary antibodies were used at 1:200 dilution for anti-intimin (a gift from Professor Fairbrother, Montreal University) and at 1:50 for E-cadherin (CD324; BD Biosciences) and Ki-67 (SP6; Thermo Scientific), followed by secondary antibodies from Jackson ImmunoResearch used at 1:200 dilution (donkey anti-chicken Cy3, donkey anti-mouse AMCA, donkey anti-rabbit AlexaFluor 488). H&E-stained tissues were evaluated blindly for CCH by microscopically measuring the length of at least 20 well-oriented crypts per section for all mice in each treatment group. Similarly, Ki-67 staining was assessed microscopically by measuring the distance from the bottom of the crypt to the last stained nucleus; for comparison, Ki-67 staining was expressed as a ratio over the total length of the crypt. Tissues were imaged with an Axio microscope, images were acquired using an Axio camera, and computer-processed using AxioVision (Carl Zeiss MicroImaging GmbH, Germany).

Bioluminescence Imaging
For bioluminescence imaging (BLI), C3H/HeNCrl mice were depilated using hair removal cream (Veet) prior to infection to remove pigmented fur that might interfere with signal output. At 6 DPI, animals were imaged using the IVIS Spectrum CT system (Perkin Elmer) under gaseous anesthesia with isoflurane (Zoetis).

Fecal Cholesterol Measurement
Total fecal cholesterol (cholesterol esters and free cholesterol) was quantified using a colorimetric reaction per the manufacturer's recommendations (Cell Biolabs, STA-384). Stools were harvested from uninfected and infected mice at 8 DPI, vacuum dried, weighed, crushed to a powder, and extracted in 800 μl of a chloroform:isopropanol:NP-40 mixture (7:11:0.1). The colorimetric signal was analyzed using a spectrophotometric microplate reader in the 540-570 nm range.

[...] resolution ranging from 10,000 to 25,000 over the m/z range of 121-955 atomic mass units, and a 100,000-fold dynamic range with picomolar sensitivity. The data were collected in centroid mode in the 4 GHz (extended dynamic range) mode.

16S rRNA Gene Sequencing
Colons were collected from mice, and DNA was isolated using the PowerSoil DNA Isolation Kit (MO BIO Laboratories). For 16S amplicon sequencing, PCR amplification was performed spanning the V3 and V4 regions of the 16S rRNA gene using the primers 515F/806R, and amplicons were subsequently sequenced using 500-bp paired-end sequencing (Illumina MiSeq). Reads were then processed using the QIIME (quantitative insights into microbial ecology) analysis pipeline with USEARCH against the Greengenes database. Importantly, the C. rodentium 16S rRNA sequence was classified by the Greengenes database as Enterobacter, as the two differ by only 8 bp (hence the combined classification in Figure 7C).

Statistical Analysis
GraphPad Prism software was used for all statistical calculations. The statistical test used was Mann-Whitney versus controls (or as indicated in the figure legends). P values < 0.05 were considered significant. For the microbiota, p values were FDR corrected using the Benjamini-Hochberg method.
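As an illustrative sketch, the per-genus differential-abundance test just described (Mann-Whitney with Benjamini-Hochberg correction) can be written in a few lines of Python; the paper used GraphPad Prism, and the abundance values below are synthetic placeholders.

```python
# Hedged sketch: Mann-Whitney per genus + Benjamini-Hochberg FDR correction.
import numpy as np
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(5)
n_genera = 40
uninf = rng.lognormal(0.0, 1.0, size=(n_genera, 5))  # 5 uninfected mice
inf = rng.lognormal(0.5, 1.0, size=(n_genera, 5))    # 5 infected mice

pvals = [mannwhitneyu(u, i).pvalue for u, i in zip(uninf, inf)]
reject, qvals, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(f"{int(reject.sum())} of {n_genera} genera significant after FDR correction")
```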
The statistical test used was the Mann-Whitney test compared to controls (or as indicated in the figure). P-values < 0.05 were considered significant. For the microbiota, p-values were FDR corrected using the Benjamini and Hochberg method.

Quantification of BLI and Statistical Analyses
Analysis of IVIS® Spectrum images was carried out in the Living Image software. Photons from regions of interest (ROI) of a defined size (3.5 × 5 cm) were quantified as total photon flux (p/s). All statistical analysis was carried out using GraphPad Prism 7.0. A multiple t-test was used to identify statistical significance for the total flux output of bioluminescence images.

DATA AND SOFTWARE AVAILABILITY
The mass spectrometry proteomics data have been deposited to the ProteomeXchange Consortium via the PRIDE partner repository with the dataset identifier PXD005004.

Supplementary figure and tables
Figure S1. Relative abundance of metabolic proteins (related to Figures 2A, 2D, 3A, 5A, 5B, 6A and 6C). Bar plots showing the relative abundance of the individual proteins: A. mitochondrial membrane proteins (related to Figure 2A), B. TCA cycle (related to Figure 2D), C. creatine biosynthesis (related to Figure 5B), F. AMPK pathway (related to Figure 6A) and G. schematic representation of the β-oxidation cycle, with the affected proteins during infection (related to Figure 2A). The bar plot shows the relative abundances of the individual proteins. Schematic representation of lipid elongation and phosphatidic acid production, with the affected proteins during infection (related to Figure 3B). The bar plot shows the relative abundances of the individual enzymes. Proteins below the significance threshold (log2FC > 0.59 or < -0.59) are shown in a lighter shade.
Figure S4. Lipid profile of C. rodentium (related to Figure 3B). A. MALDI-TOF negative ion mass spectra of C. rodentium. The absolute abundance of the ions is shown on the y axis, and the masses of the ions are shown on the x axis. The m/z represents the mass-to-charge ratio. B. Relative abundance of phosphatidylinositol detected in uninfected and infected IECs. Mann-Whitney test with p-value < 0.05. Each dot represents an individual mouse and bars the geometric mean. Both panels are related to Figure 3B.
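The Benjamini-Hochberg FDR correction applied to the microbiota p-values above is a standard step-up procedure; a minimal sketch in Python (the example p-values are hypothetical, not taken from this study):

```python
import numpy as np

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg FDR correction: returns adjusted p-values (q-values).

    For p-values sorted ascending, q_i = p_i * m / rank_i; a cumulative
    minimum taken from the largest rank downward enforces monotonicity.
    """
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)                         # indices that sort p ascending
    ranked = p[order] * m / np.arange(1, m + 1)   # p_i * m / rank_i
    q_sorted = np.minimum.accumulate(ranked[::-1])[::-1]
    q = np.empty(m)
    q[order] = np.clip(q_sorted, 0.0, 1.0)
    return q

# Example: per-taxon p-values from Mann-Whitney tests on abundances
pvals = [0.001, 0.008, 0.039, 0.041, 0.27, 0.61]
print(benjamini_hochberg(pvals))
```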
Correlated photon-pair generation in a liquid-filled microcavity We report on the realization of a liquid-filled optical microcavity and demonstrate photon-pair generation by spontaneous four-wave mixing. The bandwidth of the emitted photons is $\sim 300$ MHz and we demonstrate tuning of the emission wavelength between 770 and 800 nm. Moreover, by employing a liquid as the nonlinear optical medium completely filling the microcavity, we observe more than a factor $10^3$ increase of the pair correlation rate per unit pump power and a factor of 1.7 improvement in the coincidence/accidental ratio as compared to our previous measurements.

Introduction
Nonclassical states of light, such as correlated photon pairs [1,2] or anti-bunched photons, have contributed significantly to tests of fundamental quantum mechanics and to the interconnection of remote quantum systems [3]. One particular challenge when coupling different quantum systems is to match their wavelengths [4,5] or to bridge wavelength gaps. Different methods from non-linear optics, such as spontaneous parametric downconversion (SPDC) and spontaneous four-wave mixing (SFWM), have been employed to generate correlated photon pairs. SPDC sources with short crystals and SFWM sources, for example in optical fibers [6,7], employ high-intensity, short-pulse pump lasers propagating through non-linear optical media. The simplicity of these schemes results from the weak requirements regarding phase matching and leads to correlated photon pair emission in a broad bandwidth of several THz. In contrast, narrow-band sources of photon pairs have often utilized non-linear media in optical cavities in order to enhance the field strength and control the emission bandwidth. For a recent compilation of parameters see [8]. Little attention has generally been devoted to the generation of photon pairs in liquid non-linear media [9], and no experiments in optical cavities have been reported. In this work, we propose and demonstrate a novel approach of using a liquid-filled optical microcavity to prepare correlated photon pairs with tuneable wavelength separation by SFWM. The four-wave mixing process absorbs two photons from the continuous-wave pump light field at frequency $\omega_0$ and produces photon pairs at frequencies $\omega_{n,\pm} = \omega_0 \pm n \cdot \omega_{\rm FSR}$, where $\omega_{\rm FSR} = \pi c / L$ denotes the free spectral range of the Fabry-Perot cavity of length $L$, $c$ is the speed of light, and $n$ is an integer. In principle, the usable range of $n$ is only limited by the bandwidth of the high-reflectivity coating of the cavity mirrors and the transparency of the liquid. The liquid-filled approach to the optical cavity has significant advantages: (1) many liquids exhibit significantly higher non-linear refractive indices than solids, and (2) there is no additional interface between the non-linear medium and the mirrors, which would affect the cavity performance. The latter is an issue, in particular, for the highly-curved mirror of our microcavity. The optical Kerr effect in solids has a very fast response time, typically in the few-femtosecond range or below. This has been confirmed by experiments with attosecond laser pulses [10] and rapidly-oscillating optical fields [11]. In liquids, however, the situation is more complex [12,13]. The optical Kerr effect has a contribution from both the purely electronic degree of freedom and the reorientation dynamics of the molecule if it has an anisotropic polarizability.
The latter has a complicated dynamics, as it involves intermolecular interactions as well as vibrations and rotations. There is a longstanding tradition of studying the optical Kerr effect in liquids using pump/probe schemes with adjustable time delay, and it has been found that the peak Kerr response is delayed with respect to the pump field by a few ps and that the instantaneous response is smaller by one or two orders of magnitude as compared to the peak response [14].

Experiment
We have constructed a Fabry-Perot microcavity composed of a micromachined and coated endfacet of an optical fiber as one mirror [15,16,17,18,19,20,21,22,11] and a conventional planar mirror with an identical coating as the second mirror (see Figure 1). The length of the cavity is $L = 38.4\,\mu$m and the finesse is $\mathcal{F} = \pi/(T + \mathcal{L}) = 12500 \pm 500$ with a nominal mirror transmission of $T = 100$ ppm and intracavity losses of $\mathcal{L} \sim 100$ ppm per mirror. The radius of curvature of the fiber mirror is $R = 200\,\mu$m, giving rise to a $1/e^2$ beam radius on the planar mirror of $w_0 = 3.5\,\mu$m. This small mode waist enhances the desired nonlinear effects. We have filled the cavity with the synthetic silicone oil tetramethyl-tetraphenyl-trisiloxane, which has high optical transparency. The refractive index of the oil leads to an increase of the optical path length of the cavity, which has been detected as a change of the free spectral range from $(3.901 \pm 0.001)$ THz to $(2.507 \pm 0.002)$ THz. From this we have determined the refractive index of the oil as $n = 1.556$ [23]. However, the absorption coefficient of the oil has not been accurately determined previously. After filling the microcavity with the oil, we have not observed a change of the cavity finesse. Instead, the cavity linewidth decreased from 313 MHz to 200 MHz. Including the effect of the changing mirror reflectivity in the presence of the oil, this sets an upper limit on the absorption coefficient of $\alpha = 5\,\mathrm{m}^{-1}$, which is comparable to pure SiO$_2$ and indicates that the oil is a very high quality optical medium. Strong pumping of the cavity with intracavity intensities of up to $10^{11}\,\mathrm{W/m^2}$ leads to significant thermal effects. For example, we observe the characteristic [24,25,11] bistable cavity line shape, which is a result of the optical path length change caused by absorption-induced heating from the high-intensity pump field inside the oil. To lowest order, it can be modeled by an additional power-dependent detuning in the Lorentzian cavity lineshape [24],

$$T(\Delta) \propto \frac{1}{1 + 4\,(\Delta - \beta)^2},$$

with the detuning $\Delta = (\nu - \nu_{\rm res})/\delta\nu$, the natural cavity linewidth $\delta\nu$, and the lineshift $\beta = \tilde{\beta}\, P_{\rm cav}$. The lineshift broadens the resonance by several tens of the natural linewidth (see Fig. 2). We measure the lineshift as a function of the transmitted power and observe a linear behaviour (see Fig. 2c) [24]. From this we deduce a constant outcoupling efficiency and finesse, even for increasing powers, and hence the intracavity power can be determined from the transmitted power. Moreover, we estimate the temperature increase inside the cavity by connecting the resonance shift to the temperature increase $\delta T$ inside the cavity via

$$\delta T = \frac{n\,\beta}{Q\,C_{\rm TO}},$$

with the quality factor $Q = 2 \times 10^6$, the thermo-optic coefficient $C_{\rm TO}$ and the refractive index $n = 1.56$. The tabulated thermo-optic coefficients of various silicone oils are in the range of $C_{\rm TO} \sim 3 \times 10^{-4}\,\mathrm{K^{-1}}$ [26]; however, the value for our specific oil is not known. Using the average value, we estimate a temperature increase of $\delta T = 0.2$ K.
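As a plausibility check of this estimate, the relation above can be evaluated directly; a minimal sketch, assuming an illustrative lineshift of 40 linewidths (the text states only that the lineshift reaches several tens of linewidths):

```python
# Intracavity temperature rise from the measured lineshift, using the
# relation delta_T = n * beta / (Q * C_TO) as reconstructed above.
# beta is an assumed illustrative value, not a measured one.
n = 1.556      # refractive index of the silicone oil
Q = 2e6        # cavity quality factor
C_TO = 3e-4    # thermo-optic coefficient, K^-1 (tabulated average)
beta = 40      # lineshift in units of the natural linewidth (illustrative)

delta_T = n * beta / (Q * C_TO)
print(f"estimated temperature increase: {delta_T:.2f} K")  # ~0.1 K
```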
The comparatively low temperature increase results both from the low absorption coefficient and from the fact that heat convection in liquids leads to a much faster heat dissipation than heat conduction alone, which would be the mechanism in solids. In order to minimize variations resulting from thermal effects, we lock the cavity to the pump laser using a Pound-Drever-Hall locking scheme, and as a result the residual power fluctuations are below 2%. Using two additional lasers, we measure the frequency difference to the higher-frequency (+) and lower-frequency (−) longitudinal modes of the same order $n$. We then shift the center frequency of the cavity to the dispersion-compensated point such that both free spectral ranges are equal to within the cavity linewidth. This dispersion compensation ensures that both the phase-matching condition and energy conservation are fulfilled. Generally, we find that different pump power levels affect the dispersion compensation, and we perform the compensation individually for each power. The output of the Fabry-Perot cavity is spectrally dispersed by a home-built grating spectrometer with a resolution of $\lambda/\delta\lambda = 6000$, and the output is recorded with a pair of single photon counters (SPCM). Additional dielectric line filters protect the SPCMs from stray light. Both line filters have a linewidth of 3 nm and transmit light at the respective detection wavelengths while providing a suppression of the pump stray field on the SPCM of at least four orders of magnitude. The measured quantum efficiencies of the two beam paths from the cavity are 9.9% and 7.2%, including output coupling from the cavity and photon detection efficiencies. We record the SPCM signals using a time-to-digital converter with a timing resolution of 40 ps, much better than the SPCM timing jitter of 350 ps, and subsequently perform a correlation analysis with adjustable bandwidth.

Results
In Figure 3a we show a typical two-photon correlation measurement of order $n = 2$, i.e., with photon frequencies of $\omega_{2,-} = 2\pi \times 377.155$ THz and $\omega_{2,+} = 2\pi \times 387.155$ THz, for an intracavity power of 0.58 W. The correlation signal shows a coincidence-to-accidental ratio (CAR) of $3.4 \pm 0.5$ and thus clearly signals the nonclassical nature of the emitted photon pairs [27]. The full width at half maximum (FWHM) of the correlation peak is $(1.06 \pm 0.08)$ ns, which corresponds to a cavity linewidth of $(328 \pm 19)$ MHz. We attribute the increased width to a higher transmission of the dielectric coating at the signal and idler frequencies than at the pump frequency. In Figure 3b we show the scaling of the rate of photon pairs vs. intracavity power. As expected for a spontaneous four-wave mixing process, we observe a quadratic dependence on pump power. We compute the expected flux $\Gamma$ of photon pairs from SFWM in the optical cavity following [28]. The quantities $n_2(z)$ and $I(z)$ entering this expression are the nonlinear refractive index and the light intensity, respectively, and $k = 2\pi n/\lambda_{\rm vac}$ is the wave vector of the light with the refractive index $n$. The intensity is connected to the intracavity power via $I = P/(\pi w_0^2)$. Note that the quantity $n_2$ is proportional to the Kerr constant evaluated at an optical frequency and hence will be lowered as compared to the tabulated dc ($\omega \to 0$) value of $n_2^{\rm dc} = 44 \times 10^{-20}\,\mathrm{m^2\,W^{-1}}$ [29] by the mechanisms discussed in the introduction.
In principle, the integral over the Kerr coupling would even include the non-linear effects arising from the mirror coating [30,11]; however, these are approximately two orders of magnitude smaller and can be neglected. Comparing the theoretically expected flux $\Gamma$ to the experimentally determined rate coefficient $\Gamma_{\rm exp} = (1.12 \pm 0.05)\,\mathrm{W^{-2}\,s^{-1}}$ (see Figure 3b), we deduce the nonlinear refractive index $n_2 = (3.62 \pm 0.31) \times 10^{-20}\,\mathrm{m^2\,W^{-1}}$, which is one order of magnitude lower than the dc value and comparable to SiO$_2$. On both spectrometer channels $(+n)$ and $(-n)$ we find a linear dependence of the count rate $R_{\pm n} = \gamma_{\pm n} \cdot P$ on the power coupled into the cavity. We interpret this result as Raman scattering in the medium giving rise to background emission centered near the pump wavelength. The count rates are several orders of magnitude above the detector dark count rates, and they limit the CAR to

$$\mathrm{CAR} = \frac{2}{\pi \tau_c}\,\frac{\Gamma_{\rm exp}}{\gamma_1 \gamma_2}.$$

We have extracted this formula by assuming a Cauchy distribution of the correlated photon pairs of width $\tau_c$ and a binning time much shorter than the correlation time, which is well fulfilled in our experiment. This shows that better background subtraction by spectral filtering at higher orders $n$ facilitates the detection of correlations with higher CAR. For $n = 2$ we compute a CAR of $3.3 \pm 0.3$, which is in excellent agreement with the measured values depicted in Figure 3c. We have also observed correlated photon pairs in the third order, at frequencies $\omega_{3,-} = 2\pi \times 374.831$ THz and $\omega_{3,+} = 2\pi \times 389.988$ THz for $\omega_0 = 2\pi \times 382.410$ THz and 0.67 W intracavity power. This demonstrates the versatility of our approach to generate photon pairs at a controllable frequency difference. In this context, the liquid filling of our cavity offers the unique advantage that continuous changes of the cavity length, and hence continuous adjustments of the frequency of the emitted photon pairs, are possible. The measured photon rate coefficient in third order is $(0.58 \pm 0.22)\,\mathrm{W^{-2}\,s^{-1}}$. In principle, the rate of photon pairs should be independent of the order of the free spectral range. However, in our realization, we have observed some optical damage from the high-intensity experiments in the second order, in addition to the lower reflectivity for increasing order $n$, with both effects reducing the finesse. In conclusion, we have demonstrated a liquid-filled microcavity at very high finesse and show that liquids, in addition to their high nonlinearities, can exhibit very low absorption coefficients (comparable to pure SiO$_2$), which makes them attractive for non-linear optics studies. We have studied the generation of correlated photon pairs from such a microcavity using spontaneous four-wave mixing. Photons are emitted in pairs of longitudinal modes equally split from the pump laser frequency, and we have detected correlated pairs up to order $n = 3$, thereby demonstrating a photon pair source with adjustable frequency spacing between the photons of a pair. This work was supported by DFG (SFB/TR 185, A2), Cluster of Excellence Matter and Light for Quantum Computing (ML4Q) EXC 2004/1 390534769, BMBF (FaResQ),
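As a numerical illustration of the CAR limit above, a minimal sketch; the background rate coefficients γ1 and γ2 are assumed values chosen to give a CAR near the computed 3.3, since their numerical values are not quoted in the text:

```python
import math

# CAR estimate from the reconstructed relation
# CAR = (2 / (pi * tau_c)) * Gamma_exp / (gamma1 * gamma2).
gamma_exp = 1.12          # pair rate coefficient, W^-2 s^-1 (from the text)
tau_c = 1.06e-9           # correlation time (FWHM of the peak), s
gamma1 = gamma2 = 1.4e4   # assumed Raman-background coefficients, W^-1 s^-1

car = (2 / (math.pi * tau_c)) * gamma_exp / (gamma1 * gamma2)
print(f"CAR ≈ {car:.1f}")  # ~3.4 for these assumed background levels
```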
Identity, proliferation capacity, genomic stability and novel senescence markers of mesenchymal stem cells isolated from low volume of human bone marrow Human bone marrow mesenchymal stem cells (hBM-MSCs) hold promise for treating incurable diseases and repairing damaged tissues. However, hBM-MSCs face the disadvantages of painful invasive isolation and limited cell numbers. In this study we assessed characteristics of MSCs isolated from residual human bone marrow transplantation material and expanded to clinically relevant numbers at passages 3-4 and 6-7. Results indicated that early passage hBM-MSCs are genomically stable and retain identity and high proliferation capacity. Despite the chromosomal stability, the cells became senescent at late passages, in parallel with slower proliferation and altered morphology and immunophenotype. By qRT-PCR array profiling, we revealed 13 genes and 33 miRNAs significantly differentially expressed in late passage cells, among which 8 genes and 30 miRNAs emerged as potential novel biomarkers of hBM-MSC aging. Functional analysis of genes with altered expression showed strong association with biological processes causing cellular senescence. Altogether, this study revives hBM as a convenient source for cellular therapy. The potential novel markers provide new details for better understanding the hBM-MSC senescence mechanisms, contributing to basic science, facilitating the development of cellular therapy quality control, and providing new clues for human disease processes, since the senescence phenotype of hBM-MSCs from hematological patients has only very recently been revealed.

INTRODUCTION
Human mesenchymal stem cells (hMSCs) are nonhematopoietic, adherent fibroblast-like cells with an intrinsic ability of self-renewal and potential for multilineage differentiation [1]. The stromal compartment of bone marrow (BM) was the first biological material from which MSCs were isolated. Since then, BM-derived MSCs have been the most widely studied and are thought to be key regulators of BM physiology [2]. MSCs are the major stem cells for cell therapy and have been used in the clinic for approximately 10 years [3]. Currently, BM represents the major source of MSCs for clinical use [4]. Stem cell-based therapy using human BM-MSCs (hBM-MSCs) holds promise for treating degenerative diseases and cancer, and for the repair of damaged tissues, where limited therapeutic options exist [5]. For example, Wernicke et al. reported a high (73.8%) overall response to MSC therapy of life-threatening severe steroid-refractory graft-versus-host disease [6]. A disadvantage of using hBM-MSCs is the limited cell number obtainable by invasive isolation techniques [7]. This has led many researchers to investigate alternate sources of human MSCs, including adipose tissue [8] and umbilical cord [9], that can be used in the clinical setting. High quantities of MSCs are needed for clinical applications, thus requiring extensive cell expansion in long-term culture [10]. However, the occurrence of karyotypic instability in cultured hBM-MSCs has been documented. It is recognized that genome instability enables tumor cells to acquire their characteristics [11]; therefore, the tumorigenic potential of hMSCs has become the most important concern for the clinical use of MSCs [12]. However, hBM-MSC studies have presented highly conflicting results. It has been shown that hBM-MSCs in vitro acquire chromosomal aberrations, undergo spontaneous transformation and form tumors in vivo [13].
In contrast, other groups have documented a normal karyotype throughout hBM-MSC culture and no malignant transformation in vivo [14,15]. In addition, it has been shown that hBM-MSCs do not transform spontaneously in vitro and that chromosomal instability occurs without leading to malignant transformation, possibly being only a sign of cell senescence [16]. Cellular senescence, which refers to irreversible cell growth arrest [17], is another issue related to hBM-MSC cultivation. It limits the proliferative capacity of primary cells in culture [18], impairs the therapeutic potential of hBM-MSCs [19], and increases the risk of cell neoplastic transformation [20,21]. Although some publications reporting the alarming finding of malignant transformation of hMSCs [22], including hBM-MSCs [23], have later been retracted [24,25], there is still debate concerning the genetic stability of hMSCs and the implications for clinical safety [26,27]. It is of great scientific interest to investigate MSCs isolated from low human bone marrow volume for potential medical use. Recently, our group has shown that MSCs can be successfully isolated by the red blood cell lysis method from residual bone marrow transplantation material and expanded in vitro to clinically relevant numbers [28]. The aim of this study was therefore to assess the hBM-MSC immunophenotype as proposed by The International Society for Cellular Therapy (ISCT) [29]; to evaluate proliferative capacity, senescence status and cytogenetic stability, as determined by The European Medicines Agency (EMA) [30]; and to apply array technology as suggested by the U.S. Food and Drug Administration (FDA) [31]. Our results highlight the identity, proliferation capacity, and genomic safety of MSCs isolated from low human bone marrow volume and reveal 38 potential new hBM-MSC senescence markers during prolonged cultivation in vitro.

Morphology
We observed hBM-MSCs microscopically at every passage. Adherent, long spindle-shaped or flat fibroblast-like cells were detected 24-48 hours after isolation. This morphology was retained up to passages 3-4 (P3-P4) (Figure 1A). Later on, the proportion of enlarged cells with altered morphology gradually increased, which became obvious at late passages 6-7 (P6-P7) (Figure 1B). The average spread cell area was significantly enlarged at late passages of individual samples (Figure 1C).

Proliferation
MSCs showed slower proliferation after isolation and reached P1 after 21±6 days (Figure 1D). From P1 to P3 (sample #1) or P4 (samples #2 and #3) the cells proliferated faster, and CPDs reached 8.08±0.74 after an additional 14.00±2.65 days. In late culture, cellular growth gradually slowed, and it took a further 34.67±5.51 days to reach 15.08±3.04 CPDs.

Flow cytometry analysis
In the early passages over 99% of the cells were positive for CD73, CD90, and CD105, while below 2% of the cells expressed CD11b, CD19, CD45, CD34, and HLA-DR (Figure 2A-2C). However, part of the MSC population of sample #3 lost the expression of positive markers and gained the expression of negative markers at P7 (Figure 2F). The expression of negative markers also increased in sample #2 at P7, although the expression of positive markers remained stable (Figure 2E). The immunophenotype of sample #1 at P6 did not change (Figure 2D). Mean viability of hBM-MSCs was 94.02±2.92% at early passages and 93.47±5.61% at late passages.
The side-scatter (SSC) was 337.00±55.44 units at early passages and 391.67±27.00 units at late passages, although the difference was not statistically significant (P = 0.085).

Gene expression
To further evaluate the hBM-MSCs, we measured the expression of 162 different genes related to stemness, mesenchymal stem cells and cell senescence using commercial qPCR arrays at P3-P4 and P6-P7. Altogether, the expression of 154 genes was detected (Ct < 33) in early passage MSCs and the expression of 156 genes was detected in late passage MSCs. Of the 162 genes, 4 genes were significantly (P < 0.05) up-regulated (≥2-fold) and 9 genes were significantly down-regulated in late passage hBM-MSCs when compared with early passage MSCs (Figure 5A-5B, Table 1). This represents 8.02% of all genes investigated in the study. In order to better understand the underlying biological processes in late passage BM-MSCs, we performed gene enrichment analysis of the set of 13 genes with significantly altered expression (Table 2). The miRNA profiles of hBM-MSCs from early and late passages were analyzed with a commercial qPCR-based array for human miRNA. Overall, the expression of 358 miRNAs was detected (Ct < 33) in early passage hBM-MSCs and the expression of 365 miRNAs was detected in late passage cells. Analysis showed significant (P < 0.05) ≥2-fold changes in the expression of 33 of 420 miRNAs (Figure 5C, Table 1), and these constituted 7.86% of all evaluated miRNAs.

Figure 5. Red dots are the genes whose expression increased more than 2-fold, while green dots are the genes whose expression decreased more than 2-fold. Vertical grey side-lines represent the fold-change cutoff (≥2-fold) and the horizontal blue line represents the p-value cutoff (P < 0.05). Genes and miRNAs whose fold expression changes and P-values exceeded these boundaries are listed in Table 1.

DISCUSSION
In this study we isolated MSCs from residual human bone marrow transplantation material, as described earlier [28], expanded them in vitro to clinically relevant numbers and characterized these cells by evaluating adherence to plastic, morphology, proliferative capacity, immunophenotype, senescence status, karyotype stability, and gene and miRNA expression profiles, as proposed by the ISCT [29], EMA [30], and FDA [31]. hBM-MSC lifespan was categorized as early passage (P3-P4) and late passage (P6-P7) according to proliferation ability and the percentage of SA-β-gal-positive cells, similarly as proposed before [32]. Proliferation is a fundamental property of stem cells, necessary for self-renewal and expansion and defining the stem cell degree of stemness [33]. Population doubling (PD) is a precise way to measure cell growth [34] and is recommended by the Cell Products Working Party (EMA) to describe the time of cells in culture [35].
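A minimal sketch of how PD and CPD accumulate across passages, using the formula given later in the Methods (PD = log(Xb/Xa)·3.3); the seeded and harvested cell counts are illustrative placeholders, not measured values:

```python
import math

def population_doublings(n_initial, n_harvest):
    """PD = 3.3 * log10(X_b / X_a), as defined in the Methods."""
    return 3.3 * math.log10(n_harvest / n_initial)

# Illustrative per-passage counts (seeded cells, harvested cells);
# the real counts per passage are not tabulated in the paper.
passages = [(3.0e5, 1.2e6), (3.0e5, 1.5e6), (3.0e5, 1.1e6)]

cpd = 0.0
for seeded, harvested in passages:
    cpd += population_doublings(seeded, harvested)
    print(f"cumulative population doublings: {cpd:.2f}")
```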
Therefore the cytogenetic analysis is essential for verifying the safety of MSCs [38] since the maintenance of a normal karyotype is a reliable indicator of genetic stability of MSCs [39]. By conventional karyotyping of cultured hBM-MSCs using G-banding, which is still a gold standard of all cytogenetic techniques [40], we showed that BM-MSCs at P3-P4 had a normal karyotype and none of the samples had clonal aberrations ( Figure 4A). These results indicate that the genomic stability of our MSCs would not prevent their potential use in a clinical application, similarly as shown earlier [15]. By expanding hBM-MSCs using additional three passages (until P6-P7) we investigated the possibility to achieve additional clinically relevant amounts of cells. However, in the late passages the hBM-MSCs growth gradually decreased ( Figure 1D) and cells acquired an enlarged flattened morphology ( Figure 1B-1C), indicating MSC senescence [41]. Cellular senescence is defined as irreversible cell cycle arrest [17]. By staining cells for SA-β-galactosidase, the most widely used biomarker for senescent cells [42], we confirmed that after 15.08±3.04 PDs ( Figure 1D) almost half of late passage hBM-MSCs reached senescence, whereas only a few cells stained positively in the early passages ( Figure 3A-3B). Interestingly, the onset of senescence in longterm culture manifested differently on hBM-MSC immunophenotype in each sample. Surface marker expression of #1 sample remained stable throughout the in vitro expansion (Figure 2D), in agreement with Dmitrieva et al. [43] and Somasundaram et al. studies [44]. Dmitrieva et al. demonstrated that hBM-MSC enter senescence after P3-P4, but the cells were CD105/CD90/CD166/CD73 positive and negative for CD34, CD19, CD14 and CD45 at all passages. Somasundaram et al. revealed remarkable (>90%) expression of CD73, CD90, and CD105 and sparse ( < 10%) expression of CD34, CD45, and HLA-DR of hBM-MSC irrespective of extensive culturing when the majority of samples lost potential to grow beyond P15. However, the expression of negative markers increased up to 5.10% in #2 sample in P7, although expression of positive markers remained stable ( Figure 2E). Moreover, part of non-proliferating MSC population of #3 sample lost the expression of positive markers and gained the expression of negative markers in P7 ( Figure 2F). Wagner et al. [45] has demonstrated that in vitro expansion has a major impact on the level of surface marker expression of human BM-MSCs. Surface antigen detection was much higher in early passages when compared to senescent passages. However, quantification (%) of expression was not presented in that study. Our results were unexpected and indicate that identification of late passage senescent MSCs by using cell-surface markers can be complicated. Therefore possible changes in standard surface marker expression during prolonged in vitro expansion require further investigations. We also revealed long-term culturerelated, however not statistically significant, differences in cell granularity, another hBM-MSC senescence associated feature [46]. Interestingly, the karyotype of late passage senescent cells remained stable ( Figure 4B), compatible with data obtained on long-term expanded hBM-MSCs by other groups [5,14] and opposing to the recent finding that senescence-prone human MSCs are highly aneuploid [47]. To date no molecular markers are available, which specifically reflect the degree of cellular aging in a population of MSCs [48]. 
Molecular analysis of a suitable panel of genes might provide a powerful tool to track cellular aging of MSCs and thus to assess the efficiency and safety of long-term expansion [42]. Real-time quantitative PCR is the gold-standard technique for gene expression measurements [49], therefore we investigated the cells using qPCR arrays. Transcriptome analysis of 162 different genes revealed 4 significantly (P < 0.05) up-regulated (≥2-fold) genes and 9 significantly down-regulated genes in P6-P7 hBM-MSCs when compared to P3-P4 cells (Table 1). Pou5f1 (Oct4) is a critical regulator of pluripotency in embryonic stem cells and might be reactivated in response to culture conditions [50]. Exogenous OCT4 overexpression has been shown to induce early senescence of hBM-MSCs [51], broadly consistent with our observations. PTPRC encodes the protein tyrosine phosphatase CD45, not characteristic for hMSCs [29], and its overexpression decreases cytokine-induced signaling [52]. ACTA2, which was the most upregulated gene in our study, encodes a smooth muscle α-actin isoform enabling hBM-MSCs to contract extracellular matrix (ECM) components [53]. THBS1 encodes thrombospondin-1, which is secreted and incorporated into the ECM [54]. We determined THBS1 upregulation in senescent hBM-MSCs, in concordance with the report of Yoo et al. [55]. The PLAU gene encodes the enzyme urokinase-type plasminogen activator (uPA), which regulates ECM degradation, cell adhesion, and inflammatory cell activation [56] and whose activity depends on cytoskeleton reorganization [57]. An impairment of cytoskeleton remodeling and/or organization has been associated with hBM-MSC senescence [58]. E2F1 and E2F3 control the expression of numerous genes involved in DNA replication and cell cycle progression. Deregulation of these transcription factors results in the induction of senescence [59], with the loss of E2F3, which was the most downregulated gene in this study, having the most pronounced effect [60]. TBX2 and TBX3 encode T-box proteins that function as transcriptional repressors [61]. Inhibition of both leads to cell senescence [62]. In contrast to our findings, Choi et al. showed higher TBX2 expression in late passage senescent hBM-MSCs [63]. Chk1 protein kinase is essential for the human G2 DNA damage checkpoint [64] and has been shown to be downregulated in senescent hBM-MSCs [65]. PCNA encodes proliferating cell nuclear antigen, expressed exclusively in actively proliferating cells [66]. E2F1-3 induce expression of PCNA [67], which is regulated by Chk1 [68]. We showed PCNA repression in late passage senescent hBM-MSCs, in compliance with the report of Choi et al. [63]. Human Cdc25C phosphatase is a key activator of the cyclin B1/Cdk1 complex [69], which is essential for entry into mitosis [70]. CDC25C inhibition promotes cell cycle arrest [71], and G2/M arrest is characteristic of stress-induced premature senescence [72]. We showed CCNB1 downregulation in senescent hBM-MSCs, in agreement with the study of Noh et al. [65]. Functional gene ontology analysis revealed that these genes are associated with biological processes such as cell cycle, metabolism, cell aging, and response to stress (Table 2), all of which are important causes of cellular senescence [73]. In sum, these results together with the literature data strongly suggest that the identified 13 genes are interconnectedly related to hBM-MSC premature senescence.
On the other hand, to our knowledge, we show for the first time that the expression of the POU5F1, PTPRC, ACTA2, E2F1, E2F3, TBX3, PLAU and CDC25C genes is altered in senescent hBM-MSCs during long-term expansion in vitro. PCR array data were deposited into the public database Gene Expression Omnibus (http://www.ncbi.nlm.nih.gov/geo/) under accession number GSE68933. The p53/p21/Rb and p16/RB axes are key signaling pathways involved in the induction of cell senescence [74]. In particular, RB and its family members, p107 and p130, are essential for the onset of senescence cell cycle arrest [17]. An unexpected finding of this study was the constant (within 2-fold) expression and/or non-significant changes (P ≥ 0.05) in the expression of these crucial genes (data available in the GEO database under accession number GSE68933). These results stand out from contrary data obtained by other laboratories. Cheng et al. demonstrated that p16, p21, and p53 are significantly upregulated in senescent hBM-MSCs [75]. Shibata et al. showed a significant increase in the expression level of p16 and no significant changes in the expression of p21 and p53 at the end of the hBM-MSC life span [76]; similar results are reported by Tarte and colleagues [16]. Kim et al. showed unaltered p16 expression and reduced expression of p53 during long-term culturing of hBM-MSCs [14]. However, we cannot rule out the possibility that genes exhibiting a less than two-fold change may be of biologic value [77]. Meanwhile, the functions of RB1, p107 and p130 in the biology of MSCs remain largely uncharacterized [78]. MicroRNAs, also called miRNAs, are small 19-22 nucleotide sequences of noncoding RNA that work as endogenous epigenetic key regulators of gene expression [79]. Only recently have senescence-associated miRNAs (SA-miRNAs) emerged as important effectors of senescence [80]. Therefore, we were particularly interested in the possible involvement of miRNAs in the regulation of hBM-MSC senescence. By using a miRNA qPCR array, we identified 33 miRNAs with altered expression in late passage senescent hBM-MSCs (Table 1). Among the top downregulated, miR-935 has previously been shown to be downregulated in elder hBM-MSCs [81]; miR-193a has been reported to regulate uPA [82], to target the oxidative stress pathway [83], and not to be repressed in normal BM cells [84]. Additionally, miR-337-5p was shown to be differentially expressed in pediatric hBM-MSCs compared to adult hBM-MSCs [85]. Yoo et al. demonstrated that miR-29b is downregulated in senescent hBM-MSCs compared to young hBM-MSCs, but miR-455-3p, unlike in our study, was upregulated [86]. Among the most upregulated, miR-376b has been shown to be differentially expressed in pediatric hBM-MSCs when compared to adult hBM-MSCs [85]; miR-200a has been associated with oxidative stress [87] and shown to be activated in stress-induced senescent cells [88]. Tome et al. demonstrated a miR-335 increase in hBM-MSC ex vivo culture and its correlation with cell senescence [89], similarly to our data. Together, these results, along with other reports, further indicate that the hBM-MSCs underwent in vitro culture-induced premature senescence. Besides, as far as we know, our report is the first to link the change of expression of 30 new miRNAs to hBM-MSC senescence during prolonged in vitro expansion.
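The selection criteria used throughout this analysis (≥2-fold change with P < 0.05) amount to a simple filter over the array results; a minimal sketch with hypothetical records (the actual values are in Table 1 and GEO accession GSE68933):

```python
# Selecting differentially expressed entries with the thresholds used in the
# study: |fold change| >= 2 and P < 0.05. The records below are hypothetical
# placeholders; the directions match the text, the numbers do not come from it.
results = [
    {"name": "ACTA2",   "fold_change": 4.1,  "p": 0.010},
    {"name": "E2F3",    "fold_change": -5.2, "p": 0.003},
    {"name": "GAPDH",   "fold_change": 1.1,  "p": 0.800},
    {"name": "miR-335", "fold_change": 2.6,  "p": 0.040},
]

significant = [r for r in results
               if abs(r["fold_change"]) >= 2 and r["p"] < 0.05]
for r in significant:
    direction = "up" if r["fold_change"] > 0 else "down"
    print(f'{r["name"]}: {direction}-regulated '
          f'({r["fold_change"]:+.1f}-fold, P = {r["p"]})')
```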
Taking everything into account, we state that MSCs isolated from residual bone marrow transplantation material and expanded to clinically relevant numbers are genomically stable and retain identity and high proliferation capacity. This is a crucial requisite for clinical application in terms of donor comfort and recipient safety. However, the cells enter a senescent state after long-term expansion, most likely due to culture-induced stress. We propose that the identified novel hBM-MSC senescence-associated genes and miRNAs provide a better understanding of the mechanisms involved in hBM-MSC aging, significantly contributing to basic science and to the development of cellular therapy quality control, and revealing new clues about hematological disease processes for future investigations. Further, larger research in this area is needed to validate the claims of this study.

MATERIALS AND METHODS

Bone marrow collection
Bone marrow (BM) specimens were collected from healthy adult donors after obtaining written informed consent at Vilnius University Santariskiu Clinic, Children Hematology and Transplantation Center. The study was reviewed by the Vilnius Regional Committee of Biomedical Research, Lithuania (Permission No 158200-09-381-104).

Isolation of MSCs
MSCs were isolated from 3 donors (#1 female, age 24; #2 male, age 38; #3 female, age 28) using the red blood cell lysis method as described earlier [28]. Briefly, BM samples were mixed with erythrocyte lysis buffer (Qiagen, Germany) and centrifuged for 5 min at 480 × g. After removal of the supernatant, the pellet was resuspended in 5 ml of RPMI 1640 medium (Invitrogen, UK) and washed twice through centrifugation. Finally, the entire resuspended cell suspension was placed into a T75 cm² tissue culture flask (BD Biosciences, France) and allowed to adhere for 24 hours in DMEM medium containing 10% fetal bovine serum (FBS) (StemCell Technologies, Canada) and 1% penicillin/streptomycin (Gibco, USA) at 37 °C with 5% CO2 in a fully humidified atmosphere.

Culture of MSCs
After 24 hours the medium was removed and the cells were washed with phosphate-buffered saline (PBS). Human MesenCult MSC Basal Medium containing 10% MesenCult FBS for human MSCs (StemCell Technologies, Canada) and 1% penicillin/streptomycin (Gibco, USA) was used for subsequent cultivation of MSCs. The medium was changed every 3-4 days. When cells reached 80-90% confluence, they were harvested with 0.25% trypsin-EDTA (Invitrogen, UK), counted and subcultured at a seeding density of 4000/cm² into new T75 cm² flasks under standard conditions.

Morphology
Cell morphology was assessed using a Nikon inverted phase contrast microscope (models Eclipse Ti-S and TS100) and NIS-Elements imaging software (version 3.22.00). For spread cell area analysis, nineteen cells from each image were chosen at random and manually outlined. Individual cell areas were measured. Image analysis was performed with the ImageJ v1.50e image processing tool set.

Cryopreservation and thawing
MSCs were cryopreserved at P1 and P3-P4. Cells were mixed with MSC Freezing Solution (Biological Industries, Israel) and placed in a Mr. Frosty Freezing Container (Thermo Scientific, USA) in a −80 °C freezer for a −1 °C/minute freezing rate. After 24 hours cryovials were transferred to a −150 °C freezer for storage. After six months of storage, samples were rapidly thawed by placing cryovials in a water bath at 37 °C and diluted in a slow dropwise manner with pre-warmed fresh culture medium.
After centrifugation at 150 × g for 10 min, MSCs were plated at a seeding density of 4000/cm² into T75 cm² flasks and incubated under standard conditions.

Cell number and proliferation kinetics
Cell number was determined using a CASY cell counter and analyzer (Roche, Germany) at each passage, and long-term growth kinetics in vitro were assessed by determining cumulative population doublings (CPDs). The number of population doublings (PDs) was calculated using the formula PD = log(Xb/Xa) × 3.3, where Xa represents the initial cell number, Xb represents the cell harvest number, and 3.3 is a coefficient. The PD of each passage was calculated and added to the PD of the previous passage level to obtain the CPD.

Flow cytometry
hBM-MSCs were characterized at P3-P4 and P6-P7 by flow cytometry using antibodies to the CD44, CD73, CD90, and CD105 cell surface markers and a mixture of antibodies to the CD34, CD11b, CD19, CD45, and HLA-DR cell surface markers (BD Stemflow™ Human MSC Analysis Kit). After harvesting, the cells were washed with PBS and treated according to the manufacturer's protocol. Viability of the MSC samples was assessed by 5-minute 7-AAD staining, and cell granularity was determined by side-scattered (SSC) light evaluation. Cytometric measurements were performed on a BD LSR II flow cytometer. 10,000 cells per tube were analyzed with FlowJo X software.

Senescence-associated β-galactosidase staining
Senescence of cultivated hBM-MSCs at passages 3-4 and 6-7 was studied using the Senescence Cells Histochemical Staining Kit (Sigma-Aldrich, Germany) according to the manufacturer's protocol. At the end of the staining procedure, ten pictures were taken from random areas of each culture. The percentage of senescent cells was calculated using the following formula: (number of cells with intracellular blue deposits / total number of cells) × 100%.

Karyotype analysis
Cytogenetic evaluation by the G-banding method was conducted on hBM-MSCs at passages 3-4 and 6-7. Colchicine was added to each culture at a final concentration of 0.2 μg/ml for 4 hours at 37 °C. MSCs were harvested using trypsin and resuspended in a hypotonic 0.075 M KCl solution for 30 min at 37 °C. After centrifugation, the cells were fixed with methanol:acetic acid (3:1) solution. After dropping the cell suspension onto microscope slides, these were trypsinized and stained with Giemsa solution (Sigma-Aldrich, Germany). Slides were scanned, and metaphases were captured and analyzed with the Leica CytoVision® (USA) platform. 7 to 17 metaphase spreads were analyzed for chromosome number and structural abnormalities at each established passage. Karyotypes were described following the recommendations of the International System for Human Cytogenetic Nomenclature 2013 [97].

RNA isolation
Total RNA was isolated from hBM-MSCs using the miRNeasy Mini Kit (Qiagen, Germany) following the manufacturer's instructions. Total RNA concentration and quality were checked using a NanoDrop spectrophotometer and verified by analysis on an Agilent 2100 Bioanalyzer using an RNA 6000 Nano LabChip (Agilent Technologies, USA).

PCR arrays
hBM-MSC samples of P3-P4 and P6-P7 were analyzed using the Human Mesenchymal Stem Cell RT² Profiler™ PCR Array (PAHS-082Z, SABiosciences, Qiagen) and the Human Cellular Senescence RT² Profiler™ PCR Array (PAHS-050Z). Template cDNA was synthesized from 800 ng of total RNA using the RT² First Strand Kit (Qiagen) following the manufacturer's protocol.
The reaction mix was prepared by mixing cDNA with 2x RT² SYBR Green ROX FAST Mastermix (Qiagen), and 20 μl of the cocktail was aliquoted into each well of the PCR array. Each array consisted of a panel of 96 primer sets: 84 mesenchymal stem cell or cellular senescence genes, 5 housekeeping genes, and 7 quality controls. PCR arrays were run in a Rotor-Gene Q thermocycler (Qiagen) as follows: 95 °C for 10 sec for initial denaturation, and 40 cycles of 95 °C for 15 sec and 60 °C for 30 sec. Each sample was tested in technical duplicate. The data were analyzed using the web-based RT² Profiler PCR Array Data Analysis v3.5 software. The fold-change in target gene expression was calculated using the ΔΔCt method and normalized to the geometric mean of 5 housekeeping genes (ACTB, B2M, GAPDH, HPRT1, and RPLP0) according to the SABiosciences guide. A more than two-fold change in gene expression was considered as up- or down-regulation of a specific gene. Differences were considered significant when the P value was < 0.05.

miRNA PCR array
The miRNA levels in hBM-MSCs of early and late passages were analyzed with the miRNome miScript miRNA PCR Array (MIHS-216ZR-4, SABiosciences, Qiagen). Template cDNA was synthesized from 600 ng of total RNA with the miScript II RT Kit using miScript HiSpec Buffer (Qiagen) following the manufacturer's protocol. The templates were mixed with RT² SYBR Green qPCR Master Mix (Qiagen) and 20 μl was aliquoted into each well of 5 PCR arrays. Each array consisted of a panel of 96 primer sets: 84 miRNAs of interest, 2 C. elegans miR-39 controls, and 10 controls. PCR was performed in a Rotor-Gene Q thermocycler (Qiagen) as follows: 15 min at 95 °C and 40 cycles of 15 sec at 94 °C, 30 sec at 55 °C, and 30 sec at 70 °C. Each sample was tested in technical duplicate. The miRNA data were analyzed using the online miScript miRNA PCR Array Data Analysis software. The relative expression of each target miRNA was determined with the ΔΔCt method and normalized to the geometric mean of 6 small RNAs (SNORD61, SNORD68, SNORD72, SNORD95, SNORD96A, and RNU6-2) according to the SABiosciences guide. A miRNA was considered differentially expressed if it showed a more than two-fold change, and a P value < 0.05 indicated significance.

Gene ontology analysis
The Gene Ontology Consortium (http://geneontology.org/) was used for enrichment analysis of specific gene sets [98]. Genes were classified into gene ontology (GO) terms in three categories: molecular function, cellular component and biological process.

Data analysis
Statistical analysis was performed using SPSS software (version 21). The Student's paired t-test was performed to assess statistical differences, which were considered significant when the P value was < 0.05.
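A minimal sketch of the 2^-ΔΔCt fold-change calculation described above, normalizing to the housekeeping panel; all Ct values below are hypothetical placeholders, not data from the study:

```python
# 2^-ddCt fold-change, normalized to housekeeping genes as in the Methods.
# Averaging Ct values arithmetically corresponds to taking the geometric
# mean of the housekeeping expression levels on the linear scale.
def delta_ct(ct_target, ct_housekeeping):
    mean_hk = sum(ct_housekeeping) / len(ct_housekeeping)
    return ct_target - mean_hk

hk_early = [18.2, 19.0, 17.5, 20.1, 18.8]  # ACTB, B2M, GAPDH, HPRT1, RPLP0
hk_late  = [18.5, 19.2, 17.9, 20.3, 19.0]

d_early = delta_ct(26.0, hk_early)  # target gene Ct, early passage
d_late  = delta_ct(24.1, hk_late)   # target gene Ct, late passage

fold_change = 2 ** -(d_late - d_early)
print(f"fold change (late vs early): {fold_change:.2f}")  # ~4.5-fold up
```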
Thermodynamic and Thermal Analysis of N,N-Dimethylformamide + 1-Butanol Mixture Properties Based on Density, Sound Velocity and Heat Capacity Data The present paper contains data on the density (ρ), sound velocity (u), and specific heat capacity (c_p) of the mixture of N,N-dimethylformamide + 1-butanol (DMF + BuOH) determined over the entire concentration range of the solution and in the temperature range (293.15-318.15) K. An analysis of thermodynamic functions such as isobaric molar expansion, isentropic and isothermal molar compression, and isobaric and isochoric molar heat capacity, as well as their excess functions (E^E_{p,m}, K^E_{S,m}, K^E_{T,m}, C^E_{p,m}, C^E_{V,m}) and also V^E_m, was undertaken. The analysis of changes in the physicochemical quantities was based on consideration of the system in terms of intermolecular interactions and the resulting changes in the mixture structure. The results available in the literature were mutually inconsistent, which became the reason for our decision to examine the system thoroughly. What is more, for a system whose components are widely used, there is very scarce information in the literature regarding the heat capacity of the tested mixture; these data are also presented in this publication. The conclusions drawn from so many data points allow us to approximate and understand the changes that occur in the structure of the system, owing to the repeatability and consistency of the obtained results.

Introduction
Thermodynamic properties are a valuable source of information on the interaction between solute and solvent molecules. In particular, the combination of techniques such as density, sound velocity, and specific heat capacity measurements gives a range of data allowing one to analyze the changes in the structure of mixed solvents. Regarding the thermodynamic properties, it has been found that volumes, heat capacities, expansivities, and compressibilities are sensitive to structural changes and allow prediction of the type of interactions prevailing in liquid mixtures. Density, ultrasonic, and thermodynamic studies with the change of one component of the solution are of high value and practical importance in industry. Such data are also used in manufacturing process control and other significant fields. Alcohol-containing mixtures are gaining in popularity. Such solutions are used in the pharmaceutical and cosmetic industries, in high-energy battery technologies, and in organic synthesis [1,2]. Great interest in the product requires a better understanding of the influence of the liquid structure on its macroscopic properties, including density, speed of ultrasound propagation, and heat capacity [3,4]. 1-Butanol (BuOH) is used as a perfume ingredient, a solvent for the extraction of essential oils, and an extractant in the production of antibiotics, hormones, and vitamins [5]. It is also present in the cosmetics industry, mainly in make-up removers, nail care, and shaving products [5]. BuOH is also used in the food industry as a flavoring agent in cream and baked goods [5]. N,N-dimethylformamide (DMF) has many useful properties, making it an exceptionally good cosolvent [6]. In this work we present the analysis of isobaric molar expansion, isentropic and isothermal molar compression, and isobaric (C_{p,m}) and isochoric (C_{V,m}) molar heat capacity, as well as their excess values E^E_{p,m}, K^E_{S,m}, K^E_{T,m}, C^E_{p,m}, C^E_{V,m}, of the DMF + BuOH system at six temperatures in the whole concentration range. Literature data for this system are available in [4,14-21].
The tested system has so far not been studied with such small and precise changes in the concentration of the mixture and over such a wide range of temperatures as presented in our work. The change in concentration is 0.05 mole fraction, which allowed us to show characteristic changes in the analyzed functions not previously reported. Connecting so many data points gives the opportunity to draw a variety of valuable conclusions regarding changes in the structure of the system [23,24]. Moreover, we determined the thermal properties of the system by measuring the isobaric specific heat capacity c_p. In the literature, there are no data presenting the isobaric (C_{p,m}) and isochoric (C_{V,m}) molar heat capacities of the system. Such data, in combination with density and ultrasound velocity, in a temperature range of 293.15-318.15 K, are worth attention due to their coherent results.

Volumetric Properties
The results obtained from the density tests of the DMF + BuOH mixture (Table S1, supplementary materials) show that the values increase with increasing DMF content in the mixture and decrease systematically with increasing temperature over the whole range of mixture compositions. To study the properties of real solutions, excess functions are used, which determine the difference between the magnitude of a given molar thermodynamic function in a real solution and its magnitude in an ideal solution. The excess properties were calculated using the following expression:

Z^E = Z − Z^id (1)

where Z^E is the excess quantity of the property Z and Z^id is the corresponding ideal value [25]. In this paper we present and analyze six excess functions, V^E_m, E^E_{p,m}, K^E_{S,m}, K^E_{T,m}, C^E_{p,m}, C^E_{V,m}, calculated according to Equation (1). In order to calculate the excess molar volume V^E_m, the molar volume of the mixture was obtained according to Equation (2):

V_m = (x_1 M_1 + x_2 M_2)/ρ (2)

where ρ is the density of the DMF + BuOH mixture, and x_1, x_2 and M_1, M_2 are the mole fractions and molar masses of the mixture components, respectively, i.e., BuOH (1) and DMF (2). In order to determine the changes taking place in a real solution in relation to an ideal solution in a binary mixture, the values of the excess molar volume V^E_m were calculated. V^E_m values of the mixture over the whole composition range and in the temperature range (293.15-318.15) K were calculated according to Equation (3) and are presented in Figure 1:

V^E_m = V_m − V^id_m = V_m − (x_1 V*_1 + x_2 V*_2) (3)

where V_m is the molar volume of the (DMF + BuOH) mixture, V^id_m is the volume of the ideal mixture, and V*_1, V*_2 are the molar volumes of the pure compounds, i.e., BuOH (1) and DMF (2). In the corresponding expressions for the thermal properties, C^id_{p,m} and C^id_{V,m} are the isobaric and isochoric molar heat capacities of the ideal mixture, κ^id_T and κ^id_S are the isothermal and isentropic compressibility coefficients of the ideal mixture, and C*_{p,m,1}, C*_{V,m,1} and C*_{p,m,2}, C*_{V,m,2} are the isobaric and isochoric molar heat capacities of the pure compounds, BuOH (1) and DMF (2). C^E_{V,m} is presented as a function of mixture composition in Figure 7; the obtained dependency of C^E_{V,m} is analogous to that of C^E_{p,m} with the change in temperature, and a comparison of both functions at 298.15 K is shown in Figure 8.
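Returning to Equations (2) and (3), the volumetric workflow reduces to a few lines; a minimal sketch, with an illustrative mixture density standing in for the measured values of Table S1:

```python
# Molar volume and excess molar volume from density, Equations (2) and (3).
# Pure-component densities are approximate literature values near 298 K,
# and the mixture density is an illustrative placeholder.
M_BUOH, M_DMF = 74.12, 73.09        # g/mol for BuOH (1) and DMF (2)
RHO_BUOH, RHO_DMF = 0.8060, 0.9445  # g/cm^3, approximate values near 298 K

def molar_volume(x_dmf, rho):
    """Equation (2): V_m = (x1*M1 + x2*M2) / rho, in cm^3/mol."""
    return ((1.0 - x_dmf) * M_BUOH + x_dmf * M_DMF) / rho

def excess_molar_volume(x_dmf, rho_mix):
    """Equation (3): V_m^E = V_m - (x1*V1* + x2*V2*)."""
    v_ideal = ((1.0 - x_dmf) * M_BUOH / RHO_BUOH
               + x_dmf * M_DMF / RHO_DMF)
    return molar_volume(x_dmf, rho_mix) - v_ideal

print(excess_molar_volume(0.30, 0.8420))  # placeholder mixture density
```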
The values of the excess functions, among others V^E_m, of the DMF + BuOH mixture were fitted to a polynomial of the Redlich-Kister type:

Z^E = x_1 x_2 Σ_j A_j (x_1 − x_2)^j (4)

where A_j is the polynomial coefficient calculated by the least-squares method using Equation (5). As can be seen in Figure 1, the excess molar volume exhibits positive values in solutions with a predominant BuOH content in the mixture. V^E_m increases when small amounts of DMF are added to pure BuOH and passes through a maximum, which appears, depending on the temperature, between x_{DMF} ≈ 0.3 and x_{DMF} ≈ 0.35. Above this mole fraction, V^E_m values decrease and reach a minimum at x_{DMF} ≈ 0.9. It is noteworthy that negative values of V^E_m are obtained at different mixture compositions depending on the temperature. The mole fraction of DMF at which we observe the change of sign of the V^E_m function increases with increasing temperature, from x_{DMF} ≈ 0.5 at 293.15 K to x_{DMF} ≈ 0.9 at 318.15 K. One can notice that the volume contraction of the DMF + BuOH mixture increases when the mole fraction of DMF increases above 0.5 and decreases with increasing temperature. In the literature, one can find several reports in which an attempt was made to determine the value of V^E_m for the DMF + BuOH mixture [14][15][16][19][20][21]. However, the research results presented in these articles are divergent. Results similar to ours were obtained by Rao and Reddy [14] at 303 K and by Garcia et al. [21], but only at 298 K. In the other works, the results diverged from each other and from ours. Moreover, some authors obtained negative values of V^E_m over the whole range of compositions [15,19], which is completely inconsistent with the results obtained by other researchers, including ours. Such ambiguous data and conclusions prompted us to study the properties of this mixture with more accuracy, using the three different test methods mentioned earlier. The sign and magnitude of the excess functions may be attributed to an appropriate combination of three major effects: the mutual dissociation of the components due to the addition of the second component, the formation of hydrogen bonds between unlike molecules, and steric hindrance together with the geometry of the molecular structure. For DMF + PrOH, according to the results obtained in our previous work [26], the V^E_m values were negative over the entire range of mixture compositions. Excess molar volumes are more negative in systems with lower alcohols, which may be attributed to strong interactions between unlike molecules and to different molecular sizes [27]. Such properties cause volume contraction in these mixtures. For the DMF + BuOH system, the small increase in the size of the alcohol molecule (an extra -CH2 group in 1-butanol compared to DMF + PrOH [26]) gives completely different results and manifests as volume expansion when DMF is added to BuOH. These values are positive in the BuOH-rich region. For the DMF + BuOH system, the absolute values obtained are mostly 1.5 to 2 times smaller than for the DMF + PrOH mixture; however, at some compositions, they are several to ten times smaller than for the DMF + PrOH mixture. Alcohols are strongly hydrogen-bonded in their pure state. Their molecules are self- and cross-associated [28,29]. The degree of association decreases with increasing alcohol chain length.
The addition of DMF to pure BuOH therefore breaks the hydrogen bonds between molecules in the structure of the alcohol network, which produces a positive contribution to V_m^E. Results obtained from dielectric studies show that the largest changes in the hydrogen-bond structure of the DMF + BuOH solution occur when BuOH predominates [30]. At the same time, the increase in the length of the alcohol chain causes a steric hindrance that also contributes to the increase in the real volume of the DMF + BuOH mixture. There is also not much difference in the molecular size of these two compounds, which likewise makes mutual accommodation difficult. The presence of hydrogen bonds between the components of the tested mixture was also confirmed by other researchers [4,20,30,31]. Prajapati [30], based on the analysis of parameters obtained from a dielectric study, confirmed the formation of hydrogen bonds between DMF and BuOH molecules and weak dipole-dipole interactions between the components of the mixture. In contrast to the DMF + PrOH mixture, the interactions between unlike molecules in the DMF + BuOH solution are known to be very weak [32]. Therefore, we observe an increase in V_m^E with increasing DMF content in the mixture up to x_DMF ≈ 0.3. The maximum observed at this mole fraction of DMF is related to the fact that, in this range of mixture composition, the interactions between the components are probably the weakest, which also contributes positively to V_m^E [32]. Although the formation of some hydrogen bonds between DMF and BuOH molecules probably remains possible, the excess amount of BuOH is a competitive factor in the formation of hydrogen bonds between unlike molecules. According to the researchers [32], the binding energy of -O-H···O=C decreases when another alcohol molecule approaches with its oxygen atom favorably oriented for "exchange". As a consequence, instead of hydrogen bonding between DMF and BuOH molecules, bonding appears preferentially between alcohol molecules. When the amount of BuOH prevails in the DMF + BuOH system, there is a greater tendency to create hydrogen bonds between BuOH molecules than between unlike species. As a result of the specific arrangement of neighboring molecules around the DMF and BuOH molecules that form the bond, this interaction is weakened, which is confirmed by the negative ε_0^E values obtained by Prajapati [30] in the entire concentration range of the solution. This parameter assumes its largest negative deviation for the mixture with composition x_DMF ≈ 0.3; hence, we observe a maximum on the V_m^E = f(x_DMF) dependency. The dissociation of bonding between the pure components presumably became the main reason for the volume expansion of the tested system. When x_DMF > 0.3, the V_m^E values decrease, and they become negative when x_DMF > 0.5 (at 293.15 K). In solutions with a predominant DMF content, a small volume contraction can be observed. The value of the volume contraction is similar to that observed for the DMF + PrOH system in the same concentration range of the mixture. When DMF starts to prevail in the solution, dipole-dipole interactions most likely begin to prevail in DMF, leading to the creation of hydrogen bonding between DMF and BuOH molecules. The analysis of the excess partial molar volume of both components of the mixture may help to explain the observed changes in this range of the mixture composition.
The volume of the solution, in the case of a binary mixture, is the sum of the partial molar volumes of both components:

V = n_BuOH V_m,BuOH + n_DMF V_m,DMF  (6)

where n_BuOH and n_DMF are the numbers of moles of BuOH and DMF, respectively, and V_m,BuOH and V_m,DMF are the partial molar volumes of BuOH and DMF. V_m,BuOH and V_m,DMF can be calculated using Equations (7) and (8):

V_m,DMF = V_m + x_BuOH (dV_m/dx_DMF)  (7)

V_m,BuOH = V_m − x_DMF (dV_m/dx_DMF)  (8)

where V_m is the molar volume of the real solution (DMF + BuOH) and x_DMF, x_BuOH are the mole fractions of DMF and BuOH. An analysis of the changes in the partial molar volumes of the components of a mixture can be represented by the excess partial molar volumes (V_m,DMF^E, V_m,BuOH^E). These values can be calculated using Equations (9) and (10):

V_m,DMF^E = V_m,DMF − V_DMF*  (9)

V_m,BuOH^E = V_m,BuOH − V_BuOH*  (10)

where V_m,DMF^E and V_m,BuOH^E are the excess partial molar volumes of DMF and BuOH, respectively, and V_DMF*, V_BuOH* are the molar volumes of pure DMF and BuOH. The results for the partial and excess partial molar volumes of DMF and BuOH are presented in Figure 2 and in Tables S4 and S5 (supplementary materials).
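Equations (7)-(10) follow by differentiating a smooth representation of V_m(x_DMF); the sketch below assumes a hypothetical polynomial stand-in for the fitted molar-volume curve at one temperature.

```python
# Sketch of Eqs. (7)-(10): partial and excess partial molar volumes from a
# smooth V_m(x_DMF) representation. The polynomial is a hypothetical stand-in.
import numpy as np

vm_poly = np.polynomial.Polynomial([9.15e-5, -1.40e-5, 0.2e-5])  # V_m(x_DMF), m^3/mol
dvm = vm_poly.deriv()

def partial_molar_volumes(x_dmf):
    """Eqs. (7)-(8): V_m,DMF = V_m + x_BuOH*dV_m/dx, V_m,BuOH = V_m - x_DMF*dV_m/dx."""
    vm, slope = vm_poly(x_dmf), dvm(x_dmf)
    v_dmf = vm + (1.0 - x_dmf) * slope
    v_buoh = vm - x_dmf * slope
    return v_dmf, v_buoh

V_STAR_DMF, V_STAR_BUOH = 7.71e-5, 9.16e-5   # pure molar volumes (placeholders)
v_dmf, v_buoh = partial_molar_volumes(0.6)
print(v_dmf - V_STAR_DMF, v_buoh - V_STAR_BUOH)   # Eqs. (9)-(10): excess parts
```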
The values of V_m,DMF^E and V_m,BuOH^E express the difference between the partial molar volume of DMF or BuOH in the solution and the molar volume of each component in its pure form. Based on the analysis of Figure 2, one can notice that for DMF, the values of V_m,DMF^E are negative when x_DMF > 0.4 at 293.15 K. This means that DMF contributes negatively to the real volume of the mixture in this composition range of the solution. Although V_m,BuOH^E assumes positive values in the same concentration range, the amount of DMF prevails over the amount of BuOH when x_DMF > 0.4. Bearing in mind that DMF is a weakly associated liquid, in contrast to BuOH [30], the results obtained allow us to conclude that the negative contribution to the real volume of the mixture is made by the DMF molecules (V_m,DMF^E < 0). This is presumably one of the main factors causing the slight contraction of the volume of the mixture, which is why we observe V_m^E < 0 when x_DMF > 0.5. Analysis of the data on the partial molar volume of each component in the DMF + BuOH mixture (Tables S4 and S5 in the supplementary materials) shows that the dependencies V_m,DMF = f(x_DMF) and V_m,BuOH = f(x_DMF) change only slightly with increasing DMF content in solution. This indicates that chemical entities, such as various types of associates or complexes built of both components of the solution, are most likely not formed in the system. Otherwise, we would observe characteristic changes in the course of both functions with an increase in the content of one of the components of the mixture, as in the case of an aqueous solution of N,N-dimethylformamide [33]; this also confirms the conclusions drawn earlier. The partial molar volume of DMF decreases slightly as the DMF content of the mixture increases at higher temperatures. Garcia et al. [19] presented a similar course of this dependence at 298.15 K. In addition, V_m,DMF takes its highest value at the highest temperature. As determined by Zegers and Somsen [22], the value of V_m,DMF in pure BuOH is 7.779 × 10⁻⁵ m³·mol⁻¹, close to the one obtained by us (7.749 × 10⁻⁵ m³·mol⁻¹). The course of the dependency V_m,BuOH = f(x_DMF) is opposite to that observed for DMF: the V_m,BuOH values increase slightly with increasing DMF content. The influence of temperature on the values of V_m,BuOH is analogous to that on V_m,DMF. Zegers and Somsen [22] also determined V_m,BuOH in pure DMF, equal to 9.197 × 10⁻⁵ m³·mol⁻¹; their value is close to the one determined by us (9.208 × 10⁻⁵ m³·mol⁻¹).

Using the density values of the mixture at the six temperatures, the coefficient of thermal expansion (α_p) was calculated using Equation (11):

α_p = −(1/ρ)(∂ρ/∂T)_p  (11)

The molar volume V_m entering the expansion was calculated with Equation (12) [34]:

V_m = (x_1 M_1 + x_2 M_2)/ρ  (12)

The values of the molar isobaric expansion (E_p,m) were calculated using Equation (13) [25]:

E_p,m = (∂V_m/∂T)_p = α_p V_m  (13)

The obtained results of E_p,m as a function of the DMF mole fraction are presented in Figure 3.
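Numerically, Equations (11)-(13) amount to differentiating the density series with respect to temperature at fixed composition; a sketch with hypothetical densities:

```python
# Sketch of Eqs. (11)-(13): alpha_p and E_p,m from rho(T) at fixed composition.
# The densities below are hypothetical placeholders for one mixture composition.
import numpy as np

T = np.array([293.15, 298.15, 303.15, 308.15, 313.15, 318.15])   # K
rho = np.array([846.1, 841.6, 837.1, 832.5, 827.9, 823.2])       # kg/m^3
M_mix = 0.5 * 74.121e-3 + 0.5 * 73.094e-3                        # x_DMF = 0.5

drho_dT = np.gradient(rho, T)          # numerical (d rho / d T)_p
alpha_p = -drho_dT / rho               # Eq. (11), 1/K
V_m = M_mix / rho                      # Eq. (12), m^3/mol
E_pm = alpha_p * V_m                   # Eq. (13), m^3/(mol*K)
print(E_pm)
```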
The isobaric molar expansion reaches its highest value for pure BuOH and decreases with increasing DMF content in the mixture. The E_p,m values increase with increasing temperature over the whole composition range of the mixture, which seems logical given the increase in thermal motion at higher temperatures. Such behavior of the system in the BuOH-rich region is most likely related to the breaking of hydrogen bonds in BuOH after adding DMF to the solution. This causes greater changes in volume expansion with an increase in temperature. With a high BuOH content in the mixture, a slight maximum is visible at the two lowest temperatures (293.15 K, 298.15 K). At higher temperatures, only a change in the slope of the E_p,m = f(x_DMF) dependency is observed. A greater effect of temperature on the value of this function is observed when BuOH prevails in the mixture. As the DMF content increases, the temperature differentiates the E_p,m values of the mixture less. The small changes in the value of the isobaric molar expansion in the DMF-rich region show that the structure of this solvent remains only slightly associated, through dipole-dipole interactions. Using the calculated values of E_p,m and Equations (14) and (15) [25], the excess molar isobaric expansion (E_p,m^E) was determined and is presented in Figure 4:

E_p,m^E = E_p,m − E_p,m^id  (14)

E_p,m^id = V_m^id (φ_BuOH α_p,BuOH* + φ_DMF α_p,DMF*)  (15)

where φ_BuOH, φ_DMF are the volume fractions of BuOH and DMF and α_p,BuOH*, α_p,DMF* are the coefficients of thermal expansion of pure BuOH and DMF, respectively.
Excess isobaric molar expansion has positive values in the entire concentration range: a real solution has a greater ability to expand thermally than an ideal solution. In the range 0.4 ≤ x_DMF ≤ 0.5 (depending on the measurement temperature), a maximum appears, which proves the occurrence of characteristic changes in the interactions between molecules in this composition range. The value of E_p,m^E is lowest at T = 318.15 K, which means that at the highest temperature the volumetric expansion of the real solution relative to the ideal one is smallest, and it increases with decreasing temperature. It should be expected that at the lowest temperatures the interactions between molecules are strongest, the intermolecular interactions weakening with increasing temperature.

Sound Velocity and Heat Capacity

Based on the density and sound velocity measurements (Tables S1 and S2 in the supplementary materials), the isentropic compressibility coefficient (κ_S) and the molar isentropic compression (K_S,m) were calculated according to Equations (16) and (17) over the whole temperature range:

κ_S = 1/(ρ u²)  (16)

K_S,m = κ_S V_m  (17)

where κ_S is the isentropic compressibility coefficient, K_S,m is the molar isentropic compression, u is the sound velocity in the DMF + BuOH mixture, and ρ is the experimental value of the solution's density. Based on the obtained data, namely the isentropic compressibility coefficient (κ_S), the coefficient of isobaric thermal expansibility (α_p), and the experimentally obtained density (ρ) and specific heat capacity (c_p) of the tested DMF + BuOH solution, presented in Tables S1 and S3 in the supplementary materials, the values of the isothermal compressibility coefficient (κ_T) and the isothermal molar compression (K_T,m) were calculated using Equations (18) and (19):

κ_T = κ_S + T V_m α_p² / C_p,m  (18)

K_T,m = κ_T V_m  (19)

The obtained results of κ_S and κ_T for the whole composition and temperature range are presented in Table S8 (supplementary materials). The courses of the changes in isentropic and isothermal molar compression as functions of concentration and temperature are very similar, and K_S,m and K_T,m reach similar values. Both the isentropic and isothermal molar compressions decrease with increasing DMF content in the mixture, in agreement with the results of other researchers [35]. The highest values of K_S,m and K_T,m are observed for pure BuOH, due to the hydrogen bonds existing in the alcohol structure. K_S,m and K_T,m increase with the concentration of alcohol. This is principally associated with an increase in compressibility due to structural changes in the mixture that lead to a decrease in ultrasonic velocity [36]. Adding DMF to BuOH breaks these bonds and creates new, weaker ones between the DMF and BuOH molecules, which leads to a closer arrangement of the molecules. In the DMF-rich region, only the dipole-dipole interaction prevails. Most likely, this phenomenon contributes to the decrease in the compressibility of the system with increasing mole fraction of DMF. The compressibility of the system increases with increasing temperature. A greater effect of temperature on the values of the isentropic and isothermal molar compression is visible for solutions in which the BuOH content prevails; when there is more DMF in the system, K_S,m and K_T,m depend only weakly on the temperature.
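For reference, Equations (16)-(19) reduce to a few lines of arithmetic at a single state point; the values below are placeholders of the right order of magnitude.

```python
# Sketch of Eqs. (16)-(19): isentropic/isothermal compressibilities and molar
# compressions from sound velocity, density, alpha_p and C_p,m. Placeholder values.
T = 298.15          # K
u = 1230.0          # m/s, sound velocity (hypothetical)
rho = 840.0         # kg/m^3 (hypothetical)
V_m = 8.45e-5       # m^3/mol, from Eq. (2)
alpha_p = 9.5e-4    # 1/K, from Eq. (11)
C_pm = 160.0        # J/(mol*K), from the measured c_p

kappa_S = 1.0 / (rho * u ** 2)                      # Eq. (16), 1/Pa
K_Sm = kappa_S * V_m                                # Eq. (17), m^3/(mol*Pa)
kappa_T = kappa_S + T * V_m * alpha_p ** 2 / C_pm   # Eq. (18), 1/Pa
K_Tm = kappa_T * V_m                                # Eq. (19)
print(kappa_S, kappa_T, K_Sm, K_Tm)
```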
In order to better understand the nature of the interactions between the components of the mixture and the nature of molecular agitation among unlike molecules, the excess molar isentropic compression (K_S,m^E) and the excess molar isothermal compression (K_T,m^E) were determined; these are sensitive to differences in the size and shape of molecules [37]. For this purpose, the K_S,m^E and K_T,m^E values were calculated according to Equations (20)-(24) [25]:

K_S,m^E = K_S,m − K_S,m^id  (20)

K_T,m^E = K_T,m − K_T,m^id  (21)

K_T,m^id = x_1 κ_T,1* V_1* + x_2 κ_T,2* V_2*  (22)

K_S,m^id = K_T,m^id − T (E_p,m^id)² / C_p,m^id  (23)

C_p,m^id = x_1 C_p,1* + x_2 C_p,2*  (24)

where K_S,m^E, K_T,m^E represent the excess molar isentropic and isothermal compressions, K_S,m and K_T,m the molar isentropic and isothermal compressions, and K_S,m^id and K_T,m^id their molar values for the ideal mixture; κ_S,i*, κ_T,i* are the isentropic and isothermal compressibility coefficients of the pure components 1 (BuOH) and 2 (DMF), φ_i the volume fractions of the mixture components, and C_p,i* the isobaric molar heat capacities of pure BuOH (1) and DMF (2), calculated on the basis of the obtained c_p values (Table S3 in the supplementary materials). The courses of both functions, K_S,m^E = f(x_DMF) and K_T,m^E = f(x_DMF), over the whole temperature range are shown in Figure 5. The excess molar isentropic and isothermal compressions have negative values; the same trend can be seen in the studies published by Thirumaran et al. [4], Rao and Reddy [14], and Acree [35]. The minimum is observed at x_DMF ≈ 0.45. It can be concluded that the real solution is less compressible than the ideal solution. At x_DMF ≈ 0.45, where a minimum is visible on the curves K_S,m^E = f(x_DMF) and K_T,m^E = f(x_DMF), there are apparently characteristic changes in the interactions between the particles of the real mixture relative to the ideal solution, making the system increasingly difficult to compress. With the addition of DMF to pure BuOH, the excess compressibility rapidly decreases up to x_DMF ≈ 0.45, caused by the rupture of hydrogen bonds in pure BuOH during mixing. In this composition range, we probably also observe the weakest interactions between the components of the mixture [30,32]. Lower compression in solution is observed for systems with lower alcohols, which can be attributed to strong interactions between unlike molecules and to different molecular sizes causing stronger mutual accommodation of the components [26,27]. Such a situation was observed for the DMF + PrOH mixture [26]. Since the sizes of the BuOH and DMF molecules are similar, the weakening and breaking of bonds and the lack of interstitial accommodation due to similar molecular sizes may be the reasons for the decreasing compressibility of the DMF + BuOH system in the BuOH-rich region. In the DMF-rich region, with the increasing importance of interactions between unlike molecules and of the dipole-dipole interactions in DMF, the negative values of K_S,m^E and K_T,m^E decrease in magnitude.
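A sketch of Equations (20)-(24) under the ideal-mixture scheme written above (the exact form used in the original source may differ in detail); all pure-component inputs are placeholders.

```python
# Sketch of Eqs. (20)-(24): excess molar isentropic/isothermal compressions.
# Pure-component inputs are placeholders; 1 = BuOH, 2 = DMF.
T = 298.15
x1, x2 = 0.55, 0.45
V1, V2 = 9.16e-5, 7.71e-5          # pure molar volumes, m^3/mol
kT1, kT2 = 9.8e-10, 6.9e-10        # pure isothermal compressibilities, 1/Pa
aP1, aP2 = 9.4e-4, 1.0e-3          # pure alpha_p, 1/K
Cp1, Cp2 = 177.0, 148.0            # pure C_p,m, J/(mol*K)

K_Tm_id = x1 * kT1 * V1 + x2 * kT2 * V2        # Eq. (22): ideal isothermal compression
E_pm_id = x1 * aP1 * V1 + x2 * aP2 * V2        # ideal isobaric molar expansion
Cp_id = x1 * Cp1 + x2 * Cp2                    # Eq. (24): ideal isobaric heat capacity
K_Sm_id = K_Tm_id - T * E_pm_id ** 2 / Cp_id   # Eq. (23): ideal isentropic compression

K_Sm, K_Tm = 5.8e-14, 7.0e-14   # measured molar compressions (placeholders)
print(K_Sm - K_Sm_id, K_Tm - K_Tm_id)          # Eqs. (20)-(21), both negative here
```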
The value of K_S,m^E decreases with increasing temperature. Thus, the lower the temperature, the greater the compressibility of the real solution relative to that of the ideal solution. At T = 293.15 K, the real system shows the least negative K_S,m^E and K_T,m^E of the mixture compared with the other temperatures. The values of the isobaric molar heat capacity (C_p,m) were calculated using the specific heat capacity c_p obtained from the experiment (Table S3 in the supplementary materials). The results for the whole composition range of the mixture and for the six temperatures (293.15-318.15 K) are presented in Figure 6 and in Table S9 (supplementary materials). Heat capacity and thermal analysis data for the DMF + BuOH system have not been published earlier. As can be seen from Figure 6, the C_p,m values decrease with increasing DMF content, while with increasing temperature the C_p,m values increase. A greater temperature differentiation is also visible in the BuOH-rich region compared with the region where DMF prevails. An analogous course of the dependence with increasing DMF content for this system (DMF + BuOH), and analogous changes under the influence of temperature, were visible in the courses of the previously discussed functions K_S,m, K_T,m and E_p,m, which allows us to conclude that our data are coherent. The analysis of the changes in the function C_p,m = f(x_DMF) with increasing concentration and temperature allows us to confirm the changes, discussed earlier, in the nature and strength of the interactions between the different species in the structure of the system. Based on these data, one can observe the transition from strong hydrogen bonds in the structure of BuOH to weak intermolecular interactions in the DMF-rich region. With the values of κ_S and κ_T it was also possible to calculate the values of C_V,m according to Equation (25) [25]:

C_V,m = C_p,m (κ_S / κ_T)  (25)

The obtained calculation results for C_V,m are collected in Table S9 in the supplementary materials.
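Equation (25) is a one-line calculation once the two compressibility coefficients are known; a sketch with placeholder values:

```python
# Sketch of Eq. (25): isochoric molar heat capacity from C_p,m and the
# compressibility ratio. The values are placeholders for one state point.
C_pm = 160.0          # J/(mol*K)
kappa_S = 7.9e-10     # 1/Pa, from Eq. (16)
kappa_T = 9.3e-10     # 1/Pa, from Eq. (18)
C_Vm = C_pm * kappa_S / kappa_T   # Eq. (25)
print(C_Vm)           # ~136 J/(mol*K) for these placeholders
```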
Excess molar isobaric (C_p,m^E) and isochoric (C_V,m^E) heat capacities were calculated using Equations (26) and (27):

C_p,m^E = C_p,m − C_p,m^id  (26)

C_V,m^E = C_V,m − C_V,m^id  (27)

where C_p,m^id and C_V,m^id are the isobaric and isochoric molar heat capacities of the ideal mixture, with C_p,m^id = x_1 C_p,m,1* + x_2 C_p,m,2* and C_V,m^id = C_p,m^id (κ_S^id / κ_T^id); κ_T^id, κ_S^id are the isothermal and isentropic compressibility coefficients for the ideal mixture, and C_p,m,i*, C_V,m,i* are the isobaric and isochoric molar heat capacities of the pure compounds, BuOH (1) and DMF (2). C_p,m^E is presented as a function of mixture composition in Figure 7. The obtained dependency of C_V,m^E is analogous to that of C_p,m^E, including the change with temperature; a comparison of both functions at 298.15 K is shown in Figure 8. The excess molar isobaric and isochoric heat capacities of the DMF + BuOH mixture show negative values in the entire composition range of the mixture. The reason for this reduction of C_p,m^E is the formation of interactions between DMF and BuOH molecules, which are presumably weaker compared with the hydrogen bonds between the molecules of these compounds in their pure form. As the temperature increases, we observe an increase in the negative values of both functions. With increasing DMF content, C_p,m^E and C_V,m^E decrease and reach a minimum value at x_DMF ≈ 0.5. The obtained results confirm the changes in K_S,m^E, K_T,m^E and E_p,m^E already analyzed in this paper. All the analyzed excess functions, including C_p,m^E and C_V,m^E, reach their extreme values at x_DMF ≈ 0.5. The obtained results for the thermal properties of the DMF + BuOH mixture confirm the characteristic change in the strength of the interactions between the components of the mixture with the change in the composition of the system. Interactions in the system weaken in the BuOH-rich region with an increasing amount of DMF.
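A sketch of Equations (26)-(27); the simple mole-fraction ideal mixing used for C_V,m^id below is an assumption made for illustration (the rigorous ideal value would use the ideal-mixture compressibility ratio, as in the text).

```python
# Sketch of Eqs. (26)-(27): excess isobaric and isochoric molar heat capacities.
# Pure-component and mixture values are placeholders; 1 = BuOH, 2 = DMF.
x1, x2 = 0.5, 0.5
Cp1, Cp2 = 177.0, 148.0        # pure C_p,m, J/(mol*K)
CV1, CV2 = 150.0, 125.0        # pure C_V,m, J/(mol*K)

Cp_id = x1 * Cp1 + x2 * Cp2    # ideal isobaric molar heat capacity
CV_id = x1 * CV1 + x2 * CV2    # simplest ideal isochoric estimate (see caveat above)

Cp_m, CV_m = 160.0, 136.0      # mixture values (placeholders)
print(Cp_m - Cp_id, CV_m - CV_id)   # Eqs. (26)-(27): C_p,m^E and C_V,m^E
```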
These observations allow us to confirm the conclusion that, due to the specific arrangement of neighboring molecules around the DMF and BuOH molecules forming the bond, this interaction is weakened [30]. Hence, we observe negative C_p,m^E and C_V,m^E values. One can then notice an increase in the strength of the interaction when DMF prevails; otherwise, as a result of the formation of intermolecular hydrogen bonds between unlike molecules stronger than those in the pure state, a maximum would be observed [33]. In Figure 7, the position of the minimum shifts very slightly towards lower DMF mole fractions with increasing temperature. An analogous course is observed for the C_V,m^E = f(x_DMF) dependence.

Materials

The list of compounds used in this study, together with the suppliers, purity, purification method, and water content, is presented in Table 1.

Density and Speed of Sound

The density and speed of sound of the DMF + BuOH mixture within its entire concentration range were measured at temperatures T = (293.15, 298.15, 303.15, 308.15, 313.15, 318.15) K with a DSA 5000 analyzer from Anton Paar. This device allows the simultaneous measurement of the density and speed of sound of a liquid sample at ambient pressure, the two measuring cells being connected in a linear manner and located in a thermostat block. The repeatability of the temperature measurement declared by the manufacturer is ±0.001 K, while its uncertainty is 0.01 K. The density measurement range of the Anton Paar DSA 5000 M densimeter is (0-3) g·cm⁻³. The repeatability of the density measurement declared by the manufacturer is ±1 × 10⁻³ kg·m⁻³, and the determined uncertainty is ±2 × 10⁻² kg·m⁻³, considering the formula for the combined standard uncertainty of the average density measurements proposed by Fortin et al. [38]. Each series of measurements was preceded by a check of correct device operation. The verification consisted of measuring the density and speed of ultrasound propagation in ultra-pure degassed water (Direct-Q3 UV purification). Particular attention was paid to cleaning the entire system between measurements; owing to the combination of the cells, this is extremely important. To avoid the appearance of gas bubbles, which would significantly disturb the measurement, the density of the tested solution was measured from the highest temperature (318.15 K) down to the lowest (293.15 K) in steps of five degrees. The speed of sound of the DMF + BuOH mixture was also measured in the temperature range (293.15-318.15) K. The measurement uncertainty is ±0.5 m·s⁻¹, and its precision is estimated at ±0.1 m·s⁻¹. The speed of sound is determined by measuring the propagation time of the sound signal. The sound speed cell has a circular cylindrical cavity of 8 mm diameter and 5 mm thickness; the sample is introduced between two piezoelectric ultrasonic transducers, the first of which produces sound waves (average frequency 3 MHz) while the second receives them. When necessary, the measuring equipment was calibrated according to the manufacturer's instructions using ultra-pure, degassed water and air at 293.15 K and 0.1002 MPa. The values of the water density and speed of sound, amounting to 998.204 kg·m⁻³ and 1482.63 m·s⁻¹ at 293.15 K, are consistent with those given in our previous paper [26].
Heat Capacity

The specific heat capacity, c_p, of the DMF + BuOH mixtures, as well as the c_p of the pure solvents, was measured by means of a Micro DSC III differential scanning calorimeter manufactured by Setaram and based on the Tian-Calvet principle. A detailed description of the measurement procedure has been given by Góralski et al. [82]. We recorded the heat flow during sample heating from 288.15 K to 323.15 K with a scanning rate of 0.35 K·min⁻¹. The so-called "continuous with reference" method was used, with the known heat capacity of 1-butanol as the reference substance [83]. The uncertainty of the absolute temperature value in the measuring cell is estimated to be 0.05 K. The temperature of the external cooling system is kept constant (±0.02 K) with a HAAKE DC30 thermostat. The measuring vessel was a standard "batch"-type cell with a volume of ~1.0 cm³. The uncertainty in the c_p values can be estimated to be smaller than 0.5%, excluding the effects of sample impurities [83] and the error of the absolute temperature determination of 0.05 K. The accuracy of the C_p,m values in the present investigation is ±0.8% for DMF and ±2.5% for BuOH [83]. The values of the specific heat capacity as a function of the mole fraction of DMF are presented in Table S3 (supplementary materials). The molar heat capacity data obtained by us for pure DMF and BuOH are compared with literature data in Table 3.

Conclusions

The presented work contains data and conclusions obtained using three different research methods. The results complement each other and confirm the conclusions drawn regarding the changes occurring in the solution under the influence of the addition of the second component of the mixture and of temperature. In addition, the obtained results are enriched with c_p data, so far unavailable in the literature for the DMF + BuOH system, as a function of composition and temperature. Based on the density, sound velocity, and specific heat capacity studies of the DMF + BuOH mixture at six temperatures between 293.15 and 318.15 K, the excess functions V_m^E, E_p,m^E, K_S,m^E, K_T,m^E, C_p,m^E and C_V,m^E were calculated and analyzed in terms of the changes occurring in the solution structure. The analysis of the changes in V_m^E as a function of the DMF mole fraction showed that characteristic changes in the structure of the system appear in the tested mixture. The observed positive V_m^E values indicate that the effect of the breaking up of the self-associated structures of the components of the mixture dominates over the effects of H-bonding and dipole-dipole interactions between unlike molecules. The conclusions drawn turn out to be different from those for the DMF + PrOH solution. An increase in chain length by one -CH2- group in the alcohol molecule causes an expansion of the mixture volume in the BuOH-rich region. The obtained results show how a small change in the size of the alcohol molecule can affect the strength of the interactions, giving a tendency towards mutual accommodation opposite to that of the DMF + PrOH system. Additionally, the analysis of the excess partial molar volumes of both components of the mixture confirms the changes in the intermolecular interactions of the system, especially in the DMF-rich region. All the conclusions drawn were confirmed in the courses of the remaining excess functions, making the obtained results reliable and consistent.
The extrema appearing in the courses of the other analyzed excess functions confirm the tendency of the system to form probably only weak hydrogen bonds between DMF and BuOH molecules, and the fact that their strength changes depending on the DMF content in the mixture. Moreover, the disruption effect dominates over the influence of the differentiated intermolecular interactions in the studied system. Furthermore, the thermal analysis of the DMF + BuOH system (C_p,m, C_V,m) provides previously unpublished data that confirm the nature of the interactions forming in the mixture. Despite the commonly accepted fact that the interactions in pure DMF are weak and there are no hydrogen bonds, they seem to gain importance in the studied system, especially when x_DMF > 0.5.
Modelling and Analysis of Infilled Frame Structures Under Seismic Loads

In-filled frame structures are commonly used in buildings, even in those located in seismically active regions. Present codes, unfortunately, do not have adequate guidance for the modelling, analysis and design of in-filled frame structures. This paper addresses this need: it first develops an appropriate technique for modelling the infill-frame interface and then uses it to study the seismic response of in-filled frame structures. Finite element time history analyses under different seismic records have been carried out, and the influence of infill strength, openings and the soft-storey phenomenon is investigated. Results in terms of tip deflection, fundamental period, inter-storey drift ratio and stresses are presented; they will be useful in the seismic design of in-filled frame structures.

INTRODUCTION

Treating infill as a non-structural component is a common practice in the seismic analysis and design of low-rise buildings in developing countries such as Bhutan. The contribution of the infill to the lateral strength and stiffness of a structure is disregarded in the current seismic codes used in these countries. These codes do not have adequate guidance due to insufficient research information on the complex seismic response of infilled frame structures and due to the wide variation of opening sizes and material properties of the infill. Though some seismic codes imply the presence of infill, it is normally considered through empirical equations. Despite the large amount of research performed in this field, both experimentally and numerically, in the last few decades, present seismic codes such as IS1893 (2002) [1] provide limited guidance, which may not be adequate for the varying properties of infill. Kaushik (2006) [2] made a comparative study among different seismic codes, found inconsistency in the consideration of infill, and reported that most codes do not consider infill due to its brittle nature of failure and the lack of adequate information. The validity of different macro-models consisting of 4-node shear panels, 4-node plane stress elements and higher-order 8-node plane stress elements was studied by Doudoumis and Mitsopoulou (1995) [3], who reported inaccuracies in the results of macro-models. Singh, Paul et al. (1998) [4] developed a method to predict the formation of plastic hinges and cracks in infill panels under static and dynamic loads, using 3-noded frame elements, 8-noded isoparametric elements and 6-noded interface elements for the frame members, infill panels and the interface, respectively. The study showed good agreement with experimental results, especially in terms of failure load and strut width. Doudoumis (2007) [5] studied the importance of the contact condition between the infill and frame members on a single-storey finite element model. It was reported that the interface condition, friction coefficient, size of the mesh, relative stiffness of beam to column and relative size of the infill wall have a significant influence on the response of the infilled frame, while the effect of the orthotropy of the infill material was insignificant. When the mesh density was made finer, the stress pattern within the infill also improved, with maximum values of stresses at the compressive corners.
The existence of a friction coefficient at the interface was reported to increase the lateral stiffness of the system. However, the friction coefficient is dependent on the quality of the material and the workmanship (CEB 1996) [6], which is difficult to define accurately; hence codes do not provide any guidance. Moghaddam and Dowling (1987) [7] reported the high initial stiffness and low deformation capacity of infill. Merabi (1994) [8] reported significant improvement of the lateral stiffness, strength and energy dissipation capability of infilled structures from analytical and experimental studies. On account of the high initial stiffness, the change in structural behaviour from frame action to truss action was studied (Murty and Jain 2000) [9]; consequently, structural member forces in the beams and columns of an infilled structure are reduced. Fardis (1996) [10] investigated the seismic response of an infilled frame which had weak frames with strong infill material and reported that the strong infill is responsible for the earthquake resistance of weak reinforced concrete frames. Negro and Colombo (1997) [11] investigated the effects of irregularity induced by non-structural masonry walls on a full-scale four-storey RC structure under pseudo-dynamic loads and observed changes in the behaviour of the frame due to infill. The irregular distribution of infill was reported to impose unacceptably high ductility demand on frame buildings. Al-Chaar (1998) [12] performed studies on the behaviour of infilled RC frames. The frames were reported to have shown ductile behaviour, but the extent of ductility was not specified. However, the author concluded that the infill wall improves the strength, stiffness and energy absorption capacity of plane structures, which is useful for seismic structures. Dolsek and Fajfar (2008) [13] carried out a pushover analysis on a four-storey structure and reported a total change in the distribution of damage within the structure. However, the presence of infill did not cause the shear failure of columns, which is contrary to the literature (Pauley & Priestley 1992) [14]. Amanat (2006) [15] reported that the amount of infill has a significant influence on the fundamental period of the structure, but recommended pursuing further study in this field. Kose MM (2008) [16] conducted a study on the parameters affecting the natural period of infilled frames. The equivalent diagonal strut was used to model the infill panels, and openings were considered by varying the width of the struts as proposed in a separate study (Asteris 2003) [17]. The height of the structure and the amount of shear wall were reported to be the main influencing parameters. Soft-storey issues associated with infilled structures were studied (Santhi, Knight et al. 2005) [18] on a single-bay three-storey RC frame which had no openings in the infill panels. The natural frequency of the soft structure was decreased by 30%, while the shear demand was increased to 2.5 times that of the bare frame. The bare frame structure behaved in a flexure mode while the soft structure behaved in a shear mode. However, the authors did not consider openings, whose presence may reduce the shear force. Most of the past research has considered simple single-storey systems or diagonal strut models for the infill, ignoring the openings which are normally present, and the possibility of the infill having a wide range of properties has also not been adequately treated.
It is thus evident that there is inadequate research information on the seismic response of realistic RC frame structures with infill and, consequently, inadequate design guidance. This paper treats this research gap using finite element (FE) time history analyses.

INTERFACE ELEMENT

At present there is no code guidance on modelling the interface between the frame and the infill. An appropriate interface or gap element is developed in this paper. The study on the effective stiffness of the gap element was carried out on a single-storey, single-bay infilled frame, as shown in Fig. (1). The column section was 375 x 375 mm square, the beam section was 500 mm deep and 420 mm wide, and the infill was 200 mm thick. The interface between the frame element and the infill wall was simulated using gap elements. The stiffness of the gap element was developed by a trial-and-error procedure so that the present results compared well with previous research results (Doudoumis & Mitsopoulou 1995), thus validating the computer model, as shown in Fig. (2). The trends in the variation of roof displacement are similar under different interface conditions. Since the friction coefficient between the frame and the infill wall is uncertain, the current model was calibrated to obtain a stiffness equivalent to an average friction coefficient of 0.5 (μ = 0.5). The advantage of using the gap element over a contact element is its simplicity in modelling and its ability to transfer the forces directly to the infill wall from the exterior frame members. However, separation and sliding cannot be considered using the gap element, but these are not important for the type of analysis treated herein. By comparing the results from the present model with those from the reference, an equation for an effective gap stiffness was developed, where: K_g = stiffness of the gap element in N/mm; K_i = stiffness of the infill panel; E_i = Young's modulus of elasticity of the infill material; and t = thickness of the infill panel. Fig. (3) shows the variation of the gap stiffness with infill strength and the line of best fit obtained from the validation analysis. The stiffness property of the gap element is used for modelling the interface element between the frame and the wall in the other structures treated in this paper for the parametric studies.

The height of the structural models was varied from three to ten storeys. All of them were designed with and without seismic codes, and their member properties are shown in Tables 1 and 2. The Young's modulus of elasticity and Poisson's ratio of concrete were assumed to be 24000 MPa and 0.2. The models which represent non-seismic structures were designed to resist gravity loads using the existing code [19], while the aseismic models were designed to meet the requirements of the present seismic code [1]. The gap element was used to connect the frame members and the infill wall. The structural member sizes were kept uniform throughout the height of the structure to keep the structure simple. Uniformly distributed dead and live loads of 21 kN/m and 10 kN/m were applied to the beams (assuming a 5 m width of tributary slab). The sources of mass during the dynamic analyses were the structural elements, viz. columns, beams and the infill. Time history analyses were carried out under three different earthquakes, all scaled initially to a constant peak ground acceleration (PGA) of 0.2g.
Though there are two different materials, concrete and infill, the system was assumed to follow classical damping with a ratio of 0.05 applied through both the mass- and stiffness-proportional terms. The member sizes of the models are:

Model | Column (mm) | Beam (mm)
Five storey | 400 x 400 | 300 x 250
Seven storey | 450 x 450 | 400 x 300
Ten storey | 500 x 500 | 400 x 300

Effect of Infill

The influence of the Young's modulus of the infill material was studied on a ten-storey model designed without any seismic provisions. The damping was assumed to be 5%. Since there is no appropriate guidance on the infill material, it is randomly selected depending on the availability and cost of the material. Thus, the use of solid concrete blocks, burnt clay bricks, stone and adobe blocks was common in the past. Consequently, some of the buildings in Bhutan have suffered from cracks in the infill walls during moderate seismic action, while others survived. The effect of the infill material was therefore studied under a credible earthquake of 0.2g, which gave the minimum strength requirement of the infill material. The effect of E_i on the fundamental period, roof displacement and inter-storey drift ratios is significant, as shown in Figs. (4-6). All these responses decrease as E_i increases, indicating that the Young's modulus of the material, which is empirically related to the material strength, increases the stiffness of the model, as expected. However, the change in fundamental period and roof displacement is significant only in the lower range of E_i (< 2500 MPa). The fundamental period was found to decrease by an average of 6.7% for every 2500 MPa increase in E_i. Such variation in structural response cannot be captured in general engineering practice, and thus it is important to include it in the standards. The stresses in the infill wall, however, were found to increase with the increase in Young's modulus of elasticity, due to the increase in stiffness of the system attracting more forces to the infill. The increase in stresses is very small after crossing an E_i value of 7500 MPa, as shown in Fig. (7) (Variation of infill stresses with E_i). This could be the upper limit of the Young's modulus of infill material which should be used for buildings under a serviceable earthquake. The lower limit of the E_i value under the same earthquake was found to be 2000 MPa for a ten-storey structure, as below this value the compressive strength of the material was exceeded.

Effect of Opening Size

While the consideration of a fully infilled frame is not realistic for real structures, ignoring the openings during modelling and analysis of infilled frame construction would not give true results. The equivalent diagonal strut method is quite vague, as openings are assumed to be present on either the upper or lower side of the strut, when in reality most openings are present at the mid-level of the floor height, typical of buildings in Bhutan. The infill wall enhances the lateral stiffness of framed structures; however, the presence of openings within the infill wall reduces this lateral stiffness. Since openings are a common feature of buildings, they should be considered, and their effect on the seismic resistance of the model is important. Fig. (8) shows the variation of the fundamental period with the opening percentage. The fundamental period increases as the opening size increases, as expected, due to the reduction in stiffness of the model. Such variation of periods cannot be captured using the Code values.
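The direction of the period trends reported above follows from the equivalent single-degree-of-freedom relation T = 2π√(m/k); a toy check with hypothetical numbers:

```python
# Toy check: fundamental period scales as T = 2*pi*sqrt(m/k) for an
# equivalent SDOF system, so a stiffness gain shortens the period.
import math

m = 5.0e5               # kg, modal mass (hypothetical)
k1, k2 = 2.0e8, 2.4e8   # N/m, lateral stiffness before/after raising E_i

T1 = 2 * math.pi * math.sqrt(m / k1)
T2 = 2 * math.pi * math.sqrt(m / k2)
print(T1, T2, (T1 - T2) / T1 * 100)   # periods and % period reduction
```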
The fundamental period of the bare frame model was 54.87% higher than that of the fully infilled model. There is no clear relationship between the opening size and the fundamental period, but the opening size does have an influence on the fundamental period of the structure. The roof displacement, inter-storey drift ratios and the infill stresses increase with the increase in opening size, as the frame becomes more flexible. The lateral stiffness decreases by an average of 38.98% for every 20% increase in opening size, with a corresponding increase in the inter-storey drift ratios (Fig. 9, Variation of inter-storey drift ratios with opening percentage) and the roof displacements. The maximum infill stress was found to increase by 23.57%, and its variation is given in Table 3. The maximum stresses were observed at the corners of the openings, unlike in the fully infilled model where the maximum stresses are observed at the compressive ends, as shown in Fig. (10). This indicates that the material strength of the infill should be increased as the opening size increases, if damage to the infill is to be prevented under a design earthquake. The moments in the frames increase as the opening size increases, while the shear force decreases for both beams and columns. The increase in moment could be due to the increase in flexibility, while the decrease in shear force is due to a decrease in the mass of the structure with the larger opening size. The column and beam moments increased by an average of 36.59% and 33.88%, respectively, for every 20% increase in opening size, while the shear forces in the columns and beams were generally reduced. Overall, the size of the infill opening affects the important response parameters of the structure, and its consideration during modelling and analysis is important.

Effect of Infill Thickness

The effect of thickness was studied on a ten-storey model which had an opening size of 40% (typical). The analyses were performed using the 0.2g-scaled Kobe earthquake on a model with an infill thickness of 125 mm, and the model was re-analysed under the same load with an infill thickness of 250 mm. Generally, infill walls of different thicknesses are used for internal and peripheral partitions; however, some clients opt for thin walls with the aim of reducing the mass of the structure. Thus, it was felt necessary to study the effect of thickness under earthquake loads, as the present code does not consider the influence of infill thickness. The effect of thickness on the fundamental period of vibration is insignificant: in this study, the difference in fundamental period between the models was 1.4%. The fundamental period only slightly increases as the infill wall thickness increases, since the increase in thickness mainly increases the mass of the structure rather than its stiffness. Both the roof displacement and the inter-storey drift ratio increase with the increase in thickness; the increases in roof displacement and inter-storey drift ratio were 4.69% and 4.45%, respectively. Thus, it is evident that there is no improvement in the lateral stiffness of the infill wall from increasing its thickness, for the cases treated herein. Since the influence of infill thickness on the global responses, particularly the natural periods, roof displacement and inter-storey drift ratios, was not significant, the stresses in the infill walls were also not affected by varying the thickness.
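The inter-storey drift ratio used as a performance measure in what follows is obtained directly from the storey displacement profile; a minimal sketch with hypothetical peak displacements:

```python
# Inter-storey drift ratios from a profile of peak storey displacements.
# Displacements and storey height are hypothetical placeholders.
disp_mm = [0.0, 4.1, 9.0, 14.2, 19.0, 23.1]   # peak lateral displacement per floor
storey_height_mm = 3000.0

drift_ratios = [(d2 - d1) / storey_height_mm
                for d1, d2 in zip(disp_mm, disp_mm[1:])]
print(max(drift_ratios))   # compare against a codal drift limit such as 0.004 (IS 1893)
```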
The maximum principal stress in the infill walls was found to be 4.2 N/mm² for all models.

Effect of Peak Ground Acceleration (PGA)

The seismic resistance of all the models, which are shown in Fig. (11) (Models of building structures), was studied by varying the peak ground acceleration of the earthquake, and the performance of the structure was measured in terms of the inter-storey drift ratio and the onset of cracking in the infill panels. The infill was assumed to crack once the stress in the infill exceeded the ultimate compressive stress of the infill material. The Young's modulus of elasticity and the thickness of the infill walls were assumed as 5000 MPa and 250 mm, respectively (as specific material properties of the infill are not available). The results showed that the inter-storey drift ratios of most of the models, from three storeys to ten storeys, did not exceed the inter-storey drift ratio limit given in IS 1893 (2002) [1], even when the PGA was increased to 0.4g. An exception was the ten-storey model, which exceeded the drift ratio limit beyond 0.3g PGA. This shows that structures constructed without seismic provisions can meet the drift requirements of the current code if appropriate infill walls are considered. Thus, the presence of infill walls significantly reduces the inter-storey drift ratios of the models under seismic load. However, the current results could overestimate the actual capacity of real buildings, as the Young's modulus of elasticity of the infill was taken as 5000 MPa. The results also show that the infill helps to reduce the inter-storey drift ratios, consequently reducing the structural member forces, which indicates that infilled buildings have additional strength to survive earthquake forces even if they are not designed to resist them. The stresses in the infill wall increase with increasing PGA, as shown in Fig. (12) (Variation of stresses in the infill wall with PGA). However, all models performed well up to a PGA of 0.4g, except the ten-storey model, whose maximum infill stress exceeded the maximum compressive stress of the material (6.66 N/mm²). It is evident that the structure requires an E_i value of 7500 MPa if the infill is to remain uncracked at 0.4g PGA. It was also found that the strength requirement of the infill material varied with the height of the structure under the same PGA, as shown in Fig. (13) (Variation of minimum strength of infill material with the height of the structure under a serviceable earthquake). Low-rise structures will require lower infill strength than high-rise structures for the same performance level. Such variation in material strength requirements should be addressed in seismic guidance. Similar results were obtained for the models designed to conform to the seismic requirements of IS1893 (2002) [1]. Both the inter-storey drift ratios and the infill stresses increased with the increase in PGA. This indicates that the structural member sizes do not have much influence on the storey drift and the infill stress; in other words, there is a significant stiffness contribution from the infill to the overall structural behaviour.
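The PGA scaling applied to the records (0.2g to 0.4g) is plain amplitude scaling; a sketch with a hypothetical accelerogram:

```python
# Amplitude-scale a ground-acceleration record to a target PGA.
# The record below is a hypothetical placeholder time history in units of g.
import numpy as np

record_g = np.array([0.01, -0.05, 0.12, -0.09, 0.18, -0.15, 0.07])
target_pga_g = 0.2
scaled = record_g * (target_pga_g / np.max(np.abs(record_g)))
print(np.max(np.abs(scaled)))   # -> 0.2
```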
Effect of Concrete Modulus (Ec)

Over the last few decades, there have been changes in the specification of concrete material for building construction in Bhutan. Moreover, many buildings were constructed using the old codes, which had inferior material specifications compared to the modern codes. Thus, there is a need to study the effect of the concrete modulus, as the results will be useful in the assessment of old buildings under dynamic loads. The range of the concrete modulus (Ec) considered to study the variation of structural responses was 15 to 40 GPa. The global structural responses, such as the fundamental period, roof displacement, inter-storey drift ratio and the infill stresses, all decrease with the increase in Ec, as expected. This is due to the increase in stiffness of the model as Ec increases. It was found that the fundamental period decreases by 7.8% for every 5 GPa increase of Ec, indicating that old buildings which used low-strength concrete could have a longer period of vibration, as shown in Fig. (14). This must be considered to avoid possible resonance with seismic motions having similar dominant periods. Such variation of the period is not captured by the empirical formulae available in the code.

Fig. (11). Models of building structures.

The effect of Ec on the roof displacement is significant only for lower values of Ec. For example, roof displacements were 36.6 mm, 30.6 mm and 30.8 mm for models with Ec of 20 GPa, 30 GPa and 40 GPa respectively. However, the effect of Ec on the inter-storey drift ratio is not significant: the average decrease in inter-storey drift ratio for every 5 GPa increase of Ec was just 4.77%, as shown in Fig. (15). There is also little variation in the infill stress with the concrete modulus. For instance, the maximum infill stress was 1.68 N/mm² for the model with an Ec of 15 GPa, while the maximum stress in the model with an Ec of 40 GPa was 1.33 N/mm². In summary, the concrete modulus is significant only in terms of its effect on the fundamental periods; it does not have a significant effect on the roof displacement, inter-storey drift ratios or the infill stress, provided resonance is averted.

Fig. (15). Inter-storey drift ratios with Ec.

Soft Storey Phenomenon

The presence of an Arcade at the bottom storey of a building may induce the soft-storey phenomenon under dynamic earthquake loads. Such problems are currently treated by assigning a magnification factor, which may or may not be appropriate for buildings which have Arcades. This study addressed the problem by treating two models, S and S1, in which S has uniform infill throughout the structure while S1 has no infill at the bottom storey (Fig. 16). The infill walls were assumed to have a 40% opening at the centre of the infill wall. The increase in inter-storey drift ratios (Fig. 17) was significant and, correspondingly, the moments and shear forces in the beams and columns were observed to increase. The magnification factor increases with the amount of infill in the upper storeys as well as with the height of the building.

Fig. (16). S: normal model; S1: model with Arcade.

The low rise model (three storeys) showed a small increase in member forces, while the medium rise model (ten storeys) showed a significant increase in the magnification factor. However, the magnification factors obtained from this research are lower than the values given in the current code (IS 1893, 2002) [1].

Fig. (17). Inter-storey drift ratios.
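The size of the period sensitivity to Ec can be bounded with a simple stiffness argument. This is a sketch under our own assumptions (frame stiffness proportional to Ec, infill stiffness independent of Ec), not a derivation from the paper:

$$T \propto \frac{1}{\sqrt{k}}, \qquad k = k_{\mathrm{frame}}(E_c) + k_{\mathrm{infill}}, \qquad \frac{T(25\,\mathrm{GPa})}{T(20\,\mathrm{GPa})} = \sqrt{\frac{k_{20}}{k_{25}}} \;\ge\; \sqrt{\frac{20}{25}} \approx 0.894$$

A bare frame would thus shorten its period by up to about 10.6% per 5 GPa step at Ec = 20 GPa; the Ec-independent stiffness contribution of the infill plausibly dilutes this toward the average 7.8% per 5 GPa reported above.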
Since the buildings do have openings, this research recommends the magnification factors for structural member forces shown in Table 4. Even though the magnification factor for low rise structures could be smaller than these values, it should be acceptable to use them for structures of fewer than ten storeys, as this will be conservative and safe.

DISCUSSION AND CONCLUSION

At present there is no adequate information on the modelling, analysis and design of in-filled frame structures subjected to seismic loads. This paper developed an appropriate technique for modelling the interface between infill and frame and used it to study the seismic response of in-filled frame structures, investigating the influence of important parameters. The research information will be useful in the design of such structures. The main findings of the paper are listed below:
• The strength of infill, in terms of its Young's modulus (Ei), has a significant influence on the global performance of the structure. The structural responses such as roof displacements, inter-storey drift ratios and the stresses in the infill wall decrease with increasing Ei values, due to the increase in stiffness of the model. It is therefore important to choose the right material for the infill, know its properties and consider these in the analysis and design.
• The minimum compressive strength of infill material required to maintain the structure in an un-cracked condition under a credible earthquake (with 0.2g PGA) varies with the height of the building. It has been shown that, under exposure to similar seismic hazards, medium rise buildings require higher strength infill material than low rise buildings.
• The opening size of the infill has a significant influence on the fundamental period, inter-storey drift ratios, infill stresses and the structural member forces. Generally these increase as the opening size increases, indicating that the decrease in stiffness is more significant than the decrease in mass.
• Under a particular level of PGA (0.2g), the increase in infill stress is not very significant beyond an infill strength of Ei = 7500 MPa. This value could be considered the maximum useful limit of the Young's modulus of infill material if infill walls are used for retrofitting old buildings.
• The performance of buildings constructed with and without seismic provisions is almost similar if the infill has a minimum Young's modulus (Ei) of 5000 MPa. This is because the structural capacity is greatly influenced by the type of infill walls and the values of their Young's modulus.
Investigating Bank Capital on Firm Rating Analysis

This paper examined the correlation between firm ratings of banks and the financial risk assessments of PEFINDO, using profitability, asset quality, and liquidity as independent variables. This paper also investigated the impact of bank capital, as a controlling variable, on bank ratings. The data observed were financial reports from publicly-held banks in Indonesia and firm rating analyses released by PEFINDO during 2017-2021 consecutively, which were then analysed with a regression model. This paper finds that profitability, asset quality and liquidity have no correlation with bank ratings without the presence of bank capital. When bank capital is taken into account, the correlation analysis changes significantly and bank capital becomes the determining factor in bank rating analysis.

Introduction

Leverage is an essential part of banking, and it is far higher than in other industry sectors due to banks' intermediary role: banks are highly leveraged because they are in the business of facilitating leverage for others [10]. Besides raising funds from deposits, banks can also raise funds from the capital market, for example by offering shares to the public or issuing debt securities. Credit rating agencies help translate banks' ability to meet their debt obligations into a rating scale. The integrity of credit rating agencies, as well as of public accounting and tax firms, is highly required so that those institutions can mediate the information asymmetry between firms or banks and their stakeholders, such as investors who use published financial information from credit rating agencies and public accounting firms as considerations in their strategic investment or financial decisions [8]. One of the rating agencies acknowledged by the capital market regulator in Indonesia is PT Pemeringkat Efek Indonesia (PEFINDO). PEFINDO provides corporate ratings for non-financial institutions and financial institutions such as banks, finance or insurance companies and securities firms, as well as for specific debt instruments issued. A rating analysis released by PEFINDO is only valid for twelve months and may be reassessed for the next period. According to PEFINDO's rating criteria and methodology, the key success factors for the banking industry are the business risk assessment and the financial risk assessment. This paper used the financial risk assessments of profitability, asset quality and liquidity to examine their correlation with the firm ratings of banks, and investigated the extent to which bank capital controls the bank ratings. This paper differs from existing research, which observed ratings of capital market instruments or securities issued by banks, such as bond ratings; here, the firm ratings of banks (bank ratings) are used as the dependent variable.

Literature Review

Signalling theory is a powerful theoretical foundation in management research, as it assists in explaining decisions by managers, since signal receivers will react when the signaller is credible amid information asymmetry [18]. Good governance reduces conflict between managers and stakeholders [3] and emphasizes the interest of stakeholders in having precise and trusted information from the bank's management [11]. In the context of this paper, PEFINDO is appointed by the bank to conduct a credible rating analysis, and the rating results are released as public information.
The highest rating of PEFINDO is idAAA and the lowest is idD, while the minimum investment grade recognized by the regulator is idBBB-. These ratings define, in gradations, the obligor's capacity to meet its long-term financial commitments relative to other obligors. The probability of future cash flows to the firm determines firm credit ratings: as the likelihood of default increases, the firm's credit rating declines [3]. On the other hand, the assessment of credit ratings tends to rely heavily on historical data, considering that a bank will be downgraded after unfavourable financial information becomes known to interested parties [8].

Profitability and Bank Ratings

Profitability analysis is one of the financial risk assessments in PEFINDO's rating criteria and methodology, described as an assessment of net interest income and margin, non-interest income, quality of earnings, cost structure, and management strategy to control operational expenses and improve the banks' fee-based income. The correlation between bond ratings issued by banks listed on the Indonesia Stock Exchange and the financial performance of banks, with ROA representing bank profitability, has been examined, with results suggesting that ROA has a positive and significant correlation with bond ratings [2]. The determinants of bond ratings have also been examined with ROA as the profitability measure, suggesting that ROA has a positive but insignificant correlation [7]. Considering that ROA was used in previous studies as the ratio to measure bank profitability, the first hypothesis proposed in this paper (H1) is: ROA has a positive correlation to bank ratings.

Asset Quality and Bank Ratings

PEFINDO defines asset quality analysis as an intensive assessment of the bank's non-performing loans and of the bank's loan loss reserve policy and adequacy. Existing papers on the bond ratings of banks with NPL as one of the independent variables suggest that NPL has a significant and negative correlation to bond ratings [2], [14], [16]. A small NPL indicates that banks manage their assets well [2]. Considering that NPL was used in previous studies as the ratio to examine asset quality, the second hypothesis proposed in this paper (H2) is: NPL has a negative correlation to bank ratings.

Liquidity and Bank Ratings

PEFINDO defines liquidity analysis as an assessment of current market conditions and their effect on the bank's liquidity, together with an examination of the bank's liquidity management, interest rate and maturity mismatches, net open position and loan to deposit ratio; an evaluation of the proportion of the bank's liquid assets relative to its short-term liabilities is also incorporated in the assessment. One existing paper used LDR as an independent variable and suggested that LDR has a positive and significant correlation with bond ratings [16], while another examined LDR as representing liquidity risk and found its correlation with bond ratings to be positive and significant [17]. Considering that LDR was used in previous studies as the ratio to examine bank liquidity, the third hypothesis proposed in this paper (H3) is: LDR has a positive correlation to bank ratings.
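The two nested regression models implied by these hypotheses, and by the controlling-variable design introduced in the next section, can be written out explicitly; the coefficient symbols are ours, as the paper does not print its equations:

$$\text{Rating}_{it} = \beta_0 + \beta_1\,\text{ROA}_{it} + \beta_2\,\text{NPL}_{it} + \beta_3\,\text{LDR}_{it} + \varepsilon_{it}$$

$$\text{Rating}_{it} = \beta_0 + \beta_1\,\text{ROA}_{it} + \beta_2\,\text{NPL}_{it} + \beta_3\,\text{LDR}_{it} + \gamma\,\ln(\text{Tier1})_{it} + \varepsilon_{it}$$

H1, H2 and H3 predict beta1 > 0, beta2 < 0 and beta3 > 0 respectively; comparing the two fits isolates the contribution of gamma, the Tier 1 capital effect.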
Bank Capital and Bank Ratings

PEFINDO defines capital analysis as an assessment of the bank's capital composition, level of capital adequacy ratio (total and Tier 1), internal growth rate of capital, and capital in comparison with assets. Risk-based capital adequacy ratios have been a basis of the Basel framework as a measurement of banks' capital sufficiency relative to risk, which simply means that banks should hold higher capital to compensate for the higher risks they take in their business [10]. Bank capital can be used in forecasting actual bank ratings, as can assets, financial gains or losses from securities, operating income, and yield on earning assets [8]. Another paper examined bank size, using the logarithm of total assets as the controlling variable, found a correlation between size and bond ratings, and also suggested that CAR has a positive and significant correlation to bond ratings [16]. As CAR increases, good corporate governance also improves [5]. In contrast, another paper suggested that CAR has a significant negative correlation to bond ratings: capital is certainly needed to absorb risks, but excess capital impairs banks' ability to make a profit [2]. Meanwhile, yet another paper argued that CAR has no correlation to bond ratings [14]. Currently in Indonesia, banks are grouped into 4 (four) KBMI, or bank groups, based on Tier 1 capital, according to OJK Regulation No. 12/POJK.03/2021 concerning Commercial Banks. This paper used Tier 1 capital as the controlling variable representing bank capital.

Results and Discussion

This paper used purposive sampling, observing only publicly-held banks in Indonesia and analysing their financial reports as of December 31, 2017-2021. This paper also observed the corporate rating analyses released by PEFINDO during 2017-2021 consecutively. Based on those criteria, 12 out of 47 banks formed the data sample. The ratings used as the dependent variable were converted to a numerical scale (for example, a value of 7.00 corresponds to an idAAA rating). This paper used descriptive statistics and multiple regression analysis to examine the hypotheses. Descriptive statistics for the cross-section data series (valid N = 60) are as follows. The minimum value of the bank rating is equivalent to the BBB+ rating, and the maximum value is 7.00, equivalent to an AAA rating, with a proportion of 58.33%, or 35 of the total 60 observed samples. The minimum value of profitability, as measured by the ROA ratio, is 0.07% and the maximum is 4.22%; the average ROA of the data sample was 1.81%. The minimum value of asset quality, as measured by the NPL ratio, is 1.12% and the maximum is 5.65%; the average NPL of the data sample was 2.92%. The minimum liquidity value, as measured by the LDR ratio, is 56.47% and the maximum is 113.50%; the average LDR of the data sample was 86.63%. The minimum value of bank capital, as measured by the natural logarithm of Tier 1 capital, is 15.75, equivalent to Rp6.90 trillion; the maximum is 19.26, equivalent to IDR 231.98 trillion. Based on the results of the one-sample Kolmogorov-Smirnov test, an unstandardized residual significance value of 0.2 was obtained from the 46 sample data points retained to fit the normality assumption, with a significance value greater than 0.05. Based on the results of the multicollinearity test, tolerance values for the independent variables and the controlling variable are greater than 0.1 and VIF values are less than 10, suggesting that the independent variables and the controlling variable used in this paper are not collinear.
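The estimation and diagnostics described here (the two nested OLS models, VIF, the one-sample Kolmogorov-Smirnov check, and the Glejser test covered in the next paragraph) can be sketched in Python. This is a minimal sketch under our own assumptions: the file name and column names are hypothetical, RATING is the numerically coded PEFINDO scale, and LN_TIER1 is ln(Tier 1 capital):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from scipy import stats

df = pd.read_excel("bank_sample.xlsx")  # hypothetical file, one row per bank-year

X1 = sm.add_constant(df[["ROA", "NPL", "LDR"]])              # model without control
X2 = sm.add_constant(df[["ROA", "NPL", "LDR", "LN_TIER1"]])  # model with Tier 1 control
m1 = sm.OLS(df["RATING"], X1).fit()
m2 = sm.OLS(df["RATING"], X2).fit()
print(m1.rsquared_adj, m2.rsquared_adj)  # compare adjusted R^2 across the two models
print(m2.params, m2.pvalues)             # coefficient signs and significance

# Multicollinearity: VIF should stay below 10 for every regressor
vif = pd.Series([variance_inflation_factor(X2.values, i) for i in range(X2.shape[1])],
                index=X2.columns)
print(vif)

# Normality of residuals: one-sample Kolmogorov-Smirnov on standardized residuals
z = (m2.resid - m2.resid.mean()) / m2.resid.std(ddof=1)
print(stats.kstest(z, "norm"))

# Glejser heteroscedasticity test: regress |residuals| on the regressors;
# homoscedasticity is supported when all slope p-values exceed 0.05
glejser = sm.OLS(np.abs(m2.resid), X2).fit()
print(glejser.pvalues)
```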
The heteroscedasticity test in this paper used the Glejser test, which is conducted by regressing the independent variables and the controlling variable on the absolute value of the unstandardized residuals. The significance values obtained for all independent variables and the controlling variable were greater than 0.05 (5%), suggesting that this paper met the assumption of homoscedasticity (no heteroscedasticity). The value of Adjusted R2 without the controlling variable is 0.00 (0%) and the value of Adjusted R2 with the controlling variable is 0.570 (57.0%), which shows that the controlling variable contributes to the correlation between the independent variables and the dependent variable, while the remainder is influenced by other variables outside the model. In the model without the controlling variable, all independent variables have significance values greater than 0.05 (5%), with a coefficient of 0.113 for ROA and coefficients of 0.08 and 0.008 for NPL and LDR respectively. These results mean that H1, H2 and H3, the alternative hypotheses in this paper which predicted correlations between the independent variables and the dependent variable, cannot be supported, and H0 is not rejected. With the controlling variable included, the coefficients and significance values of some independent variables changed. ROA turned to a negative coefficient of -0.39 with a significance of less than 0.05 (5%). NPL also turned to a negative coefficient of -0.147 with a significance value of 0.038. The Tier 1 capital variable, as the controlling variable, has a coefficient of 0.489 with a significance of less than 0.05 (5%).

Conclusions

The results of this paper show that introducing Tier 1 capital as the controlling variable in the model produced a significant difference in the correlation of ROA, NPL, and LDR, the independent variables, with the bank ratings as the dependent variable, as indicated by the changes in the constants and coefficients of the regression equation. Bank capital becomes the only determining factor in bank rating analysis, even when profitability, asset quality, and liquidity all happen to be zero. Rating agencies systematically assign favourable ratings to larger banks [9]. The greater the growth, the more favourable the investment-grade bond ratings that firms will have [1]. The higher the value of Tier 1 capital, the higher the risk management ability, profitability, and performance of the bank [16]. Banks, as highly-leveraged financial institutions, face various challenges of business complexity and volatility as well as various internal and external risk exposures, so that the bank's capital becomes vital to business sustainability. In addition, bank capital can be seen as analogous to fuel, supporting the bank's business expansion, covering risks, and enabling profit. Larger banks indicate a larger source of funding, a higher ability to disburse financing or investment, and better funding access [16]. Greater bank capital not only reduces financial distress but also liquidity creation; an optimal bank capital structure trades off the cost of bank distress against liquidity creation [4]. The results of this paper show that the correlation between ROA and the corporate rating becomes significant when Tier 1 capital is taken into account as a controlling variable. One paper did not find any correlation between ROA and bond ratings [14], while another found that profitability has a negative but insignificant correlation to bond ratings [6].
However, the results of this paper contrast with existing papers that examined profitability using the ROA ratio and found a positive correlation to bond ratings [2], [7]. The change of the ROA coefficient to negative in this paper can be explained as follows: if Tier 1 capital is assumed to reflect the value of bank assets, then higher capital also reflects greater complexity in managing those assets, which would certainly require tighter supervision. This is related to the higher systemic impact of a bank with a large asset value. NPL is correlated with the corporate rating, and its coefficient changes to negative when bank capital is taken into account in the analysis. Other studies likewise found a negative coefficient for NPL [14], [16]. A lower NPL ratio indicates that the bank is capable of managing its assets well [2]. Meanwhile, banks with lower good corporate governance scores tend to have larger NPL [12]. Regarding asset quality, a quantitative calculation using the NPL ratio cannot be the sole basis of corporate rating analysis. In this case, qualitative justifications are more reliable in rating analysis, such as credit portfolios by business sector, assessments of credit concentration, bank policies and procedures related to reserves or write-offs, and other qualitative disclosures that can establish the bank's asset quality. LDR has no correlation with the corporate rating, with or without the controlling variable. In contrast, other studies found that LDR has a positive and significant correlation to bond ratings [6], [7], [16]. LDR is a ratio that represents the core business of banks, so liquidity is assumed to be stable as long as there is no financial distress. It is also assumed that the operational activities of banks regarding asset and liability management, particularly interest gaps, maturity mismatches and foreign exchange, have been anticipated for liquidity risk in line with OJK Regulation Number 42/POJK.03/2015, the implementation of the Basel III bank liquidity standards. The purpose of this paper is to investigate the extent to which Tier 1 capital controls bank ratings using financial risk assessment. Profitability, asset quality and liquidity have no correlation with bank ratings without the presence of bank capital. Bank capital has a significant role in the correlation between bank ratings and the financial risk assessment rating under PEFINDO's rating methodology. There are several possible extensions arising from the limitations of this paper. First, this paper only used banks with public ownership. Second, this paper used the financial risk assessment as independent variables without considering the business risk assessment, such as market position, infrastructure and quality of service, and corporate governance.
The infant feeding methods promoted by South African Instagram influencers in relation to crying and sleeping, 2018–2020: a retrospective digital ethnography

Background: Globally, there has been a decline in breastfeeding rates. This has resulted in increased infant mortality due to infectious diseases and inappropriate feeding practices. The aggressive marketing of breastmilk substitutes (BMS) by manufacturers has contributed, in part, to these declines. With the progressive use of social media, marketing has shifted from traditional methods to the use of influencers, who command a huge following on their social media accounts and influence the daily decisions of their followers. This study investigates the infant feeding methods and associated products promoted by South African influencers in relation to crying and sleeping, and their followers' responses.

Methods: This was a retrospective study, which used a mixed methods digital ethnographic approach to analyse posts related to infant feeding methods made by seven South African Instagram influencers between January 2018 and December 2020. Framing analysis was used to analyse the qualitative data, and the quantitative data were analysed descriptively.

Results: Of the 62 posts analysed, 27 were sponsored advertisements (some violating local regulations) and 35 posts promoted breastfeeding. The 18,333 follower comments and 918,299 likes in response to the posts were also analysed. We found that influencers presented BMS products as a solution for a child who cries a lot and has trouble sleeping. BMS were framed as helpful for children who are seemingly always hungry and dissatisfied with breastmilk alone. The study also found that some influencers promoted breastfeeding on their Instagram pages. Unlike BMS posts, breastfeeding posts were not sponsored. With a few exceptions, followers tended to support and reinforce the framing of influencers.

Conclusion: Stiffer regulations should be enforced against companies using influencers to promote infant formula and other BMS products, with proactive monitoring of social media. Professionals giving advice contrary to WHO guidelines should be reported according to Regulation 991 and held accountable. Proactive engagement with Instagram influencers to promote breastfeeding should be considered.

Background

Many studies highlight the importance of breastfeeding in maternal and child health. Noted advantages include protection against diarrhoeal diseases, improvements in the lifelong health of the child, psychosocial and socio-economic benefits for both the mother and the child, and a significant reduction in infant morbidity and mortality [1,2]. In South Africa, breastfeeding promotion is a national health priority [3]; however, the country still has sub-optimal breastfeeding rates [4]. The exclusive breastfeeding (EBF) rate is estimated to be 44% for infants aged 0-1 month, but declines to 24% for infants aged 4-5 months [4]. Problems with excessive crying and sleeping are found in approximately 20% of children and have been known to inform parental decisions about alternative infant feeding methods [5]. Caregivers associate constant crying with inadequate breast milk production and, in addition, an infant who seems continuously hungry motivates the initiation of alternative feeding methods [6].
Caregivers' perception of infant fussiness, posseting and sleep as "problematic" also shapes their infant feeding practices, often resulting in the introduction of breastmilk substitutes (BMS) [7]. Companies that produce BMS market their products as solutions for infants who struggle with the abovementioned problems [8]. Partly as a result of their aggressive advertising tactics, breastfeeding rates have dropped significantly across the globe [9, 10]. The conceptual framework shown in Fig. 1 is from a published study on the impact of BMS marketing on WHO-recommended breastfeeding practices [10]. The framework illustrates the impact that different forms of marketing have on the decision to use BMS or to breastfeed. The review found evidence that BMS marketing influences social norms and attitudes, erodes the confidence of mothers to breastfeed and results in sub-optimal feeding, although the authors struggled to quantify how different marketing strategies contributed to these patterns. The review also did not cover social media marketing and only focused on commercial infant formula. As such, we have adapted the figure to indicate our interest in social media as an additional form of direct marketing. Of interest to this study is the direct marketing of either BMS or breastfeeding to the public through social media influencers.

Fig. 1 Adapted conceptual framework of the impact of marketing of BMS on WHO recommended breastfeeding practices [10]

In response to the marketing practices of BMS manufacturers and the subsequent increase in infant mortality rates, the International Code of Marketing of Breast-milk Substitutes, herein referred to as the Code, was adopted by the World Health Assembly (WHA) in 1981 [11]. The Code has various articles and subsequent resolutions to regulate how governments, health systems and health care workers provide guidance relating to Infant and Young Child Feeding (IYCF). The Code also monitors the responsible marketing and labelling of BMS by manufacturers [10]. Drawing from the precepts of the Code, in 2012 South Africa developed its own Regulations Relating to Foodstuffs for Infants and Young Children (R991) to regulate IYCF practices [12]. In terms of R991, restrictions are placed on the labelling, marketing and promotion of infant and follow-up formulae, and of powdered or liquid milk represented as suitable for infants and young children [13]. Complementary feeding bottles, feeding cups and teats are also implied by the Code and the local R991 [14]. The marketing of BMS and related products is prohibited for infants under six months of age, and promotional practices for certain 'designated' products (for children under 36 months) contravene the Code and R991. Certain country-specific regulations have expanded the definition of BMS to include pacifiers, for example in Vietnam [15], but this has not been the case for South Africa. It is important for more countries to consider expanding their definitions of BMS to include pacifiers, as there are risks associated with pacifier use. These include failure of breastfeeding, dental deformities, sleep disorders, tooth decay and oral ulcers [16]. For this study, we monitored the marketing of pacifiers within our working definition of BMS, but did not list these as R991 contraventions (see Fig. 2 for key elements of R991).
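The screening rules just described can be condensed into a small predicate for clarity. This is a simplified sketch of our own: it encodes only the age thresholds and product categories stated above, the field names are hypothetical, and the real determinations in the study were made qualitatively against the full text of the Code and R991:

```python
# Hedged sketch of the Code/R991 screening logic described above.
# Pacifiers are tracked under our working BMS definition but not
# counted as R991 contraventions.
def screen_post(product_type: str, target_age_months: int) -> str:
    if product_type == "pacifier":
        return "tracked (working BMS definition), not an R991 contravention"
    if target_age_months < 6:
        return "potential Code/R991 contravention (marketing BMS under 6 months)"
    if target_age_months < 36 and product_type in {"bottle", "teat", "follow-up formula"}:
        return "potential contravention (designated product under 36 months)"
    return "no contravention flagged"

print(screen_post("complementary food", 4))  # e.g. a 'from 4 months' label
```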
Despite the Code and country-specific regulations like R991, research has shown that companies are increasingly using the internet, social media sites and mobile applications to support and sell their BMS products [17]. Studies conducted by the WHO across different countries found that companies producing formula milk access personal data and engage with women through online platforms, which optimizes their marketing methods [8]. This is relevant because, in South Africa, almost 30 million out of an estimated 57 million people are active social media users [18]. With this wide usership, companies are now marketing their products on various social media platforms through the use of social media influencers [19]. Influencers are people who command a following on their social media profiles, separated into different categories based on their number of followers. We applied the following category definitions on one social media platform: mega influencers (more than one million followers), macro influencers (between 40,000 and 1 million followers), micro influencers (between 1,000 and 40,000 followers), and nano influencers (fewer than 1,000 followers) [20]. We note that other social media scholars assign different thresholds, such as a macro influencer having between 100K and 1 million followers and a nano influencer being anyone with fewer than 10,000 followers [21], and that these thresholds appear to be somewhat arbitrary. Whatever their following sizes, individuals called influencers have developed credibility with their followers owing to the content that they post [22,23], with some scholars arguing that at times micro-influencers are preferable to macro-influencers [24]. One way that influencers are identified on social media platforms is through having a verification badge or "blue tick" next to their usernames [19], although this is not a prerequisite, and recent shifts away from this model on Twitter may have ripple effects elsewhere. Influencers can vary from celebrities to people who are not necessarily celebrities but have acquired followings from the content that they post, thereby gaining popularity on social media platforms and making careers out of marketing for various companies [22,23]. The majority of influencers in South Africa are female (56.2%) and aged between 18 and 34 years old [25]. Drawing from the work of Piwoz and Huffman, and based on evidence of the growth of social media marketing in South Africa [26], this study sought to answer the following research question: How are South African Instagram influencers marketing infant feeding in the context of infant and young child feeding regulations in South Africa? Our first objective was to investigate how South African Instagram influencers presented BMS to parents and caregivers with regard to crying and sleeping during the first two years of an infant's life, in light of the R991 regulations. To support contextual comparison, we also looked at breastfeeding mentions by Instagram influencers during the same study period, a method promoted by Macnamara for media content analysis [27]. Beyond the frequency of mentions, our second objective was to explore how infant feeding options were framed, particularly by influencers, but also by the followers who engaged with these posts. The third study objective was to explore the conversations on infant feeding between the selected influencers and their followers.
Study design

In this study we used a retrospective digital ethnographic approach [28], which involves the collection of behavioural data from participants in their natural, real-life settings without the use of questionnaires, given that what people say and what they do can be vastly different [29]. The ethnographic approach was conducted online. As researchers, we observed the interactions between the influencers and their followers without interference. We studied the Instagram posts and associated images using a mixed methods approach, drawing on both qualitative and quantitative data to address the study objectives.

Study setting

Instagram was selected as the social media platform of choice because about 6 million people in South Africa are Instagram users, and the majority (54.2%) of these users are female, aged between 18 and 34 years old [30]. We focused on posts originating from South African Instagram influencers. Given the nature of social media, responses to posts may have originated from anywhere.

Study population and sampling strategy

The study population was all posts by South African Instagram influencers targeting mothers/parents and caregivers of infants that discussed infant feeding methods for the first six months of life during the study period of January 2018 to December 2020, and the subsequent engagements (comments and likes) with these posts. The search strategy was designed to identify infant feeding posts advertising BMS, defined as any foods or liquids that are marketed or presented as a total or partial replacement for breast milk [11], as well as breastfeeding. There was no requirement that the posts be overtly sponsored, given that disclosure is not ubiquitous and regulations vary by context [31]. This timeframe was selected so that the most recent data available at the time of writing up the research would be presented. As there was no central repository defining influencers to act as a sampling frame, identification of eligible influencers and then eligible posts was done as follows:

Step 1. South African Instagram influencer identification. The authors manually searched for South African influencers who had children and posted content related to having babies in South Africa on the Google search engine, using the following Boolean search terms: Influencers AND South Africa; Celebrity AND Influencer AND Mom AND SA; Influencers AND Babies AND South Africa; and Influencers AND Moms AND Babies AND South Africa. For each term, both authors reviewed the first two pages of results and from these selected the URLs and associated influencer names and handles to confirm that they were South African. Seven influencers meeting the inclusion criteria were identified at this stage. After selecting the influencers to be analysed in the study, the first author clicked the follow button on their profiles in order to access their posts as they were shared for the entire study period. While it is possible to access posts without formally following the influencers, as their profiles are publicly accessible, for the purposes of this study the researcher followed the pages of the influencers to get real-time access (access as content is shared) to ensure posts were not missed. This approach is similar to the methods of a study conducted in South Africa on the marketing done by companies producing BMS on social media, but without using influencers [26].
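Before turning to the post-level screening in the next step, note that its eligibility rules can be compressed into a small predicate. This is a minimal sketch with hypothetical field names, since the study's screening was done manually rather than programmatically:

```python
from datetime import date

# Encodes the stated eligibility rules: in the study window, publicly
# accessible, and addressing infant and young child feeding.
def is_eligible(post: dict) -> bool:
    in_window = date(2018, 1, 1) <= post["date"] <= date(2020, 12, 31)
    return in_window and post["public"] and post["about_infant_feeding"]

sample = {"date": date(2019, 6, 3), "public": True, "about_infant_feeding": True}
print(is_eligible(sample))  # True
```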
Step 2. Post identification. We went through the Instagram handles of the selected influencers to search for posts that fit the post eligibility criteria for inclusion in the analysis. The eligibility criteria were as follows:
• The post and subsequent comments (for inclusion) should have been made between January 2018 and December 2020.
• The post must be publicly accessible.
• The post must address infant and young child feeding practices.

Step 3. Data extraction. The first author downloaded the eligible posts and their subsequent comments, number of likes, link, caption and username of the influencer, and saved these in an Excel database. To download the videos, we copied the links of the video posts and pasted them into the site referenced at [32], and then stored the downloaded videos in an Excel database for later analysis. To download the comments, we entered the URL from the posts into the tool referenced at [33] and then pressed the scrape button. All the comments were downloaded into a file and then saved in an Excel database. Follower identities were not included, to protect their anonymity, as they are not influencers.

Data processing and analysis

The final sample of collected data included 62 eligible South African influencer posts from the seven influencers, comprising 61 images, one video and 18,333 comments (from both influencers and their followers). The images and the influencer and follower comments were imported into NVivo 12 for qualitative framing analysis, while descriptive quantitative data were coded manually in Excel. Table 1 summarises the dimensions of the posts that the authors agreed upon jointly, prior to manual coding for the quantitative analysis.

Table 1. Dimensions captured for quantitative analysis
- Infant feeding: # of posts discussing infant feeding methods for infants by the influencers fitting the inclusion criteria: EBF for infants < 6 months, breastfeeding up to 36 months, complementary feeding, BMS use
- Crying: # of posts discussing BMS in relation to crying on the influencers' handles
- Sleeping: # of posts discussing BMS in relation to sleeping on the influencers' handles
- Ads: # of adverts by companies producing BMS on the selected influencers' pages. Adverts are posts made by influencers on behalf of companies, and they usually get paid for the posts
- Sources: who was quoted in the posts as giving advice on infant feeding, e.g. categories of health care professionals, mother to mother, representatives of companies producing BMS
- Slant: attitudes towards breastfeeding or BMS of both original posts and follower responses: pro-BMS, pro-breastfeeding, neutral
- Violations: # of posts potentially violating R991 and the Code
- Sponsorship: # of posts that an influencer was paid to share on their profile. These posts are identified through phrases such as "paid partnership with", "#AD", "#sponsored by", etc.
- Type of BMS: BMS broken down by type, i.e. bottles/teats, gadgets facilitating the use of BMS, pacifiers, complementary food marketed for infants younger than 6 months, and baby formula

The first author did all of the quantitative coding. Influencer posts were coded for all dimensions in the table, whereas only the dimension of slant was applied to follower responses. The follower responses were categorised as pro-breastfeeding (follower comments supporting breastfeeding), pro-BMS (follower comments supporting BMS and co-feeding of infants six months and younger) and neutral, and the findings were descriptively analysed, as in Table 2. For quality purposes, partway through the coding, the second author independently coded a random sample of posts. There was full agreement with the first author, analogous to an interrater reliability of 100%. This high level of agreement is likely to have developed through a previous study in which the authors used a similar coding framework to analyse BMS marketing in magazines [34]. For the qualitative aspect of the study, the first author viewed each image/video and analysed the captions and comments that followed each post using framing analysis [35]. This analysis focused on how influencers structured the delivery of information regarding infant feeding according to their own experiences and, in other cases, how this was structured according to what they were promoting. For the first round of coding, the first author went through the influencer posts, captions and follower comments and categorized them into themes, which she discussed with the second author using examples. Framing analysis was done on the most prominent themes, as agreed upon by the co-authors. Throughout the framing analysis, we jointly explored how rhetoric, analogies and metaphors in common language were used to promote or undermine the use of BMS as well as breastfeeding.
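The double-coding quality check described above can be quantified with simple agreement statistics. This is a hedged sketch: the slant labels below are illustrative, not study data, and the study reported plain percent agreement rather than a chance-corrected coefficient:

```python
# Percent agreement and Cohen's kappa for a double-coded sample of posts.
from collections import Counter

coder1 = ["pro-BMS", "pro-breastfeeding", "neutral", "pro-breastfeeding", "pro-BMS"]
coder2 = ["pro-BMS", "pro-breastfeeding", "neutral", "pro-breastfeeding", "pro-BMS"]

n = len(coder1)
p_o = sum(a == b for a, b in zip(coder1, coder2)) / n  # observed agreement

# Expected agreement under independent coding, from each coder's label frequencies
c1, c2 = Counter(coder1), Counter(coder2)
p_e = sum((c1[k] / n) * (c2[k] / n) for k in set(c1) | set(c2))

kappa = 1.0 if p_e == 1 else (p_o - p_e) / (1 - p_e)
print(f"percent agreement = {p_o:.2f}, Cohen's kappa = {kappa:.2f}")
```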
Given the interpretive nature of framing analysis, it is important to specify our training and positionality as researchers. Firstly, both authors are trained researchers drawing on a constructivist epistemology for qualitative analysis. As public health practitioners, we approached the analysis as supporters of breastfeeding, grounded in an understanding of local BMS regulations. However, we also drew on our experiences as mothers who breastfed but also relied on commercial infant formula in some instances. Our training, alongside our personal experiences, enabled us to discuss influencers and followers in terms of their interface with science and regulations, as well as from the perspective of what frames might appeal to parents who have difficult infant feeding journeys. In order to enhance the credibility of the research, triangulation, through cross-checking the posts and comments, analysis of the data collected and auditing of the data for consistency, was done by both authors. To enhance the reliability of the study, the second author reviewed a sample of posts and checked the way the first author was coding the qualitative comments, similar to what she did with the quantitative coding. There was 100% inter-coder reliability. Furthermore, the researchers discussed and revised any inconsistencies.

South African influencer reach

A total of 62 Instagram posts that were influencer-initiated and met the inclusion criteria were identified for the period of January 2018 to December 2020. As shown in Table 2, the seven influencers included in the analysis had a total of 6,790,000 followers, representing their total audience. Two were mega-influencers, with well over 1 million followers, while three were macro-influencers and the remaining two were micro-influencers. In addition to the original posts, data were collected on the response and engagement of the followers through the number of likes and the number of comments each post received. Cumulatively, the 62 eligible posts received 918,299 likes and 18,333 comments, which were analysed over a three-month period in 2020.
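The totals above imply some simple descriptive engagement figures, computed below from the reported numbers. One caveat of our own: treating the summed follower counts as unique reach double-counts followers who follow more than one of the seven influencers, so the per-follower figure is a rough upper-bound denominator:

```python
# Descriptive engagement figures from the totals reported in the text.
followers_total = 6_790_000
posts, likes, comments = 62, 918_299, 18_333

interactions = likes + comments
print(f"mean likes per post: {likes / posts:,.0f}")                          # ~14,811
print(f"mean comments per post: {comments / posts:,.0f}")                    # ~296
print(f"interactions per summed follower: {interactions / followers_total:.3f}")  # ~0.138
```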
There were no likes on Ntando Kunene's video because she posted it on her stories, and posts made on this feature do not have a 'like' button. Therefore, it was not possible to establish how many people might have reacted to the video through likes.

Influencer infant feeding focus

Of the total eligible posts by influencers that were analysed, 43% were advertisements promoting products by companies producing BMS (specifically, NUK, Phillips Avent and Ella's Kitchen) and 57% of the posts promoted breastfeeding. There was no explicit endorsement of formula feeding. However, indirect facilitators of formula, such as feeding bottles, were advertised by two of the three companies. Gadgets used to facilitate early mixed feeding (under six months of age), for example baby food makers and squeeze stations, were also advertised. Of the posts promoting breastfeeding, 43% were made during breastfeeding week, which is held from the 1st to the 7th of August every year. As shown in Table 3, four of the influencers posted specifically about BMS, with some potential violations of Regulation 991 recorded. Six influencers referred to breastfeeding during the study period, and three influencers referred to both BMS and breastfeeding. Among the influencers who posted both BMS and breastfeeding posts, no clear pattern could be established in the number of times they posted about the different infant feeding methods. For example, macro-influencer Azwi Rambuda had a total of 23 sponsored infant feeding posts and 9 non-sponsored breastfeeding posts, while another macro-influencer, Nkateko Dinwiddy, had only 1 sponsored BMS post and 19 non-sponsored breastfeeding posts. Table 4 indicates the specific types of BMS advertised by the influencers. There were no posts advertising baby formula, and only one post was an advert for complementary food directed at infants less than six months of age. Most of the advertising was done for gadgets that facilitate the use of BMS. This could be attributed to awareness by the manufacturers of the implications of illegal advertising, while still finding ways to advertise that are less evident.

Influencer framing of BMS in relation to crying and sleeping

The methods and language used by influencers to describe infant feeding, particularly BMS, were of particular interest in terms of frames related to crying and sleeping. Table 5 presents the breakdown of the posts in relation to crying or sleeping for those influencers who posted about BMS. Only two influencers linked BMS to these issues. Specifically, Azwi Rambuda discussed complementary feeding to help infants with sleep and to address crying infants, while Jessica Nkosi referred to a pacifier as a solution for a "fussy baby". The other frames discussed, for example bottles and teats mimicking the breast, are outlined in the qualitative analysis that follows.

"Pacifiers soothe crying babies"

BMS manufacturers used South African influencers to spread the narrative that pacifiers are a solution for a child who cries a lot or is fussy. In the instance highlighted below, an influencer posted an image of a Phillips Avent pacifier and framed it as the best alternative on the market for helping with a child who cries a lot. Jessica Nkosi made the same claims about pacifiers soothing crying or 'fussy' babies, advertising a different brand of pacifiers called NUK. Both of these companies market BMS products, namely bottles and teats, which are covered by R991.
"Bottles and pacifiers mimic the breast" Another pattern in the framing of company-sponsored posts was the use of persuasive language to sway followers into using bottles or pacifiers, through portraying them as equal to the breast. In a Phillips Avent sponsored post, an influencer uploaded an image of a Phillips bottle with a caption describing how breastfeeding is difficult for a working mom, for the baby and the caregiver. The influencer described how the Phillips Avent bottle was the ideal solution because it mimics the breast and is ideal for combination feeding. The post explained that by using the Phillips bottle, there was some comfort guaranteed to the mother and the baby would not be fussy during the day, in this instance marketing the product in relation to crying. This post promoted combination feeding as a solution for the "challenges" that come with breastfeeding. The fact that the influencer mentioned expressing breastmilk and used the hashtag #expressmilk as opposed to promoting infant formula explicitly does not detract from the frame claiming bottle equivalency to the breast. The parents and caregivers of children who cry a lot were influenced to use pacifiers and infant formula was subtly advertised through the marketing of bottles. While there was no direct marketing of infant formula, the implication that bottles and pacifiers were as good as breasts since they are "shaped like the breast" undermined the unique benefits of breastfeeding. 'Do it your way' In a different post, Mpoomy Ledwaba shared an image of herself breastfeeding, and highlighting the benefits of breastfeeding to her followers. However, this post was sponsored by NUK, a company which produces BMS. In her caption, she directed her followers to the NUK Instagram page and encouraged them to follow the page in order to stand a chance of winning a hamper by NUK. This is an example of marketing of BMS that is not very apparent but may be considered a contravention, as NUK also makes bottle and teats and directing followers to their page may be regarded as promotional practise. In addition, the 'do it your way' mention is one often embraced by BMS companies as a way to underplay the hygiene and health risks of using their products. Exciting (early) milestones A more apparent advert of BMS was made by Nkateko Dinwiddy, who advertised baby food produced by Ella's kitchen. The image shared was of a baby being spoonfed solids and the packaging on the food packet showed that the food was suitable for infants aged four months and beyond. The caption on the image read: We've reached another exciting milestone for Suri as she's now started her weaning journey. I remember having a lot of fun and making a lot of mess with Sana when she was weaning, but I remember finding it a little daunting too. Knowing when to start, what to start and how to keep it fun! That's why I'm really happy to announce that I've partnered with @ella'skitchenuk to tell you about WEANSURY-a brilliant new online hub filled with lots of helpful information, top tips form the experts, recipes, and so much more to support you on your weaning journey -Nkateko Dinwiddy In this post, Nkateko Dinwiddy shared that she was weaning her child and had partnered with a BMS company called Ella's Kitchen. In the post, she directed her followers to the company's website to get more infant feeding advice. In this image, BMS is marketed targeting infants less than six months, which violates both the Code and R991. 
The food packets advertised in the image are marked "from 4 months", implying that the influencer is marketing to mothers or caregivers of infants under six months old. At the time the image was posted, her infant was less than six months old; therefore, the image idealised feeding solids to infants under six months of age. Suggesting that introducing semi-solids before six months is an 'exciting milestone' directly contradicts WHO EBF guidelines.

Influencer framing of breastfeeding in relation to crying and sleeping

Another aim of this study was to analyse the posts on breastfeeding and how breastfeeding is framed by the influencers on Instagram. Of the posts analysed, 57% were images of influencers breastfeeding their children, with captions encouraging their followers to practise the same. Influencers who advocated for breastfeeding highlighted how breastfeeding was ideal for calming a crying baby and for aiding sleep patterns. There was no evidence of sponsorship by companies or breastfeeding promotion organisations, e.g. GrowGreat or La Leche League, in these posts or their associated hashtags. During the annual breastfeeding week, the frequency of posts related to breastfeeding increased significantly among the influencers; as noted above, 43% of the breastfeeding posts were made in this week. Most of these posts were about influencers promoting breastfeeding and highlighting its benefits. In the example below, the influencer shared her experience with tandem breastfeeding (nursing two babies at the same time) and the benefits of EBF. The caption on the image read:

One of my favourite things about motherhood is breastfeeding, so I decided I would breastfeed Nuri till she's 2 and even when I found out that I'm pregnant I was happy when my midwife told me its safe to breastfeed. However, our breastfeeding had reduced to just mornings and evenings but the closer we are to the arrival of our new baby the clingier Nuri has gotten yes I plan on breastfeeding both but it seems missy is super attached now. Moms who've breastfed more than one baby did you experience this? Or just clinginess towards the end of your pregnancy? #momtalk #breastfeedingmama #breastfeeding #breastfeedingmom #breastfeedingweek #postpartum #youngmum #breastfeedingbenefits #babies #mom #preggo #newmom #exclusivebreastmilk -Mpoomy Ledwaba

Another example of a post from an influencer encouraging her followers to breastfeed is an image in which the influencer was smiling and publicly breastfeeding, with the following caption:

I breastfeed openly whenever and wherever Suri needs. I've learnt to block out the stares or the whispered comments form the non-approvers as I give my baby what she needs. My influence comes from my African upbringing where this approach to feeding babies was normalised by everyone around me. Don't let anyone make you feel bad for what's natural and normal. Enjoy your breastfeeding journey #normalisebreastfeeding #breastfeeding #breastfeedingmom -Nkateko Dinwiddy

There were other posts promoting breastfeeding by the influencers, in which they shared their own personal journeys with breastfeeding. Despite none of the breastfeeding posts being sponsored, we found that posts encouraging breastfeeding accounted for the majority of the infant feeding posts shared by the selected influencers. There was little reference to breastfeeding in relation to crying or sleeping. Rather, the benefits of breastfeeding shared by the influencers included nutrition, bonding between mother and child, and weight loss for the mother.
The influencers who shared breastfeeding posts also highlighted some challenges they encountered with breastfeeding. Some influencers shared how their support systems did not agree with the concept of EBF and, since they relied on them to take care of their infants while they worked, their children were introduced to solids before turning six months old. Their followers responded by sharing their own personal experiences with infant feeding, which is highlighted in the following section.

Follower responses to influencer frames

It is worth noting that, in this study, more infant feeding posts promoted breastfeeding (57%) than promoted the use of BMS products (43%). However, as reported earlier in Table 2, the posts encouraging the use of BMS received as many likes and comments as the posts encouraging breastfeeding. The framing of the followers' responses to the BMS and breastfeeding posts was analysed to explore the degree to which audiences' frames aligned (or not) with those of the influencers. This is illustrated in Table 6.

Influencer vs alternative frames in follower responses

There were examples of influencer posts and subsequent conversations outweighing a follower's immediate social support system around the use of BMS, pacifiers in particular. As noted below, a follower mentioned that her son's grandmother was against the use of pacifiers and had advised her to throw hers away. However, she was hardly sleeping at night and was reconsidering her decision following the conversations arising from the influencer's post. Responding to Jessica Nkosi's pacifier post, a follower had the following sentiments to share:

Wow I had to throw the pacifiers in the bin because my son's paternal granny is so against them and I'm also a neat freak so wouldn't give my son something dirty. Now at night I hardly sleep cos he eats every 2 hours.

In this case, the mother's sleep took precedence over the infant feeding method that was ideal for the baby. Responses of the followers to the described case and other similar posts indicated that the consensus among the influencers and their followers was that giving babies pacifiers was an effective way to calm a crying baby (and catch up on one's own sleep). In a video post, Ntando Kunene probed the opinions of her followers after her mother had suggested giving her four-month-old son solids. She was sceptical of her mother's advice, since she knew guidelines advised initiation of solids after six months of age. Responding to this, a follower mentioned how she had started her child on solids when the child was hardly a month old. Her logic was that the child cried a lot, and after initiating solids the crying decreased. Following this comment, other people giving their opinions in the conversation highlighted how they found it helpful to introduce solids to children who had not even reached a month old, to deal with the excessive crying:

… mine seems a bit crazy but my daughter started solids when she was hardly a month old, because she used to cry a lot even after the bottle so my mom suggested I start the solids. I felt it was waaaaayyy too soon but hey it worked

Another response to the video was:

You waited this long? My grandmother waits for a month and then she starts feeding solids. These kids cry so much when they are hungry and sometimes milk alone doesn't do justice.
Some "experienced" mothers within the comments sections also opined that the recommended six months for introducing solids was unachievable, and that by that point the mother would have lost her mind because of an excessively crying baby.

I'm a mother of three...Start as soon as they start crying from hunger, you would know, 2 months and above but the books are shy to tell the truth ... they'll be like 6 months ... at which point you would be at a mental institution

A notable finding from this and other influencer-initiated posts was that influencers seldom responded to the questions posed by their followers. In the case of the video, Ntando Kunene did not comment or give feedback to her followers on the conversation that she had started. It is therefore unclear whether, in light of her followers' responses, her inclination was towards breastfeeding or not. This was not a sponsored post and no BMS product was advertised.

Health frames by professionals on influencers' posts

Despite the Code's regulations against it, some health professionals used the influencers' platforms to give infant feeding advice. One such example is from a nurse who recommended the need for a baby to have a pacifier. In the comment, the nurse pointed out that some babies need pacifiers to soothe them even when not hungry, and that the pacifier helps with a good sleep at night. The comment read:

I am a nurse, trust me I thought the same way before the baby arrived but reading more my anxieties were alleviated, some babies NEEEDDD a pacifier as they always want something to suck on even when they are not hungry, and it helps them sleep well at night not to mention prevention of sudden infant death.

In this instance, a pacifier was portrayed by a health professional as a preventative method against infant death. Despite the popularity of pacifiers among the influencers who posted them and their followers, some other responses to the pacifier posts were against their use, citing them as unsanitary. A follower's response to a pacifier advert by an influencer highlighted that she was not using one because of the risk they pose of exposing children to diarrheal infections. Another follower responded that, on doctor's advice, she was not going to use a soother because she believed they were unhealthy. Despite comments like these, the general slant in the responses was more positive towards what the influencers suggested, with some followers thanking the influencers for the suggestions on which pacifier to buy and describing how the pacifier helped soothe their own babies, as seen in this quote: "Just gave birth two weeks back and I'm planning to get one. Thank you @mrslitelu".

Frames related to sleep and hunger

The belief that a child who is not well fed tends to sleep less, and that the child's diet therefore needs to be complemented with BMS, was popular among the followers. A follower mentioned that she started her child on solids at three months old, against the advice of health professionals; however, it had turned out "great" for her: "Started at 3 months cos she was just not getting full with the milk only… It turned out great though nurses discourage it."

Extent and prominence of influencer posts' coverage

Studies have shown that young mothers rely on social media, Instagram in particular, for advice, support and general information about breastfeeding and infant feeding methods [36].
Instagram influencers share content that creates discursive communities around various topics and themes related to breastfeeding, and there is a high degree of interaction between followers and influencers [37]. The number of followers that an influencer has translates into the potential audience for these discussions, and reach is established by the number of likes and comments a post receives [38]. In this study, the seven influencers analysed had a combined total of 6,790,000 followers. This reflects reach: the number of people who may have been directly exposed to the influencers' infant feeding posts. As a 'discursive community', almost one million people directly engaged with the selected influencers through likes, and 18,333 comments were made on the 62 infant feeding posts. As noted in the findings, though, few influencers responded to follower comments, which suggests that the depth of discussion or discourse was relatively superficial. Although one study suggests that micro- and nano-influencers may engage more with their followers than macro-influencers [24], this pattern was not reflected in the present study.
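To make the reach and engagement figures above concrete, the sketch below shows one way such metrics could be aggregated from a coded dataset of posts. This is an illustrative reconstruction, not the authors' code; the record structure, field names and sample values are hypothetical, with only the study's reported totals quoted in the comments.

```python
from dataclasses import dataclass

@dataclass
class Post:
    influencer: str
    likes: int
    comments: int
    sponsored: bool

def engagement_summary(posts, follower_counts):
    """Aggregate reach and engagement the way the study reports them:
    reach = combined followers of the influencers who posted;
    engagement = total likes and comments across the analysed posts."""
    reach = sum(follower_counts[name] for name in {p.influencer for p in posts})
    likes = sum(p.likes for p in posts)
    comments = sum(p.comments for p in posts)
    return {
        "reach": reach,
        "likes": likes,
        "comments": comments,
        "mean_engagement_per_post": (likes + comments) / len(posts),
    }

# Hypothetical records. In the study itself, 62 posts by 7 influencers with
# 6,790,000 combined followers drew almost 1,000,000 likes and 18,333 comments.
posts = [Post("influencer_a", 12_000, 310, True), Post("influencer_b", 8_500, 140, False)]
followers = {"influencer_a": 950_000, "influencer_b": 480_000}
print(engagement_summary(posts, followers))
```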
In this study, three companies were identified as sponsoring influencers to market BMS: Ella's Kitchen, Phillips Avent and NUK. Ella's Kitchen sponsored Nkateko Dinwiddy to post BMS in the form of complementary foods for infants aged four months old. It is worth noting that the influencer involved is based in the UK; however, her followers are mainly South African, and through her post they could access the website for Ella's Kitchen and buy the products online. In the UK, regulations relating to the marketing of BMS may vary from those of South Africa; this could therefore indicate the tactics that manufacturers use to market their products, in the process violating regulations. The other companies, NUK and Phillips Avent, did not explicitly market baby formula or solids, but rather marketed bottles and pacifiers; while pacifiers are not a contravention of the Code, the marketing of bottles and teats is regulated by the Code. This indicates awareness of the regulations; however, companies still find ways to manoeuvre and market their products. Here we also note Giufferdi-Kahr and colleagues' observation that influencers do not always disclose sponsorship and may not even be aware of regulations [31].

BMS manufacturers take advantage of the wide audience reach the influencers have to market their products [10]. The South African influencers sponsored by BMS companies had a reach of 3,518,000 followers in this study. Other studies have found that BMS companies normally spend 10-15% of their gross profits on marketing their products in low and middle income countries [10]. However, with the advent of social media influencers, a reasonable assumption is that they spend less per post, given the large audience commanded by the influencers, when compared with traditional marketing methods, e.g. print advertisements [39]. It follows that fewer resources may be used in the marketing of BMS to target a significantly wider audience, making it cost effective [24]. Previous research has shown that Instagram has both negative and positive influences on its users in general [40]. Given the large potential of social media influencers to influence behaviour, government bodies should increase monitoring of social media marketing to regulate the activities of the BMS manufacturers.

In addition to unethical marketing, research has also shown that social media content focused on breastfeeding can improve breastfeeding intentions, enhance knowledge pertaining to breastfeeding and provide supportive communities for breastfeeding mothers [41]. These results are congruent with the findings of this study, in that followers appreciated influencer breastfeeding posts.

Violations of the Code and R991

The Code, together with the local R991, prohibits the marketing of BMS for use as a total or partial replacement of breast milk [14]. A provision in the SA legislation is that employees of BMS manufacturers cannot contact members of the public to market their products, including via "internet sites", which include social media platforms [26]. This study revealed that several provisions of the Code were violated, directly or indirectly, through manufacturer-sponsored posts that influencers shared. Four influencers directly advertised BMS products to their followers, portraying them as the best decisions that parents can make for their children in terms of assisting with problems of crying and sleeping. Even if some of the products, e.g. pacifiers, were not defined as BMS in R991 or the Code, the same companies also produce BMS products that are covered in the regulations. Their sponsorship of influencers to draw followers to their websites is an indirect form of BMS marketing. In reference to a child who cries a lot, several influencers promoted the use of pacifiers. Within these posts, we found examples of followers reinforcing that this was effective in calming a crying baby, providing additional marketing support to the BMS companies. In this research, some influencers posted adverts for pacifiers specifically designed for infants from birth to six months old. Although this does not violate R991 or the Code, it contravenes the UNICEF and WHO Guidelines for the Compliance for Advertising in Baby Friendly Healthcare facilities, which prohibit the marketing of pacifiers and nipple shields and regard such marketing as unacceptable [42]. This highlights an ambiguity between what healthcare facilities accept and what happens in social media spaces.

While physiological factors influence a woman's decision to breastfeed, societal factors have been observed to play a bigger role in a woman's infant feeding decisions [40]. Recent studies have shown that breastfeeding mothers use social media for social support, for seeking advice and as a source of information [21,22]. Instagram influencers are known to create information-sharing societies with their followers, and BMS manufacturers use this to their advantage when marketing their products by paying influencers to endorse their brands [22]. The discursive communities created on social media through interactions of influencers and followers in the comments section have the potential to reinforce social norms that undermine breastfeeding on posts that promote the use of BMS or related products. For example, seeing an influencer using a pacifier to help her baby sleep, and giving testimonials of how this has made her life easier, as in the post by Jessica Nkosi, might encourage her followers to do the same; they might also visit the company website, which also markets infant formula. This arguably contravenes provision 7 of Regulation 991, which states that no person shall undertake or participate in any promotional practices in respect of BMS [13].
The use of framing techniques to influence behaviour

Framing theory suggests that the way something is presented to an audience influences the choices that the audience makes about how to process the information, and can be viewed as a form of agenda setting [35]. From the posts and comments analysed for this study, it is clear that influencers marketing BMS products adopted framing techniques similar to those used by industry in other contexts to increase social acceptance and desirability, including the use of metaphors, catch-phrases and stories.

According to framing theory, metaphors are used when an idea or concept is compared to something else in order to create desirability [43]. An example of the use of metaphors in this study was when pacifiers and teats were compared to the breast. Specifically, on several posts, two influencers mentioned how the NUK pacifier and the Phillips Avent teat mimic the breast and were suitable for infants from as young as 0-6 months in the case of the pacifier. A study on whether breastfeeding babies should be given pacifiers found that early introduction of pacifiers may lead to "nipple confusion" and to incorrect latching, both of which undermine breastfeeding [44].

Framing theory also suggests that the use of catch-phrases makes a message more memorable and relatable [43]. In the language of social media, catch-phrases are sometimes presented in the form of hashtags, and they make content easier to find [45]. All of the posts analysed for this study contained at least one hashtag to improve the reach of the post. Examples of hashtags used in the promotion of breastfeeding included #breastfeeding and #normalizebreastfeeding. In the adverts, however, BMS manufacturer companies were hash-tagged to increase the visibility of the posts; examples include #PhillipsAvent and #naturalbottles. The use of hashtags, and of the catch-phrases within them, increases engagement volumes on the posts. In addition, using company hashtags could bring Instagram followers to the main company websites, where other BMS products are marketed. Such catch-phrases, when used to market BMS, can have detrimental effects on efforts to promote ideal infant feeding practices [45].

Through story-telling, a topic is framed in a vivid and memorable way, to the effect that an audience is drawn to it [43]. The influencer posts analysed in this study indicated that influencers use this technique by captioning their posts with a personal experience: using BMS products on sponsored posts, or breastfeeding on the non-sponsored posts. According to the theory, depending on how well the story is told through the captions or through images and videos, the targeted audience is likely to follow the recommendations from the posts [40]. The visuals of influencers breastfeeding, and the support they received while doing so, captured follower attention, as seen through the high levels of engagement on the posts promoting breastfeeding, and followers were encouraged to practise the same. Despite the common belief that breastfeeding is practised by people of low socio-economic status [46], when followers saw people they look up to breastfeeding, many expressed in their comments a motivation to copy the behaviour. The impact of marketing could therefore also be harnessed to promote infant feeding in line with the regulations and guidelines of health authorities.
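As a small illustration of how the hashtag-based catch-phrases discussed above can be identified and counted during content analysis of captions, the sketch below uses a simple regular expression. This is a hypothetical aid to the coding process, not a procedure described by the authors; the sample captions are adapted from the quoted posts.

```python
import re
from collections import Counter

HASHTAG_RE = re.compile(r"#\w+")

def hashtag_counts(captions):
    """Count hashtag occurrences (case-insensitive) across post captions."""
    tags = Counter()
    for caption in captions:
        tags.update(tag.lower() for tag in HASHTAG_RE.findall(caption))
    return tags

captions = [
    "Enjoy your breastfeeding journey #normalisebreastfeeding #breastfeeding",
    "Bottle feeds made easy #PhillipsAvent #naturalbottles",  # sponsored-style example
]
print(hashtag_counts(captions).most_common())
# [('#normalisebreastfeeding', 1), ('#breastfeeding', 1), ('#phillipsavent', 1), ('#naturalbottles', 1)]
```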
Implications of violations

Even though the data analysed in this study did not show direct advertisement of infant formula or porridges, companies clearly used calculated marketing techniques that did not make the marketing apparent. This was done by marketing products used to facilitate the administration of BMS, such as feeding bottles. While this may indicate the effectiveness of Regulation 991 in reducing direct marketing of infant formula, it can be concluded that companies producing BMS are aware of the regulations of the Code but still find ways of advertising their products. The use of BMS company hashtags was a subtle way of directing followers to products that are otherwise restricted from being advertised. This finding resonates with a study of the marketing of BMS in South Africa, which reached the same conclusions [26]. According to both the Code and R991, the penalties that can be imposed for violating the regulations include fines or imprisonment, or both, for companies found in violation [13]. However, to date there has not been any published information to that effect, despite the violations by the manufacturers.

Study limitations

While the posts analysed received almost a million likes and about eighteen thousand comments, reflecting reach, the sample of influencers analysed was relatively small (a total of seven). We might have introduced bias through the influencer selection methods used, as there was no central repository of Instagram influencers available. This is a methodological area in social media research that needs further attention. Restricting the sample to South African influencers does not capture non-South African influencers whom South Africans may be following, meaning that this study likely underestimates how BMS companies may be using influencers in the South African context. On Instagram, the influencer who makes a post is at liberty to delete it afterwards, so some posts may have been deleted after engagement with the public and potential influence had already occurred. This was the case with the one video that was analysed as part of this study; we had, however, downloaded it and its subsequent comments before it was deleted. Besides making the monitoring of regulation violations more challenging, this feature of Instagram may also affect the replicability of this type of study. The use of Instagram rather than Facebook may be another limitation of this study, as Facebook has more users than Instagram. We also acknowledge that this study did not directly engage with followers and relied on research from other contexts to infer the possible impact of marketing. As in other contexts, we need studies that can quantify the degree to which exposure to social media channels influences consumer behaviour.

Conclusions

The tendency for followers to agree with the recommendations of the influencers, whether for breastfeeding or BMS, aligns with literature suggesting that direct marketing by social media influences followers, though our study did not confirm this independently. Observation of online interactions around the posts also reinforces the conceptual framework in Fig. 1, suggesting that when there is limited enforcement of regulations and guidelines, as is the case with South Africa's R991, public marketing of BMS may result in the creation of attitudes and social norms that favour BMS [47].
The absence of direct marketing of commercial infant formula by the influencers selected for this study suggests that manufacturers are aware of the regulation against marketing of BMS, but they still find ways of marketing other products that are linked to BMS and that undermine breastfeeding. This study identified Instagram as a channel of influence for infant feeding in South Africa. BMS marketing on social media is cause for concern and requires closer monitoring and regulation, particularly as regards violations of the Code and R991. There should be a section in the national legislation that deals specifically with the marketing of BMS on social media. In addition, BMS manufacturers should take responsibility for their marketing practices on social media platforms, such as Instagram, as part of their responsibility to comply with national legislation. Influencers must also be educated about the regulations governing the marketing of BMS.
The matching polynomials and spectral radii of uniform supertrees

We study matching polynomials of uniform hypergraphs and spectral radii of uniform supertrees. By comparing the matching polynomials of supertrees, we extend Li and Feng's results on grafting operations on graphs to supertrees. Using the methods of grafting operations on supertrees and comparing matching polynomials of supertrees, we determine the first $\lfloor\frac{d}{2}\rfloor+1$ largest spectral radii of $r$-uniform supertrees with size $m$ and diameter $d$. In addition, the first two smallest spectral radii of supertrees with size $m$ are determined.

Introduction

The ordering of graphs by spectral radius was proposed by Collatz and Sinogowitz [7] in 1957. Lovász and Pelikán [21] investigated the spectral radius of trees and determined the first two largest and smallest spectral radii of trees with given order. Brualdi and Solheid [2] proposed the problem of bounding the spectral radius of some class of graphs and characterizing the corresponding extremal graphs. Since then, many authors have studied the spectral radius of trees with given parameters, such as degree, diameter, etc.

A hypergraph $H$ is a pair $(V, E)$, where $E \subseteq \mathcal{P}(V)$ and $\mathcal{P}(V)$ stands for the power set of $V$. The elements of $V = V(H)$ are referred to as vertices and the elements of $E = E(H)$ are called hyperedges or edges. A hypergraph $H$ is $r$-uniform if every edge $e \in E(H)$ contains precisely $r$ vertices. For a vertex $v \in V$, we denote by $E_v$ the set of edges containing $v$. The cardinality $|E_v|$ is the degree of $v$, denoted by $\deg(v)$. A vertex with degree one is called a core vertex, and a vertex with degree larger than one is called an intersection vertex. If any two edges in $H$ share at most one vertex, then $H$ is said to be a linear hypergraph. In this paper we assume that hypergraphs are linear and $r$-uniform. In a hypergraph $H$, two vertices $u$ and $v$ are adjacent if there is an edge $e$ of $H$ such that $\{u, v\} \subseteq e$. A vertex $v$ is said to be incident to an edge $e$ if $v \in e$. A walk of a hypergraph $H$ is defined to be an alternating sequence of vertices and edges $v_1 e_1 v_2 e_2 \cdots v_\ell e_\ell v_{\ell+1}$ satisfying that both $v_i$ and $v_{i+1}$ are contained in $e_i$ for $i = 1, \ldots, \ell$. A hypergraph is connected if every two vertices are joined by a walk, and a supertree is a hypergraph that is both connected and acyclic.

In [16] some transformations on hypergraphs, such as moving edges and edge-releasing, were introduced and the first two spectral radii of supertrees on $n$ vertices were characterized. Yuan et al. [32] further determined the first eight uniform supertrees on $n$ vertices with the largest spectral radii. Xiao et al. [27] characterized the unique uniform supertree with the maximum spectral radius among all uniform supertrees with a given degree sequence. Recently, the first two largest spectral radii of uniform supertrees with given diameter were characterized in [28]. In this paper, we determine the first $\lfloor d/2 \rfloor + 1$ largest spectral radii of supertrees among all $r$-uniform supertrees with size $m$ and diameter $d$, and the first two smallest spectral radii of supertrees with size $m$.

The structure of the remaining part of the paper is as follows. In Section 2, we give some basic definitions and results for tensors and the spectra of hypergraphs. Section 3 extends the theory of the matching polynomial from graphs to supertrees. By comparing the matching polynomials of supertrees, we generalize Li and Feng's results on grafting operations on graphs to supertrees in Section 4.
By using the method of grafting operations on supertrees and comparing matching polynomials of supertrees, we determine the first $\lfloor d/2 \rfloor + 1$ largest spectral radii of supertrees among all $r$-uniform supertrees with size $m$ and diameter $d$ in Section 5. In Section 6, the first two smallest spectral radii of supertrees are determined. We give closing remarks in the last section.

Preliminaries

Let $H = (V, E)$ be an $r$-uniform hypergraph on $n$ vertices. A partial hypergraph $H' = (V', E')$ of $H$ is a hypergraph with $V' \subseteq V$ and $E' \subseteq E$. A proper partial hypergraph $H'$ of $H$ is a partial hypergraph with $H' \neq H$.

Let $G = (V, E)$ be an ordinary graph. For every $r \geq 3$, the $r$th power of $G$, denoted by $G^r$, is an $r$-uniform hypergraph with vertex set $V(G^r) = V \cup (\cup_{e \in E} \{i_{e,1}, \ldots, i_{e,r-2}\})$ and edge set $E(G^r) = \{e \cup \{i_{e,1}, \ldots, i_{e,r-2}\} \mid e \in E\}$. The $r$th power of an ordinary tree is called a hypertree (see [14]). Note that all hypertrees are supertrees by the definition. Let $P_m$ and $S_m$ denote the path and the star with $m$ edges, respectively. The $r$th powers of $P_m$ and $S_m$, denoted by $P_m^r$ and $S_m^r$, are called the loose path and the hyperstar, respectively.

Let $H = (V, E)$ be an $r$-uniform hypergraph. An edge $e$ is called a pendent edge if $e$ contains exactly $r - 1$ core vertices. If $e$ is not a pendent edge, it is called a non-pendent edge. A path $P = (v_0, e_1, v_1, \ldots, v_{p-1}, e_p, v_p)$ of $H$ is called a pendent path (attached at $v_0$) if all of the vertices $v_1, \ldots, v_{p-1}$ are of degree two, and the vertex $v_p$ and all the $r - 2$ vertices in the set $e_i \setminus \{v_{i-1}, v_i\}$ are core vertices in $H$ ($i = 1, \ldots, p$).

For positive integers $r$ and $n$, a real tensor $\mathcal{A} = (a_{i_1 i_2 \cdots i_r})$ of order $r$ and dimension $n$ refers to a multidimensional array (also called a hypermatrix) with entries $a_{i_1 i_2 \cdots i_r} \in \mathbb{R}$ for all $i_1, \ldots, i_r \in [n] = \{1, \ldots, n\}$.

The following product of tensors, defined by Shao [26], is a generalization of the matrix product. Let $\mathcal{A}$ and $\mathcal{B}$ be tensors of dimension $n$, of order $r \geq 2$ and order $k \geq 1$, respectively. Define the product $\mathcal{A}\mathcal{B}$ to be the tensor $\mathcal{C}$ of dimension $n$ and order $(r-1)(k-1)+1$ with entries
$$c_{i \alpha_1 \cdots \alpha_{r-1}} = \sum_{i_2, \ldots, i_r \in [n]} a_{i i_2 \cdots i_r} b_{i_2 \alpha_1} \cdots b_{i_r \alpha_{r-1}}, \qquad (1)$$
where $i \in [n]$ and $\alpha_1, \ldots, \alpha_{r-1} \in [n]^{k-1}$. From the above definition, if $x = (x_1, x_2, \ldots, x_n)^T \in \mathbb{C}^n$ is a complex column vector of dimension $n$, then by (1) $\mathcal{A}x$ is a vector in $\mathbb{C}^n$ whose $i$th component is given by
$$(\mathcal{A}x)_i = \sum_{i_2, \ldots, i_r \in [n]} a_{i i_2 \cdots i_r} x_{i_2} \cdots x_{i_r}.$$

In 2005, Qi [24] and Lim [18] independently introduced the concepts of tensor eigenvalues and the spectra of tensors. Let $\mathcal{A}$ be an order $r$ dimension $n$ tensor and $x = (x_1, x_2, \ldots, x_n)^T \in \mathbb{C}^n$ a column vector of dimension $n$. If there exist a number $\lambda \in \mathbb{C}$ and a nonzero vector $x \in \mathbb{C}^n$ such that $\mathcal{A}x = \lambda x^{[r-1]}$, where $x^{[r-1]} = (x_1^{r-1}, \ldots, x_n^{r-1})^T$, then $\lambda$ is called an eigenvalue of $\mathcal{A}$ and $x$ is called an eigenvector of $\mathcal{A}$ corresponding to the eigenvalue $\lambda$. The spectral radius of $\mathcal{A}$ is the maximum modulus of the eigenvalues of $\mathcal{A}$.

Let $H$ be an $r$-uniform hypergraph on $n$ vertices. The adjacency tensor of $H$ is defined as the order $r$ and dimension $n$ tensor $\mathcal{A}(H) = (a_{i_1 i_2 \cdots i_r})$ whose $(i_1 i_2 \cdots i_r)$-entry is
$$a_{i_1 i_2 \cdots i_r} = \begin{cases} \dfrac{1}{(r-1)!} & \text{if } \{i_1, i_2, \ldots, i_r\} \in E(H), \\ 0 & \text{otherwise.} \end{cases}$$
The spectral radius of a hypergraph $H$ is defined as the spectral radius of its adjacency tensor, denoted by $\rho(H)$. In [10] the weak irreducibility of nonnegative tensors was defined. It was proved that an $r$-uniform hypergraph $H$ is connected if and only if its adjacency tensor $\mathcal{A}(H)$ is weakly irreducible (see [10] and [31]).
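Since the eigenvalue equation $\mathcal{A}x = \lambda x^{[r-1]}$ and the adjacency tensor are given concretely above, the spectral radius of a small connected hypergraph can be approximated numerically. The sketch below is an illustrative power iteration in the style of the Ng-Qi-Zhou method for nonnegative tensors; it is not part of the paper. The shift by the identity (iterating on $\mathcal{A}x + x^{[r-1]}$) is the standard remedy to guarantee convergence for weakly irreducible nonnegative tensors, and the function name and edge-list encoding are our own.

```python
def hypergraph_spectral_radius(edges, n, r, tol=1e-10, max_iter=100_000):
    """Approximate rho(H) for a connected r-uniform hypergraph on vertices
    0..n-1. Uses (A x)_i = sum over edges e containing i of the product of
    x_v over the other vertices v of e, which follows from the 1/(r-1)!
    normalisation of the adjacency tensor above."""
    x = [1.0] * n
    lo, hi = 0.0, float("inf")
    for _ in range(max_iter):
        # y = (A + I) x, i.e. (A x)_i + x_i^(r-1); the shift ensures convergence.
        y = [xi ** (r - 1) for xi in x]
        for e in edges:
            for i in e:
                prod = 1.0
                for v in e:
                    if v != i:
                        prod *= x[v]
                y[i] += prod
        ratios = [y[i] / x[i] ** (r - 1) for i in range(n)]
        lo, hi = min(ratios), max(ratios)
        if hi - lo < tol:
            break
        z = [yi ** (1.0 / (r - 1)) for yi in y]
        m = max(z)
        x = [zi / m for zi in z]
    return 0.5 * (lo + hi) - 1.0  # subtract the shift to recover rho(A)

# The loose path P_2^3 (two 3-edges sharing vertex 2) has rho = 2^(1/3).
print(hypergraph_spectral_radius([(0, 1, 2), (2, 3, 4)], n=5, r=3))  # ~1.2599
```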
Part of the Perron-Frobenius theorem for nonnegative tensors is stated in the following for reference. The following result can be obtained directly from Theorem 2.3 and will often be used in the sequel.

Theorem 2.4. Suppose that $G$ is a uniform hypergraph and $G'$ is a partial hypergraph of $G$. Then $\rho(G') \leq \rho(G)$. Furthermore, if in addition $G$ is connected and $G'$ is a proper partial hypergraph, we have $\rho(G') < \rho(G)$.

An operation of moving edges on hypergraphs was introduced by Li et al. in [16]. Let $H = (V, E)$ be a hypergraph with $u \in V$ and $e_1, \ldots, e_k \in E$ such that $u \notin e_i$ for $i = 1, \ldots, k$. Suppose that $v_i \in e_i$ and write $e_i' = (e_i \setminus \{v_i\}) \cup \{u\}$ for $i = 1, \ldots, k$. Let $H' = (V, E')$ with $E' = (E \setminus \{e_1, \ldots, e_k\}) \cup \{e_1', \ldots, e_k'\}$; then $H'$ is said to be obtained from $H$ by moving the edges $(e_1, \ldots, e_k)$ from $(v_1, \ldots, v_k)$ to $u$.

The following edge-releasing operation on linear hypergraphs was given in [16]. Let $H$ be an $r$-uniform linear hypergraph, $e$ a non-pendent edge of $H$ and $u \in e$. Let $e_1, \ldots, e_k$ be all the edges of $H$ adjacent to $e$ but not containing $u$; the hypergraph obtained from $H$ by moving all these edges from their common vertices with $e$ to $u$ is said to be obtained from $H$ by edge-releasing on $e$ at $u$.

The following result was obtained by Zhou et al. [34]; we will use it in the sequel.

Recently, Zhang et al. [33] obtained the following result. Based on the result above, Clark and Cooper [6] called the polynomial in Theorem 3.1 the matching polynomial of $H$. Set $m(H, 0) = 1$. We redefine the matching polynomial of $H$ as
$$\varphi(H, x) = \sum_{k \geq 0} (-1)^k m(H, k)\, x^{\,n - rk},$$
where $m(H, k)$ denotes the number of $k$-matchings of $H$. For example, the matching polynomial of $N_k$, the empty hypergraph on $k$ vertices, is $\varphi(N_k, x) = x^k$, rather than $1$ by Zhang's definition. The definition here seems more appropriate, as it guarantees that matching polynomials of hypergraphs of the same order have the same degree and that the result in Theorem 3.1 is still valid. Some classical results on the matching polynomial of a graph can be extended to hypergraphs as well. However, the matching polynomial of a hypergraph has its own flavour; e.g., as shown in [6], the roots of the matching polynomial of an $r$-uniform hypergraph with $r > 2$ need not be real.

Theorem 3.2. Let $G$ and $H$ be two $r$-uniform hypergraphs. Then the following statements hold:
(a) $\varphi(G \cup H, x) = \varphi(G, x)\,\varphi(H, x)$;
(b) $\varphi(G, x) = \varphi(G - e, x) - \varphi(G - V(e), x)$ for any edge $e$ of $G$;
(c) $\varphi(G, x) = x\,\varphi(G - u, x) - \sum_{e \in E_u} \varphi(G - V(e), x)$ for any vertex $u$ of $G$;
(d) $\varphi'(G, x) = \sum_{u \in V(G)} \varphi(G - u, x)$.

Proof. (a) From the fact that each $k$-matching in $G \cup H$ consists of an $s$-matching in $G$ combined with a $(k-s)$-matching from $H$ for some $s$, the result follows immediately.

(b) The result follows by comparing the coefficients of the matching polynomials on the two sides of (b).

(c) Repeatedly using (b) of Theorem 3.2 on the edges $e_i$ ($i \in I$) containing $u$, we get
$$\varphi(G, x) = \varphi\bigl(G - \cup_{i \in I} e_i,\, x\bigr) - \sum_{i \in I} \varphi(G - V(e_i), x). \qquad (2)$$
Note that $u$ is an isolated vertex of $G - \cup_{i \in I} e_i$; it follows directly from (2) that (c) holds.

(d) Counting the number of ordered pairs $(M, u)$, where $M$ is a $k$-matching of $G$ and $u$ is a vertex not covered by $M$, we obtain that the number of such ordered pairs is equal to $m(G, k)(n - rk)$, which is just the absolute value of the corresponding coefficient of $\varphi'(G, x)$. On the other hand, if we choose a vertex first, say $u$, then the number of $k$-matchings not covering $u$ is equal to $m(G - u, k)$, so the number of such ordered pairs is equal to $\sum_{u \in V(G)} m(G - u, k)$. The desired result follows.

Proposition 3.3. Let $T$ be an ordinary tree on $n$ vertices and $r$ ($r \geq 3$) a positive integer. Then the matching polynomials of $T$ and its $r$th power $T^r$ satisfy the following relation:
$$\varphi(T^r, x) = x^{\,n' - rn/2}\, \varphi\bigl(T, x^{r/2}\bigr), \quad \text{where } n' = n + (n-1)(r-2).$$

Proof. It is easy to see that $m(T, k) = m(T^r, k)$ for any $k$. Let $n'$ denote the order of $T^r$; then $n' = n + (n-1)(r-2)$. So we have
$$\varphi(T^r, x) = \sum_{k} (-1)^k m(T, k)\, x^{\,n' - rk} = x^{\,n' - rn/2} \sum_{k} (-1)^k m(T, k)\, y^{\,n - 2k} = x^{\,n' - rn/2}\, \varphi(T, y),$$
where a new variable $y = x^{r/2}$ is used in the second and third equalities.

The ordering on forests was introduced by Lovász and Pelikán in [21]. Now we extend the ordering on forests to superforests. Let $T$ and $T'$ be superforests on $n$ vertices. We call $T \preceq T'$ if $\varphi(T, x) \geq \varphi(T', x)$ for all $x \geq \rho(T')$, and $T \prec T'$ if, in addition, $\varphi(T, x) - \varphi(T', x)$ does not vanish at the point $x = \rho(T')$. Note that $T \prec T'$ ($T \preceq T'$, resp.) implies $\rho(T) < \rho(T')$ ($\rho(T) \leq \rho(T')$, resp.).
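As a small worked check of the redefined matching polynomial and the edge recurrence in Theorem 3.2(b), consider the loose path $P_2^r$ (two edges sharing one vertex, so $n = 2r-1$); this example is ours, not the paper's. Deleting one edge $e$ leaves a single edge together with $r-1$ isolated vertices, while deleting $V(e)$ leaves $N_{r-1}$, so
$$\varphi(P_2^r, x) = x^{r-1}(x^r - 1) - x^{r-1} = x^{2r-1} - 2x^{r-1} = x^{r-1}(x^r - 2),$$
whose largest real root is $2^{1/r}$, consistent with the numerical value $2^{1/3} \approx 1.2599$ obtained for $P_2^3$ in the earlier sketch. The same matching numbers can also be generated by brute force; the following sketch, again ours and practical only for small instances, enumerates $k$-matchings directly.

```python
from itertools import combinations

def matching_numbers(edges, n, r):
    """Return [m(H,0), m(H,1), ...]: the numbers of k-matchings (sets of k
    pairwise disjoint edges) of an r-uniform hypergraph H on n vertices."""
    E = [frozenset(e) for e in edges]
    kmax = n // r
    m = [0] * (kmax + 1)
    m[0] = 1  # the empty matching
    for k in range(1, kmax + 1):
        for chosen in combinations(E, k):
            # k edges are pairwise disjoint iff their union has exactly r*k vertices
            if len(set().union(*chosen)) == r * k:
                m[k] += 1
    return m

def matching_polynomial(edges, n, r):
    """Pairs (coefficient, exponent) of phi(H,x) = sum_k (-1)^k m(H,k) x^(n-rk)."""
    return [((-1) ** k * mk, n - r * k) for k, mk in enumerate(matching_numbers(edges, n, r))]

# Hyperstar S_3^3 (three 3-edges through vertex 0, n = 7): phi = x^7 - 3x^4,
# since any two edges meet at the centre and so m(H,2) = 0.
print(matching_polynomial([(0, 1, 2), (0, 3, 4), (0, 5, 6)], n=7, r=3))
# [(1, 7), (-3, 4), (0, 1)]
```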
Grafting transformations on uniform supertrees

Li and Feng [17] investigated how the spectral radius changes when a certain transformation is applied to a graph, and obtained the following result. Let $G(u, v; p, q)$ denote the graph obtained from $G$ by attaching a path of length $p$ at $u$ and a path of length $q$ at $v$. Then $\rho(G(u, v; p, q)) > \rho(G(u, v; p+1, q-1))$ under certain conditions on $p$ and $q$ given in [17]. Since then, the result has been extensively used in spectral perturbation and has proved to be efficient in ordering graphs by spectral radius.

The result above is proved by comparing characteristic polynomials of graphs. The characteristic polynomial of a hypergraph is complicated, and very little is known about it up to now. However, the result of Theorem 3.1 makes it feasible to compare the spectral radii of supertrees by using their matching polynomials. It is known that for any forest, the matching polynomial and the characteristic polynomial coincide. Following a proof similar to that of Lemma 4 in [21], the following result can be obtained. Based on Propositions 3.3 and 4.2, the corresponding result for hypertrees can be easily obtained.

Suppose that $T$ is an $r$-uniform supertree and $v$ is a vertex in $T$. Let $T(v; p, q)$ be obtained by attaching two pendent paths of length $p$ and $q$ at $v$ (see Fig. 1(a)).

Theorem 4.5. If $p \geq q \geq 1$, then $T(v; p+1, q-1) \prec T(v; p, q)$.

Proof. We first consider the case $p \geq q = 1$. Applying (b) of Theorem 3.2 to $T(v; p, 1)$ and the pendent edge attached at $v$, we obtain (3). Similarly, applying (b) of Theorem 3.2 to $T(v; p+1, 0)$ and the pendent edge of the pendent path of length $p+1$ attached at $v$, we obtain (4). By (3) and (4), we deduce the desired comparison of the two matching polynomials. Note that $(T - v) \cup P_{p-1}^r$ is a proper partial hypergraph of $T(v; p-1, 0)$. By Theorems 2.4 and 4.4, the desired result follows.

When $p \geq q \geq 2$, applying (b) of Theorem 3.2 to $T(v; p, q)$ and the pendent edge of the pendent path of length $q$ attached at $v$, we obtain (5); similarly, we obtain (6). By (5) and (6), we deduce the corresponding difference of matching polynomials. Continuing this process, we get (7). Applying Theorem 3.2 once more, we obtain (8) and (9). Substituting (8) and (9) into (7), we obtain the required identity. Note that $(T - v) \cup P_{p-q}^r$ is a proper partial hypergraph of $T(v; p-q, 0)$. Applying Theorems 2.4 and 4.4, we get the desired result.

Suppose that $T$ is an $r$-uniform supertree (with at least two edges) and $u$ and $v$ are two vertices incident with an edge $e$ in $T$. Let $T^{(1)}(u, v; p, q)$ (see Fig. 1(b)) be obtained by attaching two pendent paths of length $p$ and $q$ at $u$ and $v$, respectively.

Theorem 4.6. If $p \geq q \geq 1$, then $T^{(1)}(u, v; p+1, q-1) \prec T^{(1)}(u, v; p, q)$. In particular, $\rho(T^{(1)}(u, v; p+1, q-1)) < \rho(T^{(1)}(u, v; p, q))$.

Proof. Using an argument similar to that in the proof of Theorem 4.5, we obtain (10). Let $H_1$ and $H_2$ be the components of $T \setminus e$ containing the vertices $u$ and $v$ respectively, and let $H$ be the union of the remaining components. We denote by $H'$ the partial hypergraph obtained from $H$ by removing the $r-2$ vertices contained in $e$. We may assume that $E(H) \cup E(H_2)$ is not empty; otherwise $T^{(1)}(u, v; p, q)$ is isomorphic to $H_1(u; p, q+1)$ and the result follows from Theorem 4.5.

When $p = q \geq 1$, applying (b) of Theorem 3.2 to $T(u; 0, 0)$ and the edge $e$, we obtain (11). Similarly, applying (b) of Theorem 3.2 to $(T - v)(u; 1, 0)$ and the pendent edge attached at $u$, we obtain (12). Substituting (11) and (12) into (10), we obtain the desired comparison.

When $p > q \geq 1$, applying (b) of Theorem 3.2 to $T(u; p-q, 0)$ and the edge $e$, we obtain (14). Similarly, applying (b) of Theorem 3.2 to $(T - v)(u; p-q+1, 0)$ and the pendent edge of the pendent path of length $p-q+1$ attached at $u$, we obtain (15). Substituting (14) and (15) into (10) yields (16). We consider two cases depending on whether or not $E(H_1) \cup E(H_2)$ is empty. Without loss of generality, we assume that $E(H_1) \neq \emptyset$. It is easily seen that $(H_1 - u) \cup P_{p-q-1}^r$ is a proper partial hypergraph of $H_1(u; p-q-1, 0)$. By Theorems 2.4 and 4.4 and (16), we prove the desired result.

Proof. We proceed by induction on $s$. For the case $s = 1$, the assertion holds by Theorem 4.6. Let $T_u$ and $T_v$ denote the components of $T \setminus e_s$ containing $u$ and $v$, respectively.
Using an argument similar to that in the proof of Theorem 4.6, we obtain (17). Applying (c) of Theorem 3.2 to $T(u; p-q, 0)$ and the edges incident to $v$ in $T_v$, we obtain (18). Substituting (18) into (17) and applying the induction hypothesis, the desired result follows. Then $T'$ is a uniform supertree and $T' \prec T$.

As an application of Theorems 4.5 and 4.6, the minimal supertree can be characterized as follows. Note that the upper bound and the extremal supertree have been obtained in [16]; they are listed here for completeness. Then
$$\rho(P_m^r) \leq \rho(T) \leq \rho(S_m^r),$$
where the left-hand side equality holds if and only if $T \cong P_m^r$ with $v$ as its end vertex, whereas the right-hand side equality holds if and only if $T \cong S_m^r$ with $v$ as its center.

Extremal supertrees with given diameter

Let $S(m, d, r)$ be the set of $r$-uniform supertrees with $m$ edges and diameter $d$. Xiao et al. [28] determined the first two largest spectral radii of supertrees in $S(m, d, r)$. In this section, we determine the first $\lfloor d/2 \rfloor + 1$ largest spectral radii of supertrees in $S(m, d, r)$ by using edge-grafting operations and comparing matching polynomials of supertrees.

Let $H$ be an $r$-uniform hypergraph and $u$ a vertex of $H$. Let $P_d^r = (v_1, e_1, v_2, e_2, \ldots, e_d, v_{d+1})$ be a loose path of length $d$. Denote by $P_d^r(v_i, u)H$ and $P_d^r(e_j, u)H$ the hypergraphs obtained by identifying the vertex $u$ of $H$ with the vertex $v_i$ of $P_d^r$, and with a core vertex of $P_d^r$ in $e_j$, respectively (see Fig. 4). As an immediate application of Theorems 4.5 and 4.6, we have the following result.

Theorem 5.1. Let $T$ be an $r$-uniform supertree, $r \geq 3$. Then the following assertions hold.

Proof. Note that $P_d^r(v_i, u)T$ and $P_d^r(e_i, u)T$ can be depicted as $T(u; i-1, d-i+1)$ and $(T')^{(1)}(v_i, v_{i+1}; i-1, d-i)$ respectively, where $T'$ denotes the supertree consisting of $T$ and $e_i$. The first two assertions follow directly from Theorems 4.5 and 4.6, respectively. Let $H_1$ and $H_2$ denote the two supertrees obtained from $P_d^r(e_i, u)T$ by moving all edges in $E_u \cap E(T)$ from $u$ to $v_i$, and by moving the edge $e_{i-1}$ from $v_i$ to $u$, respectively. By Lemma 2.5, we have $\rho(P_d^r(e_i, u)T) < \max\{\rho(H_1), \rho(H_2)\}$. However, $H_1 \cong H_2 \cong P_d^r(v_i, u)T$, and assertion (c) holds. Using a similar approach, we can show that the last assertion holds.

In fact, the last two assertions in Theorem 5.1 can be generalized as follows.

Theorem 5.2. Let $T$ be an $r$-uniform supertree and $P_d^r$ a loose path of length $d$, with $d \geq 3$ and $r \geq 3$. Then for any $2 \leq i \leq d$, the following comparison holds.

Proof. Suppose that $e_1, e_2, \ldots, e_s$ are all the edges incident with the vertex $u$ in $T$. Applying (c) of Theorem 3.2 to $P_d^r(e_{\lfloor d/2 \rfloor}, u)T$ and the edges $e_1, e_2, \ldots, e_s$, we obtain one expansion; similarly for $P_d^r(e_i, u)T$. It is easy to see that $P_{i-2}^r \cup N_{r-1} \cup P_{d-i}^r \prec P_{i-1}^r \cup P_{d-i}^r$, as $P_{i-2}^r \cup N_{r-1}$ is a proper partial hypergraph of $P_{i-1}^r$.

For convenience, we adopt the notation from [11]. The following results were obtained in [11], and we shall extend them from trees to supertrees in this section. Denote by $H_1$ and $H_2$ the supertrees obtained from $T'$ by moving all edges in $E_{w_1} \cap E(T_1)$ from $w_1$ to $v_i$ and to $v_{i+1}$, respectively. By Theorem 5.1, $\rho(T') < \min\{\rho(H_1), \rho(H_2)\}$. The maximality of $\rho(T')$ implies that $H_1, H_2 \in \{T_{(m,d)}^r\} \cup \{T\}$ and one of them is $T$. So $\rho(T') < \rho(T)$. The proof is finished. By Theorems 2.6 and 5.4 and Lemma 5.6, we have the following results.

The second minimal supertree

Let $P_{m-1}^r = (v_1, e_1, v_2, e_2, \ldots, e_{m-1}, v_m)$ be a loose path of length $m - 1$.
Denote by $D_{m,r}$ the supertree obtained from $P_{m-1}^r$ by attaching a pendent edge at a core vertex of $e_2$ (see Fig. 6(a)). Let $\widetilde{P}_m^r$ be the supertree obtained from $P_{m-1}^r$ by attaching a pendent edge at the vertex $v_2$ (see Fig. 6(b)). We use $S(m, r)$ to denote the set of $r$-uniform supertrees with $m$ edges.

Proof. Choose a supertree $T_0$ from $S(m, r) \setminus \{P_m^r\}$ such that $T_0 \preceq T$ for any $T \in S(m, r) \setminus \{P_m^r\}$. Then $T_0$ either has a vertex of degree more than two or has an edge with at least three intersection vertices. We consider the two cases as follows.

Case 1. There exists a vertex of degree greater than two, say $v \in V(T_0)$ with $\deg(v) \geq 3$. Thus $T_0$ can be described as a supertree in the form of some supertrees, say $T_1, \ldots, T_s$ ($s \geq 3$), attached at a single vertex $v$; denote $T_0$ by $T_1(v)T_2(v)\cdots(v)T_s$ (see Fig. 7(a)). Assume that $T_i$ has $m_i$ edges for $i = 1, \ldots, s$, and let $m' = m - (m_1 + m_2)$. By Theorems 4.10 and 4.5, we have
$$T_1(v)T_2(v)\cdots(v)T_s \succeq P_{m_1}^r(v)P_{m_2}^r(v)\cdots(v)P_{m_s}^r \succeq P_{m_1}^r(v)P_{m_2}^r(v)P_{m'}^r \succeq P_1^r(v)P_1^r(v)P_{m-2}^r,$$
where all the loose paths $P_{m'}^r$, $P_{m-2}^r$ and $P_{m_j}^r$ ($j = 1, \ldots, s$) have $v$ as an end vertex. By the minimality of $T_0$, $T_0 = P_1^r(v)P_1^r(v)P_{m-2}^r = \widetilde{P}_m^r$.

Case 2. There exists an edge $e$ with at least three intersection vertices, say $v_1, \ldots, v_s$ ($s \geq 3$), and the other vertices in $e$ (if there are any) are core vertices (see Fig. 7(b)). Then $T_0$ may be viewed as obtained by attaching supertrees, say $T_1, \ldots, T_s$, at $v_1, \ldots, v_s$ respectively. By Theorems 4.6 and 4.10 and the minimality of $T_0$, the following conclusions hold; in particular, two of $T_1, T_2, T_3$ are of length one. Therefore, $T_0 = D_{m,r}$.

Combining the two cases above, we have shown that $T_0 \in \{\widetilde{P}_m^r, D_{m,r}\}$. Further, by (c) of Theorem 5.1, we have $D_{m,r} \prec \widetilde{P}_m^r$. So $T_0 = D_{m,r}$. Thus we conclude that $D_{m,r} \preceq T$ for any $T \in S(m, r) \setminus \{P_m^r\}$.

Theorem 6.2. The first two smallest spectral radii of supertrees with $m$ ($m \geq 4$) edges are attained by $P_m^r$ and $D_{m,r}$, respectively.

Closing remarks

We conclude the paper with some remarks on the matching polynomial of a supertree. The work in this paper is based on the relation between the roots of the matching polynomial of a supertree and its spectrum developed in [33]. Using the recurrence relations of matching polynomials of supertrees, the effect of grafting edges in various situations on the spectrum of a supertree can be explained. These methods are used here, for the first time, to compare spectral radii of supertrees, and they are shown to be efficient in dealing with extremal supertrees with respect to their spectral radii, such as in finding the first two smallest supertrees and the first several largest supertrees with given diameter. For the corresponding problem on a general hypergraph, the characteristic polynomial of the adjacency tensor might be used to compare spectral radii of hypergraphs. However, the degree of the characteristic polynomial of a hypergraph is very high relative to its order, and very little is known about it up to now. Finally, we pose the following problem.

Problem 7.1. What kind of polynomial should be associated with a hypergraph satisfying the following conditions: (1) the roots of the associated polynomial consist of the eigenvalues, especially the spectral radius, of the hypergraph; (2) the coefficients of the polynomial reflect certain structural information of the hypergraph, such as matchings, cyclic structure or something more complicated.
Ethnicity and attitudes to deceased kidney donation: a survey in Barbados and comparison with Black Caribbean people in the United Kingdom

Background

Black minority ethnic groups in the UK have relatively low rates of deceased donation and report a higher prevalence of beliefs that are regarded as barriers to donation. However, there is little data from migrants' countries of origin. This paper examines community attitudes to deceased kidney donation in Barbados and compares the findings with a survey conducted in a disadvantaged multi-ethnic area of south London.

Methods

Questionnaires were administered at four public health centres in Barbados and at three private general practices. Adjusted odds ratios were calculated to compare attitudinal responses with a prior survey of 328 Caribbean and 808 White respondents in south London.

Results

Questionnaires were completed by 327 respondents in Barbados (93% response); 42% men and 58% women, with a mean age of 40.4 years (SD 12.6). The main religious groups were Anglican (29%) and Pentecostal (24%). Educational levels ranged from 18% not completing 5th form to 12% with university education. Attitudes to the notion of organ donation were favourable, with 73% willing to donate their kidneys after their death and only 5% definitely against this. Most preferred an opt-in system of donation. Responses to nine attitudinal questions identified 18% as having no concerns and 9% as having 4 or more concerns. The highest level of concern (43%) was for lack of confidence that medical teams would try as hard to save the life of a person who has agreed to donate organs. There was no significant association between age, gender, education or religion and attitudinal barriers, but greater knowledge of donation had some positive effect on attitudes. Comparison of attitudes to donation in south London and Barbados (adjusting for gender, age, level of education and employment status) indicated that a significantly higher proportion of the south London Caribbean respondents identified attitudinal barriers to donation.

Conclusions

Community attitudes in Barbados are favourable to deceased donation based on a system of informed consent. Comparison with south London data supports the hypothesis that the relatively high prevalence of negative attitudes to deceased donation among disadvantaged ethnic minorities in high income countries may reflect feelings of marginalisation and lack of belonging.

Background

Rates of end stage renal failure are increasing worldwide, leading to greater needs for dialysis and kidney transplantation [1]. Transplantation is cost effective compared with dialysis in the treatment of end stage renal disease and achieves much improved outcomes for patients' quality of life [2]. Currently, living kidney donation accounts for about one-third of kidney transplants performed in the UK, with deceased donation forming the main source of kidney transplantation [2]. Black and South Asian populations (from the Indian subcontinent) living in the UK are three to four times as likely to need a kidney transplant compared with the White population. This is associated with higher rates of diabetes and hypertension, which are both major causes of end stage renal failure [1,2]. However, Black populations in the UK and US have relatively low donation rates [3].
This high need but low supply among minority ethnic populations presents problems in achieving an optimal match of blood group and tissue type where these are less common among the majority population, resulting in patients from minority ethnic groups spending an increased time on the transplantation waiting list [3,4]. Research investigating the lower donation rates among ethnic minorities indicates that although minority ethnic groups generally support the 'gift' of life, they are often less well informed about organ donation and more commonly hold beliefs that are identified as barriers to deceased donation [5][6][7][8].

The current research arose through the Centre for Caribbean Health, which is run jointly by King's College London and the University of the West Indies (UWI). Through this collaboration we identified a lack of data on attitudes to donation in ethnic minorities' home countries, with this study in Barbados being the first such study in the Caribbean. Barbados is a middle-income country (GDP per capita $20,200) in the Eastern Caribbean. The 2000 census population (adjusted figure) is 268,792, of which 93.0% are of African origin, 3.2% white and 2.6% mixed (Barbados Statistical Service, personal communication). Life expectancy at birth in Barbados was 77.0 years in 2007 [9]. However, there is a relatively high incidence of diabetes, with an estimated prevalence of 17.5% among people aged 40 years and over, leading to high rates of end stage renal failure [10]. Eight public sector polyclinics strategically located around the island provide free comprehensive primary care. In 2003, 89 private general practitioners provided care on a fee-for-service basis, with many people receiving care from both sectors. Provision of dialysis is limited; there is a small unit at the main public hospital and a few dialysis machines in the private sector that cater mainly for tourists. The public dialysis service in Barbados functions at a cost level well below that of high income countries, but is nevertheless relatively expensive in terms of both capital and running costs in relation to health service budgets [11]. Transplantation surgery is occasionally undertaken in Barbados and a few other countries in the English-speaking Caribbean, although Trinidad and Tobago is currently the only country in the Caribbean with a structured system of organ transplantation. Transplantation began in 2006 in Trinidad and Tobago and so far has only involved living donors. However, there is interest in establishing a system of deceased donation, which would substantially increase the donor pool [12].

This study examines two questions: What are the knowledge and attitudes of persons in Barbados to deceased donation? How do these attitudes and knowledge compare with those of Black Caribbean counterparts in the UK, as recorded in a recent survey in south London [5]?

Design

The Barbados survey was based on a convenience sample of primary care attendees. It involved four of the eight publicly funded health clinics in different parts of the island, covering both urban and rural areas, and three private group general practices in different locations. A quota sampling approach was employed to ensure a mix of gender and age (range 18-65 years). The survey was conducted at both morning and late afternoon/evening sessions. Comparative data for Black Caribbean and White respondents in London were based on a similar survey we conducted in 2005 among attendees at four large general practices in south London [5].
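Purely as an illustration of the quota sampling approach described above, the sketch below tracks recruitment against per-cell targets and decides whether a further respondent of a given gender and age band may be recruited. The cell structure and target numbers are hypothetical; the study reports only that quotas covered gender and age 18-65.

```python
from collections import Counter

# Hypothetical quota cells and targets (gender x age band).
TARGETS = {("M", "18-34"): 25, ("M", "35-49"): 25, ("M", "50-65"): 25,
           ("F", "18-34"): 25, ("F", "35-49"): 25, ("F", "50-65"): 25}

class QuotaSampler:
    """Track recruitment counts per quota cell against fixed targets."""

    def __init__(self, targets):
        self.targets = targets
        self.counts = Counter()

    def can_recruit(self, gender, age_band):
        cell = (gender, age_band)
        return cell in self.targets and self.counts[cell] < self.targets[cell]

    def record(self, gender, age_band):
        self.counts[(gender, age_band)] += 1

sampler = QuotaSampler(TARGETS)
if sampler.can_recruit("F", "35-49"):
    sampler.record("F", "35-49")
print(sampler.counts)
```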
Questionnaire

The Barbados questionnaire comprised 2 knowledge questions, 10 attitudinal questions and 9 socio-demographic questions (Additional file 1). The questionnaire was piloted among 14 attendees at health clinics to test its acceptability and comprehensibility in this setting, and as a result a few modifications were made. A new question with three options was also added to identify respondents' preference in terms of an organised system of donation. Two changes were made from the initial south London questionnaire because it was assumed that the Barbados population would be less familiar with organ donation and transplantation. Firstly, the questionnaire was interviewer-administered, as this allowed the fieldworkers to explain about organ donation and transplantation. Secondly, the original 4-point scale for attitudinal statements (strongly agree/agree/disagree/strongly disagree) was simplified to agree/disagree. A further change was that a 'not sure' response category was included for attitudinal questions, as we found that 'not sure' or 'don't know' were occasionally written on the south London questionnaires.

The Barbados survey was undertaken during June-August 2008. Recruitment at each health facility involved approaching patients in the waiting area, explaining the purpose of the survey and giving a one-page information sheet to potential respondents. Those agreeing to participate were asked to sign a consent form following a full explanation. Community nurses administered the survey in health facilities where they were not employed and wore casual clothes; this approach aimed to ensure that respondents did not give answers that they thought might be expected by health professionals.

Analysis

Questionnaire data were first entered into Epi-Info and validity checks were undertaken. The data were then transferred to Stata (version 9.2) for analysis. Chi-square and logistic regression analyses were used to test the effects of age, education, employment and religion on attitudes to donation. To compare responses in the Barbados and south London surveys we calculated odds ratios for nine attitudinal questions asked in both surveys, adjusting for gender, and for age, level of education and employment status (each in 3 categories). For logistic regression, the 'event' was always the response that indicated an objection (whether 'agree' or 'disagree'). 'Not sure' in the Barbados questionnaire was regarded as indicating no objection (i.e. no event). When respondents explained the reasons for their response, this was usually noted on the questionnaire. Ethical approval for the Barbados study was granted by the University of the West Indies-Cave Hill/Barbados Ministry of Health Institutional Review Board (February 2008).
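The original analysis was run in Stata; purely as an illustration of the adjusted odds ratio calculation described above, the sketch below fits the equivalent logistic model in Python with statsmodels. The data frame, its column names and the values are synthetic stand-ins, not the study data; the point is the model structure: a binary 'objection' event regressed on survey site with gender, age group, education and employment as covariates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "objection": rng.binomial(1, 0.3, n),      # 1 = response indicating an objection
    "site": rng.choice(["Barbados", "London"], n),
    "female": rng.binomial(1, 0.58, n),
    "age_grp": rng.choice(["18-34", "35-49", "50-65"], n),
    "educ": rng.choice(["low", "mid", "high"], n),
    "employed": rng.binomial(1, 0.78, n),
})

model = smf.logit(
    "objection ~ C(site) + female + C(age_grp) + C(educ) + employed", data=df
).fit(disp=False)
print(np.exp(model.params))      # adjusted odds ratios
print(np.exp(model.conf_int()))  # 95% confidence intervals
```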
Results

A total of 327 questionnaires (93% response) were completed in Barbados: 246 (75%) at health clinics and 81 (25%) at general practices. Questionnaires each took between 5 and 15 minutes to complete. The main reasons for non-participation were that people were expecting to be called for their appointment, were occupied with children, did not feel well or did not want to be bothered. Twelve questionnaires (3.7%) had some missing data.

Characteristics of respondents

Respondents comprised 42% men and 58% women, with a mean age of 40.4 years (SD 12.6) (Table 1). Altogether 96% were Black, with others self-identifying as Indian or mixed race. The main religious groups were Anglican (29%) and Pentecostal (24%). The highest educational level completed revealed a considerable range; for example, 18% had not completed 5th form, whereas 12% had a university education. Altogether 78% of respondents were in either full- or part-time employment, 3% were students and 19% were retired, not working or doing occasional work. Analysis of occupations indicated that a small number were in professional posts such as teacher, pharmacist and accountant. The majority were engaged in routine non-manual occupations, such as clerical officer and receptionist, or in skilled manual occupations such as taxi driver, electrician and chef. A small number were in semi-skilled occupations, such as postman, fisherman and cleaner. Respondents in professional occupations were recruited through general practices and those in semi-skilled occupations through public health clinics. Otherwise there was a considerable mix of occupations among attendees at public health clinics and private general practices. Compared with population estimates for Barbados (2008), men were slightly under-represented in the sample (42% compared with 48% in the population). The age and ethnic distributions were almost identical to the population figures, with the exception that the 3% white population was not represented in the sample.

Knowledge and general attitudes to donation

Altogether 30% of respondents had known someone (mainly a relative) with a severe kidney problem. Despite the lack of a transplantation service in Barbados, 20% of respondents were aware that, immediately after death, it is possible for one's kidneys to be transplanted into someone else, with this knowledge mainly gained from health programmes on North American television channels. Reported attitudes to the notion of organ donation were very favourable: 73% responded that they would be willing to donate their kidneys after their death and a further 22% were 'not sure', with only 5% definitely against this (Table 2). There were no significant associations between willingness to donate and gender, age, education, religion or occupational status (Table 2). Logistic regression analysis (including age, gender, education and occupation) identified only occupation (working/non-working) as significantly predictive of willingness to donate. The odds ratio (adjusting for age, gender and education) for those not working compared with those working was 0.51, p = 0.029 (CI 0.28-0.93). A further question regarding possible payment to their relative indicated that this was not an important influence on attitudes to donation.

Specific attitudes to donation

Responses to nine questions concerning specific attitudes to donation were generally positive: 18% had no concerns, 72% had 1 to 3 concerns, and 9% had 4 or more concerns. The highest level of concern and the most 'not sure' responses were given for the statement 'I am not confident that medical teams would try as hard to save the life of a person who has agreed to donate organs' (Table 3). This statement also appeared to present some difficulty and required the most thought. Questioning revealed that this was often because respondents were concerned about individual variability in trustworthiness, because 'we are all human' and 'different doctors may act differently'. Neither religious beliefs concerning the body nor a desire to keep the body intact with no parts removed appeared to be major barriers. These beliefs were also not significantly associated with membership of a particular religious group.
Most respondents did not mind who received their kidney. However, in considering this question a few respondents described moral criteria, stating that they would not be happy if their kidney was given to a person who had 'done bad things' or who had been in prison. The attitudes of respondents who were previously aware of deceased donation and transplantation were similar to those of respondents who had not previously known about this. The only exception was that 46% of those who already knew about organ donation and transplantation were not confident that medical teams would try as hard to save the life of a person who had agreed to donate their organs, compared with 69% of those who had not known about donation (difference -23%, 95% CI -37% to -8%, p = 0.0034). Those who had previous knowledge of deceased donation were also rather less likely to be concerned about their body being cut after their death compared with those who had not previously known (90% v 82%, difference 8%, CI -3% to 18%). There was no significant association between age, gender, education or religion and individual attitudinal barriers. However, 76% of respondents with fewer than 4 concerns were willing to donate compared with 42% of those with 4 or more concerns.
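To illustrate how a difference in proportions of the kind reported above (46% vs 69%, with a 95% CI and p-value) can be computed, the sketch below implements a standard two-proportion z-test. The counts in the example are hypothetical stand-ins chosen only to have the same shape as the reported comparison; they are not the study data.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_test(x1, n1, x2, n2):
    """Difference p1 - p2 with an (unpooled) 95% CI and a two-sided
    z-test p-value using the pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    z975 = NormalDist().inv_cdf(0.975)
    ci = (diff - z975 * se, diff + z975 * se)
    p_pool = (x1 + x2) / (n1 + n2)
    se0 = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    p_value = 2 * (1 - NormalDist().cdf(abs(diff / se0)))
    return diff, ci, p_value

# Hypothetical counts: 30/65 ~ 46% in the aware group vs 180/262 ~ 69%
# in the unaware group, mirroring the shape of the comparison above.
print(two_proportion_test(30, 65, 180, 262))
```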
However, as responses were analysed as dichotomous agree/disagree, this would reduce any effects of the larger number of scaled response categories in south London. Responses recorded as 'not sure' were more common in Barbados, probably influenced by the inclusion of this category rather than merely noting uncertainty on the questionnaire as in south London. 'Not sure' in each case was regarded as indicating no objection, thus possibly increasing favourable responses. However, grouping 'not sure' with 'not willing' to donate did not produce significant differences in the responses presented in Table 2. For specific attitudes to donation the percentage of 'not sure' responses was less than 7%, with the only exception of 14% giving a 'not sure' response to a statement regarding confidence that medical teams would try as hard to save the life of a person who has agreed to donate organs (Table 3). Respondents of Caribbean origin in south London were predominantly from Jamaica and may therefore have differed in their cultural attitudes compared with migrants from Barbados. It would be interesting to investigate this and to undertake a similar survey of attitudes to organ donation among primary care attendees in Jamaica. However, we hypothesise that the more negative attitudes identified among ethnic minorities in south London may arise not so much from cultural differences as from feelings of marginalisation that are associated with membership of a minority ethnic group, particularly when combined with low socio-economic status. This hypothesis is supported by a qualitative study of the factors associated with low rates of participation by the African Caribbean population in local community networks in a multi-ethnic area of south London [14]. A key factor accounting for the low participation among African Caribbean respondents was identified as a lack of trust and confidence in formal organisations, with their social relationships and participation therefore mainly occurring within their own community networks [14]. Our previous in-depth study with people of Caribbean origin (predominantly from Jamaica) living in relatively disadvantaged multi-ethnic areas of south London also indicated that these respondents did not regard themselves as fully accepted and integrated into British society. For example, they described experiences of discrimination and disadvantage in relation to the educational system, job opportunities and the police, and referred to the existence of several 'societies' rather than seeing themselves as part of a single unified society [15]. These feelings of marginalisation and lack of belonging to British society were associated with an idealised desire to return home to the Caribbean for burial with their body 'whole', or at least to be buried in this way following a Caribbean-style funeral in the UK. Funerals and burial thus appeared to have a symbolic value in reconciling the experience of a divided identity in life [15]. In contrast to the situation of disadvantaged minority ethnic groups in the UK, British and Irish emigrants to south-eastern Spain have been shown to express more positive attitudes to organ donation compared with the local community [16].
This migrant group is, however, described as relatively affluent and 'perfectly integrated into the social structure', rather than experiencing the feelings of marginalisation that characterise disadvantaged minority ethnic groups. Further evidence to support the notion that social position and feelings of marginalisation shape attitudes and participation in civic society is provided at a societal level by evidence of a relationship among high-income countries between increased income inequality and a decline in public trust [16]. Wilkinson and Pickett hypothesise that it is income inequality that affects trust, and that in a society where there is a high level of trust this leads to people feeling secure, worrying less and viewing others as co-operative rather than competitive [17]. Socio-economically disadvantaged members of minority ethnic groups living in high-income countries with considerable income inequality may thus experience a reduced sense of trust in the health system and less desire to act altruistically for the public good, whereas citizens of more equal countries have a greater public trust that is likely to be reflected in a greater willingness to give for the public good. This research, like the south London study, was based on attendees at primary care facilities. It is possible that attendance at primary care sensitises people to health issues and may also over-represent people with chronic health conditions. Despite these limitations, primary care is often regarded as a suitable and low-cost sampling frame. This reflects the high proportion of the population who attend over a year, with attendance occurring for acute and preventive/administrative reasons as well as for chronic health problems. The selection of respondents in both Barbados and the UK from primary care centres also means that any bias would be present in both samples, thus not invalidating the comparison.
Conclusions
The study indicates that Barbados has a positive social environment for setting up or participating in a wider Caribbean transplantation programme based on the 'gift' of life, with an overwhelming preference for an opt-in system such as is practised in the UK. The data also support the hypothesis that the relatively negative attitudes to organ donation reported by disadvantaged minority ethnic groups in the UK, US and other high-income countries are associated with feelings of a lack of integration and belonging. Champions from minority groups may thus be important in increasing trust and may have a positive influence on attitudes to donation among migrant communities.
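As an aside for readers checking the group comparisons reported above, a difference between two proportions and its Wald 95% confidence interval can be computed as in the following Python sketch; the group sizes are hypothetical, since only the percentages are quoted above.

```python
import numpy as np

def prop_diff_ci(p1, n1, p2, n2, z=1.96):
    """Difference in proportions with a Wald (normal-approximation) 95% CI."""
    diff = p1 - p2
    se = np.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

# E.g., 46% of those aware of donation vs. 69% of those unaware
# (the group sizes 65 and 262 here are assumptions for illustration):
diff, lo, hi = prop_diff_ci(0.46, 65, 0.69, 262)
print(f"difference = {diff:.0%} (95% CI {lo:.0%} to {hi:.0%})")
```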
2014-10-01T00:00:00.000Z
2010-05-21T00:00:00.000
{ "year": 2010, "sha1": "8aa396f50ccf1a0463751f5aa785f99deec93bdc", "oa_license": "CCBY", "oa_url": "https://bmcpublichealth.biomedcentral.com/track/pdf/10.1186/1471-2458-10-266", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "32e6c26163cc03f8da9ebf3b2e8b02f38b3cadb6", "s2fieldsofstudy": [ "Medicine", "Sociology" ], "extfieldsofstudy": [ "Medicine" ] }
8652820
pes2o/s2orc
v3-fos-license
Circuits for Presaccadic Visual Remapping
Saccadic eye movements rapidly displace the image of the world that is projected onto the retinas. In anticipation of each saccade, many neurons in the visual system shift their receptive fields. This presaccadic change in visual sensitivity, known as remapping, was first documented in the parietal cortex and has been studied in many other brain regions. Remapping requires information about upcoming saccades via corollary discharge. Analyses of neurons in a corollary discharge pathway that targets the frontal eye field (FEF) suggest that remapping may be assembled in the FEF's local microcircuitry. Complementary data from reversible inactivation, neural recording, and modeling studies provide evidence that remapping contributes to transsaccadic continuity of action and perception. Multiple forms of remapping have been reported in the FEF and other brain areas, however, and questions remain about reasons for these differences. In this review of recent progress, we identify three hypotheses that may help to guide further investigations into the structure and function of circuits for remapping.
WE PERCEIVE THE ENVIRONMENT as continuous, but neurons in the brain take samples that are constrained in space and time. Specific ranges of stimuli drive a neuron to produce action potentials. From the 1960s through the 1970s, it was essentially dogma that the stimulus-spike relationship of a sensory neuron revealed its "receptive field," the portion of the world to which a sensory neuron responds. Response gain could be modulated by internal factors such as attention or external factors such as contrast, but the structure of receptive fields seemed immutable and their function, to report what is out there, seemed passive. In the case of the visual system, neurons were assumed to have static receptive fields relative to the fovea. Evidence for exceptions to this rule emerged in the 1980s, and the dogma was overturned in the 1990s, when behaving animal preparations allowed for studies in which eye movements were incorporated as factors in experimental design. It became clear that around the time of saccades the spatial structure of receptive fields can change. Here we summarize the advances in our understanding of labile receptive field location with a focus on circuit-level studies from the past 20 years. Progress has been made in elucidating the mechanisms of receptive field remapping both at the macrocircuit level (pathways between brain areas) and the microcircuit level (processing within brain areas). Recent physiological and modeling experiments have leveraged this circuit information to explore the visuomotor and perceptual functions of remapping. Taken together, a new picture of the genesis and role of presaccadic remapping emerges, yielding testable hypotheses for future work.
Presaccadic Visual Remapping
The first hint that visual receptive fields can be spatially dynamic was reported by Mays and Sparks (1980) in a study of neurons located in the intermediate layers of the macaque superior colliculus (SC). A subset of neurons, termed "quasivisual cells," had strong, sustained visual responses and modest motor responses for saccades made to single targets. In a task requiring sequential saccades to two flashed targets, however, the neurons fired for the second stimulus even while it was outside of the receptive field. In the mid-1980s, when Bruce, Goldberg, and colleagues (Bruce et al.
1985; Bruce and Goldberg 1985) characterized the frontal eye field (FEF), they noticed a similar effect: some FEF neurons fired for visual stimuli used as targets for second saccades in a sequence, even though those stimuli were never in the receptive field (Goldberg and Bruce 1990). The effect seemed purely visual, as it occurred for visually responsive neurons even if they lacked saccade-related activity. Duhamel et al. (1992) made sense of the effect with a landmark study of neurons in the lateral intraparietal area (LIP), which is reciprocally connected with the FEF anatomically and functionally (Blatt et al. 1990; Chafee and Goldman-Rakic 1998, 2000; Stanton et al. 1995). They demonstrated that LIP neurons predictively report visual stimuli that will be in their receptive field after a saccade is completed. The visual sensitivity of the neurons shifts from the classical receptive field to what became known as the "future field." Duhamel et al. (1992) postulated that the effect could be related to the percept of visual stability across saccades. Since that work, presaccadic remapping has been found in a number of visual and oculomotor areas (V1, V2, V3, and V3A: Nakamura and Colby 2002; V4: Neupane et al. 2016a, 2016b; SC: Churan et al. 2012; Walker et al. 1995; FEF: Mayo et al. 2016; Sommer and Wurtz 2006; Umeno and Goldberg 1997, 2001), although not all of them (MT: Hartmann et al. 2011; Inaba and Kawano 2014; Ong and Bisley 2011; Yao et al. 2016). Some neurons in V1, LGN, the SC, area MST, and area VIP exhibit a transient, presaccadic decrease in activity that may be related to the perceptual effect of saccadic suppression (reviewed by Ibbotson and Krekelberg 2011), but this phenomenon seems to occur in the absence of presaccadic remapping to a location outside the classical receptive field. Presaccadic remapping is a form of visual response because it requires visual stimulation in the future field. But a remapped response differs from a classical visual response in two major ways. First, the spatial location of a remapped response depends on both the location of the classical receptive field and the vector of the upcoming saccade (Duhamel et al. 1992; Goldberg and Bruce 1990; Sommer and Wurtz 2006). Second, the timing of a remapped response is unrelated to the visual latency of a neuron (Umeno and Goldberg 1997); it aligns well, however, with saccade initiation (reviewed by Sommer and Wurtz 2008a). For many neurons (~30-40% in LIP and FEF), the remapping is considered predictive because it precedes saccade initiation (Duhamel et al. 1992; Kusunoki et al. 2003; Sommer and Wurtz 2006) and the arrival of postsaccadic (reafferent) visual responses (Umeno and Goldberg 1997). Neurons that remap, therefore, must be receiving information about when the next saccade will start and where it will go. Such information about imminent movement is called corollary discharge (or efference copy; Sperry 1950; Von Helmholtz 1962; Von Holst and Mittelstaedt 1950). The source of saccadic corollary discharge, and how it may be combined with visual input to create remapping, is discussed in the next two sections.
Macrocircuits for Remapping
Research into the neural circuits that support remapping began with attempts to identify its corollary discharge input. Guided by anatomical data from Lynch et al. (1994), Wurtz and Sommer (2004) proposed four criteria for identifying corollary discharge and applied them to a pathway they studied.
First, a corollary discharge signal must originate in a region known to control movement. In primates, Lynch et al. (1994) used retrograde viral tracer injections in FEF to identify disynaptic input pathways from three subcortical structures. One pathway originated in the intermediate layers of the SC. They could not distinguish the thalamic relay for the pathway because several nuclei showed first-order labeling, but other anatomical studies implicated the lateral edge of the mediodorsal nucleus (MD; Benevento and Fallon 1975; Goldman-Rakic and Porrino 1985). Electrophysiologically, this connectivity was confirmed by using antidromic and orthodromic stimulation techniques to identify the pathway's source neurons in SC, relay neurons in MD, and recipient neurons in FEF (Fig. 1; Sommer and Wurtz 1998, 2002, 2004a).
Fig. 1 caption: Pathway for corollary discharge of saccades. A: electrophysiological identification of the pathway's relay neurons in mediodorsal thalamus (MD). Individual neurons were double-identified as orthodromically activated from the superior colliculus (SC) and antidromically activated from the frontal eye field (FEF). Bottom: traces show repeated, superimposed recording traces of an example MD neuron. Just after stimulation of the SC (left), the neuron fired with a variable latency. The same neuron showed a consistent latency of activation from the FEF (right). B: schematics depicting a close-up of the target of the pathway, adapted from Lynch et al. (1994).
Second, a corollary discharge pathway should convey movement-related signals that start prior to the movement and represent its temporal and spatial parameters. In the SC-MD-FEF pathway, most MD relay neurons conveyed presaccadic bursts of activity that met this criterion, as did the SC source neurons and FEF recipient neurons (Sommer and Wurtz 2004a). Third, elimination of a corollary discharge pathway should not affect movements that do not require corollary discharge. Consistent with this criterion, inactivation of lateral MD had no effect on single saccades except for a slight, omnidirectional increase in reaction times (Sommer and Wurtz 2004b; see also Tanaka 2006). In contrast, inactivation of the SC itself causes marked deficits in contralateral saccade trajectory, latency, and speed (Aizawa and Wurtz 1998; Quaia et al. 1998a), and inactivation of the FEF causes similar contralateral deficits in the context of making saccades to remembered stimuli (Chafee and Goldman-Rakic 2000; Dias et al. 1995; Dias and Segraves 1999; Sommer and Tehovnik 1997). Fourth, elimination of a corollary discharge pathway should disrupt performance on tasks that require corollary discharge. Inactivation of lateral MD caused significant, contralateral deficits in compensating for the first saccade in a two-saccade sequence (Sommer and Wurtz 2002, 2004b) and drastically reduced remapping associated with contralateral saccades in the FEF (Sommer and Wurtz 2006). More recently, MD inactivation was shown to impair perceptual localization of visual stimuli after contralateral saccades (Cavanaugh et al. 2016). The SC-MD-FEF pathway therefore meets all four criteria for a "macrocircuit" for corollary discharge. Presaccadic remapping in the FEF depends on activity provided by the SC-MD-FEF pathway (Sommer and Wurtz 2006), and that pathway provides corollary discharge of saccades (reviewed by Sommer and Wurtz 2008a). The implication is that the corollary discharge combines with "passive" visual inputs to create the remapping.
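To make the geometry concrete, here is a minimal Python sketch of the core computation implied above: a corollary discharge vector combines with a retinotopic receptive field to shift visual sensitivity to the future field. This is a schematic illustration under simplified assumptions, not a reconstruction of any specific model in the papers cited.

```python
import numpy as np

def visual_response(stim, center, sigma=2.0):
    """Schematic Gaussian sensitivity profile around a field center (deg)."""
    d2 = np.sum((np.asarray(stim, float) - np.asarray(center, float)) ** 2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

rf = np.array([8.0, 4.0])        # classical receptive field center (retinotopic)
saccade = np.array([10.0, 0.0])  # corollary discharge: vector of the imminent saccade
ff = rf + saccade                # future field: the locus that will land in the RF

probe = ff                       # stimulus flashed at the future field
print(visual_response(probe, rf))  # negligible classical response
print(visual_response(probe, ff))  # strong remapped (presaccadic) response
```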
But can we rule out a simpler hypothesis, that remapping is transferred from the SC up to the FEF? The SC itself has visual responses and remapping (Churan et al. 2011, 2012; Mays and Sparks 1980; Walker et al. 1995), and the SC-MD-FEF pathway conveys visual signals in addition to presaccadic bursts of activity (Sommer and Wurtz 2004a). Visual signals in MD relay neurons of the pathway have not yet been tested for presaccadic remapping. Two lines of evidence, however, suggest that remapping in the FEF is not simply inherited from the SC. First, there are striking differences between remapping in the SC and FEF, as discussed next. Second, FEF recipient neurons of the pathway exhibit unique presaccadic modulations of their visual responses that suggest de novo construction of remapping, as described in Microcircuits for Remapping. Walker et al. (1995) and Churan and colleagues (2011, 2012) examined remapping in the SC intermediate layers. Both groups found that a hallmark of remapping in SC is a bimodal distribution of activity: a burst before the saccade, a pause, and then a burst after the saccade. That is, remapped visual responses are transiently quenched during the saccade, similar to saccade-related suppression of visual responses found in the superficial SC and FEF (Mayo and Sommer 2008; Richmond and Wurtz 1980; Robinson and Wurtz 1976). If remapping in the SC were simply conveyed to the FEF, one might expect to see the same saccade-related quenching of remapping in the FEF. Instead, remapped visual signals in FEF are relatively unperturbed (Sommer and Wurtz 2006; Umeno and Goldberg 1997). Some saccade-related suppression can be detected in FEF with modeling analyses (Joiner et al. 2013b), but its magnitude is small and its alignment with saccade initiation poor, yielding at most "dips" in remapped responses rather than stark bimodal distributions as in the SC. A commonality between remapping in the SC and the FEF is weaker future field responses compared with receptive field responses. But this is more pronounced for the SC, in which future field responses are ~50-60% weaker than receptive field responses (Walker et al. 1995), compared with only around a 30% difference in the FEF (Fig. 9 of Umeno and Goldberg 1997). The final notable difference between remapping in the FEF and SC concerns visual sensitivity between the receptive field and the future field. Neurons in the SC exhibit significant responses to visual stimuli presented at the midpoint between the fields (Churan et al. 2012), but neurons in the FEF do not. Rather, FEF neurons exhibit a "jump" of visual sensitivity from receptive field to future field with no activation in the middle (Mayo et al. 2016; Sommer and Wurtz 2006). Neurons in LIP are like those in the SC in that they respond at the midpoint. Fine-scaled spatial and temporal testing of this effect in LIP suggests, moreover, that visual sensitivity spreads through the midpoint and onward to the future field (Wang et al. 2016). This spread might reflect local connections that propel remapping (Quaia et al. 1998b). If that is how remapping is created, and if LIP sends only the outcome of the process to the FEF, it would explain why FEF neurons show a jump of visual sensitivity to the future field. But the reverse scenario is also plausible. Remapping could originate in the FEF through mechanisms that yield a clean jump of visual sensitivity from receptive field to future field.
Through divergence of FEF outputs and convergence onto LIP, the jump in FEF might be "smeared" into an apparent spread in LIP. The hypothesis that remapping originates in the FEF is consistent with many lines of evidence. The FEF almost certainly receives more corollary discharge input than extrastriate areas (Fig. 2A). The pathway from SC to FEF arises from the SC intermediate layers, with little contribution from its superficial layers (Lynch et al. 1994; Lyon et al. 2010). The pathways from the SC to extrastriate areas are just the opposite (Adams et al. 2000; Benevento and Rezak 1976; Clower et al. 2001; Leichnetz 2001). The FEF distributes its signals widely. It has a dense monosynaptic connection to LIP (Blatt et al. 1990; Bullier et al. 1996; Petrides and Pandya 1984; Stanton et al. 1995) and sends a diversity of signals, including remapped visual responses, to the intermediate SC (Shin and Sommer 2012; Sommer and Wurtz 2000, 2001, 2006). LIP sends a profusion of signals to SC as well, but it is unknown whether they include remapping (Ferraina et al. 2002; Paré and Wurtz 1997, 2001; Wurtz et al. 2001). If the FEF creates remapping through a combination of corollary discharge and visual input, it could send its signals to LIP directly and to other extrastriate areas directly or polysynaptically (Fig. 2B; Merriam et al. 2007; Nakamura and Colby 2002). The extrastriate areas and FEF all could send remapping to the SC. The extrastriate areas would exclude area MT, where there is little evidence of remapping as noted above (Hartmann et al. 2011; Inaba and Kawano 2014; Ong and Bisley 2011; Yao et al. 2016). The broad convergence of remapped visual signals at the SC could contribute to the complex context-dependent nature of remapping in those layers (Churan et al. 2011). The hypothesis that FEF creates remapping makes three clear experimental predictions. First, it predicts that remapping is conveyed out of the FEF. This has been confirmed for FEF output to the SC with antidromic activation (Shin and Sommer 2012; Sommer and Wurtz 2006). It could be tested in the same way for FEF output to extrastriate cortex. Second, the hypothesis predicts that remapping happens in FEF first. This could be tested by comparing the timing of remapping in FEF and other areas with constant experimental parameters (ideally, simultaneous recording from different areas of the same monkey). Third, the hypothesis predicts that FEF is necessary for remapping elsewhere. This can be tested by reversibly inactivating the FEF while recording from neurons at another site such as LIP. Remapping in the neurons should be eliminated.
Microcircuits for Remapping
If remapping is created in the FEF, what are the mechanisms? Remapped visual information needs to be modulated by corollary discharge, but how are those signals combined to enable transient shifts in visual sensitivity? What little we understand about this comes from a comparison of remapping across layers in the FEF. Corollary discharge from the SC is relayed by MD thalamus and arrives in layer IV and deep layer III of the FEF (Giguere and Goldman-Rakic 1988; for brevity, we call this layer IV). FEF layer IV neurons innervated by this pathway can be identified by disynaptic activation from electrical stimulation in the SC (Sommer and Wurtz 1998, 2004a). One would expect them to exhibit presaccadic activity, since that is what corollary discharge is, and they do.
But the most remarkable characteristic of these FEF layer IV neurons is their near-ubiquity of visual responses: 94% of the sample in Sommer and Wurtz (2004a). This is a much higher incidence of visual responses than found in the MD neurons that relay signals from SC to FEF (Sommer and Wurtz 2004a), suggesting that the FEF layer IV neurons receive considerable visual input from extrastriate cortex. Projections and visual signals from extrastriate cortex to FEF do, in fact, target its layer IV (Ferraina et al. 2002; Schall et al. 1995). Visual responses are less common in the FEF as a whole (43% of neurons as reported by Bruce and Goldberg 1985). From these properties alone, prolific visual responses and known corollary discharge input, FEF layer IV neurons are promising candidates for contributing to the generation of remapping. Shin and Sommer (2012) studied the contribution of FEF layer IV neurons to remapping by identifying them through orthodromic activation from the SC and testing their presaccadic visual sensitivity at the receptive field and future field. As a point of comparison, they ran the same tests on output neurons of layer V in FEF as identified by antidromic activation from the SC. An important analysis was to distinguish between the putative cell types in layer IV. Unlike layer V corticotectal neurons, which are entirely excitatory (pyramidal), neurons in layer IV can be either excitatory or inhibitory; both cell classes reside in layer IV and receive thalamic input (Benevento and Fallon 1975; Giguere and Goldman-Rakic 1988; Goldman-Rakic and Porrino 1985). Using analyses of action potential width (Chen et al. 2008; Cohen et al. 2009; Constantinidis and Goldman-Rakic 2002; Song and McPeek 2010), with the known layer V pyramidal neurons serving as a reference set of known excitatory neurons, Shin and Sommer (2012) segregated their layer IV neurons into putative excitatory and inhibitory neurons, plus a smaller class of neurons that were indeterminate ("ambiguous" neurons). In FEF layer V, many individual neurons and the population response exhibited full remapping, defined as a presaccadic decrease in visual sensitivity at the receptive field coupled with an increase at the future field (Shin and Sommer 2012). In contrast, none of the layer IV neurons did. The layer IV putative excitatory neurons increased their sensitivity at the receptive field just before a saccade, but they did not remap to the future field. The layer IV ambiguous neurons did remap, increasing their sensitivity at the future field just before a saccade, but they showed no decrease in sensitivity at the receptive field. The putative inhibitory neurons showed no modulation in visual sensitivity at all. The conclusion was that "pieces" of remapping are distributed across cell types in FEF layer IV, as if full remapping were in an early stage of assembly. What were the ambiguous neurons? Not only were they the only layer IV neurons that remapped, they were also the neurons with the strongest delay activity in a memory-guided saccade task, suggesting that they have unusual abilities to extend signals in both space and time. This capacity to sustain long-lasting firing rates, together with their intermediate action potential widths, suggested that the ambiguous neurons may be low-threshold spiking, somatostatin-positive inhibitory interneurons (Shin and Sommer 2012).
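The laminar "pieces" just described can be summarized with simple sign bookkeeping, as in this Python sketch; the signs paraphrase the findings of Shin and Sommer (2012), while the specific combination rule at the end is only one hypothetical possibility, not an established mechanism.

```python
# Presaccadic changes in visual sensitivity, by cell class (schematic):
# +1 = increase, -1 = decrease, 0 = no change.
layer_iv = {
    "putative_excitatory": {"RF": +1, "FF": 0},   # gains at receptive field only
    "ambiguous_LTS":       {"RF": 0,  "FF": +1},  # gains at future field only
    "putative_inhibitory": {"RF": 0,  "FF": 0},   # no presaccadic modulation
}
layer_v_full_remapping = {"RF": -1, "FF": +1}     # decrease at RF, increase at FF

# One hypothetical combination: invert the excitatory RF signal (e.g., via
# local inhibition in layer II/III) and pass the FF signal through.
combined = {
    "RF": -layer_iv["putative_excitatory"]["RF"],
    "FF": layer_iv["ambiguous_LTS"]["FF"],
}
assert combined == layer_v_full_remapping
```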
New techniques in which opsins are expressed in a cell type-specific manner include the use of promoters to make somatostatin-positive interneurons selectively responsive to light (e.g., Wilson et al. 2012; reviewed by Hangya et al. 2014). With more development, these advances in the mouse model may be translated to primate models to genetically confirm the identity of the FEF layer IV ambiguous neurons. How the pieces of remapping in FEF layer IV may be combined to create full remapping in layer V is unknown, but it could involve local microcircuits extending into layer II/III (Fig. 3; modified from Shin and Sommer 2012; based on interlaminar connectivity reviewed by Douglas and Martin 2004). Just before a saccade, layer IV neurons that project to layer II/III would increase their sensitivity to visual stimulation at the receptive field and decrease their sensitivity to visual stimulation at the future field (in Fig. 3 represented as ↑RF and ↓FF). This is the opposite of full remapping, so in layer II/III the signals would need to be inverted (↓RF and ↑FF). Regarding that process and subsequent steps, there are no data yet to indicate mechanisms. Layer II/III-specific recordings with laminar probes, or a computational model version of the Fig. 3 conceptual diagram, could help to advance our understanding of the microcircuitry. Microcircuits that create remapping need input from all of visual and oculomotor space, because remapping is a full-field phenomenon: it occurs in association with saccades made into both the contralateral and ipsilateral hemifield (e.g., Heiser and Colby 2006; Sommer and Wurtz 2006). This seems paradoxical for the FEF given that it is a highly lateralized structure, as is its source of corollary discharge, the SC. Both the FEF and SC represent contralateral space almost exclusively. For FEF neurons to remap in both directions, they would need to receive corollary discharge from both SCs, left and right. Do they? Crapse and Sommer (2009) examined this question, using orthodromic activation to identify recipient neurons in FEF, with stimulation in both the homolateral SC (as done previously by Sommer and Wurtz 1998, 2004a) and the opposite SC. They showed that some neurons in the FEF do receive input from both SCs and that the relative strengths of those inputs predict the lateralization of the FEF neurons' receptive and movement fields. Therefore, the FEF is the target of bilateral inputs that effect an elegant structure-function relationship and could provide information about all saccades. A related question about full-field remapping is how FEF neurons are able to respond to visual stimuli far outside their classical receptive field, even in ipsilateral space. One possible answer is that they receive visual information from large swaths of the visual field, beyond what their classical receptive fields imply. That is, they have a "covert" receptive field at their synaptic inputs that is larger than the "overt" receptive field at their spiking output. Studies of primary visual cortex demonstrate that this is plausible anatomically (e.g., Angelucci et al. 2002) and physiologically (macaque: Bair et al. 2003; cat: Bringuier et al. 1999; reviewed by Gilbert et al. 1996).
Direct study of such input-output relationships can be accomplished in vivo in mouse visual cortex with two-photon imaging of dendrites combined with whole cell recordings.
Fig. 3 caption: Remapping is piecewise in the recipient neurons (↑, presaccadic increase in visual sensitivity; ↓, presaccadic decrease; RF, receptive field; FF, future field). Only putative excitatory neurons (Exc) and presumed low-threshold spiking (LTS) inhibitory interneurons show presaccadic changes to their visual sensitivity. Putative parvalbumin-positive neurons (not shown) have visual responses, but they do not change before saccades. Layer IV projects predominantly to layers II/III via excitatory interneurons (reviewed in Shin and Sommer 2012). The changes in visual sensitivity at the RF and FF must be inverted and combined to create the full remapping that is found in layer V corticotectal neurons. Whether the resultant, full remapping is sent to extrastriate visual areas is unknown, but it is depicted here as a hypothesis. White triangles, excitatory synapses; black triangles, inhibitory synapses.
Another possible answer is a spread of visual signals in the FEF akin to that proposed for LIP (Quaia et al. 1998b). As discussed above, evidence for this idea in the form of a spread of visual activity has not been found in the FEF (Mayo et al. 2016; Sommer and Wurtz 2006).
Behavioral Implications of Remapping
Presaccadic remapping has long been postulated to maintain visual continuity across saccades (Duhamel et al. 1992). But despite decades of careful psychophysical work on the behavioral implications of remapping (Bansal et al. 2015; Binda et al. 2009; Bridgeman et al. 1975; Collins et al. 2009; Deubel et al. 1998; Jayet Bray et al. 2016; Melcher 2007; Rao et al. 2016a; for reviews see Hall and Colby 2011; Melcher and Colby 2008; Sommer and Wurtz 2008a, 2008b), a link between remapping and perception has only recently gained neurophysiological traction (Crapse and Sommer 2012; Mirpour and Bisley 2012, 2016). Particularly compelling was a recent, causal experiment that affirmed a key role for MD thalamus. Cavanaugh et al. (2016) trained monkeys to report their perceived saccade vector during MD thalamus inactivation and no-inactivation control trials. Building on recent efforts to establish monkey-compatible paradigms for perceptual reporting (Joiner et al. 2013a), they showed that inactivation of the lateral edge of MD thalamus led to marked changes in the reported perception of where the eyes moved, unrelated to where the eyes actually moved. The results provide the strongest evidence yet that MD-mediated corollary discharge, and by extension the FEF remapping that depends on it (Sommer and Wurtz 2006), has an influence that extends to perception. How might remapping affect perception at the neural level? One possibility is that single neurons may report transsaccadic visual change, that is, whether a visual stimulus remains stable or not across saccades. Crapse and Sommer (2012) demonstrated that some FEF neurons exhibit such a report. When the receptive fields of the neurons were brought onto a visual stimulus by a saccade, the neurons responded differently to the stimulus depending on whether it had remained stable the whole time or moved during the saccade, and the reafferent responses were tuned to the amount of intrasaccadic movement. Surprisingly, the neurons also reported featural changes (e.g., color), suggesting that they play multiple roles in transsaccadic change detection (Crapse and Sommer 2012).
The transsaccadic tuning occurred in neurons that remapped, as predicted, but also in other visual neurons, suggesting a propagation of the effect throughout FEF. The results point to a high-level change detection program that emerges from microcircuits of the FEF and, perhaps, interconnected cortical areas. The final link from neural activity in FEF to perceptual continuity across saccades still needs to be understood. The following is one hypothetical framework, presented as a sequence of operations that could link neurophysiology and perception. Corollary discharge and visual inputs combine in FEF to generate presaccadic remapping, which provides a mechanism for sampling the postsaccadic receptive field before the eyes move (Fig. 4A). Sent to extrastriate cortex, the remapped response provides a prediction of what will be in the receptive field after the saccade that is compared with reafferent responses from V1 (Fig. 4, B and C).
Fig. 4 caption: A: left: just before a saccade, a neuron with a receptive field (RF) as shown remaps its visual sensitivity to the future field (FF). In this example, the FF is at the star. Right: after the saccade, the presaccadic FF is replaced by the postsaccadic RF (New RF) at the same location. The response at the FF, in effect, provides a prediction of what the neuron will "see" in its New RF after the saccade. B: circuit schematic of how the FF prediction might be used. The FF prediction is sent from the FEF to extrastriate cortex (dashed arrow). It is compared with New RF input that arrives from V1 (solid arrow). Any discrepancy, or "prediction error," is fed back to FEF and possibly used locally to modulate reafferent visual responses (dotted arrow). Modified from Crapse and Sommer (2008) with permission from Elsevier. C: the end result is that reafferent responses are tuned for transsaccadic change, as found in FEF by Crapse and Sommer (2012). The cartoon depicts reafferent responses to a star that either changed during the saccade (Δ★, dashed line) or stayed the same (★, solid line). D: at the population level, the transsaccadic change-induced modulation may provide an input to saliency maps. First panel: example scene that includes the star along with other objects. For simplicity, all objects are presumed to have equal salience. Second panel: during a saccade, the star moves slightly. Third panel: after the saccade, the reafferent response across the population of neurons is enhanced for the star. Fourth panel: this increased reafferent response is read out as increased salience (or priority) of the star relative to the rest of the scene. Other influences on saliency/priority maps (e.g., attention) could be active as well but are not shown.
At the level of the whole scene and populations of neurons, the transsaccadic visual change signal may provide a top-down input to saliency maps (Fig. 4D; Arcizet et al. 2011; Treue 2003) or, more precisely, priority maps (Bisley and Goldberg 2010; Fecteau and Munoz 2006). A prediction of this hypothetical framework is that modulations in FEF reafferent responses correlate with perception of transsaccadic visual changes. Preliminary data support this hypothesis (Crapse and Sommer 2010a, 2010b). A broader question is whether neural correlates of transsaccadic visual change are found only in FEF or throughout the "where" pathway for spatial visual perception (Mishkin et al. 1983). Another prediction of the hypothesis is that remapped responses convey information about visual images.
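A minimal Python sketch of the comparison step in this framework may help; it is purely schematic, and the gain rule in the last lines is a hypothetical stand-in for whatever modulation the real circuit implements.

```python
import numpy as np

def pop_response(x, center, sigma=1.5):
    """Schematic population response profile over retinal space (deg)."""
    return np.exp(-(x - center) ** 2 / (2.0 * sigma ** 2))

x = np.linspace(-20.0, 20.0, 401)

expected_loc = 5.0   # postsaccadic stimulus location predicted by remapping
actual_loc = 7.0     # actual location after a 2-deg intrasaccadic displacement

prediction = pop_response(x, expected_loc)   # remapped (future field) signal
reafference = pop_response(x, actual_loc)    # postsaccadic visual input from V1

# Prediction error: zero if the stimulus stayed stable across the saccade.
error = np.trapz(np.abs(reafference - prediction), x)

# Hypothetical modulation: reafferent gain grows with the error, so responses
# become tuned for transsaccadic change (cf. Fig. 4C).
modulated = (1.0 + 0.2 * error) * reafference
```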
Moreover, if remapped responses are compared with reafferent responses, it would seem advantageous for the future field and receptive field to respond identically to visual stimuli. The predictions seem to be confirmed for LIP, where some neurons have remapped responses with visual tuning similar to their receptive field tuning (Mirpour and Bisley 2012; Subramanian and Colby 2014). However, the degree of similarity is difficult to quantify because the retinal eccentricity corresponding to the future field and receptive field will generally be different, so that the two fields may "see" images at different visual acuity. In the FEF, receptive fields have long been thought to have little feature tuning (Mohler et al. 1973), suggesting that remapped responses may contribute to visual processing only indirectly, as pointers in the service of spatial attention (see Cavanagh et al. 2010; Mayo and Sommer 2010). However, the visual capabilities of FEF neurons may be underappreciated. Reports have found feature selectivity in FEF (e.g., Bichot et al. 1996; Peng et al. 2008) and support a role for it in top-down control of feature attention (Zhou and Desimone 2011). One subtlety of remapping in the FEF is that remapped responses can occur long after the disappearance of the visual stimulus that elicited them (Umeno and Goldberg 2001). The hypothetical process of Fig. 4 can accommodate such neurons despite their unusual "memory remapping." If an image disappears, the neurons would continue to produce remapped responses despite the absence of reafferent input, yielding a large prediction error. But it does not matter, because the prediction error has nothing to modulate. There are no reafferent visual responses. The memory remapping signals may contribute to other processes of spatial memory maintenance as discussed by Umeno and Goldberg (2001).
Circuit-Based Modeling
Many computational models have examined the origin and role of presaccadic visual remapping. They have incorporated varying degrees of biologically based model design, with some meant to be abstract and others more neuromorphic. An example of the latter was the all-to-all network of Quaia et al. (1998b), one of the first models to provide a hypothesis for the neural instantiation of remapping based on physiological data and the circuitry of the primate brain. Related models used topographically organized layers that produced visual updating with velocity commands (Bozis and Moschovakis 1998; Droulez and Berthoz 1991), a functional architecture likened to that in the primate oculomotor system. More recent work in neural networks abstracts away from known circuits and brain regions to focus on the properties of trained networks (White and Snyder 2004; Xing and Andersen 2000), linear and nonlinear computations in spatial updating (Deneve et al. 2007; Pouget et al. 2002; Salinas and Sejnowski 2001), and geometric aspects of remapping (Keith et al. 2010; Keith and Crawford 2008). In the tradition of Quaia et al. (1998b), Rao et al. (2016b) designed a circuit-based model for investigating remapping (Fig. 5A). The model served as the visuosaccadic system of a robot. Rao et al. (2016b) used the system to test useful visual stability as a proxy for perceptual visual stability. Could the robot be trained to make accurate visually guided reaches despite saccades of its camera "eye"? Would this be achieved through the emergence of presaccadic remapping?
A simulated FEF received retinotopic visual input and corollary discharge from a simulated SC that controlled the eye. Importantly, the FEF was not trained directly; rather, the model as a whole was trained to achieve accurate reaches during saccades. Rao et al. (2016b) found that in the trained model the neural locus of activity shifted in the simulated FEF just before each saccade (Fig. 5A, FEF sheet). The effect corresponded to presaccadic remapping in the visual field as observed in neurophysiology experiments (Fig. 5B; see Rao et al. 2016b for details). Simulated inactivations of the corollary discharge pathway from SC to FEF replicated and helped to explain inactivations of MD thalamus performed in vivo (Rao et al. 2016b; Sommer and Wurtz 2006). Intriguingly, the model suggested the importance of eye position signals to remapping. An eye position sheet was included in the model, and its instantaneous representation of eye position was summed with FEF output to achieve a head-centered reference frame that the robot needed for reaching. During training, the timing of presaccadic remapping in FEF became synchronized to the updating of eye position (from its presaccadic to its postsaccadic location). Time-locking of the signals was necessary for continuously accurate reaching. In the real brain, eye position updates are often predictive. Many neurons in the thalamus provide eye position signals that update before the eyes move (Schlag-Rey and Schlag 1984; Tanaka 2007; Wyder et al. 2003), and such signals seem to influence cerebral cortical visual processing (Graf and Andersen 2014; Morris et al. 2012, 2016; Sereno et al. 2014). Including predictive eye position signals in the model yielded time-locked, predictive remapping in FEF. The implication is that, in the real brain, the timing of remapping may be synchronized to the timing of eye position updates in order to maintain a continuously useful coordinate transformation. This is different from the prior assumption that remapping is synchronized to saccade initiation (e.g., Sommer and Wurtz 2008a). The new hypothesis maintains that the remapping-saccade correlation is indirect, while the true functional relationship is between remapping and eye position updating. The hypothesis is supported by a comparison between papers that reported the timing of remapping in FEF and the timing of eye position update signals as recorded in thalamus and inferred psychophysically (Fig. 5C). Reciprocal, monosynaptic projections between the FEF and thalamic eye position areas are profuse (Barbas and Mesulam 1981; Huerta et al. 1986; Kievit and Kuypers 1977; Stanton et al. 1988), providing a structural basis for the putative synchronization. There are several neurophysiological approaches for testing the hypothesis in vivo, as described in Rao et al. (2016b).
Unresolved Issues
The study of presaccadic remapping has become a mature area of research. Little disagreement remains about the core finding that visual sensitivity moves just before a saccade. Yet some discrepancies between studies persist. The most puzzling issue relates to the spatial attributes of remapping. Using the nomenclature of Marino and Mazer (2016), who recently summarized the discrepancies, some experiments find "forward remapping," in which visual sensitivity moves parallel to the saccade, while others find "convergent remapping," in which visual sensitivity moves toward the target of the saccade.
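The geometric difference between the two forms can be stated in a few lines of Python; this sketch is illustrative only, and the attraction strength alpha for convergent remapping is a hypothetical parameter.

```python
import numpy as np

rf = np.array([12.0, 6.0])      # classical receptive field center (deg)
fixation = np.array([0.0, 0.0])
target = np.array([8.0, 0.0])   # saccade target
saccade = target - fixation     # saccade vector

# Forward remapping: sensitivity shifts parallel to the saccade vector.
ff_forward = rf + saccade       # -> [20., 6.]

# Convergent remapping: sensitivity shifts toward the saccade target.
alpha = 0.5                     # hypothetical attraction strength in (0, 1]
ff_convergent = rf + alpha * (target - rf)  # -> [10., 3.]

# Tasks in which the saccade goes partway toward the RF make the two
# predictions point in opposite directions, disambiguating them.
print(ff_forward, ff_convergent)
```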
Both forms of remapping have gained theoretical support from models having different suppositions and architectures (e.g., forward remapping: Quaia et al. 1998b; Rao et al. 2016b; convergent remapping: Hamker and Zirnsak 2006; Zirnsak et al. 2010). We evaluate the physiological evidence for forward and convergent remapping with a focus on technical differences between studies that hint at reasons for the discrepancies in their results.
Direction of saccade. In most of the brain areas where remapping occurs, neurons fire for saccades made into contralateral space. The SC and the FEF contain significant motor-related presaccadic activity, often found in the same neurons that have visual responses ("visuomovement neurons") (e.g., Bruce and Goldberg 1985; Wurtz et al. 2001). Extrastriate visual areas such as LIP exhibit similar presaccadic surges of activity with saccades (Barash et al. 1991a, 1991b; Wurtz et al. 2001). The region of the visual field into which saccades cause elevated firing (the "movement field") generally overlaps with the visual receptive field. Hence it is critical to evaluate the extent to which apparent remapping represents a response to the saccade rather than to the visual stimulus. One can make this evaluation using "saccade-only" control trials in which a visual stimulus is not presented. Confounds from saccade-related activity can be reduced or eliminated by directing saccades outside the movement field. Churan et al. (2011), Duhamel et al. (1992), Kusunoki and Goldberg (2003), Nakamura and Colby (2002), and Walker et al. (1995) dealt with this issue in LIP and the SC by directing saccades outside of the receptive/movement field. In the FEF, Sommer and Wurtz (2006) and Umeno and Goldberg (1997, 2001) used a sequence of saccades.
Fig. 5 caption: A: the model of Rao et al. (2016b), shown overlaid onto macrocircuits for remapping to illustrate concepts. Brain areas were modeled as sheets of neurons. The FEF sheet received visual and corollary discharge input mediated by a recurrent hidden layer. FEF output was combined with internal eye position (EP) signals to achieve craniotopic coordinates. The network was trained to optimize accuracy of a robot that points to a red ball (depicted as the red circle in the sheets) while making saccades. Once the model was trained, each saccade command in the SC sheet (yellow arrow) induced an equal and opposite presaccadic shift in the locus of neural activity in the FEF sheet (white arrow). B: the presaccadic shift in the FEF sheet implied a shift of sensitivity in the visual field. Consider a neuron located up and to the left in the FEF sheet (at the location of the red circle). Its classical receptive field (RF) is up and to the left. Long before the saccade, it is inactive because the red ball is not in its RF. Just before the saccade, it becomes active. It is responding to the red ball, even though the ball has not moved. The neuron's visual sensitivity has remapped to the FF. In the trained model, this remapping required corollary discharge (CD) but was time-locked to a different signal, the update in eye position from its initial location (EP) to its final location (EP'). C: evidence for synchronization of remapping and eye position updating from physiological and psychophysical studies. Shown superimposed are cumulative distributions of remapping onset times in FEF (red curve), eye position update times in thalamus (orange dashed curve), and the time course of the internal representation of eye position in humans (green curve). Based on data from Sommer and Wurtz (2006), Tanaka (2007), and Honda (1991), respectively. Modified with permission from Rao et al. (2016b). CC-BY 4.0 (http://journal.frontiersin.org/article/10.3389/fncom.2016.00052/full).
The first saccade was made to a location outside the receptive/movement field, and then a second saccade was made away from the location of the visual stimulus used to probe the future field. Knowing that the FEF contributes to planning the generation of sequences of movements (Schall 2002, 2013), the inclusion of the second saccade reduced the possibility that the observed neural activity was a motor plan toward the visual probe. The confounding effects of such motor plans were shown by Walker et al. (1995; their Fig. 15B). In all of these studies, forward remapping was found. Even the use of trials in which visual probes were placed near the saccadic target (Sommer and Wurtz 2006) revealed no evidence of convergent remapping. In contrast, convergent remapping was reported by Tolias et al. (2001) in V4 and by Zirnsak et al. (2014) in FEF. Neupane et al. (2016b) followed up on the V4 work with a study that tested the influence of saccade direction. They found that convergent remapping occurred only for contralateral saccades and began well after saccade initiation. Early after saccade initiation, forward remapping occurred for all directions of saccades. To clarify the dynamics of remapping for contralateral saccades, they designed a new task in which saccades went partway toward the receptive field (Neupane et al. 2016a), a geometry that disambiguates forward and convergent remapping by forcing them to go in opposite directions. Forward remapping dominated at all time points (Neupane et al. 2016a). The presence of convergent remapping in some experiments remains curious. In V4 and FEF, it seems to occur only for contralateral saccades (Neupane et al. 2016b; Zirnsak et al. 2014). In that configuration, the saccade target, a major attractor of attention (Deubel and Schneider 1996; Kowler et al. 1995), is located in the same hemifield as the receptive field. This nearby attentional focus likely shifts a neuron's visual sensitivity toward the target (Connor et al. 1996, 1997; Hamker and Zirnsak 2006; Zirnsak et al. 2010) and distorts forward remapping.
Data alignment and temporal averaging. Nearly all studies of presaccadic remapping have analyzed the remapped activity aligned to saccade initiation, which has been shown repeatedly to provide a tighter response distribution than alignment to stimulus onset (Joiner et al. 2013b; Nakamura and Colby 2002; Sommer and Wurtz 2006; Umeno and Goldberg 1997; Walker et al. 1995; reviewed by Sommer and Wurtz 2008a). This makes sense because neurons should remap only when the saccadic system has committed to moving the eyes, as supported by psychophysical studies (Atsma et al. 2014) and modeling (Rao et al. 2016b). Neupane et al. (2016b) demonstrated that the temporal window for quantifying the remapping activity is important as well. In V4, forward remapping occurs only during the first 200 ms after a contralateral saccade. After that, convergent remapping takes over. How data are aligned and averaged, therefore, can impact conclusions about forward vs. convergent remapping.
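As a toy illustration of how alignment and window choices can change a measurement, consider the following Python sketch; the spike and event times are synthetic, and the windows echo the figures quoted below.

```python
import numpy as np

rng = np.random.default_rng(0)
spikes = np.sort(rng.uniform(0.0, 100.0, 5000))   # synthetic spike times (s)
saccade_onsets = np.arange(1.0, 99.0, 1.0)        # synthetic event times (s)
stim_onsets = saccade_onsets - 0.07               # probes ~70 ms before saccades

def mean_rate(spike_times, events, window):
    """Mean firing rate (spikes/s) in a window relative to each event."""
    t0, t1 = window
    counts = [np.sum((spike_times >= e + t0) & (spike_times < e + t1))
              for e in events]
    return np.mean(counts) / (t1 - t0)

# Stimulus-aligned 50-350 ms window (spanning roughly -20 to +280 ms around
# saccade onset here) vs. a narrow perisaccadic window:
print(mean_rate(spikes, stim_onsets, (0.05, 0.35)))
print(mean_rate(spikes, saccade_onsets, (-0.05, 0.05)))
```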
The one study that found convergent remapping in FEF (Zirnsak et al. 2014) aligned the data differently from most other studies, to stimulus onset rather than saccade initiation, and used an averaging window of 50 to 350 ms after stimulus onset, or roughly 20 ms before to 280 ms after saccade initiation (given that their stimuli were presented ~70 ms before saccade initiation). The extent to which the findings of that study may have hinged on these analysis choices is an open question.
Coarseness of spatial mapping. The density of spatial testing varies appreciably between studies and may contribute to different interpretations of the spatial nature of remapping. For example, in LIP Kusunoki and Goldberg (2003) tested the temporal properties of remapping in detail but therefore could not test more than a couple of spatial locations, at the receptive field and at the future field. A recent study from the same lab, studying LIP with higher spatial resolution, found spatial dynamics that were unrecognized previously (Wang et al. 2016). Though experimentally challenging, an exhaustive assessment, in any brain area, of the full spatiotemporal dynamics of remapping still needs to be accomplished. One step toward that goal is to test a large region of the visual field with a grid of stimuli, an approach pioneered by Tolias et al. (2001), followed up by Zirnsak et al. (2014), and refined by Neupane et al. (2016b). A new, different approach uses probabilistic modeling (Mayo et al. 2015, 2016). As monkeys fixate, FEF receptive fields are mapped in spatial and temporal detail using rapid, sparse sampling of the visual field and a generalized linear model. Applied during saccades, the method could settle the controversy of forward vs. convergent remapping in FEF. Preliminary results appear to confirm forward remapping (Mayo et al. 2016).
Differences across cortical layers. As described above in Microcircuits for Remapping, remapping is known to differ across cortical layers, at least in the FEF (Shin and Sommer 2012). Physiologically identified layer V neurons exhibit full remapping in the forward direction (Shin and Sommer 2012; Sommer and Wurtz 2006). A caveat to this conclusion is that a large grid of probe locations was not used, although if convergent remapping occurred it should have been detected by probes near the saccade target (Sommer and Wurtz 2006). Zirnsak et al. (2014) provided a much denser sampling of the visual field and concluded that FEF neurons engage in convergent remapping. Their neurons were recorded throughout the FEF, with laminar locations unknown. Putting these conclusions together and taking both at face value yields the possibility that the direction of remapping (forward vs. convergent) may vary between layers and output channels of the FEF. Convergent remapping in layer II/III neurons that project to other cerebral cortical regions could contribute to focusing attention around the time of the saccade, while forward remapping in the layer V neurons that project to the SC could be more involved in the maintenance of a stable visual representation. A better understanding of laminar-specific output signals of the FEF could play a key role in untangling its multitude of interacting signals (e.g., attention, target selection, and movement generation; Mayo and Verhoef 2014; Schall 2002, 2013).
Conclusions
Just before saccades, neurons in many parts of the visual system remap: they transiently sample regions of the visual field outside of their classical receptive fields.
We have come a long way in understanding the circuits that mediate these central effects. In this review, we highlighted three main hypotheses:
H1. The FEF is the source of presaccadic remapping that is sent to the rest of the brain (based on analyses of macrocircuits and microcircuits).
H2. This remapping compares pre- vs. postsaccadic visual space, which, in turn, modulates reafferent visual responses in FEF and extrastriate cortex to signal violations of visual continuity (based on consideration of the behavioral implications of remapping).
H3. The timing of remapping is synchronized to eye position updates in order to maintain coordinate transformations for seamless visually guided behaviors (based on circuit-inspired computational modeling).
Note that H3 implies that the ultimate reason why predictive remapping occurs is to optimize visually guided movements. The ability to perceive visual continuity across saccades (H2) would be a secondary benefit (for more on this idea, see Rao et al. 2016b). This conclusion is consistent with the broader idea that motor system evolution forms the basis for higher cognitive and perceptual functions (for a recent, comprehensive review, see Mendoza and Merchant 2014). All of hypotheses H1-H3, though speculative, are based on a body of interrelated literature. All of them are experimentally testable. Evaluating them may provide a systematic, promising pathway forward to revealing the functions of presaccadic visual remapping. More broadly, the progress made in this field of research serves as a prompt, we hope, for investigators of other sensory systems and animal models to examine the receptive fields of their neurons for evidence of spatial lability, a remarkable form of active sensing, during behavior.
Is C‐11 Methionine PET an alternative to 18‐F FDG‐PET for identifying recurrent laryngeal cancer after radiotherapy?

Objective: 18F FDG-PET is superior to other imaging techniques in revealing residual laryngeal cancer after radiotherapy. Unfortunately, its specificity is low, due to FDG uptake in inflammation and in anaerobic conditions. PET imaging with the amino acid-based radiopharmaceutical C11-methionine (MET) should be less influenced by post-radiation conditions. The aim of this study was to investigate the potential of MET in diagnosing recurrent laryngeal cancer after radiotherapy as compared to 18F-FDG.

Methods: Forty-eight patients with a clinical suspicion of local residual disease at least 3 months after completion of radiotherapy or chemoradiotherapy for a T2-4 laryngeal carcinoma, along with an indication for direct laryngoscopy, were included. They received MET-PET and FDG-PET prior to the direct laryngoscopy. One senior nuclear medicine physician assessed both the FDG-PET and MET-PET images visually for the degree of abnormal uptake. The gold standard was a biopsy-proven recurrence 12 months after PET. The nuclear medicine physician had no access to the medical charts and was blinded to the results of the other PET. Sensitivity, specificity and positive and negative predictive value were calculated.

Results: The sensitivity of FDG was 77.3% and the specificity 56.0% after the conservative reading, with these values equalling 54.5% and 76.0% for MET. The positive predictive value of FDG was 60.7% and the negative predictive value 73.7%. The PPV of MET was 66.7%, and the NPV was 65.5%. The McNemar test within diseased (sensitivity comparison) shows a p-value of 0.125, and the McNemar test within non-diseased (specificity comparison) shows a p-value of 0.180.

Conclusion: MET-PET is not superior to FDG-PET in terms of identifying recurrent laryngeal cancer.

After radiotherapy, residual or recurrent laryngeal carcinoma typically shows a scattered growth pattern, embedded in oedema and inflammatory tissue (Figure 1). 1 In some patients, these features will persist over the ensuing years. A positive biopsy, by means of endoscopy, is the gold standard for confirming residual or recurrent disease, but a negative biopsy does not necessarily exclude residual or recurrent disease. Several direct laryngoscopies may be necessary to prove the presence of residual or recurrent disease. 2-4 In addition, the tissue damage caused by biopsies may exacerbate the already existing inflammation, oedema and fibrosis. 5 Conventional imaging techniques to detect recurrent laryngeal carcinoma after radiotherapy include CT and MRI. The sensitivity of both imaging techniques ranges from 58% to 72%. 6,7 In clinical practice, these figures may indicate that one needs to perform a direct laryngoscopy, despite negative CT or MRI findings. FDG-PET appeared to be helpful for select patients with clinical suspicion of recurrent laryngeal carcinoma after radiotherapy, when direct laryngoscopies under general anaesthesia with biopsies were indicated. 8 A systematic review by Brouwer and colleagues shows that FDG-PET can help to reveal residual disease after radiotherapy, with a sensitivity of 89% and a specificity of 74%. 3 An explanation for the relatively low specificity could be the uptake of FDG in activated macrophages. As in tumour cells, activated macrophages have an abundance of GLUT-1 receptors and will therefore have a high uptake of FDG. 9,10
The conditions after radiotherapy are characterised by non-vital tumour cells and macrophages dominating the former tumour site, regardless of the presence of residual disease or not. 11 The uptake of amino acids (methionine, for example) is high in tumour cells but low in inflammatory tissues and could therefore be a good alternative to FDG. 12 C-11 MET is an established radiopharmaceutical and has been widely used to visualise intracranial lesions. 13 Methionine (MET) has also been successfully used in visualising primary head and neck cancer. 14-18 In addition to preclinical studies validating MET in the evaluation of radiotherapy/chemoradiotherapy, preclinical studies also showed a fast decline for MET in the post-radiation phase. 19,20 Autoradiography shows that MET uptake is predominantly located in viable tumour cells, with low uptake in macrophages and nonviable tumour cells. 21 In this study, we hypothesise that MET-PET is better than FDG-PET in detecting recurrent disease in patients with clinical suspicion of recurrent laryngeal carcinoma after radiotherapy.

| Patients

Two university hospitals recruited patients for the study. Forty-eight patients with a clinical suspicion (although no obvious local residual, recurrent disease or second primary) at least 3 months after completing radiotherapy/chemoradiotherapy with curative intent for a resectable T2-4 laryngeal squamous cell carcinoma, who had a clinical indication for direct laryngoscopy and biopsy under general anaesthesia, were included. Suspicion of recurrent disease was raised by the patient's complaints and changes on physical examination that included fibre-optic laryngoscopy. Exclusion criteria were age younger than 18 years, a clinically evident recurrence, and pregnancy. One patient had to be excluded because parts of the PET scan registration were lost. The patients received MET-PET and FDG-PET prior to direct laryngoscopy. The maximum allowed timeframe between scans and laryngoscopy was 1 month.

| Ethical considerations

The protocol was approved by the ethics committee as required in the Netherlands under the Medical Research Involving Human Subjects Act. All patients provided written informed consent.

| Procedures

C11-methionine was prepared in our laboratory by 11C-methylation of L-homocysteine thiolactone using a Zymark robotic system. To this end, a solution of L-homocysteine thiolactone in a NaOH/ethanol mixture was put into a C18 cartridge, followed by the passage of C11-methyl iodide. When the radioactivity on the cartridge was maximal, C11-methionine was eluted with a phosphate buffer through a second C18 cartridge and a sterile filter to a sterile vial containing saline. This end product was ready for injection and met the following radiopharmaceutical criteria: radiochemical purity > 95%, specific activity > 10 000 GBq/mmol, sterile and pyrogen free. F18-FDG was produced according to the method of Hamacher and colleagues, using an automated synthesis module. 14 The radiochemical yield was 65.9% ± 7.1% (decay corrected). The patients were scheduled for separate C11-MET-PET-only and F18-FDG-PET-only scans, shortly before the direct laryngoscopy. For both scans, patients were instructed to fast for at least 6 hours.
A dose of 5 MBq/kg C11-MET or 5 MBq/kg F18-FDG was injected intravenously, and scanning started after 20 or 60 minutes (for C11-MET or F18-FDG, respectively). The scanning was performed on an ECAT EXACT HR+ PET camera (Siemens/CTI Inc) at both institutions, according to the Netherlands protocol for standardisation and quantification of FDG whole body PET studies in multi-centre trials. 22 The scanned trajectory extended from the skull base to the pelvis (Figures 1 and 2).

| DISCUSSION

Because the majority of recurrent laryngeal cancers after radiotherapy can be salvaged if detected in a timely fashion, early detection of recurrent disease is of importance. 23 However, it may be difficult to differentiate between recurrence and post-radiation changes. 5 In the present study, FDG-PET was able to detect recurrent laryngeal cancer after radiotherapy, with results worse than those obtained in other studies. 6,8,24,25 This can be explained by the selection of our population: in most of the studies, patients with obvious residual/recurrent disease were included, while we excluded these patients. Three recurrent diseases were not demonstrated by PET after the conservative reading: two were not demonstrated by MET-PET, and one by neither FDG-PET nor MET-PET. These recurrent diseases were also not diagnosed by a direct laryngoscopy with taking of biopsies. Two or more direct laryngoscopies were necessary to diagnose the residual cancer. This shows that a false-negative PET had no more influence on the time of laryngectomy than a traditional workup. These findings are in agreement with the literature. 5,26 In addition to reliable detection of residual or recurrent laryngeal carcinoma after radiotherapy, PET is able to detect distant metastases. 27-29 Although the metastases were revealed by both FDG-PET and MET-PET, they were more clearly visualised by FDG-PET. FDG-PET is our preference for detecting distant metastases.

To make a reliable selection for direct laryngoscopy, a combination of high sensitivity and high negative predictive value is mandatory. The sensitive reading of the FDG results meets these demands. Unfortunately, the positive predictive value is much lower. This could result in a considerable number of unnecessary direct laryngoscopies, if PET were used to select patients for this procedure. To avoid unnecessary direct laryngoscopies under general anaesthesia, a higher positive predictive value is needed. The search for an alternative to FDG was mainly driven by a desire to improve the positive predictive value. The main goal of this study, an improvement in the positive predictive value without reducing the negative predictive value, was not achieved. The positive predictive value of MET for the conservative reading was slightly higher, although not significantly, than the positive predictive value obtained with FDG. The negative predictive value of 65.5% was too low, which implies that MET-PET cannot be used to select patients for a direct laryngoscopy. It would be interesting to know whether MET-PET is able to dis-

We have no explanation for these disappointing results. It is known that recurrent disease after radiotherapy is usually scattered and embedded in inflammatory tissue. This is why we conducted this pilot study. This small study did show that even early laryngeal cancers (T1-2 glottic) were excellently visualised with MET. 14
Since the major part of this study was carried out using an older-generation PET camera, the results might have been better with the improved sensitivity of the new generation of PET cameras. The limited size of recurrent disease found should therefore not be an important reason for the low sensitivity observed. In contrast to FDG, MET visualises more than a single pathway. MET has a considerable non-protein-synthesis part, which makes MET unsuitable for quantitative analyses. 12 However, the non-protein-synthesis pathways are more strongly activated by malignancies than by inflammation. The negative effects of non-protein pathways on the visualisation of recurrent disease should therefore be limited. Although high salivary gland activity is demonstrated by MET-PET, it is unlikely that this hampered interpretation of the PET, because the larynx is outside the field of the submandibular and parotid glands. A more likely explanation could be the tumour-to-background ratio. The tumour-to-background ratio of FDG is higher than the ratio obtained with MET. 12 Although the uptake of FDG in inflammatory tissues is thought to be higher than the uptake of MET, recurrent disease is probably better detected due to the absolutely stronger uptake of FDG in a malignancy (Figures 2 and 3).

Most malignancies show, in addition to an increased glucose metabolism, an increased metabolism of amino acids, nucleosides and phospholipids. Parts of malignancies may be hypoxic and may have molecular targets on their cells. Several amino acids, the nucleoside thymidine, choline (the precursor for the biosynthesis of phospholipids), hypoxia tracers and labelled monoclonal antibodies are incorporated in radiopharmaceuticals, which could be alternatives to FDG. The literature shows only a few studies in which an alternative to FDG is used to visualise recurrent head and neck carcinoma after radiotherapy. These studies have their limitations, because none of them includes more than 20 patients, and they frequently deal with more than one sub-site. These studies show that 11C-choline and FLT visualise primary and recurrent head and neck cancer slightly worse than FDG does. 28-31 The amino acid tyrosine (TYR) and O-(2-[18F]fluoroethyl)-L-tyrosine (FET) are the only radiopharmaceuticals that show results equal to or better than those obtained with FDG for visualising recurrent or residual head and neck carcinomas. Unfortunately, the laborious production processes are a serious limitation to using TYR and FET on a larger scale. 32-34 To answer the question of whether other radiopharmaceuticals are more suitable for revealing recurrent laryngeal disease after radiotherapy, studies designed like ours need to be conducted.
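As an aside on the arithmetic behind the diagnostic metrics reported above, the following minimal Python sketch computes sensitivity, specificity, PPV and NPV from a 2x2 table. The counts are back-calculated from the reported percentages assuming 22 diseased and 25 non-diseased patients among the 47 analysed; they are illustrative, not taken from the study tables.

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Standard 2x2 diagnostic test metrics."""
    return {
        "sensitivity": tp / (tp + fn),  # true positives among diseased
        "specificity": tn / (tn + fp),  # true negatives among non-diseased
        "ppv": tp / (tp + fp),          # reliability of a positive reading
        "npv": tn / (tn + fn),          # reliability of a negative reading
    }

# Counts back-calculated from the reported percentages (illustrative):
# 22 diseased and 25 non-diseased patients.
fdg = diagnostic_metrics(tp=17, fn=5, tn=14, fp=11)  # ~77.3%, 56.0%, 60.7%, 73.7%
met = diagnostic_metrics(tp=12, fn=10, tn=19, fp=6)  # ~54.5%, 76.0%, 66.7%, 65.5%

for name, m in (("FDG", fdg), ("MET", met)):
    print(name, {k: round(100 * v, 1) for k, v in m.items()})
```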
Operation of an ambulance fleet under uncertainty

We introduce two new optimization models for the dispatch of ambulances. The first model, called the ambulance selection problem, is used when an emergency call arrives to decide whether an ambulance should be dispatched for that call, and if so, which ambulance should be dispatched, or whether the request should be put in a queue of waiting requests. The second model, called the ambulance reassignment problem, is used when an ambulance finishes its current task to decide whether the ambulance should be dispatched to a request waiting in queue, and if so, which request, or whether the ambulance should be dispatched to an ambulance staging location, and if so, which ambulance staging location. These decisions affect not only the emergency call and ambulance under consideration, but also the ability of the ambulance fleet to service future calls. There is uncertainty regarding the locations, arrival times, and types of future calls. We propose a rolling horizon approach that combines the current decisions to be made as first-stage decisions with second-stage models that represent the ability of the ambulance fleet to service future calls. The second-stage optimization problems can be formulated as large-scale deterministic integer linear programs. We propose a column generation algorithm to solve the continuous relaxation of these second-stage problems. The optimal objective values of these second-stage continuous relaxations are used to make approximately optimal first-stage decisions. We compare our resulting dispatch policy with popular decision rules for the Rio de Janeiro emergency medical service, based on data of more than 2 years of emergency calls for that service. These tests show that our proposed policy results in smaller response times than the popular decision rules.

Introduction

This paper proposes new optimization models for aiding ambulance dispatch decisions. The paper also presents a solution method for these problems, and tests the proposed approach with real emergency medical service data.

1.1. Ambulance Fleet Operations. We consider requests for emergency medical service that arrive at a call center. A call center telecommunicator receives the call and obtains data from the caller, including the nature of the emergency and the location of the emergency. (Often multiple calls are received related to the same emergency. It is an important question how to determine whether or not a particular call is related to an emergency that has already been reported. We do not address that question in this paper.) The telecommunicator records the data and decides whether to request that an ambulance be dispatched to the emergency. If so, a decision is made which ambulance should be dispatched to the emergency, or whether the request should be placed in a queue of requests waiting for an ambulance to be dispatched. Sometimes multiple ambulances are dispatched to an emergency. Here we consider the more typical situation in which a single ambulance is dispatched to an emergency. Usually, if an ambulance is dispatched to an emergency, it is an available ambulance, that is, an ambulance that is not on the way to an emergency, or busy at an emergency, or transporting patients from an emergency to a hospital. That is, an available ambulance is an ambulance that is either waiting at an ambulance staging location for an assignment, or on its way to an ambulance staging location after attending to an emergency.
Some emergency medical services also consider dispatching an ambulance that is on the way to another emergency, that is, an ambulance that has already been dispatched to an (apparently less urgent) emergency is preempted and dispatched to the new (apparently more urgent) emergency. In some systems the telecommunicator makes the decision, and in other systems a separate ambulance dispatcher makes the decision. If an ambulance is dispatched to the emergency, then it takes the ambulance some time to arrive at the location of the emergency. The amount of time that elapses from the moment the first call related to an emergency is received until the first ambulance personnel arrive at the patient(s) is called the response time. (In practice, distinction is made between different response times, for example, the elapsed time can start when the first call related to an emergency is received or when the telecommunicator requests that an ambulance be dispatched or when the ambulance personnel receive the dispatch instructions, and the elapsed time can end when the first ambulance arrives at the location of the emergency or when the first ambulance personnel arrive at the patient(s). In this paper we consider one of these times as the response time.) It has been found that mortality odds increase as the response time increases [8]. More specifically, the effectiveness of a treatment often depends on the response time. For example, it has been found that if an ambulance arrives 10 minutes after the onset of myocardial infarction (heart attack), then defibrillation reduces prehospital mortality from 6% to 2%, but if an ambulance arrives 60 minutes after the onset of infarction, then defibrillation reduces prehospital mortality from 13% to 12% [15]. In addition, response times of emergency medical services (EMSs) are relatively easy to measure, and most EMSs record and report (for example, to the National Emergency Medical Services Information System NEMSIS) response time data. Therefore, many EMSs as well as academic papers put great emphasis on the response time performance metric. As a summary statistic of response time data, EMSs often measure specific quantiles of the empirical response time distribution. For example, the 0.8 and 0.9 empirical quantiles are measured (often pooling all emergency data, that is, without distinction between different types of emergencies), and for performance to be acceptable, the considered quantiles have to be less than specified threshold values [34,61,51]. It has been pointed out that response time is not the only factor under control of emergency medical services that affects the survival probabilities of patients, and also that the impact of emergency medical care and response time depends on the type of emergency. For example, it has been found that advanced pre-hospital life-support has greater impact on mortality and morbidity of patients who had respiratory distress than on patients who had cardiac arrest [68]. A potential concern with using the type of emergency in dispatch decisions is that emergencies are often misclassified by telecommunicators due to limited or incorrect data. Based on an analysis of medical emergency data in the UK under two systems for classifying emergencies into priority classes, [58] found that the benefits of priority dispatch outweighed the risk of misclassification.
Therefore, it makes sense to take the characteristics of the emergency into account when deciding whether to immediately dispatch an ambulance to the emergency, and if so, which ambulance to dispatch, as opposed to considering only specific quantiles of the response time distribution irrespective of the nature of the emergencies. Also, the capabilities of the ambulance and personnel can affect the survival probability of the patient, depending on the type of emergency. Although many academic papers consider all ambulances to be the same, typically ambulances are not the same. For example, some EMSs make a distinction between basic life support (BLS) ambulances and advanced life support (ALS) ambulances [64], some EMSs make a distinction between first responders staffed by emergency medical technicians and second responders staffed by paramedics, and some EMSs have stroke units in addition to BLS and ALS ambulances. At a sufficient level of detail, every ambulance is unique, because the crew members of different ambulances have different qualifications and experience. [59] compared a targeted EMS system that dispatches BLS or ALS ambulances according to the nature of the emergency and an EMS system that provides uniform service with all ALS ambulances. The study focused on witnessed ventricular fibrillation cardiac arrest emergencies, and compared system performance in terms of four outcomes: return of spontaneous circulation, survival to hospital admission, survival to hospital discharge, and survival to 1 year. All performance measures were better for the targeted system that dispatched BLS or ALS ambulances according to the nature of the emergency. For the reasons discussed above, our model distinguishes between different types of emergencies, and each individual ambulance is modeled. After arriving at the location of the emergency, the ambulance crew treat the patients as the crew deem best, and decide whether to transport patients to a hospital. In some fraction of cases, it is decided not to transport any patients to a hospital, and thus the ambulance becomes available after treatment at the emergency site has been completed. In other cases, the ambulance transports patients to a chosen hospital, and becomes available thereafter. Sometimes ambulances are used for scheduled transportation of patients, for example, to transfer patients from one hospital to another. We model such transportation tasks as a specific type of "emergency". After patients have been transported in an ambulance, the ambulance has to be cleaned. The cleaning is regarded as part of the ambulance's work related to an emergency. After an ambulance has completed its work related to an emergency, a decision has to be made where to dispatch the newly available ambulance to. If any requests are waiting in queue, the ambulance can be dispatched to a chosen waiting emergency. The ambulance can also be used to preempt another ambulance that is on its way to the location of an emergency, for example, if the newly available ambulance is closer to the emergency than the previously dispatched ambulance. Otherwise the ambulance can be sent toward an ambulance staging location, and it is regarded as available during such a journey. There are various types of ambulance staging locations. Most EMSs have one or more facilities that make provision for ambulance maintenance, office space, and training. 
Such facilities can be used for ambulance staging, but typically a larger number of smaller facilities are also used for ambulance staging. These smaller facilities may make provision for ambulance personnel to relax during less busy times. During busy times, ambulances may use public facilities such as parking lots for ambulance staging. For the purpose of this paper, all ambulance staging locations will be called stations. This paper proposes an optimization-based method to make two types of decisions mentioned above: (1) When a request arrives, the decision whether an ambulance should be dispatched to the emergency, and if so, which ambulance to dispatch to the emergency, or whether to add the request to a queue of waiting requests. We will call this decision the ambulance selection decision. (2) When an ambulance becomes available (after completing its service at the location of an emergency or at a hospital), the decision whether to dispatch the ambulance to an emergency waiting in queue, and if so, to which emergency to dispatch the ambulance, or whether to send the ambulance to a station, and if so, to which station to send the ambulance. We will call this decision the ambulance reassignment decision. We do not include the two types of preemption alternatives mentioned above, that is, (1) when a request arrives, the alternative to preempt an ambulance already on its way to an emergency and to dispatch it to the newly arrived emergency, and (2) when an ambulance becomes available, the alternative to preempt an ambulance already on its way to an emergency and to dispatch the newly available ambulance to the emergency (called "diversions" by [51]). [50] simulated and compared four dispatch policies, obtained by switching each of the two alternatives mentioned above (the first alternative was called "Reroute Enabled Dispatch" and the second alternative was called "Free Ambulance Exploitation Dispatch") on and off. The results suggested that these alternatives add very little benefit. In addition, preempting ambulances have practical disadvantages, and thus it is questionable whether use of these alternatives is a wise decision. There is a large literature on the location and relocation of emergency facilities, including stations and ambulances. Here we give a brief overview of this work with emphasis on the aspects that are relevant for ambulance operations. For surveys, see [12,62,9,29,49,3]. A number of static problems have been proposed to choose locations for stations or ambulances. Many of these problems use the notion that a demand point is covered if a station or ambulance is located within a specified distance or travel time of the demand point. The location set covering problem (LSCP) was proposed by [71] to determine the minimum number of emergency service facilities (stations) that covers a given set of demand points. A related problem is the maximal covering location problem (MCLP), proposed by [14], to maximize the weighted set of demand points that is covered by a given number of facilities. Many variations of these two models have been proposed to capture aspects that are relevant for the location and relocation of ambulances. Many of these variations consider the policy of assigning ambulances to stations (called the ambulance's "home base" or "depot"), and when an ambulance becomes available and is sent to a station, to always send it to its home base. 
(In contrast with this policy, and similar to [65], we allow an ambulance to be sent to any station to improve coverage.) For example, [7] used a LSCP to locate stations, and simulation to determine the number of homogeneous ambulances to allocate to each station. [64] proposed two models to choose locations for multiple equipment types. [17] proposed an extension of LSCP that rewards the objective with the number of additional ambulances that can cover a demand point, to make provision for the possibility that some ambulances may be busy when an emergency call is received. Similarly, [36] proposed extensions of MCLP that also reward demand that is covered by more than one facility/ambulance. [22] proposed an extension of MCLP called the double standard model (DSM) that maximizes the demand covered by at least 2 ambulances, subject to 2 constraints on the coverage of all demand. One shortcoming of the deterministic location problems mentioned above is that they do not explicitly model uncertainty regarding important problem parameters, such as when and where calls arrive, and where the available ambulances are when a call arrives. One of the first related quantities to be modeled as a random variable was whether an ambulance is busy or available when a call arrives. A very influential paper in this regard was [16], which proposed the maximum expected covering location problem (MEXCLP) that uses a binomial distribution for the number of busy ambulances (with the busy/available random variables of different ambulances being independent, with the same busy probability for all ambulances) to explicitly model the probability (or fraction of time) that a demand point is covered by different numbers of ambulances. [63] proposed two versions of the maximum availability location problem (MALP), version I with the same busy probability for all ambulances, and version II that allows different busy probabilities at different stations. The objective of MEXCLP is to maximize the expected demand that is covered by a given number of ambulances, whereas the objective of MALP is to maximize the demand-weighted set of points that is covered with probability at least equal to a specified quantity α ∈ (0, 1) by a given number of ambulances. [67] proposed a problem with the same objective as MEXCLP, but that allows different busy probabilities at different stations, similar to MALP II, and their simulation results demonstrated that the resulting model consistently produces better solutions than MEXCLP and MALP II. Various further extensions have been proposed. [60] proposed an extension of MEXCLP that models time-varying demand for ambulances. [37] proposed a heuristic for the MEXCLP with random delay times (prior to ambulance travel) and random travel times, and with different busy probabilities for different ambulances. [20] compared the solutions of the MCLP, the MEXCLP, the MCLP with random response times (MCLP+PR), the MEXCLP with random response times (MEXCLP+PR), and the MEXCLP+PR with different busy probabilities for different ambulances, in terms of the fraction of calls with response time less than a threshold, using data from Edmonton, Alberta, Canada. A few papers proposed ambulance location models that did not use the notion of coverage. For example, [74] proposed a method to locate a given number of ambulances to minimize expected response time. [69] proposed a branch-and-bound approach to locate stations that uses simulation to evaluate the expected response time objective. 
[70] used simulation and a heuristic, also to minimize expected response time. [21] developed an approximation based on an M/G/∞ queue for the probability that an ambulance is busy, used it to search for ambulance locations that minimize expected response time, and applied it to choose ambulance locations for the city center of Los Angeles. [43] proposed a continuous-time Markov process called the hypercube queueing model that can be used for performance modeling of emergency operations. In the model there are N distinct servers and multiple demand locations. [43] proposed an algorithm to compute the transition rates for any given fixed preference policy, that is, a policy that specifies for every demand point a preference list of all the servers from most preferred to least preferred (with ties allowed) independent of the state of the process. Then, when a call arrives from a demand point, the most preferred available server in the preference list for that demand point (one of the most preferred available servers in case of ties) is dispatched to serve the call. The algorithm exploits the similarity of the most preferred available servers for adjacent states of the Markov process, that is, states that differ in the availability of only one server, to reduce the effort to compute the transition rates. After the transition rates have been computed, a system of 2^N linear equations can be solved to compute the stationary probabilities, and then various long-run average performance metrics can be computed. [44] proposed an approximation procedure to compute busy probabilities for the servers of the hypercube queueing model. [35] applied the hypercube queueing model for ambulance location and design of ambulance response districts in Boston. [41] proposed a probability model that uses an approximation procedure similar to that of [44], but unlike the hypercube queueing model, allows general service time distributions that depend on both the station and the emergency location. [26,27] also proposed a probability model that uses an approximation procedure similar to that of [44], but unlike the hypercube queueing model, includes travel time from station to the emergency location as well as on-site service time that is allowed to depend on the emergency location (and, in the case of [26], is also allowed to depend on the emergency type). Then the ambulance location problem was formulated as a mixed integer nonlinear program. [28] proposed and compared methods to solve the system of N nonlinear equations. The hypercube queueing model of [43] can accommodate ties in the given fixed preference policy by enumerating all permutations of the tied servers. However, this is inefficient, and therefore [10] proposed a more efficient model that explicitly formulates the balance equations allowing for ties. [61] proposed two models that use the Erlang loss formula to approximate the fraction of calls with response time more than a threshold and to allocate ambulances to stations. [19] proposed to use patient survival probability as objective, and they compared, in terms of the expected number of survivors, the solutions of the problems considered in [20] with the solutions of modifications of these problems in which the objective is to maximize patient survival probability.
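To make the hypercube construction concrete, here is a minimal Python sketch of our own (it is not the transition-rate algorithm of [43], and it brute-forces all 2^N states, which is only feasible for small N). It builds the generator of the loss variant, in which a call arriving when all listed servers are busy is lost, and solves for the stationary distribution.

```python
import numpy as np

def hypercube_stationary(lam, pref, mu, n_servers):
    """Stationary distribution of a small zero-queue (loss) hypercube model:
    states are subsets of busy servers (bitmasks); an arrival at demand
    point d is served by the first available server in pref[d]; every busy
    server completes service at rate mu (exponential)."""
    n_states = 1 << n_servers
    Q = np.zeros((n_states, n_states))
    for s in range(n_states):
        for d, rate in enumerate(lam):
            # dispatch the most preferred available server, if any
            free = [j for j in pref[d] if not s & (1 << j)]
            if free:
                Q[s, s | (1 << free[0])] += rate
        for j in range(n_servers):
            if s & (1 << j):                 # busy server j completes service
                Q[s, s & ~(1 << j)] += mu
    Q -= np.diag(Q.sum(axis=1))              # generator: rows sum to zero
    # solve pi @ Q = 0 together with the normalization sum(pi) = 1
    A = np.vstack([Q.T, np.ones(n_states)])
    b = np.zeros(n_states + 1); b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# two demand points, two servers; each point prefers "its own" server first
pi = hypercube_stationary(lam=[1.0, 1.5], pref=[[0, 1], [1, 0]], mu=2.0, n_servers=2)
busy = [sum(p for s, p in enumerate(pi) if s & (1 << j)) for j in range(2)]
print("state probabilities:", np.round(pi, 4), "server busy fractions:", np.round(busy, 4))
```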
[42] proposed to use different functions of patient survival probability as a function of response time for different patient types (unlike [19] that used the same patient survival probability function for all patients), and they allocated ambulances to stations to maximize the expected total number of survivors. [66] extended the DSM to a multistage model with time-dependent travel times and time-dependent ambulance location decisions. [13] solved the problem of location of both trauma centers and ambulances (in their case, helicopters). Compared with the work on static location problems for stations and/or ambulances, relatively little work has been done to optimize ambulance operations. In ambulance operations, two types of dispatch decisions mentioned in Section 1.1 are important: (1) the ambulance selection decision, and (2) the ambulance reassignment decision. The solutions of the static location problems mentioned above are sometimes used for ambulance reassignment decisions by assigning a home base to each ambulance, and then when an ambulance becomes available and is not dispatched to a request waiting in queue, the ambulance is sent to its home base [26,34,61,4,42,51,56,5]. An extension of this approach for ambulance reassignment is to choose which station to send an ambulance to when an ambulance becomes available and is not dispatched to a request waiting in queue. This way ambulances can be dynamically positioned when the ambulances become available to better cover demand points. This approach is followed in [65] and in this paper. A further extension is to also allow the decision to send an ambulance at any time from one station to another station to improve the configuration of available ambulances. Various terms are used for such decisions, including ambulance redeployment, repositioning, relocation, move-up, or system-status management. One approach to ambulance redeployment is to solve, up-front, an ambulance location problem for each possible number of available ambulances, and to store the solutions in a compliance table. Then, whenever the number of available ambulances changes, the available ambulances are redeployed according to the solutions stored in the compliance table [1]. This approach has some shortcomings: it ignores the cost of frequently redeploying the ambulances, which can be considerable [6], and it does not look into the future, for example to take into account ambulances that should become available soon. Many researchers have studied various ambulance redeployment problems, including [23,9,24,2,57,55,65,53,54,18,38,6,72,73]. As pointed out by [32,2,65,4,56,5,6], there are practical problems associated with redeploying ambulances from one station to another. For example, [65] mentioned that it is illegal in Austria to redeploy ambulances from one station to another. Also, [45] showed, with both simple models and numerical examples, that the mean response time is not very sensitive to the location of ambulances. More specifically, it was shown that the mean travel time obtained by locating emergency facilities randomly is close to the mean travel time obtained by locating emergency facilities optimally. For the reasons mentioned above, when an ambulance becomes available we dynamically choose to which station to send the ambulance, but we do not send an ambulance that is already at one station to another station. For the ambulance selection decision, the closest available ambulance rule is simple and popular [7,25,33,34,52,55,53,1].
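As a baseline reference for the dispatch rules discussed next, the closest-available-ambulance rule reduces to a one-line argmin. A sketch of our own, assuming ambulance objects with a busy flag and a travel_time estimator:

```python
def closest_available(ambulances, call, travel_time):
    """Dispatch rule: among ambulances that are not busy, pick the one with
    the smallest estimated travel time to the call; None if all are busy."""
    free = [a for a in ambulances if not a.busy]
    return min(free, key=lambda a: travel_time(a, call), default=None)
```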
Some emergency services partition the service region into response areas or districts, and apply some fixed preference policy to make ambulance selection decisions. For example, for each district, a preference list of districts is chosen in advance. Typically, for each district, that district appears first in its preference list. Then, when an emergency call located in a district arrives, the first district in the emergency district's priority list with an available ambulance is determined, and the ambulance in that district closest to the emergency location is dispatched. [11] conducted a detailed study for a setting with 2 ambulances, and characterized the optimal response area for each ambulance. [69] compared two dispatch rules, the closest available ambulance rule, and a service district rule that works as follows: Each ambulance is assigned to a service district, with one ambulance assigned to each district. When an emergency call arrives in a district, if the ambulance assigned to that district is available, then it is dispatched to the call, even if it is temporarily outside its district; otherwise, if the ambulance assigned to that district is not available, then the closest available ambulance is dispatched to the call. [42] proposed a dispatch rule based on a static preference matrix ρ. The service region is partitioned into "demand nodes", and each available ambulance is at a station. Then ρ_{i,j} denotes the jth most preferred station to use for an emergency at demand node i. An ambulance is dispatched from station ρ_{i,j} to an emergency at demand node i if and only if there is no available ambulance at stations ρ_{i,1}, . . . , ρ_{i,j−1} and there is at least one available ambulance at station ρ_{i,j}. [56] proposed a heuristic to partition the service region into districts, with a number of ambulances in each district. They used simulation to compare the performance of four dispatch policies for a setting with two priority levels; two types of policies specifying ambulance selection decisions if there is an ambulance available in the same district as the emergency, combined with two types of policies specifying ambulance selection decisions if there is no ambulance available in the same district. If there is an ambulance available in the same district as the emergency, then the first type of policy evaluated dispatches the closest available ambulance within the same district, and the second type of policy evaluated applies a heuristic ambulance selection rule to each district. If there is no ambulance available in the same district as the emergency, then the first type of policy evaluated assumes that an alternate emergency response, for example provided by the fire department, is automatically dispatched within the same district, and the second type of policy evaluated dispatches an ambulance from another district using a preference list of ambulances. Under both policies, if an emergency call arrives and all ambulances are busy, then the emergency is handled by an outside service, that is, there is no queue of waiting requests in the simulation. Also, the simulation returns an ambulance to its home base when it becomes available. Therefore, the simulation includes an ambulance selection decision and a very restrictive ambulance reassignment decision. Various dispatch policies that require real-time computation have been proposed.
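Before turning to those, note that the static preference-matrix rule of [42] described above is, procedurally, a first-fit scan. A sketch of our own:

```python
def preference_dispatch(rho_row, available_count):
    """Fixed-preference rule as described for [42]: given the preference list
    of stations for one demand node (most preferred first) and the number of
    available ambulances at each station, return the station to dispatch
    from, or None if every station in the list is empty."""
    for station in rho_row:
        if available_count[station] > 0:
            return station
    return None

# demand node i prefers station 2, then 0, then 1; station 2 is empty
print(preference_dispatch(rho_row=[2, 0, 1], available_count={0: 1, 1: 0, 2: 0}))  # -> 0
```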
[2] proposed heuristics for ambulance dispatch and relocation for a system with three priority levels, based on a measure of "preparedness" for each zone. For priority 1 calls, they dispatch the closest available ambulance. They allow preemption of ambulances already dispatched to lower priority calls. For priority 2 and 3 calls, they dispatch the ambulance with expected travel time less than a specified threshold that will result in the least decrease in the minimum preparedness measure over all zones. For the available ambulance relocation decision, they proposed an integer nonlinear program that minimizes the relocation time subject to constraints that specify the maximum number of ambulances that may be relocated and the minimum preparedness measure after relocation. They did not specify how to select the request waiting in queue to which a newly available ambulance should be dispatched. [46] considered the same preparedness measure as [2], and showed that it resulted in worse performance than the closest-available-ambulance rule. Then two modifications of the preparedness-based dispatching rule were proposed. The first modification dispatches the available ambulance that maximizes the minimum preparedness measure over all zones divided by the travel time from the ambulance to the emergency location. The second modification replaces the minimum preparedness measure over all zones in the calculations with other aggregates of the preparedness measures of different zones. If an ambulance becomes available and there are requests waiting in queue, then the newly available ambulance is dispatched to the request in queue closest to the ambulance. [47] proposed a rule for the ambulance reassignment decision that takes into account both the distances or times between the newly available ambulance and requests waiting in queue, and a centrality measure of each request waiting in queue. It was not specified how to select a station to which to send the newly available ambulance if there are no requests waiting in queue, or how the ambulance selection decision is made. [65] proposed a dynamic programming formulation for the ambulance selection decision and the ambulance reassignment decision, with the restriction that if an ambulance becomes available and there are requests waiting in queue, then the newly available ambulance is dispatched to the next request in first-come-first-served order. An approximate dynamic programming method is used to produce solutions, and it is shown that after sufficient training these solutions outperform policies that combine the closest-available-ambulance dispatching rule with the rule to send a newly available ambulance to its home base (or to a random station) if there are no requests waiting in queue. [4] formulated an ambulance dispatching problem with two priority levels and exponentially distributed service times as a continuous-time Markov Decision Process (MDP), and showed (for a sufficiently small number of ambulances to enable solving the MDP) that the closest-available-ambulance dispatching policy is suboptimal. It was assumed that if an emergency call arrives and all ambulances are busy, then the emergency is handled by an outside service, that is, there is no queue of waiting requests in the model. It was also assumed that an ambulance returns to its home base when it becomes available. Therefore, the MDP includes an ambulance selection decision and a very restrictive ambulance reassignment decision.
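The first modification of [46] described above translates directly into code. In the sketch below (ours), the zone preparedness measure is left abstract, since the exact measure of [2] is not reproduced here; we also evaluate preparedness as if the candidate ambulance were already removed, which is our reading of the rule.

```python
def preparedness_dispatch(free_ambulances, call, zones, travel_time, preparedness):
    """First modification of [46]: dispatch the free ambulance maximizing
    (minimum zone preparedness, computed with that ambulance removed)
    divided by its travel time to the call."""
    def score(amb):
        remaining = [a for a in free_ambulances if a is not amb]
        worst_zone = min(preparedness(z, remaining) for z in zones)
        return worst_zone / travel_time(amb, call)
    return max(free_ambulances, key=score, default=None)
```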
Similarly, [5] proposed an ambulance dispatching heuristic that takes emergency priorities into account, and used simulation to compare the performance of the heuristic and the closest-available-ambulance dispatching rule for a setting with two priority levels. The same assumptions as in [4] are made, and therefore the model also includes an ambulance selection decision and a very restrictive ambulance reassignment decision. [48] used simulation to compare two dispatch policies: a policy that dispatches the closest available ambulance to all calls, and a policy that dispatches the closest available ambulance to priority 1 calls and, to priority 2 and 3 calls, the least-utilized ambulance within a specified response time radius. It was assumed that if an emergency call arrives and all ambulances are busy, then the call is lost. It was also assumed that an ambulance returns to its home base when it becomes available. Therefore, the simulation also includes an ambulance selection decision and a very restrictive ambulance reassignment decision. [39] compared two dispatch policies with the closest-available-ambulance policy. In all policies, ambulance reassignment decisions are made according to the first-come-first-served rule: if an ambulance becomes available and there are calls waiting in queue, then the ambulance is dispatched to the waiting call that first entered the queue. If an ambulance becomes available and there is no call waiting in queue, then the ambulance returns to its home base. One dispatch policy is based on a Markov decision process with a simplified state that represents the location of the currently considered emergency, and the set of available ambulances, assuming that each available ambulance is at its home base. The second dispatch policy, called DMEXCLP, works as follows: for each available ambulance that can reach the emergency location within the threshold time, the objective value of the MEXCLP with the available ambulances excluding that ambulance is computed. Then the available ambulance that can reach the emergency location within the threshold time with the largest objective value of the MEXCLP without that ambulance is dispatched. If no available ambulance can reach the emergency location within the threshold time, then the available ambulance, irrespective of travel time to the emergency location, with the largest objective value of the MEXCLP without that ambulance is dispatched. Simulation results showed that the heuristic has a much lower fraction of late arrivals than the closest-available-ambulance policy, but that the heuristic also has a much greater mean response time than the closest-available-ambulance policy. [40] assumed that all ambulances are dispatched from their home bases, that there is always an ambulance available at its home base when an emergency call arrives, and that an ambulance must always be dispatched immediately to an emergency. They also assumed that after service each ambulance returns to its home base, and that the time that elapses from the moment the ambulance arrives at the emergency location until the ambulance is back at its home base is deterministic and is the same for all emergencies and all ambulances, that is, the elapsed time does not depend on the emergency location or the ambulance home base location. Thus they considered a very restrictive ambulance reassignment decision.
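The DMEXCLP rule of [39] described above can be sketched as follows (our code; the covers and reachable predicates and the common busy probability q are assumed inputs), reusing the MEXCLP expected-coverage objective:

```python
def mexclp_value(demands, ambulances, covers, q):
    """Expected covered demand in the MEXCLP sense: a demand point with
    weight w covered by k ambulances contributes w * (1 - q**k), where q
    is the common probability that an ambulance is busy."""
    value = 0.0
    for point, w in demands.items():
        k = sum(1 for a in ambulances if covers(a, point))
        value += w * (1.0 - q ** k)
    return value

def dmexclp_dispatch(free, call, demands, covers, reachable, q):
    """DMEXCLP as described in [39]: among free ambulances that can reach the
    call within the threshold (or all free ambulances if none can), dispatch
    the one whose removal leaves the largest MEXCLP value."""
    candidates = [a for a in free if reachable(a, call)] or list(free)
    def remaining_value(a):
        return mexclp_value(demands, [b for b in free if b is not a], covers, q)
    return max(candidates, key=remaining_value, default=None)
```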
They showed with an example that for any online policy the ratio of fraction of late arrivals to the offline minimum fraction of late arrivals can be arbitrarily large, and thus no online policy can have a finite competitive ratio. They also used simulation to compare the expected performance ratios of the closest-available-ambulance policy and of DMEXCLP, and showed that the expected performance ratio of DMEXCLP is better than that of the closest-available-ambulance policy.

1.3. Contributions. The contributions of this paper are as follows.

(A) Modeling the operation of an ambulance fleet under uncertainty. We propose a model for optimizing ambulance dispatch decisions, including ambulance selection decisions and ambulance reassignment decisions. In particular, the proposed model has the following features:

(1) The model takes into account the type of each emergency. The type of emergency affects the type of ambulance and crew needed, the marginal value of response time, and the set of appropriate hospitals for the emergency. As pointed out in Section 1.2, most existing models consider only one emergency type, that is, the effects of emergency type mentioned above are ignored. Some existing models distinguish a small number (2 or 3) of priority levels. However, as pointed out above, the emergency type has more dimensions than just priority level.

(2) The model considers each ambulance and crew as unique. As pointed out in Section 1.1, each ambulance and crew has unique capabilities and skills. For example, some crews may have special training or experience in the handling of stroke victims, and some crews may be skilled in coping in dangerous situations such as rioting. As pointed out in Section 1.2, most existing models consider ambulances as interchangeable, that is, the only attributes of ambulances and crews taken into account are the availability and location of the ambulance. A small number of existing models make a distinction between BLS and ALS ambulances.

(3) The model allows the hospital for each emergency to be chosen, taking into account the emergency type and the location of the emergency relative to hospitals. Most existing models ignore choices among multiple hospitals; in fact, most existing models either have no hospital entity in the model, or have a single "hospital" entity irrespective of the type and location of the emergency.

(4) The model makes provision for a queue of waiting emergencies. This is both necessary if all ambulances are busy, and desirable if the emergency is not urgent and few ambulances are available. Many existing models ignore the possibility of a queue of waiting emergencies. Instead, it is assumed that if all ambulances are busy then some unlimited outside service will take care of the emergency.

(5) The model makes provision for both ambulance selection decisions as well as nontrivial ambulance reassignment decisions. When an ambulance becomes available, it can be assigned to an emergency in queue, and it can also be sent to a chosen station. Since many existing models ignore the possibility of a queue of waiting emergencies, such models cannot accommodate decisions to assign available ambulances to emergencies in queue. Also, many existing models assume that when an ambulance becomes available, it will go to its home base, even if sending it to a different station would improve coverage greatly.

(6) The model allows ambulances on their way to a station to be dispatched to an emergency.
Because of the triangle inequality, that would result in a smaller response time than waiting until the ambulance reaches the station before dispatching the ambulance to the emergency. Also, with modern communication technology dispatchers are in constant contact with ambulances, and such en-route dispatching is easy to execute. As pointed out in Section 1.2, most existing models allow only ambulances at stations to be dispatched.

(7) Dispatch decisions have consequences not only for the emergency and the ambulance under consideration, but also for future emergencies and for other ambulances that have to take care of future emergencies. It is challenging to take these future consequences of dispatch decisions into account, because future consequences are a complicated function of current decisions, and because the future consequences are uncertain. The model takes into account that future consequences are uncertain. Most existing models either do not take uncertainty into account, or incorporate uncertainty into a simulation model.

(B) Solution method for solving the optimization problems to dispatch ambulances under uncertainty. In principle, the problem of optimizing ambulance operations can be formulated as a Markov decision process or as a multistage stochastic integer program. However, these problems would be intractable. Therefore, we propose to use the following rolling horizon approach (see for instance [31], [30]). Each time an ambulance selection decision or an ambulance reassignment decision has to be made, a two-stage stochastic optimization problem is formulated and solved. The first-stage decisions are either ambulance selection decisions or ambulance reassignment decisions, as appropriate for the decision at hand. The second-stage decisions are sequences of ambulance selection decisions and ambulance reassignment decisions over a considered time horizon. For each first-stage decision, the setting for the decision is known. For example, if an ambulance selection decision has to be made then the type and location of the newly arrived emergency is known, or if an ambulance reassignment decision has to be made then the location of the newly available ambulance is known, and in both cases the current state of the system is known. In contrast, the settings for second-stage decisions are not known yet, and the uncertainty is represented with a set of second-stage scenarios. The number of first-stage alternatives is relatively small. If an ambulance selection decision has to be made, then the first-stage alternatives correspond to the available compatible ambulances combined with the candidate hospitals, as well as the alternative to add the emergency to the queue. If an ambulance reassignment decision has to be made, then the first-stage alternatives correspond to the emergencies in queue as well as the stations. Therefore, each two-stage problem can be solved by computing the expected second-stage cost for each feasible first-stage alternative, and then choosing the first-stage alternative with the best combination of first-stage and second-stage cost. Thus, to solve the two-stage problems fast enough, we need to be able to quickly compute the expected second-stage cost for every feasible first-stage decision. The second-stage cost is given by the mean cost over a finite set of scenarios, and its computation requires solving a sequence of ambulance selection problems and ambulance reassignment problems for each scenario.
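Stripped of the model details, the decision loop just described is a one-step lookahead over an enumerable set of first-stage alternatives. A schematic sketch of our own, with all problem-specific logic hidden behind assumed cost functions:

```python
def choose_first_stage(alternatives, scenarios, first_stage_cost, second_stage_cost):
    """One-step lookahead of the rolling-horizon scheme: score each feasible
    first-stage alternative by its immediate cost plus the mean second-stage
    cost over the sampled scenarios, and return the cheapest alternative."""
    def total_cost(alt):
        # mean over a non-empty, finite set of second-stage scenarios
        future = sum(second_stage_cost(alt, s) for s in scenarios) / len(scenarios)
        return first_stage_cost(alt) + future
    return min(alternatives, key=total_cost)
```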
The second-stage cost can thus be computed by separately solving a deterministic sequence of ambulance selection problems and ambulance reassignment problems for each scenario. To facilitate faster selection of first-stage decisions, we consider a continuous relaxation of the second-stage problem for each scenario. In addition, due to the huge number of second-stage decision variables for each scenario, we solve the continuous relaxation of the second-stage problem with column generation, and we devise an approach to quickly solve the column generation subproblems to find decision variables with negative reduced cost.

(C) Numerical tests with real emergency medical service data. We used more than 2 years of emergency call data of the Rio de Janeiro emergency medical service to calibrate models of emergency call arrivals. We used the resulting models to compare the performance of the proposed ambulance dispatch policies with the closest-available-ambulance rule (for the ambulance selection problem) and the closest-station rule (for the ambulance reassignment problem). The results showed that the proposed dispatch policy yields smaller response times. The results also showed the benefit of the proposed column generation algorithm to solve the two-stage stochastic problems. In addition, it was observed that reasonable response times can be obtained even with quite a small number of ambulances and stations.

The rest of this paper is organized as follows. In Section 2, we specify the optimization models. In Section 3, we explain how these problems are solved. The numerical results for the Rio de Janeiro EMS are presented in Section 4.

Optimization Models

In this section we describe the two optimization problems to be solved in rolling horizon fashion to make ambulance dispatch decisions:

(1) The ambulance selection problem: When a request arrives, the problem to decide whether an ambulance should be dispatched to the emergency, and if so, which ambulance to dispatch to the emergency, or whether to add the request to a queue of waiting requests.

(2) The ambulance reassignment problem: When an ambulance becomes available, the problem to decide whether to dispatch the ambulance to an emergency waiting in queue, and if so, to which emergency to dispatch the ambulance, or whether to send the ambulance to a station, and if so, to which station to send the ambulance.

Both models "look ahead" until the end of a chosen time horizon (for example, a few hours or until the end of the day) to approximate the impact of current decisions on the objective function in the future. Both problems minimize a combination of the cost of the immediate decision and the expected future costs affected by the immediate decision over the planning horizon. First we present a deterministic formulation of the problem, as though the arrival times, locations, and types of the emergency calls over the planning horizon are known. In Section 2.2 we describe the extension of this model to incorporate random arrival times, locations, and types of emergency calls.

2.1. Deterministic Models. Ambulances can be dispatched from stations, from emergency locations (if the ambulance is not needed to transport patients to a hospital), from hospitals, and from intermediate locations while traveling towards a station. The models do not allow ambulances to be dispatched while busy with service: while traveling towards an emergency location, or while providing on-site emergency care, or while traveling with patient(s) towards a hospital.
That is, we do not model preemption or "forward" dispatching of ambulances. An ambulance that is not busy with service must either be at a station or traveling toward a station. If ambulances are allowed to wait at a hospital for a dispatch, then the hospital is also a station in the model. 2.1.1. Problem input parameters. In this section we describe the input parameters of the models. These parameters are summarized in Table 1. Basic problem parameters. Both time and space are discretized for the model. Let t = 0 denote the current time (which can be any time instant during the current day) and let t = 1, . . . , T denote the time steps until the end T of the time horizon. Let L denote the set of discrete locations, used for representing emergency call locations as well as ambulance locations. Each emergency call is characterized by its arrival time, its location, and its type. Let C denote the set of call types, let A denote the set of ambulance types, let B denote the set of ambulance stations, and let H denote the set of hospitals. A call of type c ∈ C can be served by a subset A(c) ⊂ A of ambulance types, and a call of type c ∈ C at location ℓ ∈ L can be sent to a subset H(c, ℓ) ⊂ H of hospitals. For each ambulance type a ∈ A, let C(a) := {c ∈ C : a ∈ A(c)} denote the set of call types that can be served by ambulance type a. Initial conditions. Other input includes the initial conditions for the problem. If a call arrives at time 0, then the location and the type of the call that has just arrived are denoted by ℓ0 and c0 respectively. If an ambulance completes service at a hospital at time 0, then the ambulance type and the hospital are denoted by a0 and h0 respectively. For each emergency type c ∈ C and emergency location ℓ ∈ L, let C0(c, ℓ) denote the number of calls at time 0 waiting in queue for an ambulance to be dispatched to serve the call. This includes the call that has just arrived at t = 0 at location ℓ0. For each ambulance type a ∈ A and ambulance station b ∈ B, let A0(a, b) denote the number of ambulances of type a at b available for dispatch just before the dispatch of ambulances at time 0. Similarly, for each ambulance type a ∈ A and hospital h ∈ H, let A0(a, h) denote the number of ambulances of type a at h available for dispatch just before the dispatch of ambulances at time 0, including the ambulance of type a0 at hospital h0 that just became available. Keeping track of ambulance locations. As mentioned above, ambulances can also be dispatched while traveling towards a station. To model this, and in general to keep track of ambulance locations both while stationary and while moving, it is useful to determine where ambulances can be while traveling to specific destinations. First, for each time t ∈ {0, 1, . . . , T − 1}, ambulance type a ∈ A, hospital h ∈ H, and destination station b ∈ B, let L(t, a, h, b) ∈ L denote the forecasted location of the ambulance at time t + 1 if the ambulance starts from h at t to travel towards b. In addition, for each current ambulance location ℓ1 ∈ L at time t, and destination station b ∈ B, let L(t, a, ℓ1, b) ∈ L denote the forecasted location of the ambulance at time t + 1 if the ambulance continues to travel towards b. Next, for each time t ∈ {0, 1, . . .
, T }, let For each ambulance type a ∈ A, station b ∈ B, and initial ambulance location 1 ∈ L(0, a, b), let A 0 (a, 1 , b) denote the number of ambulances of type a at location 1 traveling towards b available for dispatch just before the dispatch of ambulances at time 0. In addition, for each call type c ∈ C, ambulance type a ∈ A(c), initial ambulance location 1 ∈ L, emergency location ∈ L, and hospital h ∈ H(c, ), let A 0 (c, a, 1 , , h) denote the number of ambulances of type a at location 1 at time 0 traveling to an emergency type c at location and from there to hospital h. Also, let A 0 (c, a, 1 , h) denote the number of ambulances of type a ∈ A(c) at location 1 ∈ L at time 0 traveling with emergency type c ∈ C patient(s) after on-site emergency care has already been provided, to hospital h ∈ ∪ ∈L H(c, ). Forecasts of emergency calls, service times, and travel times For each time t ∈ {1, . . . , T }, call type c ∈ C, and emergency location ∈ L, let λ(t, c, ) denote the forecasted number of calls of type c at location in time period t. For each dispatch time t ∈ {0, 1, . . . , T }, call type c ∈ C, ambulance type a ∈ A(c), initial ambulance location 1 ∈ ∪ b∈B L(t, a, b) ∪ B ∪ H, emergency location ∈ L, and hospital h ∈ H(c, ), let τ (t, c, a, 1 , , h) denote the forecasted time for ambulance type a to travel from 1 at time t to , provide on-site emergency care for call type c at , travel with patient(s) from to hospital h, and deliver the patient(s) at h. Also, for each call type c ∈ C, ambulance type a ∈ A(c), initial ambulance location 1 ∈ L, and hospital h ∈ ∪ ∈L H(c, ), let τ 0 (c, a, 1 , h) denote the forecasted time for ambulance type a to travel with patient(s) from 1 at time 0 after on-site emergency care has already been provided, to hospital h, and deliver the patient(s) of type c at h. 11 2.1.2. Decision variables for the ambulance selection problem. The following first-stage (for t = 0) decision variables are used for the ambulance selection problem: • x 0 (c, a, b, , h) = the number of ambulances of type a ∈ A(c) dispatched at time 0 from station b ∈ B to serve calls of type c ∈ C at location ∈ L, and transport them to hospital h ∈ H(c, ); this includes both calls in queue as well as the call that arrived at time 0; • x 0 (c, a, 1 , b, , h) = the number of ambulances of type a ∈ A(c) at location 1 ∈ L(0, a, b) at time 0 traveling toward station b ∈ B dispatched to serve calls of type c ∈ C that arrived at location ∈ L at time 0, and transport them to hospital h ∈ H(c, ). The following second-stage (for t = 1, . . . 
, T ) decision variables are used for the ambulance selection problem: • x t (c, a, b, , h) = the number of ambulances of type a ∈ A(c) dispatched at time t from station b ∈ B to serve calls of type c ∈ C at location ∈ L, and transport them to hospital h ∈ H(c, ); this includes both calls in queue as well as the calls that arrived at time t; • x t (c, a, 1 , b, , h) = the number of ambulances of type a ∈ A(c) at location 1 ∈ L(t, a, b) at time t traveling toward station b ∈ B dispatched to serve calls of type c ∈ C that arrived at location ∈ L and transport them to hospital h ∈ H(c, ); • x t (c, a, h , , h) = the number of ambulances of type a ∈ A(c) dispatched at time t from hospital h ∈ H to serve calls of type c ∈ C at location ∈ L, and transport them to hospital h ∈ H(c, ); • y t (a, h, b) = the number of ambulances of type a ∈ A instructed at time t to move from hospital h ∈ H towards station b ∈ B; • C t (c, ) = the number of calls of type c ∈ C waiting in queue at location ∈ L at the beginning of time t; • A t (a, b) = the number of ambulances of type a ∈ A at station b ∈ B at the beginning of time t; • A t (a, 1 , b) = the number of ambulances of type a ∈ A at location 1 ∈ L(t, a, b) moving towards station b ∈ B at the beginning of time t. Next we make a few remarks regarding these decision variables. Remark 2.1. First, recall that the decision variables above are written for a deterministic problem, but these variables will be used as part of a stochastic problem. If the realized emergency calls during the time horizon coincided with the forecasted calls, then an optimal solution of the deterministic optimization problem would give optimal dispatch decisions for the entire time horizon. However, typically the realized calls will not coincide with the forecasted calls, in which case the optimal first-stage decision variables give a useful immediate dispatch decision but the optimal second-stage decision variables may not be useful as decisions -the purpose of the second-stage model is to approximate the effect of the first-stage decisions on future costs, and not to fix useful decisions for the future. Second, recall that times t = 1, 2, . . . , T , correspond to a discretization of the time horizon, but that t = 0 can be any time when an emergency call arrives (for the ambulance selection problem) or when an ambulance becomes available (for the ambulance reassignment problem). Therefore, we assume that it does not happen simultaneously that an emergency call arrives and an ambulance becomes available. This explains why at (each) time t = 0, either an ambulance selection problem or an ambulance reassignment problem is considered. Third, first-stage decision variables x 0 (c, a, b, , h) and x 0 (c, a, 1 , b, , h) can be restricted to pairs (c, ) corresponding to calls in queue at t = 0. Although these restrictions are implemented in code, for simplicity of exposition we do not introduce notation for such restrictions. When there is no call in queue for a given pair (c, ), then the constraints below will imply that the corresponding variables x 0 (c, a, b, , h) and x 0 (c, a, 1 , b, , h) are zero. 2.1.3. Decision variables for the ambulance reassignment problem. 
The following first-stage decision variables are used for the ambulance reassignment problem: • x 0 (c, a, h, , h ) = the number of ambulances of type a ∈ A(c) dispatched at time 0 from hospital h ∈ H to serve calls of type c ∈ C at location ∈ L, and transport them to hospital h ∈ H(c, ); • y 0 (a, h, b) = the number of ambulances of type a ∈ A instructed at time 0 to move from hospital h ∈ H towards station b ∈ B. The second-stage (for t = 1, . . . , T ) decision variables for the ambulance reassignment problem as the same as the decision variables for the ambulance selection problem. Next we make a few remarks regarding the decision variables for the ambulance reassignment problem. Remark 2.2. First-stage decision variables x 0 (c, a, h, , h ) and y 0 (a, h, b) are needed for a = a 0 and h = h 0 only. In addition, similar to the third point of Remark 2.1, first-stage decision variables x 0 (c, a, h, , h ) can be restricted to pairs (c, ) corresponding to calls in queue at t = 0. Although these restrictions are implemented in code, for simplicity of exposition we do not introduce notation for such restrictions. When a = a 0 or h = h 0 , then A 0 (a, h) = 0 and the constraints below will imply that the corresponding variables x 0 (c, a, h, , h ) and y 0 (a, h, b) are zero, and when there is no call in queue for a given pair (c, ), then the constraints below will imply that the corresponding variables x 0 (c, a, h, , h ) are zero. 2.1.4. Constraints for the ambulance selection problem. The following five sets of constraints apply to the first-stage variables for the ambulance selection problem only: (S1) Flow balance equations at the stations: For each a ∈ A, b ∈ B, x 0 (c, a, b, , h) The left side in (2.1) is the number of ambulances of type a at station b at the beginning of period t = 1. This is equal to the initial number A 0 (a, b) of ambulances of type a at station b available for dispatch minus the number of ambulances of type a that leave station b in the first stage plus the number of ambulances of type a that arrive at station b during the first stage. This latter is the number of ambulances of type a that would have arrived at station b during the first stage based on their status at t = 0 minus the number of these ambulances that are dispatched while en-route to station b to attend a call. The remaining flow constraints follow the same logic and therefore are given without detailed explanation. The left side of (2.7) incorporates the requirement that every ambulance that finishes service at a hospital in time period t has to be sent either to a call in queue or to a station (even if the station is at the hospital itself). The right side of (2.7) represents the number of ambulances of type a that finish service at a hospital h in time period t. In particular, the terms A 0 (c, a, 1 , , h) and A 0 (c, a, 1 , h) represent the number of ambulances of type a in service at t = 0 (either going to an emergency and then to hospital h or traveling to hospital h after on-site emergency care has been provided) that finish that service in time period t. (At3) Flow balance equations at the locations between hospitals and stations: For each t = 1, . . . , T , a ∈ A, b ∈ B, 1 ∈ L(t, a, b), (At4) Flow balance equations for the queues: For each t = 1, . . . , T , c ∈ C, ∈ L, x t (c, a, 1 , b, , h). (2.10) where A max (b) denotes the maximum number of ambulances that can park at station b. 2.1.5. Constraints for the ambulance reassignment problem. 
The following four sets of constraints apply to the first-stage variables for the ambulance reassignment problem only: (R1) Flow balance equations at the stations: For each a ∈ A, b ∈ B, , 1 , b). (R3) Flow balance equations at the locations between hospitals and stations: For each a ∈ A, b ∈ B, 1 ∈ L(0, a, b), (R4) Flow balance equations for the queues: For each c ∈ C, ∈ L, a, h , , h). x t (c, a, 1 , b, , h). 2.1.6. Objective function for the ambulance selection problem . Let f t (c, a, b, , h) denote the cost per call if ambulance type a ∈ A(c) is dispatched at time t from station b ∈ B to serve a call of type c ∈ C at location ∈ L, and transport them to hospital h ∈ H(c, ); this includes a penalty for the waiting time and other costs. Similarly, let f t (c, a, h , , h) denote the cost per call if ambulance type a ∈ A(c) is dispatched at time t from hospital h ∈ H to serve a call of type c ∈ C at location ∈ L, and transport them to hospital h ∈ H(c, ); let f t (c, a, 1 , b, , h) denote the cost per call if ambulance type a ∈ A(c) at location 1 ∈ L(t, a, b) at time t traveling toward station b ∈ B is dispatched to serve a call of type c ∈ C that arrived at location ∈ L at time t, and transport them to hospital h ∈ H(c, ); let f t (a, h, b) denote the cost if ambulance type a ∈ A is dispatched at time t from hospital h ∈ H to station b ∈ B; let g t (c, ) denote the penalty per call of type c ∈ C waiting in queue at location ∈ L at the beginning of time t; let g t (a, b) denote the cost per ambulance of type a ∈ A at station b ∈ B at the beginning of time t; and let g t (a, 1 , b) denote the cost per ambulance of type a ∈ A that moves from location 1 ∈ L(t, a, b) to location L(t, a, 1 , b) during time t. For the ambulance selection problem, the objective is to minimize c, a, b, , h)x t (c, a, b, , h c, a, 1 , b, , h)x t (c, a, 1 , b, , 2.1.7. Objective function for the ambulance reassignment problem. Using the notation of the previous sections, for the ambulance reassignment problem, the objective is to minimize Stochastic Model. In this section we describe a stochastic model of ambulance dispatch operations. The stochastic model is used to generate scenarios for the two-stage optimization problem, and is used in a simulation to test the performance of different ambulance dispatch policies. The same space discretization is used for the optimization problem and the simulation, but the time discretization applies to the optimization problem only. The main random variable is the sequence of emergency calls of each type at each location. Other quantities, such as travel times, are modeled as deterministic. Emergency calls arrive in continuous time according to an exogenous stochastic process. We assume that the stochastic process has the property that with probability 1, at most one emergency call arrives at a continuous time point. (In the numerical tests, emergency calls of type c for location arrive according to a nonhomogeneous Poisson process with rate λ(τ, c, ) at time τ .) Let ω(τ ) denote a random sequence of emergency calls during time interval (τ, T ), where T T denotes a specified simulation time horizon, and let ξ(τ ) denote a random sequence of emergency calls during time interval (τ, τ + T ). For any time τ , let s(τ ) denote the state of the process at time τ . That is, s(τ ) contains information about the location and current assignment of each ambulance, and the calls in queue at time τ , including the data of a call, if any, that has just arrived. 
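Emergency arrivals of this kind can be sampled with the standard thinning method of Lewis and Shedler; the C++ sketch below is a generic illustration (not the paper's code) and assumes a finite upper bound lambdaMax on the rate over the horizon.

```cpp
#include <random>
#include <vector>

// Sample the arrival times of a nonhomogeneous Poisson process with rate
// lambda(tau) on (0, horizon], by thinning a homogeneous process of rate
// lambdaMax >= lambda(tau) for all tau (Lewis-Shedler thinning).
template <class RateFn>
std::vector<double> sampleNHPP(RateFn lambda, double lambdaMax,
                               double horizon, std::mt19937& rng) {
    std::exponential_distribution<double> gap(lambdaMax);
    std::uniform_real_distribution<double> unif(0.0, 1.0);
    std::vector<double> arrivals;
    double tau = 0.0;
    while (true) {
        tau += gap(rng);                           // candidate arrival time
        if (tau > horizon) break;
        if (unif(rng) * lambdaMax <= lambda(tau))  // accept w.p. lambda/lambdaMax
            arrivals.push_back(tau);
    }
    return arrivals;
}
```

A second-stage scenario ξn(τ) is then obtained by drawing one such arrival sequence for every call type c and location ℓ, with rate function λ(·, c, ℓ) over the interval (τ, τ + T).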
Let π denote any deterministic policy that takes the state s(τ ) at any time τ as input, and specifies either an ambulance selection decision or an ambulance reassignment decision π(s(τ )), as appropriate. The simulation generates a sample path ω(0) according to the specified stochastic process, and keeps track of the state and performance metrics of the process. When an ambulance selection decision or an ambulance reassignment decision has to be made at time τ , the simulation provides s(τ ) as input to π, receives the decision π(s(τ )) as output from π, and updates the state of the process accordingly. Next we describe how the optimization problems specified in Section 2.1 are used in a corresponding policy π. First, suppose that π is called at time τ with a request for an ambulance selection decision, and with state s(τ ) provided as input. Then time τ in the simulation corresponds to time t = 0 in the current optimization problem, and the time interval (τ, τ + T ) is partitioned into discrete time periods indexed by t = 1, . . . , T for the purpose of the optimization problem. Also, N independent and identically distributed "secondstage scenarios", denoted by ξ 1 (τ ), . . . , ξ N (τ ), each being a sequence of emergency calls during time interval (τ, τ + T ), is generated according to the specified stochastic process. Note that scenarios ξ 1 (τ ), . . . , ξ N (τ ) are independent of the sample path ω(τ ) used to evaluate the policies. Also note that all the input of the optimization problem either does not change, or can be derived from the state s(τ ) provided as input by the simulation and the generated scenarios ξ 1 (τ ), . . . , ξ N (τ ). For example, the set C of emergency call types does not change, the type c 0 of the emergency call at t = 0 is part of the state s(τ ), and the number λ n (t, c, ) of emergency calls of type c for period t and location under scenario n can be derived from ξ n (τ ). Let x 0 := x 0 (c 0 , a, b, 0 , h), x 0 (c 0 , a, 1 , b, 0 , h), a ∈ A(c 0 ), b ∈ B, 1 ∈ L (0, a, b), h ∈ H(c 0 , 0 ) denote the first-stage decision variables of the ambulance selection problem, and for each scenario n, let (x n , y n ) := x n,t (c, a, b, , h), x n,t (c, a, 1 , b, , h), x n,t (c, a, h , , h), y n,t (a, h , b), C n,t (c, ), A n,t (a, b), A n,t (a, 1 , b), t = 1, . . . , T, c ∈ C, a ∈ A(c), b ∈ B, ∈ L, 1 ∈ L(0, a, b), h ∈ H(c, ), h ∈ H denote the second-stage decision variables for scenario n. Let denote the first-stage part of the ambulance selection problem objective, and for any scenario n, let the remaining (second-stage) part of the ambulance selection problem objective be denoted by G(s(τ ), ξ n (τ ), x n , y n ). Then, for each first-stage decision x 0 and each scenario n, the continuous relaxation of the second-stage of the ambulance selection problem is Similarly, suppose that π is called at time τ with a request for an ambulance reassignment decision, and with state s(τ ) provided as input. Letx 0 := x 0 (c, a 0 , h 0 , , h), y t (a 0 , h 0 , b), c ∈ C, b ∈ B, ∈ L, h ∈ H(c, ) denote the first-stage decision variables, and for each scenario n, let (x n , y n ) := x n,t (c, a, b, , h), x n,t (c, a, 1 , b, , h), x n,t (c, a, h , , h), y n,t (a, h , b), C n,t (c, ), A n,t (a, b), A n,t (a, 1 , b), t = 1, . . . , T, c ∈ C, a ∈ A(c), b ∈ B, ∈ L, 1 ∈ L(0, a, b), h ∈ H(c, ), h ∈ H denote the second-stage decision variables for scenario n. 
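In our notation (with X0 denoting the finite set of feasible first-stage alternatives and Q̂n the relaxed second-stage value, both our labels), the per-scenario problems referred to as (2.20) and the resulting decision rule take the following shape, a sketch consistent with the surrounding description rather than a verbatim restatement:

\[
\widehat{Q}_n(x_0) \;=\; \min_{(x_n,\,y_n)\,\ge\,0} \Big\{\, G\big(s(\tau), \xi_n(\tau), x_n, y_n\big) \;:\; \text{constraints (At1)--(At7) hold for scenario } n \text{ given } x_0 \,\Big\},
\]
\[
x_0^{\star} \;\in\; \operatorname*{argmin}_{x_0 \in X_0} \Big\{\, F\big(s(\tau), x_0\big) \;+\; \frac{1}{N} \sum_{n=1}^{N} \widehat{Q}_n(x_0) \,\Big\},
\]

with the integrality of the second-stage variables relaxed; the reassignment problem (2.21) is analogous, with the first-stage alternative ranging over the queued emergencies and the stations.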
Let denote the first-stage part of the ambulance reassignment problem objective, and for any scenario n, let G(s(τ ), ξ n (τ ), x n , y n ) denote the remaining (second-stage) part of the ambulance reassignment problem objective. Then, for each first-stage decisionx 0 and each scenario n, the continuous relaxation of the secondstage of the ambulance reassignment problem is Column Generation We propose a column generation algorithm to solve the optimization problems (2.20) and (2.21). Problems (2.20) and (2.21) are large linear programs that have to be solved fast in practice. In addition, in both problems the number of decision variables is large relative to the number of constraints, so that in a basic feasible solution most decision variables have value zero. This motivates the column generation algorithm proposed in this section to solve these problems. We describe the algorithm for problem (2.20). A similar algorithm is used for problem (2.21). The dual variables for problem (2.20) are as follows: • For each t = 1, . . . , T , a ∈ A, b ∈ B, let β t (a, b) denote the dual variable associated with the flow balance constraint (At1) at station b. • For each t = 1, . . . , T , a ∈ A, h ∈ H, let α t (a, h) denote the dual variable associated with the flow balance constraint (At2) at hospital h. • For each t = 1, . . . , T , a ∈ A, b ∈ B, 1 ∈ L(t, a, b), let ψ t (a, b, 1 ) denote the dual variable associated with the flow balance constraint (At3) at location 1 . • For each t = 1, . . . , T , c ∈ C, ∈ L, let φ t (c, ) denote the dual variable associated with the flow balance constraint (At4) for the queue of call type c at location . • For each t = 1, . . . , T + 1, b ∈ B, let ν t (b) ≥ 0 denote the dual variable associated with the capacity constraint (At5) for station b. • For each t = 1, . . . , T , a ∈ A, b ∈ B, 1 ∈ L(t, a, b), let θ t (a, b, 1 ) ≥ 0 denote the dual variable associated with the ambulance supply constraint (At6) for location 1 . • For each t = 1, . . . , T , a ∈ A, b ∈ B, let γ t (a, b) ≥ 0 denote the dual variable associated with the ambulance supply constraint (At7) for station b. For problem (2.20), the Lagrangian relaxation is to minimize The column generation algorithm starts with an initial feasible solution for problem (2.20), for instance obtained with the closest-available-ambulance rule to allocate ambulances to calls and the closest-station rule to choose the station to which to send an ambulance when it finishes its task and there are no calls waiting in queue. Then an optimal primal-dual pair is computed for the restriction of problem (2.20) with only the decision variables that are nonzero in the initial feasible solution. If the optimal dual solution of the restricted problem is feasible for the dual of problem (2.20), then the primal-dual pair is optimal for problem (2.20). Otherwise, there exists a primal decision variable with negative reduced cost, and variables with negative reduced cost are added to the restricted problem to obtain the next restricted problem. Every iteration adds a number of primal decision variables with negative reduced cost until the optimal dual solution of the restriction of problem (2.20) is feasible for the dual of problem (2.20). Next we explain how to find primal decision variables with negative reduced cost. We will consider column generation for three types of primal decision variables: x t (c, a, 1 , b, , h), x t (c, a, b, , h), and x t (c, a, h , , h). x t (c, a, 1 , b, , h). 
The column generation subproblem to determine the variable x t (c, a, 1 , b, , h) with the smallest objective coefficient in the Lagrangian relaxation is Column Generation for Variables where the dual variable values are optimal dual values for the previous restricted problem. Solving this column generation subproblem exactly can be time consuming, and is unnecessary for most iterations. Next we show how to compute variables x t (c, a, 1 , b, , h) with negative reduced cost under the following simplifying assumptions: (A1) The time τ (t, c, a, 1 , , h) for ambulance type a to travel from 1 at time t to , provide on-site emergency care for call type c at , travel with patient(s) from to hospital h, and deliver the patient(s) at h, does not depend on the call type c. Thus this time is denoted with τ (t, a, 1 , , h). (A2) Similarly, the cost f t (c, a, 1 , b, , h) for ambulance type a to travel from 1 at time t to , provide on-site emergency care for call type c at , travel with patient(s) from to hospital h, and deliver the patient(s) at h, does not depend on the ambulance station b or the call type c. Thus this cost is denoted with f t (a, 1 , , h). Then the calculations can be streamlined as follows: (1) For each time t and emergency location ∈ L, let c * t ( ) ∈ argmin {φ t (c, ) : c ∈ C} denote the critical call type at time t and location . (2) For each time t, ambulance type a ∈ A, and intermediate location 1 Then the column generation subproblem (3.22) reduces to (3.23) In most iterations, the column generation subproblem does not have to be solved to optimality. Recall that in each column generation iteration, it is sufficient to find a variable x t (c, a, 1 , b, , h) with negative objective value in the column generation subproblem, or to verify that no variable x t (c, a, 1 , b, , h) has negative objective value. Also, for any emergency location , an (available) ambulance at an intermediate location 1 that is close to is more attractive than an ambulance at an intermediate location 1 that is far from (this is also the intuition underlying the popular closest-available-ambulance dispatch heuristic). These observations motivate Algorithm 1 below to solve problem (3.23). 3.2. Column Generation for Variables x t (c, a, b, , h). Next, we consider the column generation subproblem to find a variable x t (c, a, b, , h) with the smallest objective coefficient in the Lagrangian relaxation, that is, t ∈ {0, . . . , T }, c ∈ C, a ∈ A(c), b ∈ B, ∈ L, h ∈ H(c, )} . Then the calculations can be streamlined as follows: (1) For each time t and emergency location ∈ L, let c t ( ) = c * t ( ) ∈ argmin {φ t (c, ) : c ∈ C} denote the critical call type at time t and location . Use the optimal dual variables for the restricted version of the continuous relaxation of problem (2.1)-(2.12) to compute c * t ( ), b * t (a, 1 ), and h * t (a, 1 , ); 7: for t ∈ {0, . . . , T } do 8: for a ∈ A do 9: for ∈ L do 10: negative found ← false; 11: for 1 ∈ L( ) (from closest to furthest) while not negative found do 12: if objective value of column generation subproblem (3.23) is negative then 13: negative found ← true; 14: optimality verified ← false; 15: Add variable x t (c * t ( ), a, 1 , b * t (a, 1 ), , h * t (a, 1 , )) to the restricted version of the continuous relaxation of problem if not optimality verified then denote the critical hospital at time t for ambulance type a, station b, and emergency location . As mentioned before, typically the set H( ) is small, and thus the computation ofĥ t (a, b, ) is quick. 
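All of the column generation subproblems in this section are handled by the same nearest-first, early-exit scan. The C++ sketch below is our illustrative rendering (the reducedCost callback stands in for expressions such as (3.23) or (3.25)); stopping at the first negative column suffices, since column generation only needs some improving column per iteration, and optimality is certified only when a full scan finds none.

```cpp
#include <functional>
#include <optional>
#include <vector>

// One pricing pass for a fixed (t, a, l): scan candidate sources (stations,
// hospitals, or intermediate locations) from closest to furthest to the
// emergency location, and return the first candidate whose reduced cost,
// computed from the current optimal duals, is negative.
std::optional<int> priceNearestFirst(
        const std::vector<int>& sortedCandidates,        // closest to furthest
        const std::function<double(int)>& reducedCost) { // e.g., (3.23)/(3.25)
    for (int candidate : sortedCandidates) {
        if (reducedCost(candidate) < 0.0)
            return candidate;   // add the corresponding column to the RMP
    }
    return std::nullopt;        // no improving column for this (t, a, l)
}
```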
Then the column generation subproblem (3.24) reduces to For any emergency location , an ambulance at a station b that is close to is more attractive than an ambulance at a station b that is far from . These observations motivate Algorithm 2 to solve problem (3.25). x t (c, a, h , , h). Next, we consider the column generation subproblem to find a variable x t (c, a, h , , h) with the smallest objective coefficient in the Lagrangian relaxation, that is, Column Generation for Variables Consider the following simplifying assumptions: (C1) The time τ (t, c, a, h , , h) for ambulance type a to travel from hospital h at time t to , provide on-site emergency care for call type c at , travel with patient(s) from to hospital h, and deliver the patient(s) at h, does not depend on the call type c. Thus this time is denoted with τ (t, a, h , , h). 21 1: For each emergency location ∈ L, construct a list B( ) of bases b ∈ B sorted from closest to to furthest from ; 2: Find an initial feasible solution, and solve the restricted version of the continuous relaxation of problem (2.1)-(2.12) with the decision variables that are nonzero in the initial feasible solution; 3: optimality verified ← false; 4: while not optimality verified do 5: optimality verified ← true; 6: Use the optimal dual variables for the restricted version of the continuous relaxation of problem (2.1)-(2.12) to computeĉ t ( ) andĥ t (a, b, ); 7: for t ∈ {0, . . . , T } do 8: for a ∈ A do 9: for ∈ L do 10: negative found ← false; 11: for b ∈ B( ) (from closest to furthest) while not negative found do 12: if objective value of column generation subproblem (3.25) is negative then 13: negative found ← true; 14: optimality verified ← false; 15: Add variable x t (ĉ t ( ), a, b, ,ĥ t (a, b, )) to the restricted version of the continuous relaxation of problem (2.1) (1) For each time t and emergency location ∈ L, leť c t ( ) = c * t ( ) ∈ argmin {φ t (c, ) : c ∈ C} denote the critical call type at time t and location . (2) For each time t, ambulance a ∈ A, hospital h ∈ H, and emergency location ∈ L, leť h t (a, h , ) ∈ argmin f t (a, h , , h) − α t+τ (t,a,h , ,h) (a, h) : h ∈ H( ) denote the critical hospital at time t for ambulance a, hospital h , and emergency location . Then the column generation subproblem (3.26) reduces to min f t (a, h , ,ȟ t (a, h , )) + α t (a, h ) − α t+τ (t,a,h , ,ȟt(a,h , )) (a,ȟ t (a, h , )) + φ t (č t ( ), ) : t ∈ {1, . . . , T }, a ∈ A, h ∈ H, ∈ L . For any emergency location , an ambulance at a hospital h that is close to is more attractive than an ambulance at a hospital h that is far from . These observations motivate Algorithm 3 to solve problem (3.27). 
1: For each emergency location ℓ ∈ L, construct a list H(ℓ) of hospitals h′ ∈ H sorted from closest to ℓ to furthest from ℓ;
2: Find an initial feasible solution, and solve the restricted version of the continuous relaxation of problem (2.1)–(2.12) with the decision variables that are nonzero in the initial feasible solution;
3: optimality verified ← false;
4: while not optimality verified do
5: optimality verified ← true;
6: Use the optimal dual variables for the restricted version of the continuous relaxation of problem (2.1)–(2.12) to compute č_t(ℓ) and ȟ_t(a, h′, ℓ);
7: for t ∈ {1, . . . , T} do
8: for a ∈ A do
9: for ℓ ∈ L do
10: negative found ← false;
11: for h′ ∈ H(ℓ) (from closest to furthest) while not negative found do
12: if objective value of column generation subproblem (3.27) is negative then
13: negative found ← true;
14: optimality verified ← false;
15: Add variable x_t(č_t(ℓ), a, h′, ℓ, ȟ_t(a, h′, ℓ)) to the restricted version of the continuous relaxation of problem (2.1)–(2.12);
16: if not optimality verified then re-solve the restricted version of the continuous relaxation of problem (2.1)–(2.12);

We recall that the stopping criterion (controlled by flag optimality verified) of Algorithms 1, 2, 3, and 4 is that the dual solution of the restricted version of the continuous relaxation of problem (2.1)–(2.12) is feasible for the dual of the continuous relaxation of the original (not restricted to a subset of columns) ambulance selection problem, or equivalently, that an optimal dual solution of a restricted version of problem (2.20) is feasible for the dual of problem (2.20). In this case, we have indeed found an optimal solution to the primal and dual of the relaxation of the ambulance selection problem. Gathering our previous developments, we also propose Algorithm 4 below, which is a single column generation algorithm for variables x_t(c, a, b, ℓ, h), x_t(c, a, h′, ℓ, h), and x_t(c, a, ℓ1, b, ℓ, h) with negative reduced cost for problem (2.20).

1: For each emergency location ℓ ∈ L, construct a list B(ℓ) of bases b ∈ B sorted from closest to ℓ to furthest from ℓ;
2: For each emergency location ℓ ∈ L, construct a list H(ℓ) of hospitals h′ ∈ H sorted from closest to ℓ to furthest from ℓ;
3: For each emergency location ℓ ∈ L, construct a list L(ℓ) of intermediate locations ℓ1 ∈ L sorted from closest to ℓ to furthest from ℓ;
4: Find an initial feasible solution, and solve the restricted version of the continuous relaxation of problem (2.1)–(2.12) with the decision variables that are nonzero in the initial feasible solution;
5: optimality verified ← false;
6: while not optimality verified do
7: optimality verified ← true;
8: Use the optimal dual variables for the restricted version of the continuous relaxation of problem (2.1)–(2.12) to compute ĉ_t(ℓ) = č_t(ℓ) = c*_t(ℓ), ĥ_t(a, b, ℓ), ȟ_t(a, h′, ℓ), h*_t(a, ℓ1, ℓ), and b*_t(a, ℓ1);
9: for t ∈ {0, . . . , T} do
10: for a ∈ A do
11: for ℓ ∈ L do
12: negative found ← false;
13: for b ∈ B(ℓ) (from closest to furthest) while not negative found do
14: if objective value of column generation subproblem (3.25) is negative then
15: negative found ← true;
16: optimality verified ← false;
17: Add variable x_t(ĉ_t(ℓ), a, b, ℓ, ĥ_t(a, b, ℓ)) to the restricted version of the continuous relaxation of problem (2.1)–(2.12);
18: if t ≥ 1 (variables x_t(c, a, h′, ℓ, h) are defined only for t ≥ 1, not for t = 0) then
21: negative found ← false;
22: for h′ ∈ H(ℓ) (from closest to furthest) while not negative found do
23: if objective value of column generation subproblem (3.27) is negative then
24: negative found ← true;
25: optimality verified ← false;
26: Add variable x_t(č_t(ℓ), a, h′, ℓ, ȟ_t(a, h′, ℓ)) to the restricted version of the continuous relaxation of problem (2.1)–(2.12);

Figure 1. Discretization of the service region of the Rio de Janeiro SAMU into 10 × 10 rectangles, and a heatmap of the mean intensity of emergency calls to the Rio de Janeiro SAMU for the period January 2016–February 2018.

4.1. Comparison Between the Proposed Policy and the Closest-Available-Ambulance Heuristic on a real emergency medical service. The performance of the optimization-based policy for ambulance dispatch decisions described in Section 2 was evaluated using simulation. The problem data were obtained from the Rio de Janeiro emergency medical service (SAMU), with 48 ambulance stations and 10 associated hospitals. A rectangle that contains the service region of the Rio de Janeiro SAMU was discretized into 10 × 10 rectangles. Of these 100 rectangles, 76 have an intersection with the SAMU service region, and are shown in Figure 1. Distances between rectangles were approximated with geodesic distances, and for travel time calculations the ambulance speeds were fixed to 30 km/h. For the optimization problem, time was discretized into 30 minute intervals. Emergency call data for the SAMU for the period January 2016–February 2018 were used to calibrate the intensities λ(t, c, ℓ) of a Poisson process call arrival model, for each 30 minute period t of each day of the week, for each call type c, and for each discretized location ℓ, via maximizing a regularized likelihood function. Figure 1 also shows a heatmap of the sum of the intensities of the Poisson process (summed over the call types and time periods) for each discretized location ℓ. For the numerical tests, we considered a time window with high intensities of call arrivals (specifically, Fridays between 18h00 and 20h00), and we simulated the call arrivals and dispatch decisions during this time window. The algorithms were implemented using C++14 and run on a computer with a Ryzen 5 2600 processor with 8GB of RAM memory, in a Ubuntu 20.04 OS. We present two sets of results. First we show that the proposed method results in smaller response times than the response times obtained with the popular closest-available-ambulance rule. Second, we show that if the ambulance selection problem or the ambulance reassignment problem is of large size, then the column generation algorithms result in smaller solution times than a state-of-the-art solver (Gurobi) used to solve the problems without column generation. In this section, we compare the response times of the method described in Section 2 with the response times of the closest-available-ambulance heuristic, for a time window of 2 hours (Friday 18h00–20h00). The setting was simplified by assuming that the patients of each call must be taken to the closest hospital and that each ambulance must go to the closest station if the call queue is empty.
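The preprocessing just described reduces to a few lines. The sketch below (ours, using planar rather than geodesic distance for brevity) maps a coordinate to its grid cell and converts an inter-cell distance to whole 30-minute travel periods at 30 km/h.

```cpp
#include <algorithm>
#include <cmath>

// Grid cell index for a point inside the bounding rectangle
// [minX, maxX] x [minY, maxY], discretized into an n x n grid.
int cellIndex(double x, double y, double minX, double maxX,
              double minY, double maxY, int n) {
    int col = std::min(n - 1, (int)(n * (x - minX) / (maxX - minX)));
    int row = std::min(n - 1, (int)(n * (y - minY) / (maxY - minY)));
    return row * n + col;
}

// Travel time between cell centers in whole 30-minute periods, with planar
// distance in km and a fixed speed of 30 km/h (the paper uses geodesic
// distances between rectangle centers; this is a simplified illustration).
int travelPeriods(double dxKm, double dyKm) {
    const double speedKmH = 30.0, periodHours = 0.5;
    double hours = std::hypot(dxKm, dyKm) / speedKmH;
    return (int)std::ceil(hours / periodHours);
}
```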
The average response time, denoted by Mean Heuristic and Mean Model, and the maximum response time, denoted by Max Heuristic and Max Model, for respectively the closest-available-ambulance heuristic and the policy obtained with our model, are reported in Table 2. The following notation is used in the table headings: • T : the number of time periods used in the optimization problem; • L = |L|: the number of discrete locations used in the model; Table 2. Comparison between the closest available ambulance heuristic and our optimization model, for different numbers of scenarios, ambulances, hospitals, and bases. • B = |B|: the number of stations (a subset of the 48 stations of the Rio de Janeiro SAMU); • H = |H|: the number of hospitals (depending on the instance this is either a subset of the set of 10 public hospitals that the Rio de Janeiro SAMU works with, or all 10 hospitals); • A: the number of ambulances; • S: the number of scenarios used in the second-stage problems. In each setting, both the mean and the maximum response times are smaller for the proposed policy. An increase in the number of hospitals and stations results in a decrease in response times, and an increase in the number of ambulances results in an even greater decrease in response times. An increase in the number of scenarios seems to have relatively little effect on the response times. 4.2. Runtime Comparisons with and without Column Generation on simulated instances. We considered several simulated instances of the ambulance selection problem and of the ambulance reassignment problem in the plane, and we solved the continuous relaxations of the second-stage problems either using Gurobi without column generation or using the column generation algorithm from Section 3. For these experiments, we used emergency calls generated according to a homogeneous Poisson process in a square, and considered the following discretizations of the square: (i) into 12 × 12 = 144 identical smaller squares, (ii) into 13 × 13 = 169 identical smaller squares, (iii) into 14 × 14 = 196 identical smaller squares, (iv) and into 15 × 15 = 225 identical smaller squares, while distances were calculated using the Euclidean norm. The CPU times required to solve the ambulance selection problem and the ambulance reassignment problem for these instances are reported in respectively Tables 3 and 4. The following notation is used in the table headings: • T : the number of time periods used in the optimization problem; • A: the number of ambulances; • C: the number of calls; • B: the number of stations; • H: the number of hospitals; • L: the number of discrete locations; • K: the number of closest hospitals to choose from to deliver the patient. For larger problem sizes, the column generation algorithm is faster than using Gurobi without column generation. Conclusion The main contribution of this paper is a model of ambulance fleet operations. The model incorporates many important aspects of ambulance fleet operations that are ignored in most of the existing literature, including the following: (1) The model incorporates different emergency types and the consequences of emergency types, such as ambulance requirements, hospital requirements, and marginal value of response time. Table 3. Runtime comparison to solve instances of the ambulance selection problem with and without column generation Algorithm 1 (column generation for variables x t (c, a, 1 , b, , h)). Table 4. 
Runtime comparison to solve instances of the ambulance reassignment problem with and without column generation Algorithm 1 (column generation for variables x_t(c, a, ℓ1, b, ℓ, h)). (2) The model facilitates different ambulance and crew types. (3) The model allows hospital choice, taking into account the emergency type and the location of the emergency relative to hospitals. (4) The model makes provision for a queue of waiting emergencies. (5) The model incorporates both ambulance selection decisions as well as ambulance reassignment decisions. (6) The model allows ambulances on their way to a station to be dispatched to an emergency. (7) The model takes into account the future consequences of dispatch decisions. We proposed a policy based on a rolling horizon approach combined with a two-stage stochastic problem solved each time an ambulance dispatch decision is required. An algorithm based on column generation was proposed for solving the two-stage stochastic problem. The model and algorithm were tested using a simulation calibrated with data of the Rio de Janeiro emergency medical service. The proposed policy performs much better than the popular closest-available-ambulance heuristic. Additional work is needed to reduce the time taken to compute the decisions for the proposed policy.
Chasing Nested Convex Bodies Nearly Optimally

The convex body chasing problem, introduced by Friedman and Linial, is a competitive analysis problem on any normed vector space. In convex body chasing, for each timestep $t\in\mathbb N$, a convex body $K_t\subseteq \mathbb R^d$ is given as a request, and the player picks a point $x_t\in K_t$. The player aims to ensure that the total distance $\sum_{t=0}^{T-1}||x_t-x_{t+1}||$ is within a bounded ratio of the smallest possible offline solution. In this work, we consider the nested version of the problem, in which the sequence $(K_t)$ must be decreasing. For Euclidean spaces, we consider a memoryless algorithm which moves to the so-called Steiner point, and show that in a certain sense it is exactly optimal among memoryless algorithms. For general finite dimensional normed spaces, we combine the Steiner point and our recent previous algorithm to obtain a new algorithm which is nearly optimal for all $\ell^p_d$ spaces with $p\geq 1$, closing a polynomial gap.

Introduction
We study a version of the convex body chasing problem. In this problem, a sequence of T requests K_1, . . . , K_T, each a convex body in a d-dimensional normed space R^d, is given. The algorithm starts at a given position x_0, and each round it sees the request K_t and must give a point x_t ∈ K_t. The online algorithm aims to minimize the total movement cost ∑_{t=0}^{T−1} ‖x_t − x_{t+1}‖. More precisely, we take the viewpoint of competitive analysis, in which the algorithm aims to minimize the ratio of its cost to that of the optimal in-hindsight sequence y_t ∈ K_t. Our companion paper [BLLS18] establishes the first finite upper bound for the competitive ratio of convex body chasing. Our upper bound is exponential in the dimension d, while the best known lower bound is Ω(√d) in Euclidean space, which comes from chasing faces of a hypercube. In this paper we consider a restricted variant of the problem, nested convex body chasing, in which the bodies must be decreasing: K_1 ⊇ K_2 ⊇ · · · ⊇ K_T. This problem was first considered as a potentially more tractable version of the full problem in [BBE+18], where it was shown to have a finite competitive ratio. In our previous work [ABC+18] we gave an algorithm with nearly linear O(d log d) competitive ratio for any normed space. This upper bound is nearly optimal for ℓ∞, but for most normed spaces there is still a gap. For example, for ℓ2 the best lower bound is Ω(√d).

Notation and a Reduction
We denote by B_1 the unit ball B(0, 1) centered at the origin in R^d. We denote by K the space of convex bodies in R^d. We often refer to the Hausdorff metric on K, defined by d_H(K, K̂) := max( max_{x∈K} d(x, K̂), max_{x̂∈K̂} d(x̂, K) ). In other words, it is the maximum distance from a point in one of the sets to the other set. Note that for K_1 ⊇ K_2, we have d_H(K_1, K_2) ≤ 1 iff K_2 + B_1 ⊇ K_1. We denote by s(K) the Steiner point of K, defined at the start of Section 2. We denote by cg(K) the centroid of K. For nested chasing, it turns out we can simplify the problem to make it significantly easier to think about. The following reduction is taken from [ABC+18], except that we add in a dependence on the number T of requests. The proof is essentially unchanged.

Lemma 1.1. For a function f(d, T), the following are equivalent up to constant factors:
(i) (Competitive) There exists an O(f(d, T))-competitive algorithm for nested convex body chasing.
(ii) (Bounded) Assuming that K_1 ⊆ B_1 and x_0 = 0, there exists an algorithm for nested convex body chasing with total movement O(f(d, T)).
(iii) (Tightening) Assuming that K_1 ⊆ B_1 and x_0 = 0, there exists an algorithm for nested convex body chasing that incurs total movement cost O(f(d, T)) until the first time t at which K_t is contained in some ball of radius 1/2.

Our Results
This paper has two essentially independent parts. In the first part, we consider the algorithm which moves to the newly requested body's Steiner point, a geometrical center which is defined for convex subsets of Euclidean spaces. We show that for the Bounded version of nested chasing, it achieves a competitive ratio O(min(d, √(d log T))), which is nearly optimal for sub-exponentially many requests. We also consider the problem of finding the best memoryless algorithm, meaning that x_t ∈ K_t must be a deterministic function of only K_t. In this case, we compare the competitive ratio against the Hausdorff distance between K_1 and K_T, or equivalently the in-hindsight optimum starting from the worst possible x_1. We show that the Steiner point achieves the exact optimal competitive ratio for any (d, T) by adapting an argument from [PO89]. In the second part of this paper, we give a different algorithm which is nearly optimal even for exponentially many requests. Our previous algorithm from [ABC+18] achieved an O(d log d) competitive ratio by moving to the centroid and recursing on short dimensions; like that algorithm, our new algorithm is most naturally viewed in terms of Tightening. Inspired by the Steiner point, we improve that algorithm by adding a small ball B_r to K_t and taking the centroid with respect to a log-concave measure which depends on the normed space. Using this procedure, we obtain a general algorithm for any normed space. For Euclidean spaces, our new algorithm is O(√d log d) competitive. For ℓp spaces with p ≥ 1, our new algorithm is optimal up to an O(log d) factor.

The Steiner Point and Competitive Ratio d
Here we define the Steiner point and explain why it achieves competitive ratio d (in the Bounded formulation (ii) of Lemma 1.1).

Definition 1. For a convex body K ⊂ R^d, its Steiner point s(K) is defined in the following two equivalent ways:
1. For θ ∈ S^{d−1}, let f_K(θ) := argmax_{x∈K} ⟨θ, x⟩ denote the extremal point of K in direction θ (well defined for almost every θ). Then compute the average of this extremal point for a random direction: s(K) = E_{θ∼S^{d−1}}[f_K(θ)]. Here we integrate over the uniform (isometry invariant) probability measure on S^{d−1}.
2. For any direction θ ∈ S^{d−1}, let h_K(θ) := max_{x∈K} ⟨θ, x⟩ be the support function for K in direction θ, and compute s(K) = d ∫_{S^{d−1}} h_K(θ) θ dθ.
The equivalence of the two definitions follows from ∇h_K(θ) = f_K(θ) and the divergence theorem. The first definition makes it evident that s(K) ∈ K. We will use the second definition to control the movement. Behold:

Theorem 2.1. Following the Steiner point, starting from B_1 = K_1 ⊇ K_2 ⊇ · · · ⊇ K_T, gives total movement ∑_{t=1}^{T−1} ‖s(K_t) − s(K_{t+1})‖_2 ≤ d. More generally, for any sequence of nested convex bodies K_1 ⊇ · · · ⊇ K_T, the total movement is at most (d/2)(ω(K_1) − ω(K_T)), where ω(·) denotes the mean width, the average length of a random 1-dimensional projection.

Proof. The idea is simply that for each fixed θ, the integrand decreases by a total of at most 2 over all the requests, so the total budget for movement is 2d. To save the factor 2 we combine ±θ, noting that together they can change by at most 2 in total. Writing it out, using that h_{K_t} − h_{K_{t+1}} ≥ 0 by nestedness, that ‖θ‖_2 = 1, and telescoping:
∑_{t=1}^{T−1} ‖s(K_t) − s(K_{t+1})‖_2 = ∑_{t=1}^{T−1} ‖ d ∫ θ (h_{K_t}(θ) − h_{K_{t+1}}(θ)) dθ ‖_2 ≤ d ∫ (h_{K_1}(θ) − h_{K_T}(θ)) dθ = d ∫ (h_{K_1}(θ) + h_{K_1}(−θ))/2 dθ − d ∫ (h_{K_T}(θ) + h_{K_T}(−θ))/2 dθ.
The first integral is at most 1 because K_1 ⊆ B_1. As h_K(θ) + h_K(−θ) ≥ 0 for any convex body K, the second integral is non-negative. So we conclude that the total movement is at most d, which proves the theorem. The proof of the more general statement is identical.

Better Competitive Ratio for Subexponentially Many Queries
Here we refine the argument from the previous section to show that with T rounds, we obtain a competitive ratio O(√(d log T)). Hence, for subexponentially many queries, the Steiner point is a nearly optimal algorithm.
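Before refining the analysis, here is a small numerical illustration of Definition 1 (our sketch, not code from the paper): for a polytope given by its vertices, the first definition can be estimated by averaging the extremal vertex over random directions.

```cpp
#include <array>
#include <cstdio>
#include <random>
#include <vector>

// Monte Carlo estimate of the Steiner point of a polytope in R^d given by
// its vertices, via Definition 1(1): s(K) = E_theta[ argmax_{x in K} <theta,x> ].
constexpr int d = 3;
using Vec = std::array<double, d>;

double dot(const Vec& u, const Vec& v) {
    double s = 0; for (int i = 0; i < d; ++i) s += u[i] * v[i]; return s;
}

Vec steinerMC(const std::vector<Vec>& vertices, int samples, std::mt19937& rng) {
    std::normal_distribution<double> gauss(0.0, 1.0);
    Vec avg{};                            // running average of f_K(theta)
    for (int n = 0; n < samples; ++n) {
        Vec theta;                        // uniform direction via Gaussian
        for (int i = 0; i < d; ++i) theta[i] = gauss(rng);
        const Vec* best = &vertices[0];   // extremal point in direction theta
        for (const Vec& v : vertices)
            if (dot(theta, v) > dot(theta, *best)) best = &v;
        for (int i = 0; i < d; ++i) avg[i] += ((*best)[i] - avg[i]) / (n + 1);
    }
    return avg;
}

int main() {
    std::mt19937 rng(7);
    // Simplex with vertices at 0 and the standard basis vectors.
    std::vector<Vec> simplex = {{0,0,0}, {1,0,0}, {0,1,0}, {0,0,1}};
    Vec s = steinerMC(simplex, 200000, rng);
    std::printf("s(K) ~ (%.3f, %.3f, %.3f)\n", s[0], s[1], s[2]);
}
```

Normalizing theta is unnecessary since only its direction matters for the argmax; the estimate can be cross-checked against the second definition up to Monte Carlo error.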
The intuition for this improved estimate is the following. We think of h K as a budgeted resource with amount d θ h Kt (θ) − h K T (θ)dθ at time t. Our goal is to turn this budget into Steiner point movement. In order to have a lot of movement after a modest number of time-steps, we must use a significant amount of this budget in a typical time-step. So, suppose that a lot of the budget is used in going from K t → K t+1 . Then because only a tiny subset of the sphere correlates significantly with any fixed direction, the values of θ contributing to the movement must include a wide range of directions θ. As a result, the different θ values contribute to the Steiner point movement in very different directions and cancel out a lot, which means that the Steiner point's movement was actually much less than the amount of budget used up. In other words, the starting budget d · h K 0 (θ) can only be used efficiently if it is used very slowly. Below we make this argument precise using concentration of measure on the sphere. Proof. We have it suffices to estimate v ⊤ (s(K) − s(K ′ )) for arbitrary unit v. Without loss of generality we can just take v = e 1 := (1, 0, . . . , 0). And now we have Given this constraint that h K − h K ′ ∈ [0, 1] and has total mass λ, the maximum possible value of the integral is easily seen to be achieved when the λ-fraction of θ values with largest possible θ 1 coordinate have h K (θ) − h K ′ (θ) = 1, and the rest have h K (θ) − h K ′ (θ) = 0. Therefore we are reduced to bounding the largest average θ 1 coordinate over a subset of size λ. Concentration on the sphere shows that 1 − e −t 2 /2 fraction of the sphere lies in the set {θ 1 ≤ t/ √ d}. Therefore, the average of θ 1 over a subset of size λ is at most O Theorem 3.3. Following the Steiner point, starting from B 1 , gives total movement after T requests. More generally, the same upper bound holds for any sequence of nested convex bodies Remark. The condition K T + B 1 ⊇ K 1 means that OPT will always be 1 from any starting point. Hence this is a statement about the competitive ratio. , Lemma 3.2 shows that By concavity of log(λ −1 ) on λ ∈ [0, 1], and the constraints λ t ≥ 0 and T −1 t=1 λ t = 1, we have the upper bound The proof of the more general statement is identical. It is natural to wonder whether exponentially large T actually results in d movement. The next theorem establishes that this is indeed the case. We defer the proof to the appendix. Optimal Memoryless Chasing In the chasing nested convex bodies problem, the optimal strategy potentially depends on the new request K t and the initial/current points x 0 and x t−1 . However, our Steiner point algorithm achieved a good guarantee while using only K t to choose x t . It is natural to ask how well such a memoryless strategy with x t a function of only K t can do. Hence in this section we restrict our attention to such algorithms, or equivalently to selector functions, and formulate a precise question. We then show the Steiner point is the exact optimal solution to this question. We formulate the memoryless nested convex body chasing problem as follows. We aim to define a selector f which keeps the movement cost T −1 t=1 ||f (K t )−f (K t+1 )|| 2 within a small constant factor of the Hausdorff distance d H (K 1 , K T ) between K 1 , K T , for all sequences of T convex bodies. Note that now we do not begin at a given x 0 point, and are instead free to choose the starting point at no cost. 
The theorem below shows that with the above formulation, the Steiner point achieves the exact optimum competitive ratio. The result and proof are inspired by a similar result from the work [PO89] which proves that the Steiner point achieves the exact optimum Lipschitz constant among all selectors, where K is metrized by the Hausdorff metric. This is similar to the T = 2 case of our problem, and as we remark below, our proof specializes to give their result as well; the nested condition is not crucial.

Theorem 4.1. For any d and T, the Steiner point achieves the exact optimum competitive ratio for the memoryless nested convex body chasing problem. That is, among all selectors f, the Steiner point yields the minimum constant C(d, T) such that the following holds: for any sequence of nested convex bodies K_1 ⊇ K_2 ⊇ · · · ⊇ K_T, we have ∑_{t=1}^{T−1} ‖f(K_t) − f(K_{t+1})‖_2 ≤ C(d, T) · d_H(K_1, K_T).

Description of Proof. The proof idea is as follows: given a selector f, we symmetrize it to obtain a new function f̄ with at most the competitive ratio of f, which has some new symmetry. More precisely, f̄ is equivariant under the isometry group of R^d and commutes with addition, where addition of convex sets is Minkowski sum as usual. These symmetry properties are strong enough to force f̄ to coincide with the Steiner point. Since f̄ = s has a smaller Lipschitz constant than f by construction, and f was arbitrary, the result follows. The way we symmetrize f is to average expressions of the form g^{−1} f(gK) over a random isometry g of R^d, and expressions of the form f(K + K′) − f(K′) over a random convex set K′. We require that the probability measures for g, K′ be invariant under composition with isometries and Minkowski addition of a fixed convex set, respectively; these invariance properties ensure that f̄ is isometry equivariant and additive. The issue is that such probability measures do not actually exist. But by using the concept of an invariant mean instead of a probability measure, we get a translation-invariant average that does the same job, though it cannot be written down without the axiom of choice. See the appendix for the precise argument and some references.

Remark. The theorem above would still hold with the same proof if the sequence (K_1, K_2, . . . , K_T) were not required to be nested but were simply constrained to satisfy, for all θ, ∑_{t=1}^{T−1} |h_{K_t}(θ) − h_{K_{t+1}}(θ)| = |h_{K_1}(θ) − h_{K_T}(θ)|, that is, if for each fixed θ the sequence h_{K_1}(θ), . . . , h_{K_T}(θ) is monotone. For this generalization, the T = 2 case is exactly the aforementioned result from [PO89] that the Steiner point attains the minimum Lipschitz constant among all selectors. This optimal Lipschitz constant is asymptotically √(2d/π); the reason is similar to the proof of Theorem 2.1.

Time-independent Nearly Optimal Algorithm with Mirror Map
In this section, we give a new algorithm which combines the Steiner point and our method from [ABC+18]. This algorithm is nearly optimal for any ℓp norm with p ≥ 1, and we conjecture the result to be nearly tight in a general normed space. Our first observation is inspired by an alternative formula for the Steiner point from e.g. [Prz96]: s(K) = lim_{r→∞} cg(K + B_r), where cg denotes the centroid and B_r is a ℓ2 ball centered at 0 with radius r. Since both the Steiner point and the centroid (with some modification proposed in [ABC+18]) give a competitive algorithm for nested convex body chasing, it is natural to conjecture that cg(K + B_r) works for any r ≥ 0 (maybe with some modification). In this section, we show that it is indeed true for a certain range of r. The benefit of this variant of the Steiner point (or centroid) is that it is easier to modify the definition to fit our needs for other normed spaces.
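The limit formula above can be checked numerically. The sketch below (our illustration, with an arbitrary triangle as K) estimates cg(K + B_r) by rejection sampling, using that x ∈ K + B_r exactly when the Euclidean distance from x to K is at most r; as r grows, the estimates stabilize toward the Steiner point of K.

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <cstdio>
#include <random>

// Estimate cg(K + B_r) for a triangle K in R^2 by rejection sampling.
using Pt = std::array<double, 2>;

double segDist(const Pt& p, const Pt& a, const Pt& b) {
    double vx = b[0] - a[0], vy = b[1] - a[1];
    double wx = p[0] - a[0], wy = p[1] - a[1];
    double t = std::clamp((wx * vx + wy * vy) / (vx * vx + vy * vy), 0.0, 1.0);
    return std::hypot(wx - t * vx, wy - t * vy);
}

bool insideTriangle(const Pt& p, const Pt& a, const Pt& b, const Pt& c) {
    auto side = [](const Pt& q, const Pt& u, const Pt& v) {
        return (v[0]-u[0])*(q[1]-u[1]) - (v[1]-u[1])*(q[0]-u[0]);
    };
    double s1 = side(p,a,b), s2 = side(p,b,c), s3 = side(p,c,a);
    return (s1 >= 0 && s2 >= 0 && s3 >= 0) || (s1 <= 0 && s2 <= 0 && s3 <= 0);
}

Pt centroidKplusBr(const Pt& a, const Pt& b, const Pt& c, double r,
                   int samples, std::mt19937& rng) {
    double lo0 = std::min({a[0],b[0],c[0]}) - r, hi0 = std::max({a[0],b[0],c[0]}) + r;
    double lo1 = std::min({a[1],b[1],c[1]}) - r, hi1 = std::max({a[1],b[1],c[1]}) + r;
    std::uniform_real_distribution<double> u0(lo0, hi0), u1(lo1, hi1);
    Pt sum{}; long accepted = 0;
    for (int n = 0; n < samples; ++n) {
        Pt p{u0(rng), u1(rng)};
        // x is in K + B_r iff dist(x, K) <= r.
        double dist = insideTriangle(p,a,b,c) ? 0.0
            : std::min({segDist(p,a,b), segDist(p,b,c), segDist(p,c,a)});
        if (dist <= r) { sum[0] += p[0]; sum[1] += p[1]; ++accepted; }
    }
    if (accepted == 0) return {0.0, 0.0};
    return {sum[0] / accepted, sum[1] / accepted};
}

int main() {
    std::mt19937 rng(1);
    Pt a{0,0}, b{4,0}, c{0,2};
    for (double r : {0.1, 1.0, 10.0, 100.0}) {
        Pt g = centroidKplusBr(a, b, c, r, 400000, rng);
        std::printf("r = %6.1f  cg(K+B_r) ~ (%.3f, %.3f)\n", r, g[0], g[1]);
    }
}
```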
The second observation is that the centroid of a Gaussian measure restricted to any convex set moves with ℓ 2 distance proportional to its standard deviation when the measure is cut through its centroid. This is due to some concentration properties of Gaussian measure. Therefore, for ℓ 2 nested chasing bodies, it is more natural to use Gaussian measure instead of uniform measure to define the centroid. For a general normed space, we assume there is a 1-strongly convex 1 function φ on the space such that 0 ≤ φ(x) ≤ D for all x ≤ 1. We will use φ to construct an algorithm with competitive In general, the minimum D among all 1-strongly convex functions ranges from 1 2 to d 2 . The 1 2 lower bound follows from applying the strong convexity to (x, 0, −x) for |x| = 1, while a linear upper bound follows from John's theorem. This minimum D measures the complexity of the normed space for online learning [SST11]. Now, we define the weighted centroid as follows: Definition 3. We define the centroid and the volume of K with respect to e −φ(x) dx by Algorithm 1 is based on cg φ (K + B r ) and we measure the progress by vol φ (K + B r ). The key lemma we show is that this mixture of Steiner point and centroid is stable under cutting and Algorithm 1: ChasingNormedSpace φ is a α-strongly convex function on · such that 0 ≤ φ(x) ≤ D for all x ≤ 2, 3 a C-competitive ℓ 2 nested chasing convex algorithm (used for narrow directions), 4 padded radius r ≤ 1 √ d . 5 Let the localized set Ω = {x : x ≤ 1} and K be the current convex set. 6 Take the initial point x = cg φ (Ω + B r ) where cg φ (K) := K x·e −φ(x) dx K e −φ(x) dx and B r is a ℓ 2 ball centered at 0 with radius r. 7 loop 8 Let A be the covariance matrix of the distribution e −φ(y) 1 y∈Ω+Br . 9 Let V be the span of eigenvectors of A with eigenvalues less than (2e · r) 2 . Let H be a half space containing the convex set K such that x + V ⊂ ∂H, namely, a half space touching x and parallel to V . 16 ∈ Ω, pay extra 2r movement to Ω and back. 18 end that the volume vol φ (K + B r ) decreases by a constant factor every iteration. Unlike the standard volume vol φ (K), this volume has a lower bound, which is related to the volume of the ball B r . Therefore, the algorithm terminates without the trick of projecting the body from [ABC + 18]. We note that the projection trick in that paper does not work because the total movement of cg during the projection is already Ω(d). Theorem 5.1. Let φ be an α-strongly convex function on the normed space · and scalar r > 0. Let K be a convex set and v be an unit vector. Let H be the half space with normal v through cg φ (K + B r ): Then Using this geometric statement, one can readily prove the following statement by choosing appropriate parameters. Theorem 5.2. For any normed space · on R d equipped with a 1-strongly convex φ such that 0 ≤ φ(x) ≤ D for all x ≤ 1, there is an O( √ dD · log d)-competitive nested chasing convex body algorithm. Now, we prove that this bound is tight up to a O(log d) factor. Proof. In [FL93], Friedman and Linial used the family K t = K t−1 ∩ {x t = ±1} to conclude that the competitive ratio is at least √ d for the ℓ 2 norm because the offline optimum can directly go to the singleton K d using √ d movement in the ℓ 2 norm while the online algorithm must move d distance in the ℓ 2 norm. We note that this proof also gives d 1− 1 p competitive ratio lower bound for the ℓ p norm. Now, we prove the √ d lower bound for any ℓ p norm. 
Theorem 5.1. Let $\varphi$ be an $\alpha$-strongly convex function on the normed space $\|\cdot\|$ and let $r > 0$ be a scalar. Let $K$ be a convex set and $v$ be a unit vector. Let $H$ be the half space with normal $v$ through $cg_\varphi(K + B_r)$:
$$H = \{x \in \mathbb{R}^d : v^\top x \le v^\top cg_\varphi(K + B_r)\}.$$
Suppose that $\mathrm{Var}_x\, v^\top x \ge (2e \cdot r)^2$, where $x$ is sampled from $e^{-\varphi(y)}\, 1_{y \in K + B_r} / vol_\varphi(K + B_r)$. Then
$$\frac1e \cdot vol_\varphi(K + B_r) \le vol_\varphi((K \cap H) + B_r) \le \Big(1 - \frac{1}{2e}\Big) \cdot vol_\varphi(K + B_r)$$
and $\|cg_\varphi((K \cap H) + B_r) - cg_\varphi(K + B_r)\| \lesssim \alpha^{-\frac12}$.

Using this geometric statement, one can readily prove the following statement by choosing appropriate parameters.

Theorem 5.2. For any normed space $\|\cdot\|$ on $\mathbb{R}^d$ equipped with a 1-strongly convex $\varphi$ such that $0 \le \varphi(x) \le D$ for all $\|x\| \le 1$, there is an $O(\sqrt{dD} \cdot \log d)$-competitive nested convex body chasing algorithm.

Now, we prove that this bound is tight up to an $O(\log d)$ factor.

Proof. In [FL93], Friedman and Linial used the family $K_t = K_{t-1} \cap \{x : x_t = \pm 1\}$ (with adversarially chosen signs) to conclude that the competitive ratio is at least $\sqrt d$ for the $\ell_2$ norm, because the offline optimum can go directly to the singleton $K_d$ using $\sqrt d$ movement in the $\ell_2$ norm while the online algorithm must move distance $d$ in the $\ell_2$ norm. We note that this proof also gives a $d^{1 - \frac1p}$ competitive ratio lower bound for the $\ell_p$ norm. Now, we prove the $\sqrt d$ lower bound for any $\ell_p$ norm.

In this lower bound, we assume without loss of generality that the dimension $d$ is a power of 2. Consider the initial point $0$ and the initial convex body $K_0 = \mathbb{R}^d$. Let $H$ be the $d \times d$ Hadamard matrix and $h_t$ be the $t$-th row of $H$. We construct the adaptive adversary sequence as follows: Let $x_t \in K_t$ be the response of the algorithm. If $h_t^\top x_t \ge 0$, we define
$$K_{t+1} = K_t \cap \{y \in \mathbb{R}^d : h_t^\top y = -1\};$$
else, we define it by $K_{t+1} = K_t \cap \{y \in \mathbb{R}^d : h_t^\top y = +1\}$. Due to the construction, we have that $|h_t^\top(x_{t+1} - x_t)| \ge 1$. Since each entry of $h_t$ is $\pm 1$, the movement in the $\ell_p$ norm is at least
$$\|x_{t+1} - x_t\|_p \ge \frac{|h_t^\top(x_{t+1} - x_t)|}{\|h_t\|_q} \ge d^{\frac1p - 1},$$
where $q$ is the dual exponent of $p$. Hence, the algorithm must move $d^{\frac1p - 1}$ in the $\ell_p$ norm each step. After $d$ iterations, the algorithm must move $d^{\frac1p}$ in the $\ell_p$ norm. On the other hand, since $H$ is invertible, $K_d$ consists of exactly one point $x^*$. The offline optimum can simply move from $0$ to $x^*$ at the first iteration. Note that $H x^* = s$ for some $\pm 1$ vector $s$. Since the minimum singular value of $H$ is exactly $\sqrt d$, we have that $\|x^*\|_2 = \|s\|_2 / \sqrt d = 1$, and hence $\|x^*\|_p \le d^{\frac1p - \frac12}$ for $1 \le p \le 2$, so the competitive ratio is at least $d^{\frac1p} / d^{\frac1p - \frac12} = \sqrt d$; together with the $d^{1-\frac1p}$ bound above, this proves the $\sqrt d$ lower bound for every $p \ge 1$.

Proof (of the $\Omega(d)$-movement example, Theorem 3.4). Pick a $\frac1{20}$-net $\{v_1, \ldots, v_T\}$ on the sphere. We know that for large $d$, the net has $T \le 100^d$ points. Take $K_0 = B_1$, define the half-space
$$H_t = \{x \in \mathbb{R}^d : v_t^\top x \le 0.9\},$$
and take $K_t = K_{t-1} \cap H_t$. We claim that this sequence results in $\Omega(d)$ movement of the Steiner point. To show that this works, first note that $B_{0.9} \subseteq K_T \subseteq B_{0.95}$. The first inclusion is obvious. To see the second, note that if $1 \ge \|x\|_2 > 0.95$, then for $t$ with $\big\|\frac{x}{\|x\|_2} - v_t\big\|_2 < \frac1{20}$ we have $v_t^\top x > \|x\|_2 - \frac1{20} \ge 0.9$, so $x \notin H_t$. The fact that $K_T \subseteq B_{0.95}$ means that for all $\theta$ we have $h_{K_T}(\theta) \le 0.95$, which means the total budget decrease is large:
$$\int_{S^{d-1}} \big(h_{K_0}(\theta) - h_{K_T}(\theta)\big)\, d\sigma(\theta) \ge 0.05.$$
On the other hand, we claim that we use our budget at constant efficiency:
$$\|s(K_t) - s(K_{t+1})\|_2 \ge \frac{d}{10} \int_{S^{d-1}} \big(h_{K_t}(\theta) - h_{K_{t+1}}(\theta)\big)\, d\sigma(\theta). \tag{1}$$
Once we establish this claimed inequality, the theorem is proved: we used up a constant amount of the budget at constant efficiency, so we achieved $\Omega(d)$ movement. Below, we establish inequality (1). The point is that since we only cut off a small part each time, we only change the support function $h_{K_t}(\theta)$ for a small-diameter set of directions $\theta$, which cannot have much cancellation. More precisely, we show below that any $\theta_t$ for which $h_{K_t}(\theta_t) \ne h_{K_{t+1}}(\theta_t)$ must satisfy $v_{t+1}^\top \theta_t \ge \frac1{10}$, so that all $\theta_t$ contributions point significantly in the direction of $v_{t+1}$. From this, inequality (1) follows easily from the formula $s(K) = d \int_{S^{d-1}} h_K(\theta)\, \theta\, d\sigma(\theta)$. To see this last claim, we first observe that for any point $y_t \notin H_{t+1}$ which is removed, since $v_{t+1}^\top y_t \ge 0.9$ we know $y_t$ is close to $v_{t+1}$:
$$\|y_t - v_{t+1}\|_2^2 \le \|y_t\|_2^2 + 1 - 2\, v_{t+1}^\top y_t \le 2 - 1.8 = 0.2, \quad\text{so}\quad \|y_t - v_{t+1}\|_2 \le 0.45.$$
We also observe that if $\theta_t \in S^{d-1}$ is such that $h_{K_t}(\theta_t) > h_{K_{t+1}}(\theta_t)$, then taking $y_t = \arg\max_{y \in K_t} (\theta_t \cdot y)$ we must have $y_t \notin H_{t+1}$ and $y_t^\top \theta_t \ge 0.9$. So for this choice of $y_t$ we also have $\|y_t - \theta_t\|_2 \le 0.45$. As a result, any such $\theta_t$ must be within $0.45 + 0.45 = 0.9$ of $v_{t+1}$. Hence, this $\theta_t$ must satisfy $v_{t+1}^\top \theta_t \ge 0.1$. Since all the affected $\theta_t$ vectors are correlated with a common vector $v_{t+1}$, we get
$$v_{t+1}^\top\big(s(K_t) - s(K_{t+1})\big) = d \int_{S^{d-1}} \big(h_{K_t}(\theta) - h_{K_{t+1}}(\theta)\big)\, v_{t+1}^\top\theta\; d\sigma(\theta) \ge \frac{d}{10} \int_{S^{d-1}} \big(h_{K_t}(\theta) - h_{K_{t+1}}(\theta)\big)\, d\sigma(\theta).$$
This verifies the claimed inequality (1).
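Before moving on, here is a small self-contained simulation of the Hadamard adversary from the $\ell_p$ lower bound above (our own sketch; the tested strategy, greedy Euclidean projection onto each new hyperplane, is just one natural memoryless algorithm and is not taken from the paper). Because Hadamard rows are orthogonal, projecting onto each new hyperplane preserves the earlier constraints, and the final online point coincides with the singleton $x^*$; the printed ratio is at least $\sqrt d$ for $p = 1$.

```python
import numpy as np

def hadamard(d):                 # Sylvester construction; d must be a power of 2
    H = np.array([[1.0]])
    while H.shape[0] < d:
        H = np.block([[H, H], [H, -H]])
    return H

d, p = 64, 1
H, x, s = hadamard(d), np.zeros(d), np.zeros(d)
online = 0.0
for t in range(d):
    h = H[t]
    s[t] = -1.0 if h @ x >= 0 else 1.0     # adversarial sign choice
    # greedy Euclidean projection onto the new hyperplane; Hadamard rows are
    # orthogonal, so all previously enforced constraints remain satisfied
    step = ((s[t] - h @ x) / d) * h
    online += np.linalg.norm(step, ord=p)
    x = x + step

x_star = np.linalg.solve(H, s)             # K_d is the single point x*
assert np.allclose(x, x_star)
offline = np.linalg.norm(x_star, ord=p)    # offline cost: one move from 0 to x*
print("online/offline =", online / offline, "  sqrt(d) =", d ** 0.5)
```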
A.1.2 Simple Optimizations to the Steiner Point Algorithm Do Not Help

It is natural to suggest that the Steiner point algorithm often moves unnecessarily. For example, in the $\Omega(d)$-movement example from Theorem 3.4, the original Steiner point is in every single convex body request, so $0$ movement is trivially attainable. Here we consider two natural improvements to the Steiner point algorithm. The first moves to the new Steiner point only when it is forced to move, while the second moves towards the new Steiner point until it reaches the boundary of the newly requested set, and then stops. In both cases, we show that we can turn any hard instance for the ordinary Steiner point algorithm in $d$ dimensions into a hard instance for the modified algorithm in $d + 1$ dimensions.

Proposition A.1. Suppose the Steiner point starting from a unit ball in $\mathbb{R}^d$ has movement $C(d, T)$ after some sequence of $T$ requests. Then there is a sequence of $T$ requests in $\mathbb{R}^{d+1}$ which gives $\Omega(C(d, T))$ movement for each of the following two algorithms:
1. If $x_t \in K_{t+1}$, do not move. If $x_t \notin K_{t+1}$, move to the Steiner point: $x_{t+1} = s(K_{t+1})$.
2. If $x_t \in K_{t+1}$, again do not move. If $x_t \notin K_{t+1}$, move in a straight line towards $s(K_{t+1})$ until reaching $K_{t+1}$, then stop.

Proof. Suppose $B_1 = K_0 \supseteq K_1 \supseteq \cdots \supseteq K_T$ is an example which forces the Steiner point to move distance $\Omega(C(d, T))$. Then we go up to $d + 1$ dimensions and take, for some small $\varepsilon > 0$, bodies $\tilde K_t = K_t \times I_t$, where the intervals $I_t$ in the extra coordinate have length of order $\varepsilon \cdot 2^{-t}$ and are chosen so that the midpoint of $I_t$ lies outside $I_{t+1}$. Then because of this extra coordinate, the first algorithm moves every round as long as $\varepsilon < \frac12$. The movement induced by $\tilde K_t$ is at least that of $K_t$, so we still obtain $\Omega(d)$ movement and every move is forced. Also, we only slightly increased the diameter by going up one dimension, so we lose only a constant factor from this. For the second algorithm, note that $x_t$ must move to be within $O(\varepsilon)$ of $s(\tilde K_t)$. Indeed, because of the fast decay of the last coordinate, we move a $(1 - O(\varepsilon))$-fraction of the way from $x_{t-1}$ to $s(\tilde K_t)$, and the total distance to move is $O(1)$. Because $d(x_t, s(\tilde K_t)) = O(\varepsilon)$, the change in the total movement cost is $O(T\varepsilon)$. Hence by taking $\varepsilon$ small we achieve essentially the same movement.

A.2.1 Invariant Means

Recall that an invariant mean on an abelian semigroup $S$ is a linear functional $\Lambda$ on the bounded functions on $S$ which is invariant under the semigroup action and satisfies:
1. $\Lambda$ has operator norm 1, and $f \ge 0$ everywhere implies $\Lambda(f) \ge 0$.
For a compact group, $\Lambda$ is just the average with respect to Haar measure. For general abelian semigroups it is more complicated. To denote the translation-invariant averaging operator $\Lambda$ we use integral notation following [PO89]. So for instance $\int_{\mathcal{K}} f(K')\, dK'$ is an average of $f$ with respect to the invariant mean $\Lambda$ on $\mathcal{K}$.

A.2.2 Proof of Theorem 4.1

We use invariant means to symmetrize any function into the Steiner point. We first give the axiomatic characterization of the Steiner point due to Rolf Schneider, which we will use after symmetrizing to show that we ended up with the Steiner point.

Lemma A.4 ([Sch71]). Let $f : \mathcal{K} \to \mathbb{R}^d$ be a function from convex sets to $\mathbb{R}^d$ such that:
1. $f(K + K') = f(K) + f(K')$ for all convex bodies $K, K'$ (Minkowski additivity);
2. $f(gK) = g f(K)$ for any isometry $g : \mathbb{R}^d \to \mathbb{R}^d$;
3. $f$ is uniformly continuous with respect to the Hausdorff metric.
Then $f(K) = s(K)$ is the Steiner point.

Now we prove that the Steiner point has the lowest movement among all selectors.

Theorem 4.1. For any $d$ and $T$, the Steiner point achieves the exact optimum competitive ratio for the memoryless nested convex body chasing problem. That is, among all selectors $f$, the Steiner point yields the minimum constant $C(d, T)$ such that for any sequence of nested convex bodies $K_1 \supseteq \cdots \supseteq K_T$, $\sum_{t=1}^{T-1} \|f(K_t) - f(K_{t+1})\|_2 \le C(d, T)\, d_H(K_1, K_T)$.

Proof. Suppose $f : \mathcal{K} \to \mathbb{R}^d$ is an arbitrary selector which achieves the above movement estimate with constant $C(d, T)$. We first claim that $f$ has to be $2C(d, T)$-Lipschitz. Indeed, suppose we are given $K, K'$. We show that
$$\|f(K) - f(K')\|_2 \le 2C(d, T)\, d_H(K, K').$$
Indeed, let $K''$ be the convex hull of $K \cup K'$, so that $K \subseteq K''$ and $K' \subseteq K''$. Since the Hausdorff distance $d_H$ is the same as the $L^\infty$ norm of the difference of support functions, we know that $d_H(K, K'') \le d_H(K, K')$ and $d_H(K', K'') \le d_H(K, K')$. Since $K''$ contains both $K$ and $K'$, the assumed movement estimate for $f$ tells us that $f(K'')$ is within distance $C(d, T)\, d_H(K, K')$ from both $f(K)$ and $f(K')$. Now the claim follows from the triangle inequality.
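The two Schneider axioms that the symmetrization below enforces can be checked numerically. Here is a quick Monte Carlo verification (our own sketch) of Minkowski additivity and rotation equivariance of the Steiner point for random planar polytopes, again via the spherical formula $s(K) = d\,\mathbb{E}[h_K(\theta)\,\theta]$; the estimates agree up to sampling error.

```python
import numpy as np

rng = np.random.default_rng(2)

def steiner_mc(pts, n=300_000):
    # s(K) = d * E[h_K(theta) * theta]; the support function of conv(pts)
    # equals that of the point cloud, so no hull computation is needed
    ang = rng.uniform(0.0, 2.0 * np.pi, n)
    TH = np.column_stack([np.cos(ang), np.sin(ang)])
    h = (TH @ pts.T).max(axis=1)
    return 2.0 * (h[:, None] * TH).mean(axis=0)

K = rng.normal(size=(7, 2))
Kp = rng.normal(size=(5, 2)) + np.array([3.0, -1.0])
mink = (K[:, None, :] + Kp[None, :, :]).reshape(-1, 2)   # points spanning K + K'

a = 0.7
R = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])  # a rotation

print("s(K + K')    ~", steiner_mc(mink))
print("s(K) + s(K') ~", steiner_mc(K) + steiner_mc(Kp))   # Minkowski additivity
print("s(RK)        ~", steiner_mc(K @ R.T))
print("R s(K)       ~", R @ steiner_mc(K))                # isometry equivariance
```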
Now we introduce symmetry to $f$ to obtain the Steiner point. For ease of understanding, we go in three steps. First we make $f$ translation invariant:
$$f^{(1)}(K) = \int_{x \in \mathbb{R}^d} \big(f(K + x) - x\big)\, dx.$$
Second, we introduce additivity:
$$f^{(2)}(K) = \int_{K' \in \mathcal{K}} \big(f^{(1)}(K + K') - f^{(1)}(K')\big)\, dK'.$$
We again note that $d_H(A + K, K) \le \max_{a \in A} \|a\|_2$, so the integrand is an $L^\infty$ function on $\mathcal{K}$. As above, using the invariance of the mean under $K' \mapsto K' + L$, we now have additivity:
$$f^{(2)}(K + L) = f^{(2)}(K) + f^{(2)}(L) \quad \text{for all convex } K, L.$$
Furthermore we preserve translation invariance; in fact it is a special case of additivity where we take $K'$ to be a single point. Finally, we introduce $O(n)$ invariance, this time using an actual integral with respect to the Haar probability measure:
$$f^{(3)}(K) = \int_{O(n)} g^{-1} f^{(2)}(gK)\, dg.$$
This preserves additivity because the $O(n)$-action consists of linear maps, and similarly to the above it results in $f^{(3)}$ additionally being $O(n)$-equivariant. Since this $\tilde f = f^{(3)}$ is isometry equivariant and additive under Minkowski sum, and is also still Lipschitz since $f$ was, we conclude from Lemma A.4 that $\tilde f(K) = s(K)$. Finally, we want to argue that by doing these symmetrizations, we preserve the movement upper bound $C(d, T)$. The point is that if we evaluate the movement of $\tilde f$ on a sequence of nested bodies, the movement is expressible as an "average" of the movement of $f$ over sequences of bodies which are isometries of $K_t + K'$. For any $K'$, we still have
$$\sum_{t=1}^{T-1} \|f(K_t + K') - f(K_{t+1} + K')\|_2 \le C(d, T)\, d_H(K_1 + K', K_T + K') = C(d, T)\, d_H(K_1, K_T),$$
since the bodies $K_t + K'$ are again nested. Hence the sequence $(\tilde f(K_t))_{t \le T}$ is an average of isometries applied to sequences $(f(K_t + K'))_{t \le T}$, which can only shrink the distances. So we get that the Steiner point $\tilde f$ also has at most $C(d, T)$ movement since $f$ does. We remark that in the proof above, we did not know that $\tilde f$ was a selector until we deduced from Lemma A.4 that $\tilde f$ is exactly the Steiner point. Indeed, the selector property is not obviously preserved through the symmetrizations we did. We needed $f$ to be a selector in the proof so that e.g. $f(A + x) - x$ would be bounded depending only on $A$.

B.1 Notations and basic facts about log-concave distributions

For any convex set $K$ and any convex function $\varphi$, we use $x \sim (\varphi, K)$ to denote that $x$ is sampled from the distribution $e^{-\varphi(x)}\, 1_{x \in K} / vol_\varphi(K)$. In particular, we have $cg_\varphi(K) = \mathbb{E}_{x \sim (\varphi, K)}\, x$. Now, we list some facts that we will use in this section. The first lemma shows that any log-concave distribution concentrates around any subset of constant measure.

Lemma B.1. For any log-concave distribution $p$ on $\mathbb{R}^d$, any symmetric convex set $A$ with $p(A) = \alpha \in (0, 1)$, and any $t > 1$, the mass of $p$ outside the dilated set $tA$ decays geometrically in $t$, at a rate depending only on $\alpha$. In particular, $p$ concentrates around intervals centered at its center of gravity.

Lemma B.2 (Tail bound of log-concave distributions). For any log-concave distribution $p$ on $\mathbb{R}^d$ and any unit vector $v$, we have that
$$\Pr_{y \sim p}\big(|v^\top(y - \bar x)| \ge t\,\sigma\big) \le e^{1 - t} \quad \text{for all } t \ge 0,$$
where $\bar x = \mathbb{E}_{y \sim p}\, y$ and $\sigma^2 = \mathrm{Var}_{y \sim p}\, v^\top y$.

The localization lemma of [KLS95, FG04] shows that one can prove a statement about high dimensional log-concave distributions via a one-dimensional statement: it reduces statements about a log-concave distribution $p$ in $\mathbb{R}^d$ and continuous functions $g, h$ to the case of log-affine distributions supported on an interval, which is how it is used in the proof of Lemma B.7 below. A further lemma relates the deviation and the width of a convex set. Grunbaum's theorem shows that for any log-concave distribution, any half space containing the centroid has at least a $\frac1e$ fraction of the mass. The following theorem shows that a similar statement holds as long as the centroid is close to the half space. [BV04, Theorem 3] proved this theorem for convex sets; for completeness, we include the proof for log-concave distributions.

Theorem B.5 (Grunbaum's Theorem [Gru60]). Let $f$ be any log-concave distribution on $\mathbb{R}^d$ and let $H = \{x : v^\top x \ge c\}$ be any half space. Then
$$\int_H f(x)\, dx \ge \frac1e - t_+, \quad\text{where } t = \frac{c - v^\top cg(f)}{\sigma_v}$$
is the distance of the centroid to the half space divided by the deviation $\sigma_v = (\mathrm{Var}_{x \sim f}\, v^\top x)^{1/2}$ in the normal direction $v$.
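Theorem B.5 is easy to sanity-check numerically: marginals of log-concave distributions are log-concave, so a skewed one-dimensional example is representative. The snippet below (our illustration; the Gamma(2) distribution and the values of $t$ are arbitrary choices) estimates the mass beyond cuts at and slightly past the centroid.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.gamma(shape=2.0, scale=1.0, size=1_000_000)   # Gamma(2) is log-concave
mu, sigma = X.mean(), X.std()

# Classical Grunbaum: a half space through the centroid keeps >= 1/e of the mass
print("P(X >= mu) =", (X >= mu).mean(), "  1/e =", 1 / np.e)

# Theorem B.5: shifting the cut t standard deviations past the centroid
for t in [0.1, 0.2, 0.3]:
    print(f"t = {t}: mass = {(X >= mu + t * sigma).mean():.3f}"
          f"  >=  1/e - t = {1 / np.e - t:.3f}")
```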
Proof. Since the marginal of a log-concave distribution is log-concave, by taking marginals and rescaling, it suffices to prove that $\int_{x \ge t} f(x)\, dx \ge \frac1e - t_+$ for an isotropic log-concave distribution on $\mathbb{R}$. The case $t \le 0$ follows from the classical Grunbaum theorem. The case $t \ge 0$ follows from the case $t = 0$ and the fact that $f(x) \le 1$ for all $x$ [LV07, Lemma 5.5a]. Finally, we will need a statement about strongly log-concave distributions, namely, log-concave distributions multiplied by a Gaussian density.

Theorem B.6 (Brascamp-Lieb inequality [BL76]). Let $\gamma$ be any Gaussian distribution on $\mathbb{R}^d$ and $f$ be any log-concave function on $\mathbb{R}^d$. Define the density function $h$ as follows:
$$h(x) = \frac{f(x)\,\gamma(x)}{\int f(y)\,\gamma(y)\, dy}.$$
For any vector $v \in \mathbb{R}^d$ and any $\alpha \ge 1$,
$$\mathbb{E}_{x \sim h}\big|v^\top(x - \mathbb{E}_h x)\big|^\alpha \le \mathbb{E}_{x \sim \gamma}\big|v^\top(x - \mathbb{E}_\gamma x)\big|^\alpha.$$

B.2 Cutting along $cg_\varphi(\Omega)$

The goal of this section is to prove Theorem 5.1. This theorem shows that a cutting plane passing through $cg_\varphi(\Omega + B_r)$ behaves very similarly to one through $cg_\varphi(\Omega)$, as long as we do not cut along a narrow direction of $\Omega + B_r$. In particular, we show that it decreases the volume $vol_\varphi(\Omega + B_r)$ by a constant factor and that $cg_\varphi(\Omega + B_r)$ does not move a lot when we cut $\Omega$. We first prove that $cg_\varphi(\Omega)$ is stable when a constant fraction of $\Omega$ is removed.

Lemma B.7. Let $\varphi$ be an $\alpha$-strongly convex function on the normed space $\|\cdot\|$. Let $\tilde\Omega \subseteq \Omega$ be two convex sets. Then, we have that
$$\|cg_\varphi(\tilde\Omega) - cg_\varphi(\Omega)\| \lesssim \frac{vol_\varphi(\Omega)}{vol_\varphi(\tilde\Omega)} \cdot \alpha^{-\frac12}.$$

Proof. For any $h$ with $\|h\|_* = 1$, we have that
$$h^\top\big(cg_\varphi(\tilde\Omega) - cg_\varphi(\Omega)\big) \le \frac{vol_\varphi(\Omega)}{vol_\varphi(\tilde\Omega)} \cdot \mathbb{E}_{x \sim (\varphi, \Omega)}\big|h^\top(x - cg_\varphi(\Omega))\big| \le \frac{vol_\varphi(\Omega)}{vol_\varphi(\tilde\Omega)} \cdot \Big(\mathrm{Var}_{x \sim (\varphi, \Omega)}(h^\top x)\Big)^{\frac12},$$
where the first inequality follows from $\tilde\Omega \subset \Omega$ and the second inequality follows from Cauchy-Schwarz. To prove our claim, it suffices to upper bound the maximum of the resulting integral problem by $t^2 + O(\alpha^{-1})$ for all $t$. The localization lemma shows that it suffices to consider the case that $p$ is log-affine. This is the same as proving
$$\mathrm{Var}_{x \sim q}(x^\top h) \lesssim \alpha^{-1} \tag{2}$$
for any distribution $q$ of the form $q(x) \propto e^{w^\top x - \varphi(x)}$ restricted to an interval $[a, b] \subset \mathbb{R}^d$. We can parameterize the distribution $q$ by $s = x^\top h$ (and similarly for $\varphi$). Using the assumption that $\varphi$ is $\alpha$-strongly convex in $\|\cdot\|$, its restriction to the segment has second derivative at least $\alpha \|a - b\|^2$ in the segment parameter. Hence, $\varphi(s)$ is $\frac{\alpha \|a - b\|^2}{((a - b)^\top h)^2}$-strongly convex in $\mathbb{R}$. The Brascamp-Lieb inequality shows that
$$\mathrm{Var}_{s \sim q}(s) \le \frac{((a - b)^\top h)^2}{\alpha \|a - b\|^2}.$$
Since $\|h\|_* = 1$, we have that $(a - b)^\top h \le \|a - b\|$. Therefore, we proved the claim (2). Now, we are ready to prove the main theorem of this section.

Theorem 5.1. Let $\varphi$ be an $\alpha$-strongly convex function on the normed space $\|\cdot\|$ and let $r > 0$ be a scalar. Let $K$ be a convex set and $v$ be a unit vector. Let $H$ be the half space with normal $v$ through $cg_\varphi(K + B_r)$:
$$H = \{x \in \mathbb{R}^d : v^\top x \le v^\top cg_\varphi(K + B_r)\}.$$
Suppose that $\mathrm{Var}_x\, v^\top x \ge (2e \cdot r)^2$, where $x$ is sampled from $e^{-\varphi(y)}\, 1_{y \in K + B_r} / vol_\varphi(K + B_r)$. Then
$$\frac1e \cdot vol_\varphi(K + B_r) \le vol_\varphi((K \cap H) + B_r) \le \Big(1 - \frac{1}{2e}\Big) \cdot vol_\varphi(K + B_r)$$
and
$$\big\|cg_\varphi((K \cap H) + B_r) - cg_\varphi(K + B_r)\big\| \lesssim \alpha^{-\frac12}.$$

Theorem 5.2 follows by taking $\alpha = \frac dD\big(1 + \log\frac1r\big)$ and $r = \frac{\sqrt{2D}}{d}$. Finally, we note that $D \ge \frac12$ (using the definition of strong convexity) and $D \le \frac d2$ (otherwise, we can use $\|x\|^2$ as $\varphi$). Therefore, $r \le \frac{1}{\sqrt d}$ as required by Algorithm 1, $\log\frac1r \lesssim \log d$, and the competitive ratio is simply $O(\sqrt{dD} \cdot \log d)$.
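As a one-step numerical check of Theorem 5.1 in the Euclidean case (our own sketch: $\varphi(x) = \alpha\|x\|_2^2/2$, an asymmetric rectangle for $K$, and constants chosen so that the variance condition of the theorem holds), the snippet below cuts through the weighted centroid of the padded set and verifies that the $\varphi$-weighted volume drops by a factor inside $[\frac1e, 1 - \frac1{2e}]$ while the centroid moves by much less than $\alpha^{-1/2}$.

```python
import numpy as np

rng = np.random.default_rng(4)
alpha, r = 1.0, 0.05
lo, hi = np.array([-1.0, -1.0]), np.array([0.3, 1.0])  # K = asymmetric rectangle

def vol_and_cg(lo, hi, n=2_000_000):
    # phi-weighted volume and centroid of (rectangle) + B_r, phi = alpha*|x|^2/2
    blo, bhi = lo - r, hi + r
    P = rng.uniform(blo, bhi, (n, 2))
    c, half = (lo + hi) / 2, (hi - lo) / 2
    gap = np.maximum(np.abs(P - c) - half, 0.0)
    w = np.exp(-0.5 * alpha * (P ** 2).sum(axis=1)) \
        * (np.linalg.norm(gap, axis=1) <= r)
    return w.sum() / n * (bhi - blo).prod(), (w[:, None] * P).sum(axis=0) / w.sum()

vol0, cg0 = vol_and_cg(lo, hi)
# cut K with H = {x : x_1 <= (cg0)_1}, a half space with normal e_1 through cg_phi
vol1, cg1 = vol_and_cg(lo, np.array([cg0[0], hi[1]]))

print("volume ratio:", vol1 / vol0, " in [1/e, 1-1/(2e)] =",
      (1 / np.e, 1 - 1 / (2 * np.e)))
print("centroid moved:", np.linalg.norm(cg1 - cg0), "  alpha^(-1/2) =", alpha ** -0.5)
```

Algorithm 1 simply iterates this cut on the localized set, which is why its progress measure $vol_\varphi(\Omega + B_r)$ decays geometrically.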
Nitric Oxide Pathways in Neurovascular Coupling Under Normal and Stress Conditions in the Brain: Strategies to Rescue Aberrant Coupling and Improve Cerebral Blood Flow

The brain has impressive energy requirements and, paradoxically, very limited energy reserves, implying its huge dependency on continuous blood supply. Additionally, cerebral blood flow must be dynamically regulated to the areas of increased neuronal activity and thus of increased metabolic demands. The coupling between neuronal activity and cerebral blood flow (CBF) is supported by a mechanism called neurovascular coupling (NVC). Among the several vasoactive molecules released by glutamatergic activation, nitric oxide (•NO) is recognized to be a key player in the process and essential for the development of the neurovascular response. Classically, •NO is produced in neurons upon the activation of the glutamatergic N-methyl-D-aspartate (NMDA) receptor by the neuronal isoform of nitric oxide synthase and promotes vasodilation by activating soluble guanylate cyclase in the smooth muscle cells of the adjacent arterioles. This pathway is part of a more complex network in which other molecular and cellular intervenients, as well as other sources of •NO, are involved. The elucidation of these interacting mechanisms is fundamental in understanding how the brain manages its energy requirements and how the failure of this process translates into neuronal dysfunction. Here, we aimed to provide an integrated and updated perspective of the role of •NO in the NVC, incorporating the most recent evidence that reinforces its central role in the process from both viewpoints, as a physiological mediator and a pathological stressor. First, we described the glutamate-NMDA receptor-nNOS axis as a central pathway in NVC, then we reviewed the link between the derailment of the NVC and neuronal dysfunction associated with neurodegeneration (with a focus on Alzheimer's disease). We further discussed the role of oxidative stress in the NVC dysfunction, specifically by decreasing the •NO bioavailability and diverting its bioactivity toward cytotoxicity. Finally, we highlighted some strategies targeting the rescue or maintenance of •NO bioavailability that could be explored to mitigate the NVC dysfunction associated with neurodegenerative conditions. In line with this, the potential modulatory effects of dietary nitrate and polyphenols on •NO-dependent NVC, in association with physical exercise, may be used as effective non-pharmacological strategies to promote the •NO bioavailability and to manage NVC dysfunction in neuropathological conditions.
INTRODUCTION

The brain is highly dependent on the continuous and dynamically regulated delivery of metabolic substrates to support ongoing neural function. The energy requirements of the brain are expressively high and, paradoxically, it has very limited reserves, which implies that the blood supply must be finely and timely adjusted to where it is needed the most, that is, the areas of increased activity (Attwell and Laughlin, 2001). This process, namely neurovascular coupling (NVC), is accomplished by a tight network communication between active neurons and vascular cells that involves the cooperation of the other cells of the neurovascular unit (namely, astrocytes and pericytes) (Attwell et al., 2010; Iadecola, 2017). Despite the extensive investigations and huge advances in the field over the last decades, a clear definition of the mechanisms underlying this process, and particularly of the underlying cross-interactions and balance, is still elusive. This is accounted for by the difficulties in measuring the process dynamically in vivo, allied with the intrinsic complexity of the process, which likely involves diverse signaling pathways that reflect the specificities of the neuronal network of different brain regions and the diversity of the neurovascular unit along the cerebrovascular tree (from pial arteries to capillaries). Within such complexity, there is a prevailing common assumption that points to glutamate, the main excitatory neurotransmitter in the brain, as the trigger for NVC in the feed-forward mechanisms elicited by activated neurons. The pathways downstream of glutamate may then involve multiple vasoactive molecules released by neurons (via activation of ligand-gated cationic channels, iGluRs) and/or astrocytes (via G protein-coupled receptor activation, mGluRs) (Attwell et al., 2010; Iadecola, 2017; Lourenço et al., 2017a).
Among them, nitric oxide (•NO) is widely recognized to be a ubiquitous key player in the process and essential for the development of the neurovascular response, as will be discussed in a later section (Figure 1). A full understanding of the mechanisms underlying NVC is fundamental to know how the brain manages its energy requirements under physiological conditions and how the failure in regulating this process is associated with neurodegeneration. The connection between NVC dysfunction and neurodegeneration is nowadays well supported by a range of neurological conditions, including Alzheimer's disease (AD), vascular cognitive impairment and dementia (VCID), traumatic brain injury (TBI), and multiple sclerosis (MS), among others (Iadecola, 2004, 2017; Lourenço et al., 2017a; Iadecola and Gottesman, 2019). In line with this, advancing our understanding of the mechanisms through which the brain regulates, like no other organ, its blood perfusion may provide relevant cues to forward new therapeutic strategies targeting neurodegeneration and cognitive decline. A solid understanding of NVC is also relevant considering that the hemodynamic responses to neural activity underlie the blood-oxygen-level-dependent (BOLD) signal used in functional MRI (fMRI) (Attwell and Iadecola, 2002). In the next sections, the status of the current knowledge on the involvement of •NO in regulating the NVC will be discussed. Furthermore, we will explore how the decrease in •NO bioavailability may support the link between NVC impairment and neuronal dysfunction in some neurodegenerative conditions. Finally, we will discuss some strategies that can be used to counteract NVC dysfunction, and thus, to improve cognitive function.

Nitric Oxide Synthases

The classical pathway for •NO synthesis involves a family of enzymes, the nitric oxide synthases (NOS), that catalyze the oxidation of L-arginine to L-citrulline and •NO, provided that oxygen (O2) and several other cofactors are available [nicotinamide adenine dinucleotide phosphate (NADPH), flavin mononucleotide (FMN), flavin adenine dinucleotide (FAD), heme, and tetrahydrobiopterin (BH4)]. For this to occur, the enzyme must be in a homodimeric form that results from the assembly of two monomers through the oxygenase domains and allows the electrons released by the NADPH in the reductase domain to be transferred through the FAD and FMN to the heme group of the opposite subunit. At this point, in the presence of the substrate L-arginine and the cofactor BH4, the electrons enable the reduction of O2 and the formation of •NO and L-citrulline. Under conditions of disrupted dimerization, determined by different factors (e.g., BH4 bioavailability), the enzyme catalyzes the uncoupled oxidation of NADPH with the consequent production of superoxide anion (O2•−) instead of •NO (Knowles and Moncada, 1994; Stuehr, 1999). There are three major members of the NOS family, which may diverge in terms of cellular/subcellular localization, regulation of their enzymatic activity, and physiological function: type I neuronal NOS (nNOS), type II inducible NOS (iNOS), and type III endothelial NOS (eNOS) (Stuehr, 1999). The nNOS and eNOS are constitutively expressed enzymes that rely on Ca2+-calmodulin binding for activation. The nNOS and eNOS activity is further regulated both at the transcriptional and post-translational levels and via protein-protein interactions (Forstermann and Sessa, 2012).

FIGURE 1 | •NO-mediated regulation of neurovascular coupling at different cellular compartments of the neurovascular unit. In neurons, glutamate release activates the N-methyl-D-aspartate (NMDA) receptors (NMDAr), leading to an influx of calcium (Ca2+) that activates the neuronal nitric oxide synthase (nNOS), physically anchored to the receptor via the scaffold protein PSD95. The influx of Ca2+ may further activate phospholipase A2 (PLA2), leading to the synthesis of prostaglandins (PGE) via cyclooxygenase (COX) activation. In astrocytes, the activation of mGluR by glutamate raises Ca2+ and promotes the synthesis of PGE via COX and of epoxyeicosatrienoic acids (EETs) via cytochrome P450 epoxygenase (CYP) activation, and leads to the release of K+ via the activation of BKCa. At the capillary level, glutamate may additionally activate the NMDAr in the endothelial cells (EC), thereby eliciting the activation of endothelial NOS (eNOS). The endothelium-dependent nitric oxide (•NO) production can be further elicited via shear stress or the binding of different agonists (e.g., acetylcholine, bradykinin, adenosine, ATP). Additionally, erythrocytes may contribute to •NO release (via nitrosated hemoglobin or hemoglobin-mediated nitrite reduction). At the smooth muscle cells (SMC), paracrine •NO activates the soluble guanylate cyclase (sGC) to produce cGMP and activates the cGMP-dependent protein kinase (PKG). The PKG promotes a decrease of Ca2+ [e.g., by stimulating its reuptake by the sarcoplasmic/endoplasmic reticulum calcium-ATPase (SERCA)] that leads to the dephosphorylation of the myosin light chain through the associated phosphatase (MLCP) and, ultimately, to vasorelaxation. Additionally, PKG triggers the efflux of K+ through the large-conductance Ca2+-sensitive potassium channel (BKCa), which leads to cell hyperpolarization. Hyperpolarization is additionally triggered via the activation of the inward rectifier potassium channels (Kir) and spreads rapidly to adjacent cells via gap junctions (Cx). Further, •NO can regulate vasodilation via the stimulation of SERCA, modulation of the synthesis of arachidonic acid (AA) derivatives, and regulation of potassium channels and connexins.

While not exclusively, the nNOS is mainly expressed in neurons, where it is intimately associated with glutamatergic neurotransmission. The dominant splice variant of this isoform (nNOSα) possesses an N-terminal PDZ motif that allows the enzyme to bind other PDZ-containing proteins, such as the synaptic density scaffold protein PSD-95. This allows the enzyme to anchor itself to the synaptic membrane by forming a supramolecular complex with the N-methyl-D-aspartate receptors (NMDAr), whose activation upon glutamate binding results in Ca2+ influx and, ultimately, •NO production. The eNOS isoform is mainly expressed at the endothelium and is critically involved in vascular homeostasis. In the endothelial cells, the eNOS is predominantly localized within the caveolae, forming a complex with caveolin-1 that inhibits its activity. The stretching of the vascular wall, induced by shear stress, results in the dissociation of this complex and allows the enzyme to be activated, either by Ca2+-calmodulin binding and/or by PI3K/Akt-mediated phosphorylation of specific serine residues (e.g., Ser1177) (Forstermann and Sessa, 2012).
Unlike the other two isoforms, iNOS does not rely on Ca2+ increases for activation but on de novo synthesis, which occurs predominantly in glial cells following an immunological or inflammatory stimulation. Because iNOS has much lower Ca2+ requirements (calmodulin binds with very high affinity to the enzyme even at basal Ca2+ levels), it produces •NO for as long as the enzyme is not degraded (Knott and Bossy-Wetzel, 2009).

Nitrate-Nitrite-Nitric Oxide Pathway

In recent years, studies have supported •NO production independent of NOS activity, through the stepwise reduction of nitrate (NO3−) and nitrite (NO2−) via the so-called nitrate-nitrite-nitric oxide pathway. Viewed as stable end products of •NO metabolism, both NO3− and NO2− are now recognized as able to be recycled back into •NO, thereby acting as important •NO reservoirs in vivo. NO3− and NO2− can be consumed in the regular vegetable components of a diet, fueling the nitrate-nitrite-nitric oxide pathway (Rocha et al., 2011; Lundberg et al., 2018). NO3− can be reduced to NO2− by the commensal bacteria in the gastrointestinal tract and/or by mammalian enzymes that can acquire a nitrate reductase activity under acidic and hypoxic environments. In turn, the reduction of NO2− to •NO can be achieved non-enzymatically, via a redox interaction with one-electron reductants (e.g., ascorbate and polyphenols), or can be catalyzed by different enzymes (e.g., hemoglobin, xanthine oxidoreductase, and cytochrome P450 reductase). All these reactions are favored by low O2 and decreased pH, thereby ensuring the generation of •NO under conditions of limited synthesis by the canonical NOS-mediated pathways, which require O2 as a substrate (Lundberg et al., 2008). It is also worth mentioning that S-nitrosothiols might serve as downstream •NO-carrying signaling molecules regulating protein expression/function (Chen et al., 2008).

Nitric Oxide Signal Transduction Pathways

The transduction of •NO signaling may involve several reactions that reflect, among other factors, the high diffusibility of •NO, the relative spatial and temporal abundance of the targets, and the relative rate constants with the potential targets. Most of the physiological actions of •NO are promoted by the chemical modification of relevant proteins, either via nitrosylation or nitrosation [reviewed in Picon-Pages et al. (2019)]. Nitrosylation refers to the reversible binding of •NO to inorganic protein moieties (e.g., iron in heme groups), while nitrosation involves the modification of organic moieties (e.g., thiol groups in cysteine residues), not directly, but intermediated by the species produced upon •NO autoxidation, namely N2O3. In addition, •NO can react with superoxide anion (O2•−), yielding peroxynitrite (ONOO−), a potent oxidant and nitrating species that conveys the main deleterious actions associated with •NO signaling (e.g., oxidation and/or nitration of proteins, lipids, and nucleic acids) (Radi, 2018). The best characterized molecular target for the physiological action of •NO is the soluble guanylate cyclase (sGC), a hemeprotein that is frequently and controversially tagged as the classical "•NO receptor." The activation of the sGC by •NO involves the nitrosylation of the heme moiety of the enzyme, which induces a conformational change, enabling it to catalyze the conversion of guanosine triphosphate (GTP) to the second messenger cyclic guanosine monophosphate (cGMP) (Martin et al., 2005).
Nitric oxide may additionally regulate the catalytic activity of sGC by promoting its inhibition via nitrosation of critical cysteine residues (Beuve, 2017).

NITRIC OXIDE AS A MASTER PLAYER IN THE NEUROVASCULAR COUPLING

After being recognized as the endothelium-derived relaxing factor (EDRF) in the late 80s, it did not take long for •NO to be implicated in NVC (Iadecola, 1993). This is not unexpected if we consider that •NO is well suited for such a function: it is produced upon glutamate stimulation in the brain, is highly diffusible, and is a potent vasodilator involved in the regulation of the vascular tone.

Neuronal-Derived •NO Linked to Glutamatergic Neurotransmission

The conventional pathway for •NO-mediated NVC involves the activation of the glutamate-NMDAr-nNOS pathway in neurons. The binding of glutamate to the NMDAr stimulates the influx of Ca2+ through the channel, which, upon binding calmodulin, promotes the activation of nNOS and the synthesis of •NO. Being hydrophobic and highly diffusible, the •NO produced in neurons can diffuse intercellularly and reach the smooth muscle cells (SMC) of adjacent arterioles, there inducing the activation of sGC and promoting the formation of cGMP. The subsequent activation of the cGMP-dependent protein kinase (PKG) leads to a decrease in [Ca2+] that results in the dephosphorylation of the myosin light chain and consequent SMC relaxation [reviewed by Iadecola (1993) and Lourenço et al. (2017a)]. Additionally, •NO may promote vasodilation via the stimulation of the sarco/endoplasmic reticulum calcium ATPase (SERCA), via activation of the Ca2+-dependent K+ channels, or via modulation of the synthesis of other vasoactive molecules [reviewed by Lourenço et al. (2017a)]. Specifically, the ability of •NO to regulate the activity of critical heme-containing enzymes involved in the metabolism of arachidonic acid to vasoactive compounds suggests a complementary role of •NO as a modulator of NVC via the modulation of the signaling pathways linked to mGluR activation at the astrocytes. •NO has been demonstrated to play a permissive role in PGE2-dependent vasodilation by regulating cyclooxygenase activity (Fujimoto et al., 2004) and eliciting ATP release from astrocytes (Bal-Price et al., 2002). The notion of •NO as a key intermediate in NVC was initially grounded by a large set of studies describing the blunting of NVC responses by pharmacological NOS inhibition under different experimental paradigms [reviewed in Lourenço et al. (2017a)]. A recent meta-analysis, covering studies on the modulation of different signaling pathways in NVC, found that specific nNOS inhibition produced a larger blocking effect than any other individual target (e.g., prostanoids, purines, and K+). In particular, nNOS inhibition promoted an average reduction of 2/3 in the NVC response (Hosford and Gourine, 2019). It is recognized that the dominance of the glutamate-NMDAr-NOS pathway in NVC likely reflects the specificities of the neuronal networks, particularly concerning the heterogenic pattern of nNOS expression/activity in the brain. Although nNOS is ubiquitously expressed in different brain areas, the pattern of nNOS immunoreactivity in the rodent telencephalon points to a predominant expression in the cerebellum, olfactory bulb, and hippocampus, and a scarce expression in the cerebral cortex (Bredt et al., 1990; Lourenço et al., 2014a).
Coherently, there is a prevalent consensus for the role of •NO as the direct mediator of the neuron-to-vessel signaling in the hippocampus and cerebellum. In the hippocampus of anesthetized rats, it was demonstrated that the •NO production and hemodynamic changes evoked by glutamatergic activation in the dentate gyrus (DG) are temporally correlated and both dependent on the glutamate-NMDAr-nNOS pathway (Lourenço et al., 2014b). The blockage of either the NMDAr or nNOS was also shown to blunt the •NO production and vessel dilation in response to mossy fiber stimulation in cerebellar slices (Mapelli et al., 2017). In the cerebral cortex, •NO has been suggested to act as a modulator rather than a direct mediator of the NVC responses, but this view has been challenged in recent years. Emergent evidence from ex vivo approaches indicates that the regulation of vasodilation may diverge along the cerebrovascular tree: at the capillary level, vasodilation seems to be mainly controlled by pericytes via an ATP-dependent astrocytic pathway, while at the arteriolar level it involves neuronal •NO-NMDAr signaling (Mishra et al., 2016).

Neuronal-Derived •NO Linked to GABAergic Interneurons

Recent data support that the optogenetic stimulation of nNOS-positive interneurons can promote cerebral blood flow (CBF) changes in the somatosensory cortex comparable to those evoked by whisker stimulation in awake and behaving rodents (Krawchuk et al., 2020; Lee et al., 2020). The implication of the GABAergic interneurons in NVC had been previously demonstrated, both in the cerebellum and somatosensory cortex (Cauli et al., 2004; Rancillac et al., 2006). Also, in the hippocampus, parvalbumin GABAergic interneurons are suggested to drive, via •NO signaling, the NVC response to hippocampus-engaged exploration activities, with an impact on neurogenesis in the dentate gyrus (Shen et al., 2019). The involvement of GABAergic interneurons in neurovascular regulation is not unexpected, as some of them have extended projections in close contact with arterial vessels and secrete diverse molecules with vasoactive properties that are able to modulate the vascular tone (e.g., •NO, vasopressin, and NPY) (Hamel, 2006). A novel and striking hypothesis suggests that nNOS-expressing neurons can control vasodilation independently of neural activity. The optogenetic activation of NOS-positive interneurons regulates CBF without detectable changes in the activity of other neurons (Echagarruga et al., 2020; Lee et al., 2020). The activation of GABAergic interneurons has further been shown to promote vasodilation while decreasing neuronal activity, an effect occurring independently of ionotropic glutamatergic or GABAergic synaptic transmission (Scott and Murphy, 2012; Anenberg et al., 2015). The hypothesis that evoked CBF is dynamically regulated by different subsets of neurons, some independently of neuronal activity, calls into question the linearity of the correlation between the net ongoing neuronal activity and CBF changes and raises concerns regarding the interpretation of functional MRI (fMRI) data.

Endothelial-Derived •NO Linked to Glutamatergic Neurotransmission

As for the systemic vascular network, endothelial-derived •NO has also been implicated in the regulation of CBF. Endothelial cells are able to respond to diverse chemical and physical stimuli by producing, via Ca2+-dependent signaling pathways, a myriad of vasoactive compounds (e.g., •NO), thereby modulating the vascular tone.
Additionally, Ca2+ may directly induce the hyperpolarization of the endothelial membrane and adjacent SMC through the activation of Ca2+-dependent K+ channels (Chen et al., 2014; Guerra et al., 2018). Despite this, the critical requirement of the endothelium for the development of a full neurovascular response to neuronal activity has only recently begun to be appreciated. Specifically, endothelial-mediated signaling appears to be essential for the retrograde propagation of NVC-associated vasodilation. The discrete ablation of the endothelium was demonstrated to halt the retrograde dilation of pial arteries in response to hindpaw stimulation (Chen et al., 2014). Additionally, in the somatosensory cortex, NVC was shown to be regulated via eNOS upon the activation of purinergic receptors at the endothelium, in a mechanism involving a glio-endothelial coupling (Toth et al., 2015). Recent data further pointed to the ability of endothelial cells to directly sense neuronal activity via the NMDAr expressed in the basolateral endothelial membranes, thereby eliciting vasodilation via eNOS activation (Stobart et al., 2013; Hogan-Cann et al., 2019; Lu et al., 2019). While the precise mechanisms by which eNOS-derived •NO shapes the NVC response are still to be defined, eNOS activation is suggested to contribute to the local but not to the conducted vasodilation, the latter being associated with K+-mediated hyperpolarization. Yet, it is proposed that •NO-dependent vasodilation may also be involved in a slower and shorter-range retrograde propagation, cooperating with the faster and long-range propagation mediated by endothelial hyperpolarization (Chen et al., 2014; Tran et al., 2018). Of note, •NO can modulate the activity of connexins at the gap junctions to favor the propagation of the hyperpolarizing current upstream to the feeding vessels (Kovacs-Oller et al., 2020). Additionally, vascular-derived •NO has been suggested to facilitate astrocytic Ca2+ signaling and was forwarded as an explanation for the late endfoot Ca2+ signaling (Tran et al., 2018).

•NO in the Neurovascular Coupling in Humans

Despite the extensive accumulated evidence for the involvement of •NO in the NVC in animal models, such studies have only recently been extended to humans. By addressing the hemodynamic response to visual stimulation, Hoiland and coworkers provided the first demonstration of the involvement of •NO in the NVC in humans, via modulation by a systemic intravenous infusion of the nonselective competitive NOS inhibitor L-NMMA (Hoiland et al., 2020). The authors proposed a two-step signaling mechanism for the NVC in humans, translated into a biphasic response, with the first component being attributed to the NOS activation elicited by glutamatergic activation. They hypothesized that •NO may be further involved in the second component of the hemodynamic response via erythrocyte-mediated signaling (either by releasing •NO from nitrosated hemoglobin or by mediating NO2− reduction) (Hoiland et al., 2020).

NEUROVASCULAR DYSFUNCTION IN NEURODEGENERATION - FOCUS ON ALZHEIMER'S DISEASE

The tight coupling between neuronal activity and CBF is crucial in supporting the functional integrity of the brain, both by providing the essential metabolic substrates for ongoing neuronal activities and by contributing to the clearance of metabolic waste byproducts. Disturbances of the mechanisms that regulate CBF, both under resting and activated conditions, can therefore critically impair neural function.
Coherently, a robust amount of data supports the implication of neurovascular dysfunction in the mechanisms of neurodegeneration and cognitive decline associated with several conditions, including aberrant brain aging, AD, VCID, and TBI, among others [reviewed by Zlokovic (2011, 2020)]. A large number of clinical studies have focused on AD, for which the regional CBF changes were described to follow a stepwise pattern along the clinical stages of the disease, in connection with cognitive decline (Wierenga et al., 2012; Leeuwis et al., 2017; Mokhber et al., 2021). In parallel, patients with mild cognitive impairment and AD displayed decreased hemodynamic responses to neuronal activation (memory encoding tasks) (Small et al., 1999; Xu et al., 2007). Interestingly, a retrospective neuroimaging analysis of healthy subjects and patients with mild cognitive impairment and AD suggested that vascular abnormalities are early events, preceding the changes in Aβ deposition, functional impairment, and cerebral atrophy (Iturria-Medina et al., 2016). These and other clinical data are strongly supported by an extensive portfolio of studies in animal models of AD that recapitulate the NVC dysfunction observed in patients [(Mueggler et al., 2003; Shin et al., 2007; Rancillac et al., 2012; Lourenço et al., 2017b; Tarantini et al., 2017), reviewed by Nicolakakis and Hamel (2011)]. The latter have also proved to be valuable in providing insights into the mechanisms underpinning NVC dysfunction and their correlation with the classical AD pathological hallmarks, namely, Aβ accumulation, tau hyperphosphorylation, and neuronal loss. For instance, both in vitro and in vivo studies demonstrated that Aβ can reduce the CBF changes in response to vasodilators and neuronal activation (Price et al., 1997; Thomas et al., 1997; Niwa et al., 2000). In turn, hypoperfusion has been demonstrated to foster both the Aβ production and accumulation (Koike et al., 2010; Park et al., 2019; Shang et al., 2019). Simplistically, this points to a vicious cycle that may sustain the progression of the disease. In this cycle, CBF alterations stand out as important prompters. For instance, in the 3xTg-AD mouse model of AD, the impairment of the NVC in the hippocampus was demonstrated to precede an obvious cognitive dysfunction or altered neuronal-derived •NO signaling, suggestive of an underlying cerebrovascular dysfunction (Lourenço et al., 2017b). Also, the suppression of NVC to whisker stimulation reported in tau-expressing mice was described to precede tau pathology and cognitive impairment. In this case, the NVC dysfunction was attributed to the specific uncoupling of the nNOS from the NMDAr and the consequent disruption of •NO production in response to neuronal activation. Overall, these studies point to dysfunctional NVC as a triggering event of the toxic cascade leading to neurodegeneration and dementia.

Oxidative Stress (Distress) - When Superoxide Radical Came Into Play

The mechanisms underpinning the NVC dysfunction in AD and other pathologies are expectedly complex and likely involve several intervenients through a myriad of pathways, which may reflect both the specificities of the neuronal networks (as the NVC itself) and those of the neurodegenerative pathways.
Yet, oxidative stress (nowadays conceptually denoted by Sies and Jones as oxidative distress) is recognized as an important and ubiquitous contributor to the dysfunctional cascades that culminate in NVC deregulation in several neurodegenerative conditions (Hamel et al., 2008; Carvalho and Moreira, 2018). Oxidative distress is generated when the production of oxidants [traditionally referred to as reactive oxygen species (ROS)] outpaces the control by the cellular antioxidant enzymes or molecules [e.g., superoxide dismutase (SOD), peroxidases, and catalase], reaching toxic steady-state concentrations (Sies and Jones, 2020). While ROS are assumed to be critical signaling molecules for maintaining brain homeostasis, a redox environment unbalanced toward oxidation is recognized to play a pivotal role in the development of cerebrovascular dysfunction in different pathologies. In the context of AD, Aβ has been demonstrated to induce excessive ROS production in the brain, occurring earlier in the vasculature than in the parenchyma (Park et al., 2004). At the cerebral vasculature, ROS can be produced by different sources, including NADPH oxidase (NOX), the mitochondrial respiratory chain, uncoupled eNOS, and cyclooxygenases (COXs), among others. In this list, the NOX family has been reported to produce more ROS [essentially O2•− but also hydrogen peroxide (H2O2)] than any other enzyme. Interestingly, the NOX activity in the cerebral vasculature is much higher than in the peripheral arteries (Miller et al., 2006) and is further increased by aging, AD, and VCID (Choi and Lee, 2017; Ma et al., 2017). Also, both the NOX enzyme activity level and the protein levels of the different subunits (p67phox, p47phox, and p40phox) were reported to be elevated in the brains of patients with AD (Ansari and Scheff, 2011) and of AD transgenic mice, in correlation with cognitive decline (Park et al., 2008; Bruce-Keller et al., 2011; Han et al., 2015; Lin et al., 2016). As mentioned earlier, NOS enzymes may produce O2•− themselves in their uncoupled state, a state critically promoted by decreased BH4 bioavailability. Of note, the BH4 metabolism is described to be deregulated in AD (Foxton et al., 2007). The reaction of O2•− with •NO proceeds at diffusion-controlled rates and is favored by an increased steady-state concentration of O2•−, provided that •NO diffuses to the sites of O2•− formation. This radical-radical interaction has two important consequences for cerebrovascular dysfunction: (1) reduced •NO bioavailability and the ensuing diminished •NO-mediated vasodilation, and (2) increased peroxynitrite formation and fostering of oxidative distress (Figure 2).

Decreased •NO Bioavailability and Impaired Neurovascular Coupling: A Central Role for Superoxide Radical

The decrease of •NO bioavailability translates into inefficient signaling to support the hemodynamic response to neuronal activation and might occur either due to increased scavenging (e.g., by O2•−) or limited •NO synthesis (e.g., in hypoxia). Thus, the fast reaction of •NO with O2•− might be a pervasive reaction at the origins of the decreased •NO bioavailability and NVC dysfunction. The impact of O2•−-mediated •NO scavenging on the efficiency of NVC has been addressed in different animal models.
In healthy young rats, we were able to mimic the impairment of •NO-mediated NVC observed in the hippocampus of 3xTg-AD mice and aged Fisher 344 rats by using 2,3-dimethoxy-1,4-naphthoquinone (DMNQ), a redox-cycling quinone that triggers intracellular O2•− generation (Lourenço et al., 2017b, 2018). Park and coworkers demonstrated that the attenuation of the CBF changes to whisker stimulation, as observed in aged and AD mice, could be abrogated by both a SOD mimetic (MnTBAP) and a NOX inhibitor (peptide gp91ds-tat) (Park et al., 2007, 2008). In Tg2576 mice, it was further shown that the genetic ablation of the NOX2 subunit of NADPH oxidase was able to prevent both the NVC dysfunction and spatial memory decline (Park et al., 2008). More recently, the mitochondria-targeted overexpression of catalase has been shown to hamper the age-related NVC dysfunction by preserving the •NO-mediated component of the hemodynamic response (Csiszar et al., 2019). The •NO synthesis by the NOS enzymes involves the oxidation of L-arginine to L-citrulline and is dependent on O2. When the O2 concentration is limited (e.g., under ischemic conditions) and falls below the KM of NOS, the synthesis of •NO by the canonical pathway becomes limited and, expectedly, the •NO concentration decreases (Adachi et al., 2000).

Shifting •NO Bioactivity From Signaling Toward Deleterious Actions

As mentioned earlier, the reaction of •NO with O2•−, yielding ONOO−, conveys the major pathway underlying the deleterious actions of •NO, which eventually culminate in neurodegeneration (Radi, 2018). This pathway is largely fueled by the activity of iNOS, an isoform much less dependent on Ca2+ concentration and capable of sustaining a continuous •NO production, thereby producing a much larger amount of •NO relative to the constitutive isoforms (Pautz et al., 2010). The ONOO− formed can oxidize and nitrate several biomolecules, including proteins. Specifically, the nitration of the tyrosine residues of proteins, resulting in the formation of 3-nitrotyrosine (3-NT), may irreversibly impact signaling pathways (either by promoting a loss or a gain of function of the target protein) (Radi, 2018). A large body of evidence supports the enhanced 3-NT immunoreactivity in the brains of AD patients and rodent models, as well as the nitration and oxidation of several relevant proteins [reviewed in Butterfield et al. (2011) and Butterfield and Boyd-Kimball (2019)]. Among them, the mitochondrial isoform of SOD (MnSOD) was reported to be nitrated in AD (Aoyama et al., 2000), a modification associated with enzyme inactivation (Radi, 2004) and an expected increase in oxidative distress. Also, tau protein has been demonstrated to be a target for nitration, a modification linked to increased aggregation (Horiguchi et al., 2003). In 3xTg-AD mice with impaired NVC, we detected increased levels of 3-NT and iNOS in the hippocampus (Lourenço et al., 2017b). Peroxynitrite can further impair NVC by altering the mechanisms of vasodilation (e.g., oxidizing BH4, inhibiting sGC expression/activity, inactivating prostacyclin) and by promoting structural alterations in the blood vessels [reviewed by Chrissobolis and Faraci (2008) and Lee and Griendling (2008)].

IMPROVING NITRIC OXIDE BIOAVAILABILITY TO HAMPER NEUROVASCULAR DYSFUNCTION

From the abovementioned notions, we infer that sustaining •NO bioavailability may have a relevant impact on the NVC, and thus on neuronal function.
Strategies capable of enhancing •NO bioavailability may therefore constitute attractive approaches to be explored, either from a therapeutic standpoint in patients with impaired NVC function (and cognitive decline) or from a prophylactic standpoint in individuals at risk of developing cognitive dysfunctions linked to neurovascular alterations. In line with this, several strategies have been tested, both in animal models and humans, targeting the promotion of •NO synthesis and/or the limitation of its scavenging by ROS [reviewed by Lundberg et al. (2015)]. Some of the most promising strategies involving non-pharmacological intervention are discussed below (Figure 3).

Arginine and Citrulline

In the past two decades, both arginine (the NOS substrate) and its precursor, citrulline, received a lot of attention for their potential to increase •NO-dependent signaling, and it is currently accepted that citrulline is more efficient than arginine as a precursor for •NO synthesis (Bahadoran et al., 2021). Citrulline, but not arginine, has been demonstrated to improve the acetylcholine-induced retinal vasodilation in diabetic rats (Kurauchi et al., 2017). It has also been shown to mitigate the cognitive dysfunction linked to transient brain ischemia in mice (Yabuki et al., 2013) and to improve the CBF response associated with spreading depolarization events in rats (Kurauchi et al., 2017). Yet, in patients with mitochondrial encephalomyopathy, lactic acidosis, and stroke-like episodes (MELAS) syndrome, oral treatment with L-arginine was able to restore the NVC response to visual cortex stimulation (Rodan et al., 2020).

Nitrite and Nitrate

More recently, much focus has been paid to both the NO2− and NO3− anions as relevant biological precursors of •NO, capable of supporting •NO-dependent mechanisms, including vasodilation, via the NO3−-NO2−-•NO pathway (Lundberg et al., 2008). In line with this, different pieces of evidence provide support for the benefits of NO2− and NO3−, particularly in the cardiovascular system (Lundberg et al., 2015). Additionally, acute nitrite was demonstrated to enhance basal CBF in rats (Rifkind et al., 2007) and to improve the hemodynamic response to somatosensory activation under conditions of NOS inhibition (Piknova et al., 2011). In essence, NO2− is now recognized as a regulator of hypoxic signaling in mammalian physiology through the formation of •NO (Rocha et al., 2011; Lundberg et al., 2018), and human studies support the application of NO2− to alleviate cardiovascular dysfunctions and as a supplement for healthy arterial aging (Lundberg et al., 2018). NO3−, which can be obtained from diets, salt supplementation, or endogenous •NO oxidation, is also proposed to increase both cerebral blood perfusion and cognitive performance in humans, but the overall evidence is still not conclusive (Clifford et al., 2019). This may be justified by the short duration of the interventions and the health status of the subjects. Magnetic resonance imaging studies in an elderly human population exposed to an increased intake of NO3−-rich foods showed that a 2-day intervention with dietary NO3− had no effect on the global CBF but enhanced the CBF in the deep white matter of the frontal lobe, suggested to be involved in executive functioning (Presley et al., 2011).
Moreover, in healthy adults, a single dose of beetroot juice (containing 5.5 mmol NO3−) was demonstrated to modulate the CBF response to task performance in the frontal cortex, as assessed by near-infrared spectroscopy, and to improve performance in a serial-3s subtraction task (Wightman et al., 2015). Also, intrathecal levels of NO3− in patients with vascular dementia are inversely correlated with the degree of intellectual impairment (Tarkowski et al., 2000). In the cardiovascular and renal systems, dietary NO3− was able to attenuate oxidative distress by inhibiting NOX expression/activity, conveying an additional mechanism by which NO3− supplementation could improve •NO bioavailability (Carlstrom and Montenegro, 2019).

Polyphenols

Found predominantly in fruits and vegetables, polyphenols are another set of bioactive compounds recognized to improve •NO bioavailability, thereby providing positive health benefits (Oak et al., 2018). Explored for decades for their antioxidant capacity associated with free radical scavenging, polyphenols are now recognized to directly modulate redox-sensitive processes in vivo (Fraga et al., 2019). Moreover, polyphenols are known to stimulate •NO production in endothelial cells via the phosphorylation of eNOS mediated by both the PI3K/Akt and p38 MAPK signaling pathways. Several polyphenols were also described to modulate the redox status of the cellular environment via the activation of nuclear factor (erythroid-derived 2)-like 2 (Nrf2), a transcription factor needed for antioxidant response element (ARE)-driven gene expression, leading to the expression of several antioxidant enzymes (e.g., SOD). Interestingly, they have also been demonstrated to downregulate the expression of NOX enzymes (Forte et al., 2016; Fraga et al., 2018). Furthermore, as mentioned above, polyphenols are also capable of promoting the reduction of NO2− to •NO, particularly under acidic environments (Rocha et al., 2009). Polyphenols accompany NO2− and NO3− in the vegetables consumed in the human diet, and mounting evidence from epidemiological and randomized controlled trials supports their beneficial effects in preventing age-associated cognitive decline (Fraga et al., 2019). The ability of some polyphenols to improve NVC has also been demonstrated by some clinical and pre-clinical studies. For instance, resveratrol, a polyphenol found in grape seeds, was able to rescue the hemodynamic responses to neuronal and endothelial activation in aged mice, along with a downregulation of NADPH oxidase and decreased 3-NT levels (Toth et al., 2014). In humans, a similar beneficial effect was demonstrated in type 2 diabetic patients and postmenopausal women. In diabetic patients, a single dose of resveratrol enhanced their performance on a multi-tasking test battery and increased the blood flow velocity in the middle cerebral artery during this cognitive stimulus (Wong et al., 2016). In postmenopausal women, long-term treatment with resveratrol significantly increased their cognitive performance and NVC response as compared with their basal condition (Thaung Zaw et al., 2020). Cocoa flavonoids were also demonstrated to improve NVC in humans, increasing the BOLD fMRI response to cognitive tasks in healthy subjects (Decroix et al., 2016) and type 1 diabetic patients (Decroix et al., 2019).

FIGURE 3 | Interventions aiming to target neurovascular coupling dysfunction via reestablishment of •NO bioavailability.
•NO bioavailability can be increased either by stimulating its production (blue arrows) or by decreasing its degradation (orange arrows). NOS-dependent •NO production can be increased by providing substrates for the enzyme (arginine or its precursor, citrulline) and via post-translational regulation of the enzyme (e.g., signaling pathways stimulated by polyphenols and/or exercise). Additionally, •NO synthesis can be enhanced via the alternative nitrate-nitrite-•NO pathway. The limitation of •NO scavenging can be achieved by decreasing the production of superoxide radical (both nitrate/nitrite supplementation and exercise can decrease the activity/expression of NOX enzymes). Exercise Regular physical exercise is widely recognized to have a substantial neuroprotective impact on brain function, contributing to the restoration and maintenance of cognitive performance (Alkadhi, 2018) and ameliorating several cardiovascular risk factors (e.g., diabetes, hypertension) (Kokkinos and Myers, 2010). In particular, exercise is suggested to promote the upregulation of eNOS expression and activity via activation of Akt-dependent signaling, by transiently increasing shear stress (Chistiakov et al., 2017). Shear stress has also been shown to stimulate the expression of Nrf2 and, thus, the upregulation of antioxidant defenses (Warabi et al., 2007). Exercise is also suggested to enhance plasma NO2− levels, similarly to dietary NO3− (Lundberg et al., 2015). The evidence for the beneficial effect of exercise on CBF and brain functioning is compelling, both in humans and in experimental animal models (Ogoh and Ainslie, 2009; Smith and Ainslie, 2017; Trigiani and Hamel, 2017). Recently, physical exercise was demonstrated to alter the white matter pathology and the hemodynamic response to whisker stimulation in a VCID mouse model (Trigiani et al., 2020). Also, in an AD mouse model, exercise training was demonstrated to reduce cerebrovascular dysfunction by enhancing P2Y2-eNOS signaling (Hong et al., 2020). CONCLUSION The NVC is a crucial pathway supporting both the structural and functional integrity of the brain. The diffusible vasodilator •NO is widely recognized to be a key player in the intricate communication between the cells of the neurovascular unit that is required to supply energetic substrates. The canonical pathway for •NO-mediated NVC involves the NMDAr-mediated activation of nNOS in neurons and the stimulation of sGC in SMC. Additionally, •NO produced by different sources (interneurons, endothelial cells, erythrocytes) and modulating other signaling pathways likely converges to ensure the proper development of the neurovascular response. The failure of this process is linked to neurodegeneration in different pathological conditions and may be fostered by changes in the redox environment toward oxidation, which decrease •NO bioavailability and subvert its bioactivity. Diet is established as one of the most relevant modifiable determinants of human health in modern societies. Considering that the WHO recognizes cognitive impairment and dementia associated with aging as one of the major public health challenges of our time (World Health Organization [WHO], 2012), a more comprehensive understanding of how different aspects of lifestyle and diet affect neural function and consequent cognitive performance is imperative.
In line with this, the potential modulatory effects of dietary NO3− and polyphenols on •NO-dependent NVC, in association with physical exercise, may be used as effective non-pharmacological strategies to promote •NO bioavailability and thereby manage NVC dysfunction in neuropathological conditions. AUTHOR CONTRIBUTIONS CL and JL discussed the content of the review and approved the final version of the manuscript. CL performed the literature research, drafted the manuscript, and drew the figures. JL edited and revised the manuscript. All authors contributed to the article and approved the submitted version. FUNDING This work was supported by the European Regional Development Fund (ERDF) through the COMPETE 2020 - Operational Program for Competitiveness and Internationalization and by Portuguese national funds via FCT - Fundação para a Ciência e a Tecnologia, under the projects POCI-01-0145-FEDER-029099 and UIDB/04539/2020.
Epidemiology of Corneal Neovascularization and Its Impact on Visual Acuity and Sensitivity: A 14-Year Retrospective Study Purpose: To quantify the severity and location of corneal neovascularization (cNV) and its impact on visual acuity and corneal sensitivity in a cohort of patients referred to a specialist cornea clinic, and to describe the etiology of cNV in the cohort. Methods: We retrospectively evaluated the charts of 13,493 subjects referred to the San Raffaele Cornea Unit between January 2004 and December 2018 to search for a cNV diagnosis. cNV severity was measured in quadrants (range: 1–4), and location was defined as superficial, deep, or both. Best spectacle-corrected visual acuity (BSCVA) was measured in logMAR. We used multiple regression analysis to identify independent predictors of logMAR, after adjusting for age, gender, keratoconus, herpes keratitis, penetrating keratoplasty, trauma, and cataract surgery. Results: Corneal neovascularization was diagnosed in 10.4% of the patients analyzed. The most prevalent etiology of cNV in our population was non-infectious corneal dystrophies/degenerations, followed by herpes simplex virus infection. cNV affected OD, OS, or both eyes in 35.6%, 40.2%, and 24.2% of cases, respectively. Mean BSCVA (SD) was 0.59 (0.76), 0.74 (0.94), and 1.24 (1.08) in the one-, two-, and three-or-four-quadrant cNV groups, respectively. Superficial, deep, or mixed cNV occurred in 1,029, 348, and 205 eyes, respectively. Severe cNV (three or four quadrants) was a significant predictor of low visual acuity (p < 0.001) and reduced corneal sensitivity (p < 0.05). cNV location and severity were associated (p < 0.05). In addition, corneal anesthesia was associated with lower BSCVA (p < 0.001). Conclusion: Severe and deep cNV are associated with reduced visual acuity and corneal sensitivity. Our data strongly support the relevance of appropriate follow-up, as cNV is a major risk factor for graft rejection. INTRODUCTION The incidence of corneal neovascularization (cNV) in patients with ocular surface disease and its impact on visual acuity are not clear, although there is a general consensus that arresting cNV progression is beneficial (1). A well-known immune-privileged site, the normal cornea is avascular (2, 3). A number of ocular diseases, generally associated with acute or persistent inflammation, result in an "angiogenic shift" and the development of cNV (4). The second leading cause of blindness worldwide, cNV is a sight-threatening condition (5). Colby et al. reported a prevalence of 4.14% in a cohort of patients visiting a general eye service (6). The etiopathology of cNV varies across the world. In Western countries, herpes simplex keratitis is the most common infectious cause of cNV. In the US alone, 500,000 cases of ocular herpes simplex virus infection are reported every year (7). Contact lenses, and specifically extended-wear soft contact lenses, are the most frequent non-infectious cause of cNV in the US, affecting 11-30% of contact lens wearers (8). In addition, cNV invariably follows corneal chemical burns, which affect an estimated 37,000 people per year in the US. Other diseases frequently associated with cNV include pterygium, Stevens-Johnson syndrome, Lyell syndrome, and limbal stem cell deficiency (9, 10). Of note, cNV is universally considered a significant risk factor for corneal transplant rejection, as the extent of cNV is directly related to the risk of rejection (11).
This is a relevant clinical problem, since nearly 20% of the corneal buttons excised during corneal transplantation exhibit histological evidence of cNV (12). Although the literature suggests that the diseases commonly associated with cNV affect visual acuity, the exact impact of cNV and its location on vision is unknown. At the same time, experimental evidence in animal models of cNV shows impairment of the corneal nerves, but it is not clear whether this phenomenon is replicated in human subjects. In this study, we aimed to quantify the impact of cNV on visual acuity and corneal sensitivity. Furthermore, we provide an estimate of cNV prevalence in patients affected with ocular surface diseases. Study Design This retrospective analysis was conducted at the Cornea and Ocular Surface Unit of the San Raffaele Scientific Institute, Milan, Italy. The study was carried out in accordance with the guidelines established by the Declaration of Helsinki, and Institutional Review Board/Ethics Committee (Comitato Etico, Istituto Scientifico Ospedale San Raffaele) approval was obtained. All the data used in this study were extracted from our electronic medical record system (OCULI, Bedigital SrL, Verona, Italy), which includes all the patients evaluated at the Cornea and Ocular Surface Unit of the San Raffaele Scientific Institute between January 1, 2004, and December 31, 2018. The specific database search covered "history of systemic and/or ocular illness," "best spectacle-corrected visual acuity (BSCVA)," "slit-lamp biomicroscopy of the anterior segment" (including the extension of cNV over one to four quadrants), "corneal sensitivity" (quantified as present or absent), and "suspect/definitive diagnosis." The keyword "corneal neovascularization" was searched, which retrieved all the patients affected with cNV in one or both eyes in at least one visit. If cNV was documented in both eyes, both were considered in the analysis. In total, the charts of 1,406 subjects were analyzed retrospectively. Clinical Parameters Best spectacle-corrected visual acuity was recorded in Snellen equivalents and converted to the logMAR scale. "Counting fingers" (CF) and "hand motion" (HM) BSCVA were converted to 2.0 and 3.0 logMAR values, respectively, whereas "light perception" (LP) and "no light perception" (NLP) BSCVA were not converted to any logMAR value (13); therefore, visual acuity data from patients with BSCVA less than HM were not included in the analysis. Corneal neovascularization was defined as the presence of vessels in the cornea. The extension of cNV was quantified clinically as the number of corneal quadrants showing neovessels (one, two, three, or four quadrants), and cNV depth was evaluated clinically at slit-lamp biomicroscopy. Corneal sensitivity was assessed using the cotton swab test and defined as normal, reduced, or absent (14).
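As an aside, the Snellen-to-logMAR conversion described above is straightforward to implement. The following is a minimal Python sketch of the standard formula (logMAR = −log10 of the Snellen fraction) together with the categorical values used in this study; the function name and example values are illustrative and not taken from the paper.

```python
import math

def snellen_to_logmar(numerator: float, denominator: float) -> float:
    """Convert a Snellen fraction (e.g., 20/40) to logMAR: -log10(num/den)."""
    return -math.log10(numerator / denominator)

# Categorical acuities handled as in this study; LP and NLP have no
# logMAR equivalent and were excluded from the analysis.
CATEGORICAL_LOGMAR = {"CF": 2.0, "HM": 3.0}  # counting fingers, hand motion

print(round(snellen_to_logmar(20, 20), 3))  # 0.0   (normal acuity)
print(round(snellen_to_logmar(20, 40), 3))  # 0.301
print(CATEGORICAL_LOGMAR["HM"])             # 3.0
```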
Statistical Analysis The Kolmogorov-Smirnov, D'Agostino-Pearson, and Shapiro-Wilk tests for normality failed for all considered variables, and non-parametric testing was performed on the data. The Kruskal-Wallis test followed by Dunn's multiple comparison test was applied to test the association between visual acuity and cNV; this analysis was performed using GraphPad software. In addition, we evaluated the potential association between cNV severity and visual acuity using multiple regression, with logMAR as the dependent variable, adjusting for age, gender, keratoconus, herpetic keratitis, penetrating keratoplasty, trauma, and phacoemulsification with intraocular lens implantation. Sensitivity and inflammation type (blepharitis, hyperemia, or both) were excluded from the models because of the very large number of missing values. When included, however, they did not affect the significant variables. The validity of the final regression models (left and right eye) was assessed as follows. The assumption of constant error variance was checked graphically, by plotting Pearson residuals vs. fitted values, and formally, using the Cook-Weisberg test for heteroskedasticity. Since the Cook-Weisberg test was borderline significant, we used Huber/White robust SEs in the final models. High-leverage observations were identified by calculating the Pearson, standardized, and studentized residuals, Cook's D influence, and the hat diagonal matrix. We found only 15 high-leverage observations; excluding them produced no substantial changes. A multiple regression was also fit to evaluate the potential independent predictors of sensitivity, using the same criteria described above to set the final models (left and right eye). Statistical significance was defined as a two-sided p < 0.05 for all the analyses, which were performed using Stata.
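The analyses above were run in GraphPad and Stata; purely for illustration, the Python sketch below reproduces the same two steps — a Kruskal-Wallis comparison of logMAR across cNV extent groups, and a multiple regression with Huber/White heteroskedasticity-robust standard errors — on a small hypothetical data frame. The variable names and values are invented, not the study data.

```python
import pandas as pd
import statsmodels.api as sm
from scipy.stats import kruskal

# Hypothetical data: one row per eye, logMAR plus covariates.
df = pd.DataFrame({
    "logmar":    [0.3, 0.8, 1.2, 0.1, 1.5, 0.6, 1.1, 0.2],
    "quadrants": [1,   2,   4,   1,   3,   2,   4,   3],
    "age":       [55,  70,  68,  42,  75,  60,  71,  50],
    "male":      [1,   0,   1,   1,   0,   0,   1,   0],
})

# Non-parametric comparison of logMAR across cNV extent groups.
groups = [g["logmar"].values for _, g in df.groupby("quadrants")]
print(kruskal(*groups))

# Multiple regression of logMAR on severe cNV (3-4 quadrants) and
# covariates, with Huber/White robust SEs (cov_type="HC1"), mirroring
# the approach taken after the borderline Cook-Weisberg test.
df["severe_cnv"] = (df["quadrants"] >= 3).astype(int)
X = sm.add_constant(df[["severe_cnv", "age", "male"]])
print(sm.OLS(df["logmar"], X).fit(cov_type="HC1").summary())
```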
Etiology of Corneal Neovascularization The etiology of cNV was most frequently non-infectious in both the monolateral (48.9%) and bilateral (76.8%) cases (Figure 1A). The etiology of bilateral cNV was non-infectious in 76.8% of the cases, infectious in 3.8%, and undetermined in 19.4%. Viral keratitis was the most prevalent infectious cause (69.3%), followed by bacterial (23.1%) and amebic (7.7%) keratitis. Among non-infectious etiologies, genetic/congenital dystrophies were the most prevalent (30.5%), followed by dry eye, isolated or associated with Sjögren's syndrome, rosacea, or rheumatoid arthritis (15.6%). Long-term contact lens wear accounted for 13.7% of the cases, ocular surgeries for 8.8%, allergic conjunctivitis for 7.3%, and bilateral chemical burns for 5.7%, while ocular cicatricial pemphigoid and Stevens-Johnson syndrome accounted for 5 and 4.6% of the cases, respectively. Other less frequent causes of non-infectious cNV are listed in Figure 1C. Categorization of cNV for the most frequent diseases is provided in Supplementary Figure 1. Visual Acuity and Corneal Neovascularization Out of the 1,539 eyes in which visual acuity was measured, cNV extension quantification was available for 1,229 eyes, which were used for this analysis. Eyes affected with one quadrant of cNV were significantly more likely to have better BSCVA than eyes presenting with three or four quadrants (p < 0.001). Similarly, eyes with two quadrants had significantly better BSCVA than eyes presenting with three or four quadrants of cNV (p < 0.001) (Figure 2A). Specifically, the mean logMAR was 0.65 ± 0.83, 0.74 ± 0.87, and 1.24 ± 1.08 in the one-, two-, and three-or-four-quadrant groups, respectively. Moreover, severe cNV (three or four quadrants) was a significant predictor of low visual acuity (p < 0.001) (Table 2). In addition, the age of patients with cNV was negatively correlated with visual acuity (p < 0.001), while gender was not (Table 2). Finally, superficial, deep, or mixed cNV occurred in 856, 281, and 190 eyes, respectively. We observed that, regardless of the number of quadrants affected, most cNV was superficial (p < 0.05) (Table 3). In addition, eyes affected with superficial cNV presented significantly better BSCVA compared to eyes with deep and mixed cNV, suggesting that lower visual acuity was associated with both the extension and the depth of cNV (p < 0.005) (Figure 2B and Table 4). Corneal Anesthesia and BSCVA Out of 1,229 eyes where cNV was quantified, 253 eyes were also tested for corneal sensitivity and were used for this analysis. BSCVA (logMAR) was significantly lower (i.e., better vision) in eyes that presented normal corneal sensitivity (0.63 ± 0.68, N = 82) compared to those with reduced (1.08 ± 1.00, N = 100) or completely absent (1.17 ± 1.00, N = 71) sensitivity (p < 0.001) (Figure 2C). Moreover, we found that, in patients with cNV, lower visual acuity is a predictor of reduced corneal sensitivity (p < 0.05) (Supplementary Table 1). Corneal Neovascularization and Sensitivity Out of 305 eyes where corneal sensitivity was measured, cNV was quantified in 203 eyes, which were included in the analysis. Table 3 shows that loss of corneal sensitivity was associated with the extent of cNV (p < 0.05). In other words, the majority of the patients affected with one-quadrant cNV presented normal sensitivity, while most of the patients with severe cNV (three or four quadrants) displayed anesthesia (p < 0.05). Therefore, cNV extent was a significant predictor of lower corneal sensitivity (p < 0.05) (Supplementary Table 1). DISCUSSION Corneal neovascularization is the second leading cause of blindness worldwide (5) and an area of significant medical need. Current treatments include topical application of corticosteroids or non-steroidal anti-inflammatory agents, which can be associated with serious side effects such as ocular hypertension, posterior cataract induction, delayed healing, or corneal melting (4). Recently, vascular endothelial growth factor (VEGF) inhibitors such as ranibizumab (Lucentis; Genentech) and bevacizumab (Avastin; Genentech) have been proposed as treatments for cNV, with encouraging results (15, 16). Inhibition of cNV has been considered clinically beneficial, although only weak evidence supports the efficacy of anti-cNV treatments in ameliorating vision (17). In any case, the real impact of cNV, and of its extension, on visual acuity remains largely unknown. For this reason, we aimed to quantify the impact of cNV on visual acuity in a large population. We show that BSCVA is significantly and negatively affected by cNV in our cohort of 1,406 patients. To the best of our knowledge, only one prior report has assessed the impact of cNV on visual acuity and found it reduced in 12% of patients affected with cNV (6); that figure is very similar to the 10.4% prevalence found in our patients. However, the limited number of patients considered in that study (35 subjects out of 845), the different population (general ophthalmology as opposed to a cornea clinic), and the absence of cNV grading make it difficult to compare the two studies. We also found that BSCVA is progressively reduced with increasing extension of cNV.
We would like to clarify that this does not mean that reducing the extent of cNV (therapeutically) will necessarily result in the improvement of BSCVA. In fact, vessel leakage of calcium or blood could irreversibly impair corneal clarity and, hence, vision. Our study, however, suggests the importance of modulating cNV in its earlier stages, as its progression to involve the entire cornea is indeed associated with worse vision. Additionally, we observed a significant correlation between cNV and loss of corneal sensitivity, in line with prior reports in animal models (18). The link between corneal nerve loss/dysfunction and cNV is not clear. It is known that corneal infections (19) and dry eye (20) are associated with altered nerve morphology and corneal sensitivity. One could hypothesize that neovessels induce corneal edema and, hence, cause peripheral nerve degeneration. Additionally, it is possible that normal nerves secrete anti-angiogenic factors such as pigment epithelium-derived factor (18), which are lost after nerve degeneration. Acute damage of corneal nerves and the subsequent release of substance P could also promote cNV (21). Finally, it is possible that massive leukocyte infiltration, as occurs during corneal inflammation, induces cNV and nerve disruption. Indeed, it has been reported that reduction of corneal nerve density is associated with increased leukocyte infiltration in the conditions commonly associated with cNV (22). Regardless of the mechanism(s) involved, this study suggests that corneas affected with cNV should be routinely checked for abnormal sensitivity to rule out concomitant neurotrophic disease. Finally, we found that eyes with corneal anesthesia had worse BSCVA compared with eyes with normal corneal sensitivity. This suggests that vision reduction in patients with cNV could have multiple causes beyond the obvious effect of the opacity induced by vessel growth into the cornea. In fact, impairment and/or loss of corneal sensitivity is associated with tear film alteration, punctate keratopathy, or ulcers, which can all reduce visual acuity. The finding that cNV patients exhibited altered nerve function should be taken into account, since anti-VEGF treatments, which have been proposed for cNV, are neurotoxic (23). Interestingly, VEGF neutralization with bevacizumab resulted in downregulation of nerve growth factor and delayed wound healing (24). In summary, the most prevalent etiology of cNV in our population was non-infectious corneal dystrophies/degenerations. This group included, not surprisingly, patients affected with chronic corneal edema or aniridia. The finding that patients with keratoconus were also affected could be associated with extensive contact lens use in this group. On the other hand, infectious keratitis was the second cause of monolateral cNV and the third cause of bilateral cNV. This is in line with prior literature, which confirms a lower prevalence of infectious keratitis in developed countries (25). Among infections, herpes keratitis was the most prevalent, which consolidates prior reports (7). Of note, the most common diagnoses among patients affected with the most severe cNV (three or four quadrants) were penetrating keratoplasty and trauma (16.7 and 15.3%, respectively), followed by keratoconus (12.2%) (Supplementary Material). We acknowledge that this study did not consider the central extension of cNV, as it was not possible to retrieve such information in this large cohort of patients.
However, this study shows that higher cNV density (and, specifically, deep and extensive cNV) is associated with lower vision, underlining the relevance of cNV-associated disorders as major drivers of visual impairment. DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. ETHICS STATEMENT The studies involving human participants were reviewed and approved by the Ethics Committee of the San Raffaele Scientific Institute. Written informed consent from the participants' legal guardian/next of kin was not required to participate in this study, in accordance with the national legislation and the institutional requirements.
The functional maturity of grafted human pluripotent stem cell-derived islets (hSC-islets) evaluated by the glycemic set point during the blood glucose normalizing process in diabetic mice Human pluripotent stem cell (hPSC)-derived pancreatic islets (hSC-islets) are good candidates for cell replacement therapy for patients with diabetes, as substitutes for deceased donor-derived islets, because hPSCs are pluripotent and have infinite proliferation potential. Grafted hSC-islets ameliorate hyperglycemia in diabetic mice; however, several weeks are needed to normalize the hyperglycemia. These data suggest that hSC-islets require maturation, but their maturation process in vivo is not yet fully understood. In this study, we utilized two kinds of streptozotocin (STZ)-induced diabetes model mice, created by changing the administration timing, in order to examine the time course of maturation of hSC-islets and the effects of hyperglycemia on their maturation. We found no hyperglycemia in immune-compromised mice when hSC-islets had been transplanted under their kidney capsules in advance and STZ was administered 4 weeks after transplantation. Of note, the blood glucose levels of those mice were stably maintained under 100 mg/dl 10 weeks after transplantation; this is lower than the mouse glycemic set point (120-150 mg/dl), suggesting that hSC-islets control blood glucose levels to the human glycemic set point. We confirmed that gene expression of maturation markers of pancreatic beta cells tended to be upregulated during the 4 weeks after transplantation. Periodical histological analysis revealed that revascularization was observed as early as 1 week after transplantation, but reinnervation of the grafted hSC-islets was not detected at all, even 15 weeks after transplantation. In conclusion, our hSC-islets need at least 4 weeks to mature, and the human glycemic set point is a good index for evaluating the ultimate maturity of hSC-islets in vivo. Introduction In collaboration with glucagon-secreting pancreatic alpha cells and somatostatin-secreting delta cells, pancreatic beta cells play a central role in regulating serum glucose levels within a normal range by adjusting the amount of insulin secreted in response to glucose levels [1-4]. Currently, transplantation of pancreatic islets is useful as a therapy for diabetes [5-8]; however, a shortage of donors remains [9-11]. Human pluripotent stem cell (hPSC)-derived pancreatic islets are an attractive source for cell replacement therapy for diabetes.
Because undifferentiated hPSCs possess the ability to proliferate infinitely, hPSC-derived islets (hSC-islets) could potentially overcome the shortage of pancreatic islet donors. Over the past three decades, a growing body of methods for differentiating hPSCs into pancreatic beta-like cells has been developed [12-20], and it was recently reported that functional hSC-islets successfully ameliorated blood glucose levels in diabetic mice after transplantation [21-24]. However, in vitro differentiated hSC-islets are still more immature than normal human islets in both function and marker expression [25, 26]. The gene expression pattern and function of hSC-islets were shown to resemble those of native human islets after transplantation into rodents [27], suggesting that hSC-islets mature in vivo [26, 27]. For clinical application, protocols generating homogeneous populations to prevent teratoma formation have also been developed, using cell sorting or chemical treatment to remove off-target cells [28]. Although grafting of hSC-islets ameliorates diabetes in mice, several weeks are needed for the normalization of blood glucose levels after transplantation [22-24]. This blood glucose-normalizing process driven by hSC-islets is not yet fully understood. Previously, we confirmed that blood glucose levels were finally maintained below 100 mg/dl in diabetic mice after hSC-islets were transplanted under the kidney capsule [23]. These levels drew our attention to the glycemic set point, because the human glycemic set point is around 90 mg/dl, whereas the mouse glycemic set point is higher [29]. For example, the glycemic set point of the nude mouse is around 120 mg/dl, and that of the C57BL/6 mouse around 150 mg/dl [29]. Therefore, blood glucose levels below 100 mg/dl implied that these levels were controlled by a human glycemic set point, not a mouse one. Alejandro's group showed that pancreatic islets act as systemic glucostats and hold the instructions for setting the glycemic set point [29]. We assumed that, if transplanted hSC-islets mature functionally in mice, the glycemic set point may shift from mouse to human levels. We also assumed that, if both human and mouse beta cells exist together and behave independently in mice, the human beta cells will secrete more insulin to keep blood glucose levels near the human glycemic set point, which is lower than that of the mouse. The purpose of this study is to analyze the blood glucose-normalizing process induced by grafted hSC-islets and to elucidate the timing of maturation and the relationship between maturation and the glycemic set point. In this research, we injected STZ into mice at different time points after hSC-islet transplantation. This approach enabled us to examine the time window for hSC-islets to compensate for the function of damaged mouse beta cells. Here we report that grafted hSC-islets can control mouse blood glucose levels within 4 weeks after transplantation. Revascularization was observed as early as one week after transplantation. Furthermore, we show that the glycemic set point is a good index for grafted hSC-islet function in vivo.
Human iPS cell culture and differentiation Undifferentiated human iPS cells were detached with CTK solution, rinsed with D-PBS several times, and then dissociated into single cells using Accumax (Innovative Cell Technologies, San Diego, USA). Dissociated cells were seeded at a density of 1 × 10⁶ cells/ml in a spinner-type reactor (Biott) containing 30 ml of mTeSR1 (Veritas) with 10 μM ROCK inhibitor (Y-27632; Cayman Chemical) at a rotation rate of 45 rpm. Spheroids formed by cell aggregation during a 2-day culture and were then cultured in hiPS medium without FGF2 for 1 day before starting differentiation. In this research, we differentiated 6 batches of hSC-islets (batches A-F), each of which was transplanted into mice; the characterization of these hSC-islets is shown in Fig. 1A, B. Immunostaining and immunohistochemistry Differentiated islet-like cells from human iPS cells (hSC-islets) were fixed in 4% paraformaldehyde at room temperature (RT) for 30 min. Grafted hSC-islets were harvested from mice and fixed in 4% paraformaldehyde at 4 °C overnight. These cells were washed with phosphate-buffered saline (PBS) and gradually dehydrated using 70% ethanol and 99.9% ethanol. The 99.9% ethanol was then replaced with xylene, and the cells were embedded in paraffin and cut into 3-μm sections. These sections were mounted on slides, immersed in xylene, then in ethanol, and finally rehydrated with tap water. Hematoxylin and eosin staining was conducted according to the standard protocol. For immunostaining, sections were washed with PBS and incubated with blocking buffer (3% BSA in PBS) at RT for 1 h. Next, sections were incubated with primary antibodies (diluted in PBS containing 1.5% goat serum) overnight at 4 °C in a humidified chamber. The following primary antibodies were used: rat anti-C-peptide (1:200; DSHB, University of Iowa), rabbit anti-proglucagon (1:300; Cell Signaling Technology), goat anti-PDX1/IPF-1 (1:100; R&D), and rabbit anti-Synapsin1/2 (1:300; Synaptic Systems, Germany). Sections were washed with PBS and then incubated with a fluorescence-conjugated secondary antibody (diluted in PBS containing 1.5% goat serum) for 120 min at RT. The following secondary antibodies were used: Alexa Fluor 488-conjugated donkey anti-goat IgG (1:400; Invitrogen), Alexa Fluor 594-conjugated goat anti-rat IgG (1:400; Invitrogen), and Alexa Fluor 488-conjugated goat anti-rabbit IgG (1:400; Invitrogen). These sections were rinsed with PBS, and then phase-contrast or fluorescent images were taken with a charge-coupled device (CCD) camera (DP71; Olympus). The positive rates in Fig. 1 were calculated with Metamorph image analysis software (Molecular Devices, CA, USA), and doubly positive cells were counted by visual inspection. Quantitative reverse transcription polymerase chain reaction (qRT-PCR) Extraction and purification of total RNA were performed using RNAiso Plus (Takara, Japan). cDNA was then synthesized with random nonamers and oligo(dT)18 primers using PrimeScript II reverse transcriptase (Takara). qPCR was conducted using GoTaq qPCR mix and run on a CFX96 Touch Deep Well system (Bio-Rad, Hercules, CA, USA). Relative quantification was done by the standard curve method, and the expression levels of target genes were normalized against that of the reference gene, ornithine decarboxylase antizyme (OAZ1).
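For illustration, the standard curve method mentioned above can be sketched as follows: Cq values from a serial dilution are regressed against log10 quantity, sample quantities are interpolated from the resulting line, and the target gene quantity is divided by that of the reference gene (OAZ1). All numbers below are hypothetical and only demonstrate the arithmetic, not the study's measurements.

```python
import numpy as np

def quantity_from_cq(cq_standards, log10_qty_standards, cq_sample):
    """Interpolate a sample's starting quantity from a standard curve:
    fit Cq = slope * log10(quantity) + intercept, then invert."""
    slope, intercept = np.polyfit(log10_qty_standards, cq_standards, 1)
    return 10 ** ((cq_sample - intercept) / slope)

# Hypothetical 10-fold dilution series (slope ~ -3.3 per log10 step).
log10_qty     = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
cq_std_target = np.array([33.1, 29.8, 26.5, 23.2, 19.9])
cq_std_oaz1   = np.array([32.8, 29.5, 26.2, 22.9, 19.6])

target = quantity_from_cq(cq_std_target, log10_qty, cq_sample=25.0)
oaz1   = quantity_from_cq(cq_std_oaz1, log10_qty, cq_sample=24.0)
print(target / oaz1)  # relative expression, normalized to OAZ1
```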
Animal studies All animal experiments were approved by the Animal Care and Use Committee of the National Center for Global Health and Medicine and conducted in accordance with institutional procedures, national guidelines, and the relevant national laws on the protection of animals. Eight-week-old male NOD/SCID mice were purchased from Japan Clea and kept on a 12-h light/dark cycle with ad libitum access to drinking water and a standard irradiated diet. These mice were housed for 1 week before implantation and randomly assigned for transplantation with hSC-islets. Mice were anesthetized with a mixture of medetomidine (Nippon Zenyaku Kogyo, Fukushima, Japan), midazolam (Sando, Tokyo, Japan), and butorphanol (Meiji Seika Pharma, Tokyo, Japan), and 6 × 10⁶ differentiated cells were transplanted under the kidney capsule before or after STZ injection. [Fig. 6. Gene expression patterns in grafted hSC-islets at 1, 2, 3, 4, or 8 weeks after transplantation. (A) Scheme of experimental design. Grafted hSC-islets were extirpated from 2 mice each at 1, 2, 3, 4, or 8 weeks, respectively. Gene expression was examined by qRT-PCR. (B) Gene expression in grafted hSC-islets at 1, 2, 3, or 4 weeks. (C) Gene expression in grafted hSC-islets at 4 or 8 weeks.] Mice were anesthetized with inhalable isoflurane (Pfizer, NY, USA). To create diabetic model mice, mice (Japan Clea) were administered one shot of 130 mg/kg of streptozotocin (STZ; Sigma) intravenously. Transplanted kidneys were extirpated from mice around 90 days after implantation, and histochemical analysis was performed. All mice were anesthetized with inhalable sevoflurane (Maruishi Pharmaceutical, Osaka, Japan) and euthanized by cervical dislocation at the time of sacrifice. Blood was collected via the tail vein. Non-fasting blood glucose levels were examined using a glucose test kit (Glutest Neo Sensor, Sanwa Chemical). No mice reached the experimental endpoint. Oral glucose tolerance test Seven or eleven weeks after transplantation, mice were fasted for 4 h, and blood glucose was measured 15, 30, 45, 60, and 120 min after the oral administration of a glucose solution (2.0 g/kg). We also examined STZ-induced DM control NOD/SCID mice. Measurement of serum hormones Blood samples were collected from the tail vein in heparin-coated capillaries. Plasma was separated after centrifugation (10 min, 4 °C, 800 g) and kept frozen at −80 °C until measurement. Human or mouse C-peptide and glucagon concentrations in mouse plasma were determined using human ultrasensitive C-peptide ELISA kits (Mercodia), mouse C-peptide ELISA kits (Takara), and glucagon ELISA kits (Mercodia). Glucose-stimulated C-peptide secretion assay Differentiated hSC-islets were pre-cultured in DMEM containing 2 mM glucose, 0.1% BSA, and 10 mM HEPES at 37 °C for 1 h. These cells were then incubated in the same medium at 37 °C for 1 h, and the supernatant was harvested. Next, they were incubated in DMEM containing 20 mM glucose, 0.1% BSA, and 10 mM HEPES at 37 °C for 1 h, and the supernatant was retrieved. After the assay, the cells were enzymatically dissociated and counted. The concentration of C-peptide in the supernatants was measured using a human C-peptide ELISA kit and adjusted per 2 × 10⁶ cells.
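As a side note, the readout of the secretion assay above is typically normalized to a reference cell number and summarized as a fold change between low and high glucose; the paper reports concentrations adjusted per 2 × 10⁶ cells. The sketch below illustrates that arithmetic with invented ELISA values; the "stimulation index" shown is a conventional summary, not a metric reported in the paper.

```python
REF_CELLS = 2e6  # reference cell number used for normalization

def per_reference_cells(conc_pm: float, n_cells: float) -> float:
    """Scale a measured C-peptide concentration to 2 x 10^6 cells."""
    return conc_pm * (REF_CELLS / n_cells)

# Hypothetical ELISA readouts from one well (1.8 x 10^6 cells counted).
low_glucose  = per_reference_cells(conc_pm=1100.0, n_cells=1.8e6)  # 2 mM
high_glucose = per_reference_cells(conc_pm=1250.0, n_cells=1.8e6)  # 20 mM

# Fold change from low to high glucose; a value near 1 reflects the
# weak glucose responsiveness seen for the in vitro hSC-islets.
print(high_glucose / low_glucose)  # ~1.14
```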
hSC-islets substitute for mouse beta cells hSC-islets were differentiated using a 6-stage differentiation protocol and characterized before transplantation (Fig. 1A). We immunohistochemically confirmed that insulin C-peptide-, PDX1-, or GCG-positive cells existed in the hSC-islets (Fig. 1B). We also observed NKX6.1 and C-peptide double-positive cells (Fig. 1B; the rate of double-positive cells was 18%). Next, to examine their ability to secrete C-peptide, we carried out glucose-stimulated C-peptide secretion assays. We detected a substantial amount of C-peptide secretion (over 1000 pM), although the C-peptide concentration did not differ much between 2 mM and 20 mM glucose stimulation (Fig. 1C). Our previous report showed that it takes more than 6 weeks to normalize the blood glucose levels of diabetic mice after transplanting hSC-islets under the kidney capsule [23]. At that time, we noticed that normalization was delayed even though the hSC-islets produced a large amount of insulin. To investigate these phenomena, we monitored non-fasting blood glucose and serum levels of both human and mouse C-peptide periodically after transplanting hSC-islets into STZ-treated diabetic mice (DM) or non-treated mice (non-DM). As shown in Fig. 1D, blood glucose levels of DM mice started to decrease from 4 weeks after transplantation and reached the human glycemic set point (below 100 mg/dl) 10 weeks later. Their serum human C-peptide levels increased from 150 pM at 2 weeks after transplantation to 500 pM after 8 weeks, while serum mouse C-peptide levels were below 100 pM after 2 weeks and became very low thereafter, near the detection limit of the ELISA (Fig. 1E). These results indicated that STZ treatment destroyed most of the mouse beta cells and that mouse insulin was minimally produced. In the non-DM mice, blood glucose levels gradually decreased from 150 mg/dl to 100 mg/dl (the human glycemic set point) within 10 weeks after transplantation (Fig. 1F). Their serum human C-peptide levels tended to increase over time (Fig. 1G). Surprisingly, their serum mouse C-peptide levels continued to decrease and were barely detected after 12 weeks, without any STZ injection (Fig. 1G). hSC-islets function well enough to control blood glucose levels 4 weeks after transplantation Next, we ran further experiments to examine the rescue potential of human beta cells when human and mouse beta cells coexist. To destroy the mouse beta cells, we chose STZ, because human beta cells are minimally damaged by STZ administration. STZ was administered intraperitoneally 2, 4, 6, or 8 weeks after transplanting hSC-islets under the kidney capsules of NOD/SCID mice (Fig. 2A). As shown in Fig. 2B-E, blood glucose levels rapidly rose to more than 300 mg/dl in all non-transplanted mice (non-TP). In contrast, blood glucose levels of hSC-islet-transplanted mice (TP) did not increase, except in the group in which STZ was administered 2 weeks after transplantation (Fig. 2B-E). Although a transient elevation of blood glucose levels was observed in those mice (Fig. 2B), their blood glucose levels fell below 100 mg/dl 10 weeks later, regardless of the timing of STZ administration (Fig. 2B-E). Notably, blood glucose levels immediately rose in all mice after their transplanted kidneys were resected (nephrectomy: NT), indicating that the transplanted hSC-islets controlled blood glucose levels. Increase of human C-peptide and improvement of OGTT regardless of STZ-injection timing Next, we examined the serum levels of human C-peptide, glucagon, and mouse C-peptide in these mice. Serum human C-peptide levels were approximately 200 pM at 2 weeks after transplantation and increased thereafter, regardless of the timing of STZ administration (Fig. 3A).
We confirmed that serum human C-peptide was not detected after removal of the grafted hSC-islets by nephrectomy (Fig. 3A). Serum glucagon levels rose incrementally over time, albeit with individual differences, but they also decreased after nephrectomy, similarly to human C-peptide (Fig. 3B). Serum mouse C-peptide levels decreased in inverse proportion to human C-peptide levels after transplantation in cases #17 and #18 (Fig. 3C). Although the serum mouse C-peptide levels of most mice were undetectable after STZ administration, they increased once again after nephrectomy (Fig. 3C), indicating that functional mouse beta cells were present. To further investigate the function of grafted hSC-islets in vivo, we performed oral glucose tolerance tests (OGTT) 11 weeks after transplantation. Non-transplanted DM mice maintained high blood glucose levels until 60 min after glucose administration (Fig. 3D). In contrast, blood glucose levels peaked below 200 mg/dl at 15 min in almost all transplanted mice and then rapidly returned to their pre-glucose challenge levels, irrespective of the timing of STZ administration (Fig. 3E). These data suggest that the grafted hSC-islets functioned robustly in vivo. We did not observe any differences in their function caused by the timing of STZ injection. Four weeks are necessary for hSC-islets to function We found that the function of grafted hSC-islets improved between 2 and 4 weeks after implantation. To narrow the time window between 2 and 4 weeks, we injected STZ into mice at 2, 3, or 4 weeks after grafting (Fig. 4A). Blood glucose levels of the non-grafted mice rapidly rose with the STZ treatment and then remained high (Fig. 4B-D); two of these mice died around day 40 (Fig. 4D). In contrast, when STZ was injected into mice 2 weeks after grafting, their blood glucose increased transiently to between 200 and 300 mg/dl but then decreased below the human glycemic set point (Fig. 4B). When STZ was injected into mice 3 weeks after transplantation, their blood glucose levels increased to around 200 mg/dl; these levels were lower than those of the mice administered STZ 2 weeks after transplantation. Of note, these mice reached the human glycemic set point earlier than those administered STZ 2 weeks after transplantation (Fig. 4C). We observed again that blood glucose levels were hardly affected by the administration of STZ 4 weeks after transplantation and likewise reached the human glycemic set point (Fig. 4D). Blood glucose levels in all transplanted mice rose rapidly to diabetic levels after nephrectomy. These results indicate that it takes at least 4 weeks for grafted hSC-islets to control blood glucose levels, and more time to reach the human glycemic set point. The function of hSC-islets improves until 11 weeks after transplantation Again, we measured human and mouse C-peptide and glucagon levels in mouse serum. Human C-peptide levels were approximately 150-200 pM 2 weeks after grafting and then increased above 300 pM 4 weeks later (Fig. 5A). After 6 weeks, human C-peptide levels peaked or reached a constant level, depending on the individual mouse (Fig. 5A). Serum glucagon levels peaked at 6 weeks and then declined by 13 weeks. Moreover, reduced serum glucagon levels were observed after nephrectomy (Fig. 5B). Although serum mouse C-peptide levels were over 150 pM at 2 weeks, just before STZ injection, STZ injection strongly reduced these levels (Fig. 5C). However, the mouse C-peptide levels increased again after nephrectomy (Fig. 5C).
To further test the function of grafted hSC-islets in vivo, we conducted OGTT at 7 and 11 weeks after grafting (Fig. 5D-G). In non-transplanted DM mice, blood glucose levels remained over 450 mg/dl for 60 min and did not decrease below 300 mg/dl after the glucose load (Fig. 5D, F). In contrast, the peak of blood glucose levels did not exceed 300 mg/dl during the first 30 min, and blood glucose levels returned to 100 mg/dl 120 min after the glucose challenge in transplanted mice when OGTT was performed 7 weeks after implantation (Fig. 5E). Furthermore, when OGTT was performed 11 weeks after grafting, the peak of blood glucose levels did not exceed 200 mg/dl during the first 30 min, and blood glucose levels fell below 100 mg/dl 60 min after the glucose challenge (Fig. 5G). These data suggest that the grafted hSC-islets functioned in vivo irrespective of the timing of STZ administration, and that their function further improved from 7 to 11 weeks. Gene expression of maturation markers in grafted hSC-islets Because the results of OGTT clearly demonstrated that grafted hSC-islets matured in vivo, we focused on the surrounding environment at the grafted site. We extirpated grafts at 1, 2, 3, 4, or 8 weeks after transplantation and tested the gene expression patterns of the grafted hSC-islets by RT-qPCR (Fig. 6A). We examined maturation and functional markers for beta cells, such as UCN3, SIX2, PCSK1, PCSK2, SLC30A8, ABCC8, and KCNJ11, during the first 4 weeks (Fig. 6B). The maturation markers MAFA, UCN3, and SIX2 tended to increase from 4 to 8 weeks (Fig. 6C). Revascularization occurred within 1 week in grafted hSC-islets We periodically examined the revascularization of grafted hSC-islets during the first 4 weeks after grafting (Fig. 7A-D). Capillary vessels were already present in the grafted area 1 week after transplantation, and many dilated capillaries had also appeared at 2, 3, and 4 weeks. We also noted that vascularization did not occur in areas distant from the transplantation site in the mouse kidney capsule, even 2 weeks after transplantation (data not shown). To further analyze the state of pancreatic beta and alpha cells in grafted hSC-islets during this period, we performed immunohistochemistry using C-peptide and glucagon antibodies (Fig. 7E-L). Notably, there were many C-peptide and glucagon double-positive cells as well as C-peptide single-positive cells at 1 week (Fig. 7E, arrows). There were 66 doubly positive cells in this section (Fig. 7I). From 2 weeks onward, C-peptide and glucagon doubly positive cells decreased (24 cells, as shown in Fig. 7J); in contrast, C-peptide or glucagon singly positive cells increased (Fig. 7F-H, J-L). We could not find any doubly positive cells by 3 weeks after transplantation (Fig. 7K, L). No reinnervation was detected in grafted hSC-islets The pancreatic islet is innervated by autonomic axons, and neural input influences its function [30-32]. Innervation of grafted mouse islets was initiated by 6 weeks and then increased over time [33], suggesting that the function of grafted hSC-islets might respond to reinnervation. Using the neural marker synapsin, we examined by immunohistochemistry whether innervation occurred in grafted hSC-islets (Fig. 8). We observed many C-peptide-positive beta cells in the grafted hSC-islets, but we could not detect any synapsin-positive neural fibers near the beta cells (Fig. 8A-C).
However, synapsin-positive cells were detected in the renal parenchyma and around blood vessels within it, indicating that the antibody worked properly (Fig. 8D and E). These data suggest that innervation of grafted hSC-islets did not contribute to the enhancement of their function up to 15 weeks. Discussion In this study, we examined the blood glucose-regulating function of grafted hSC-islets using diabetic and non-diabetic mouse models, varying the timing of STZ injection. We also focused on the glycemic set points of both human and mouse and the relationship between blood glucose levels and insulin secretion from grafted hSC-islets. We demonstrated that grafted hSC-islets were able to control STZ-mediated hyperglycemia within 4 weeks after transplantation and finally reached the human glycemic set point within 10 weeks. We confirmed the elevation of beta cell maturation markers at the mRNA level after 4 weeks and the improved function of the beta cells over time by OGTT. Although revascularization was observed as early as one week after transplantation, no reinnervation was observed in the grafted hSC-islets, even after 15 weeks. In the present study, we adopted two kinds of streptozotocin (STZ)-induced diabetes mouse models to examine the time course of maturation and the effect of hyperglycemia on maturation. In the pre-existing diabetes model, hyperglycemia was induced by STZ injection before transplantation. In the other model, hSC-islets were implanted in non-diabetic mice in advance, and STZ was administered later. This latter model utilizes the specific property of STZ of selectively destroying mouse pancreatic beta cells. Grafted hSC-islets suffer minimal damage because human beta cells barely express Glut2, which transports STZ into pancreatic beta cells [34-37]. This advantage allowed us to choose the timing of STZ administration for examining the time course of maturation. Although there are some reports in which STZ was injected after cell transplantation [12, 38], to our knowledge, this is the first study to examine the timing of STZ administration after cell transplantation at 1-2 week intervals. Once diabetes was provoked by STZ administration, it took more than 10 weeks to normalize the blood glucose level after hSC-islet transplantation (Fig. 1). However, the grafted hSC-islets prevented STZ-mediated hyperglycemia within 4 weeks after transplantation, even when the standard diabetes-inducing dosage of STZ was administered (Figs. 2 and 4). This time point, 4 weeks, corresponded to the time at which blood glucose levels began to decline when hSC-islets were transplanted into diabetic mice. These results indicate that the grafted hSC-islets had acquired the ability to counteract STZ-induced hyperglycemia by secreting enough insulin. Even though the transplanted hSC-islets suppressed the elevation of blood glucose after STZ injection, it took several more weeks to stabilize blood glucose levels below 100 mg/dl, which we regard as the human glycemic set point. In other words, the hSC-islets had fully matured and functioned like native human islets at this point. When hSC-islets were transplanted into non-DM mice, serum human C-peptide levels increased over time, while serum mouse C-peptide levels decreased and were barely detectable at 12 weeks, as shown in Fig. 1G.
These results suggest that mouse beta cells stop producing insulin, without being damaged, when human beta cells produce sufficient insulin instead. Grafted hSC-islets continued to secrete insulin until the human glycemic set point was reached, but native mouse beta cells did not need to secrete insulin below the mouse glycemic set point, which is higher than the human one. Our data shown in Fig. 3C and 5C, as well as Fig. 1G, clearly demonstrate that this is the case, and another study also supports this observation [29]. In contrast, when hSC-islets were transplanted into diabetic mice, it took longer to reach the human glycemic set point than when they were transplanted into non-diabetic mice. One possible reason is that the glucotoxicity of the hyperglycemia may have caused some damage to the grafted hSC-islets. Judging from the human C-peptide levels in mouse serum 2 weeks after transplantation, glucotoxicity may not be the main cause at the initial stage, if at all, because human C-peptide levels were almost the same in DM mice and non-DM mice. Another possible factor is the mode of insulin secretion, because pulsatile secretion is reported to enhance insulin action and to regulate blood glucose levels more effectively than constant insulin secretion [39, 40]. Also, hub/leader cells are suggested to be necessary for communicating with and orchestrating fellow beta cells for pulsatile insulin secretion [41-43]. Hub/leader cells might be damaged by the pro-inflammatory cytokines associated with diabetes, because hub/leader cells are targeted by a diabetic milieu [41]. Generation of hub/leader cells in grafted hSC-islets may also be affected by the diabetic state. Further research is needed for a better understanding of hub/leader cells. In terms of the maturation of grafted hSC-islets, environmental factors in the tissue surrounding the grafted site, such as revascularization and reinnervation, are thought to be important. A pancreatic islet is highly vascularized, and paracrine signals and extracellular matrix (ECM) stemming from vessels are important for pancreatic beta cell function [44-47]. Although revascularization of grafted hSC-islets or endocrine progenitors has been observed in some studies [22, 23, 48], a chronological analysis of revascularization in grafted hSC-islets had not been performed. Therefore, it is important to show that revascularization occurs in grafted hSC-islets as in native islets, in which revascularization is initiated within 1 week after transplantation [49-54]. In fact, we found revascularization under the subrenal capsules of mice as early as 1 week after grafting, which is consistent with previous reports [49-54]. Interestingly, maturation of grafted hSC-islets at the cellular level was observed by chronological examination, because immunohistochemical analyses revealed the transition from polyhormonal cells to monohormonal cells, as shown in Fig. 7. Moreover, the trend of increasing gene expression levels of beta cell maturation markers suggested an ongoing maturation process during the first 4 weeks. Our qPCR data also suggested the upregulation of genes related to insulin granule processing/maturation, such as PCSK1 and SLC30A8. Therefore, improvement of grafted hSC-islet function may be accompanied by insulin granule maturation.
As for innervation, mouse pancreatic endocrine cells are innervated by autonomic nerves in vivo [30-32], and C57BL/6 mouse pancreatic islets are influenced directly by parasympathetic innervation [55]. Anatomical studies indicate that human pancreatic endocrine cells are also innervated by autonomic nerves [30]. However, human pancreatic endocrine cells are not highly innervated by autonomic axons; this is in marked contrast to mouse pancreatic endocrine cells [30]. When mouse pancreatic islets were transplanted into mice, reinnervation was reported 6 weeks after grafting [33], but, to our knowledge, reinnervation of grafted hSC-islets had not yet been examined. Since reinnervation occurs later than revascularization, we searched for reinnervation until 15 weeks after grafting. However, we did not detect any innervation around the grafted hSC-islets. This might reflect the innervation pattern of human islets; native human islets grafted into eyes were not innervated [55]. Furthermore, the blood vessels in the grafted hSC-islet area were not invaded by nerve fibers, presumably because these blood vessels originated from the mouse. Taken together, we conclude that the function of grafted hSC-islets was not affected by autonomic nerves, unlike the situation in the normal mouse. The limitation of this study was the small sample size of each group in the animal experiments. We used two mice for each group, and we presented individual mouse data, which were consistent. Indeed, we performed the experiments with STZ injection 2 and 4 weeks after transplantation twice and obtained reproducible results. Although the limited sample size precluded statistical analysis, we observed that the values for each mouse showed the same trend within each group. Despite this limitation, our findings have important implications for hSC-islet transplantation therapy. Confirming whether grafted hSC-islets can set blood glucose to the human glycemic set point in mice provides a better evaluation of the function of the hSC-islets, the differentiation protocol, and the hPSC line used to generate them. Although reduction of blood glucose levels by hSC-islet transplantation in diabetic mice has been reported, few studies have examined whether blood glucose levels are ameliorated to the human glycemic set point. Controlling blood glucose levels to the human glycemic set point reflects full maturity of the grafted hSC-islets. Moreover, we found that blood glucose levels were brought to the human glycemic set point by grafted hSC-islets regardless of innervation. This observation implies that transplantation sites and methods (e.g., naked grafts or devices) can be selected without consideration of reinnervation after transplantation. In summary, the human glycemic set point is a good index for the maturity of grafted hSC-islets in vivo. Further research is needed to fully elucidate the maturation process and the relationships between the glycemic set point and grafted hSC-islet functions. We think that not only single-cell-level but also mass-level studies are essential for understanding the maturation of grafted hSC-islets. Fig. 3.
Periodical hormone levels in mouse serum and oral glucose tolerance test (OGTT). (A-C) Serum hormone levels at 2, 4, 6, 8, and 10 weeks after transplantation and after nephrectomy (NP) in the same mice described in Fig. 2. (A) Human C-peptide levels. (B) Glucagon levels. (C) Mouse C-peptide levels. (D-E) OGTT was performed using STZ-treated mice at 11 weeks after transplantation. Blood glucose levels were measured at 30, 60, 90, and 120 min after oral glucose administration. (D) Blood glucose levels of non-transplanted diabetic mice. (E) Blood glucose levels of hSC-islet-transplanted mice injected with STZ at different time points. OGTT: oral glucose tolerance test. Fig. 5. Periodical hormone levels in mouse serum and OGTT. (A-C) Serum hormone levels at 2, 4, 6, and 13 weeks and after nephrectomy in the same mice described in Fig. 4. (A) Human C-peptide levels. (B) Glucagon levels. (C) Mouse C-peptide levels. (D, E) OGTT was performed using STZ-treated mice at 7 weeks after transplantation. Blood glucose levels were measured as in Fig. 3. (D) Blood glucose levels of non-transplanted diabetic mice. (E) Blood glucose levels of hSC-islet-transplanted mice injected with STZ at different time points. (F, G) OGTT was performed at 11 weeks after transplantation. (F) Blood glucose levels of non-transplanted diabetic mice. (G) Blood glucose levels of hSC-islet-transplanted mice injected with STZ at different time points. Fig. 8. Reinnervation around the grafted area in the mouse kidney at 15 weeks after transplantation. (A) Hematoxylin and eosin staining. Renal parenchyma is shown above the dotted lines, and grafted hSC-islets are shown below them. (B, C) Immunostaining corresponding to (A). Red indicates insulin C-peptide expression; green shows synapsin expression. (C) High-magnification image of the rectangular area surrounded by a dotted line in (B). (D) Hematoxylin and eosin staining of the renal parenchyma. (E) Immunostaining corresponding to (D). Synapsin expression was detected around the blood vessels in the renal parenchyma. Scale bar = 200 μm (A, B), 100 μm (C, D, E).
Validating part of the social media infodemic listening conceptual framework using structural equation modelling

Summary

Background: The literature has identified various factors that promote or hinder people's intentions towards COVID-19 vaccination, and structural equation modelling (SEM) is a common approach to validate these associations. We propose a conceptual framework called social media infodemic listening (SoMeIL) for public health behaviours, hypothesizing that parameters retrieved from social media platforms can be used to infer people's intentions towards vaccination behaviours. This study preliminarily validates several components of the SoMeIL conceptual framework using SEM and Twitter data and examines the feasibility of using Twitter data in SEM research.

Methods: A total of 2420 English tweets from Toronto or Ottawa, Ontario, Canada, were collected from March 8 to June 30, 2021. Confirmatory factor analysis and SEM were applied to validate the SoMeIL conceptual framework in this cross-sectional study.

Findings: The results showed that sentiment scores, the log-numbers of favourites and retweets of a tweet, and the log-numbers of a user's favourites, followers, and public lists had significant direct associations with COVID-19 vaccination intention. The sentiment score of a tweet had the strongest relationship, whereas a user's number of followers had the weakest relationship with the intention of COVID-19 vaccine uptake.

Interpretation: The findings preliminarily validate several components of the SoMeIL conceptual framework by testing associations between self-reported COVID-19 vaccination intention and sentiment scores and the log-numbers of a tweet's favourites and retweets, as well as users' favourites, followers, and public lists. This study also demonstrates the feasibility of using Twitter data in SEM research. Importantly, this study preliminarily validates the use of these six components as online reaction behaviours in the SoMeIL framework to infer the self-reported COVID-19 vaccination intentions of Canadian Twitter users in two cities.

Funding: This study was supported by the 2023-24 Ontario Graduate Scholarship.

Introduction
Throughout the COVID-19 pandemic, social media have played a substantial role in shaping public perceptions and attitudes towards COVID-19 vaccination. 1,2 As a result of extreme interventions such as lockdowns to contain COVID-19 transmission before vaccines were available, people have increasingly connected to and relied on digital channels, such as social media, to receive information related to COVID-19. 8,9 The World Health Organization coined the term "social listening" to describe such activities and deployed its Early AI-Supported Response with Social Listening (EARS) platform during the pandemic. 10
Research in context

Evidence before this study
Typical structural equation modelling (SEM) research has used online surveys to examine people's intentions to accept COVID-19 vaccines during the pandemic. Various theories, such as the health belief model and the theory of planned behaviour, have been commonly adopted in previous research. However, existing theories have limitations given the complex information ecosystems in modern societies, especially social media. We propose social media infodemic listening (SoMeIL) as a conceptual framework for public health behaviours. However, validation of the proposed conceptual framework is needed. Since this framework is developed around social media, it requires the use of social media data. However, social media data, such as Twitter data, have rarely been used in SEM research, although they have been analysed in other studies that have investigated people's intentions or behaviours in relation to COVID-19 vaccination.

Added value of this study
The findings of this study indicate significant statistical relationships between COVID-19 vaccination intention and several components derived from Twitter, including a tweet's sentiment score, the numbers of a tweet's favourites and retweets, and the numbers of Twitter users' favourites, followers, and public lists. Therefore, this study provides a preliminary validation of the proposed SoMeIL conceptual framework. This study also demonstrates the feasibility of using Twitter data rather than survey data in SEM research. For public health contexts, some indicators on Twitter, such as the numbers of likes and shares, can be used to infer Canadian Twitter users' vaccination behaviours in real life. Therefore, the findings of this study can be adopted and expanded to forecast vaccination coverage for vaccine-preventable diseases. This approach can also help to tailor communication strategies and address specific issues based on Twitter users' discussions and online behaviours to effectively reach different groups. The SoMeIL conceptual framework can be extended to other areas, such as symptom reports or behavioural patterns, to aid in public health decision-making and resource allocation. By integrating social media platforms such as Twitter into pandemic preparedness, health organizations and government agencies can harness their potential as powerful tools to engage with the public, address health misinformation, and effectively respond to crises, ultimately helping to mitigate the impact of future pandemics. Similar to other pandemic surveillance platforms, the SoMeIL conceptual framework can provide real-time monitoring and surveillance, since social media data can complement traditional surveillance methods and help public health authorities respond quickly to potential outbreaks.

Implications of all the available evidence
Our findings preliminarily validate the proposed conceptual framework and show that social media data from Twitter can be used in SEM research. The best model demonstrates that the four variables derived from Twitter can be used as proxies linked to Canadian Twitter users' intentions to receive the COVID-19 vaccine. However, additional studies are needed to further confirm the proposed conceptual framework with different model specifications and social media data.

Models such as the health belief model and the theory of planned behaviour have been commonly adopted in previous research. 9,10 However, these models have limitations, and they do not truly reflect current complex information ecosystems. 13-17 SEM allows researchers to investigate complex relationships among variables, both observable and latent, and offers a practical and flexible tool for understanding complex structural associations. SEM represents an advanced statistical technique that goes beyond typical regression analysis; regression analysis is a special type of SEM that typically focuses on the relationship between one dependent observable variable and at least one independent observable variable. 12-17 Therefore, SEM is particularly well suited for testing multiple hypotheses about various associations within a complex phenomenon. 12-17 Online surveys have been primarily used in SEM research given their advantages, such as cost effectiveness, easy administration, global outreach, and efficiency. 18 However, survey research involves the limitations of nonresponse bias, recall bias, assumed honesty, respondents' misunderstanding or misinterpretation of questions, and others. 18 Although social media data have been used in numerous COVID-19 social listening studies, 7,10 it is rare to find SEM studies that use social media data. Although researchers can apply ML or AI techniques to analyse large amounts of social media data, such studies do not demonstrate statistical relationships in the same way as SEM.

Accordingly, a new conceptual framework, social media infodemic listening (SoMeIL) for public health behaviour, has been proposed to address multifaceted health infodemics on social media. 19 The SoMeIL conceptual framework theorizes that social media users' online reaction behaviours can indicate their intentions to receive COVID-19 vaccines, for example. 19
In other words, parameters derived from social media platforms, such as the number of likes and shares of a given post, can be used as proxies for social media users' self-reported intentions towards COVID-19 vaccination in real life. Given our interest in social media and its critical role in health infodemics and thus people's behaviours, it is important to directly use social media data to validate such associations. SEM has been commonly used to validate conceptual frameworks where latent variables involve survey data, 12-17 but social media data have not been directly and extensively used in SEM analysis. Although many studies have investigated how social media has influenced people's intentions towards COVID-19 vaccination, most previous studies have relied on questionnaires to collect data, 12-17 while few studies have requested that participants provide their social media posts. The use of social media data is conceptually similar to typical SEM research with online surveys, since social media data share the same benefits while mitigating some limitations. Ideally, researchers can retrieve as many relevant parameters and data as possible from application programming interfaces (APIs) on social media platforms. Thus, the sample size of social media data is generally not an issue. Social media data may have similar nonresponse biases due to inactive users or users not on a given social media platform, but this nonresponse bias can be addressed by using the numbers of likes, shares, or other parameters to infer the opinions of inactive social media users. In addition, when data from multiple social media platforms are collected for studies, it is possible to obtain a more comprehensive representation of the target audience. Since researchers do not need to design the questions, there is no need to assume respondents' honesty or worry about respondents misunderstanding or misinterpreting the questions. However, researchers need to actively screen posts as relevant or irrelevant after retrieving social media posts. Therefore, this study aims to validate partial components of the SoMeIL conceptual framework using SEM with Twitter data and demonstrates the feasibility of using Twitter data in SEM research.

Conceptual framework and hypotheses
The objective of this study was to preliminarily validate online reaction behaviours, intentions, and self-reported offline reaction behaviours in the proposed SoMeIL conceptual framework using SEM with Twitter data. Fig. 1 shows the proposed SEM derived from part of the SoMeIL conceptual framework and the corresponding hypotheses. Directly measured variables are represented by rectangles, and latent variables are represented by circles. The definitions of the key terms shown in Fig. 1 are presented below.

• Sentiment_score: a continuous value normalized between −1 (most negative) and +1 (most positive) by summing positive, negative, and neutral scores via the Valence Aware Dictionary and sEntiment Reasoner (VADER) for each tweet. 20
• Favourite_log: the natural-log transformation of favourite_count, which represents the number of times a tweet was liked by Twitter users. 21
• Retweet_log: the natural-log transformation of retweet_count, which represents the number of times a tweet has been retweeted (i.e., shared). 21
• Tweet engagement: a latent variable that represents engagement activities inferred at the tweet level.
• User_favourites_log: the natural-log transformation of user_favourites_count, which represents the number of tweets the account has liked. 22
• User_followers_log: the natural-log transformation of user_followers_count, which represents the number of followers the account currently has. 22
• User_friends_log: the natural-log transformation of user_friends_count, which is the number of users the account is following (i.e., "followings"). 22
• User_listed_log: the natural-log transformation of user_listed_count, which represents the number of public lists that the user is a member of. 22
• User engagement: a latent variable that represents engagement activities inferred at the user level.
• Vaccinated: a tweet that indicates a Canadian Twitter user's intention to receive the first dose of the COVID-19 vaccine.

The health information in this case is from the massive vaccination campaign that encouraged people in Canada to receive the first dose of the COVID-19 vaccine. Online reaction behaviours include sentiment scores (i.e., emotion in the framework), the log-number of a tweet's favourites, the log-number of a tweet's retweets, the log-number of a user's favourites, the log-number of a user's followers, the log-number of a user's friends, and the log-number of public lists the user is a member of. Offline reaction behaviour is self-reported vaccination or not in a tweet. We theorized positive associations for all hypotheses, as follows:

• H1: There is a significant relationship between a tweet's sentiment score and tweet engagement.
• H2: There is a significant relationship between the log-number of a tweet's favourites and tweet engagement.
• H3: There is a significant relationship between the log-number of a tweet's retweets and tweet engagement.
• H4: There is a significant relationship between tweet engagement and COVID-19 vaccination.
• H5: There is a significant relationship between the log-number of a user's favourites and user engagement.
• H6: There is a significant relationship between the log-number of a user's followers and user engagement.
• H7: There is a significant relationship between the log-number of a user's friends and user engagement.
• H8: There is a significant relationship between the log-number of a user's public lists and user engagement.
• H9: There is a significant relationship between user engagement and COVID-19 vaccination.

Data collection
This study utilized a cross-sectional design, since we were interested in understanding COVID-19 vaccination behaviours among adults in Toronto and Ottawa when the first dose of COVID-19 vaccines became available via online appointments. English tweets related to the COVID-19 pandemic from March 8 to June 30, 2021, were retrieved via Twitter's Academic API using the keywords and hashtags listed in Table S1 (Supplementary Materials). This process resulted in approximately two billion tweets. Next, the tweets were narrowed to those that included "Toronto" or "Ottawa" in the tweet text or in users' locations, in order to gather as many tweets as possible from Toronto or Ottawa, Ontario, Canada. This approach was used to address the missing geolocations indicated in the literature 23 and resulted in approximately four million tweets.
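To make the measures defined above concrete, the following is a minimal sketch of how a tweet's sentiment score and log-transformed counts could be derived in Python; the function and dictionary key names are illustrative assumptions, not the authors' code, though VADER's compound score matching the −1 to +1 normalization is documented behaviour.

```python
import math
from nltk.sentiment.vader import SentimentIntensityAnalyzer
# Requires the VADER lexicon: nltk.download("vader_lexicon")

analyzer = SentimentIntensityAnalyzer()

def tweet_features(text, favourite_count, retweet_count):
    """Return tweet-level measures analogous to those in Fig. 1 (assumed names)."""
    # VADER's compound score is already normalized to [-1, +1].
    sentiment_score = analyzer.polarity_scores(text)["compound"]
    # log(count + 1) keeps zero counts well defined, since ln(1) = 0.
    return {
        "sentiment_score": sentiment_score,
        "favourite_log": math.log(favourite_count + 1),
        "retweet_log": math.log(retweet_count + 1),
    }

print(tweet_features("Just booked my first vaccine appointment!", 12, 3))
```

The same log(count + 1) transformation would apply to the user-level counts (favourites, followers, friends, and public lists).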
To prepare for the subsequent sentiment analysis, Twitter handles (i.e., @username), uniform resource locator (URL) links, punctuation, stop words, and retweets were removed in accordance with existing studies. 24,25 Then, the words in a tweet were converted to their most general form 24,25 using the Natural Language Toolkit (NLTK) package, version 3.8.1. 26

Measures
In addition to the tweets, the other directly measured independent variables were collected and then transformed using the natural logarithm; given the presence of zeros in the counts, one was added to each count before the transformation, since the natural logarithm of one is zero. The variables were subsequently grouped to represent the latent dependent variables, tweet engagement and user engagement, as shown in Fig. 1 and based on the SoMeIL conceptual framework.

To prepare the dependent variable "vaccinated" shown in Fig. 1, a subset of the four million tweets was created by retrieving tweets that included "appoint," "jab," "shot," and "vaccin." We manually reviewed the tweets and labelled them "1" if users explicitly self-reported that they were seeking or waiting for a vaccine appointment or that they had already been vaccinated with the COVID-19 vaccine. Tweets were labelled "0" if users explicitly self-reported that they were hesitant about or against the COVID-19 vaccine. Other tweets were excluded if they did not include explicit expressions about COVID-19 vaccination or if they were news, even though these were still relevant to the overall pandemic and vaccine rollout in Canada.

Statistical analysis
Descriptive statistics, such as means or frequencies, standard deviations, and Spearman correlations, were used to describe the measures in the proposed model (Supplementary Materials Tables S2 and S3, respectively), except for the latent variables. Spearman correlations were calculated to account for outliers and non-normal distributions in some measured variables, even after the data transformation via the natural logarithm (Supplementary Materials Appendix A).

Confirmatory factor analysis (CFA) with diagonally weighted least squares (DWLS), also known as robust WLS, was used to test the "fit" of the observed variables for each latent variable. 28,29 For each CFA model, variables were removed until the fit indices, including chi-square, comparative fit index (CFI), goodness of fit (GFI), adjusted goodness of fit (AGFI), Tucker-Lewis index (TLI), and root mean square error of approximation (RMSEA), were acceptable. For the CFI, GFI, AGFI, and TLI, ≥0.90 is generally considered acceptable and ≥0.95 is considered good. An RMSEA ≤0.08 is recommended. 30,31 After CFA, SEM was performed to test the proposed model (Model 1) in Fig. 1 with DWLS and the same recommended criteria for the fit indices. Model 1 was optimized if the model fit indices suggested that better models could be found according to the proposed conceptual framework and correlation matrix. All data analyses were performed in Jupyter Notebook, available in Anaconda version 4.3.3, with the semopy package used for CFA and SEM. 32,33

Ethical approval
This study was approved by the University of Waterloo Office of Research Ethics (#43961).

Role of funding source
This study was supported by the 2023-24 Ontario Graduate Scholarship awarded by the Government of Ontario in Canada. The funding source had no role in study design; data collection, analysis, or interpretation; manuscript preparation; or submission. All authors had full access to all of the study data and took final responsibility for the decision to submit for publication.
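As an illustration of the CFA/SEM workflow described in the statistical analysis above, the following is a minimal sketch using the semopy package. The model description mirrors the variable names in Fig. 1, but the exact specification and data file are assumptions rather than the study's actual code; semopy's Model, fit with the DWLS objective, inspect, and calc_stats are documented parts of its API.

```python
import pandas as pd
import semopy

# Hypothetical DataFrame: one row per labelled tweet, with the measured
# variables defined above plus the binary "vaccinated" outcome.
df = pd.read_csv("tweets_labelled.csv")  # assumed file name

# Sketch of Model 1: two latent engagement variables predicting
# self-reported vaccination, following Fig. 1.
model_desc = """
tweet_engagement =~ sentiment_score + favourite_log + retweet_log
user_engagement  =~ user_favourites_log + user_followers_log + user_friends_log + user_listed_log
vaccinated ~ tweet_engagement + user_engagement
"""

model = semopy.Model(model_desc)
model.fit(df, obj="DWLS")            # diagonally weighted least squares
print(model.inspect())               # parameter estimates and p-values
print(semopy.calc_stats(model).T)    # chi-square, CFI, GFI, AGFI, TLI, RMSEA, ...
```

Interpretation of the fit statistics would then follow the cut-offs noted above (CFI/GFI/AGFI/TLI ≥ 0.90 acceptable, RMSEA ≤ 0.08).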
Results
The descriptive statistics and correlations of the measured variables are shown in Table S2 and Table S3 (Supplementary Materials), respectively. The results of the CFA are shown in Table 1. The latent variable "tweet_engagement" was saturated, and the latent variable "user_engagement" had good fit indices except for the RMSEA, which was greater than the recommended 0.08. When both latent variables were combined in the full measurement model, CFA revealed borderline fit indices that were close to the acceptable cut-off points. The RMSEA of the full measurement model also decreased slightly.

Given the borderline CFA results using DWLS and Twitter data instead of typical surveys, we decided to test Model 1 using SEM. Fig. 2 presents Model 1, and the model fit indices are shown in Table 1. As Fig. 2 illustrates, two hypotheses, H2 and H6, were not supported because they did not have a statistically significant association; instead, the SEM fixed the log-number of a tweet's favourites and the log-number of a user's followers in the model as references:

• H1: There is a significant relationship between a tweet's sentiment score and tweet engagement (p < 0.05).
• H2: There is a significant relationship between the log-number of a tweet's favourites and tweet engagement (p-value not provided).
• H3: There is a significant relationship between the log-number of a tweet's retweets and tweet engagement (p < 0.05).
• H4: There is a significant relationship between tweet engagement and COVID-19 vaccination (p < 0.05).
• H5: There is a significant relationship between the log-number of a user's favourites and user engagement (p < 0.05).
• H6: There is a significant relationship between the log-number of a user's followers and user engagement (p-value not provided).
• H7: There is a significant relationship between the log-number of a user's friends and user engagement (p < 0.05).
• H8: There is a significant relationship between the log-number of a user's public lists and user engagement (p < 0.05).
• H9: There is a significant relationship between user engagement and COVID-19 vaccination (p < 0.05).

The fit indices of Model 1 in Table 2 indicated that the model could be optimized. Based on the results from the CFA and the SEM for Model 1, it was hypothesized that one latent variable might fit better than two. Fig. 3 shows the final SEM (Model 2) after model revisions based on the proposed conceptual framework. That is, instead of two latent variables representing engagement activities at the tweet and user levels, one latent variable, "VaxIntent," was proposed to represent Canadian Twitter users' intentions to be vaccinated against COVID-19. The model indices of Model 2 are also included in Table 2. In Model 2, the log-number of a user's friends was fixed in the model as a reference, and the remaining variables had statistically significant relationships with the latent variable. Nonetheless, it was not straightforward to interpret the estimated coefficients and standard errors when the variables were transformed with the natural logarithm. Therefore, Tables S4 and S5 (Supplementary Materials) show the coefficients and standard errors for each variable in Model 1 and Model 2, respectively, after the estimates were converted back.

Discussion
The present study was conducted to preliminarily evaluate the online reaction behaviours, emotions, intentions, and self-reported offline behaviours proposed in the SoMeIL conceptual framework. 19
According to the SoMeIL conceptual framework, 19 sentiment scores (as emotions), the log-numbers of a tweet's favourites and retweets, and the log-numbers of users' favourites, followers, and public lists (as online reaction behaviours) were investigated using SEM to assess their relationships with self-reported COVID-19 vaccination (as an offline reaction behaviour), with a total of 2420 English tweets. As shown in Table S4 (Supplementary Materials), most variables in Model 1 had positive associations. However, the relationships between a tweet's sentiment score and tweet engagement, between the number of a tweet's retweets and tweet engagement, and between tweet engagement and vaccination could vary. Similarly, in Model 2, the association between the number of a user's followers and COVID-19 vaccination intention could be positive or negative (Table S5 in the Supplementary Materials), whereas the other variables in Model 2 had positive associations with the latent variable. Nevertheless, Model 2 was the best model according to the fit indices shown in Table 2, and all the variables in Model 2 had statistically significant relationships, despite one unstable variable. According to Model 2 (Table S5 in the Supplementary Materials), the sentiment score had the strongest positive relationship with COVID-19 vaccination intention, followed by the number of public lists to which a user belonged. The number of followers had the weakest association with COVID-19 vaccination intention.

Overall, Model 2 provides preliminary results that validate partial components of the SoMeIL conceptual framework, given the significant associations. That is, variables derived from Twitter could be used to infer Twitter users' intentions to receive the COVID-19 vaccine, which was the latent variable. 13-17 The other variables exhibited similar relationships. The more favourites and retweets a tweet received, or the more favourites, followers, or public lists a user had, the more likely the user was to accept the first dose of the COVID-19 vaccine, although the number of a user's followers could have a negative effect in some cases.

Surprisingly, it appeared that outliers had little impact on the SEM, since Model 2 met all the recommended criteria for the fit indices. In fact, when outliers were removed or replaced with medians, none of the structural equation models converged. This outcome remained unchanged even after different combinations of the measured variables were tested. For example, "favourite_log" was excluded because it became uninformative after its outliers were removed or replaced with its median, which was zero; this left the variable containing only zeros, since all nonzero values were outliers. Even after "favourite_log" was excluded, the other SEMs still failed to converge. Therefore, we hypothesized that, without the "favourite_log" variable and the information carried by the outliers, the remaining data would not fit the SEM well. 34,35
Therefore, although the assumption of no outliers in SEM was violated in the current study, the outliers actually included important information that should not be removed from the modelling. Given the nature of social media data, outliers could be legitimate, since some tweets receive more likes or shares, and some users have more followers or likes, than others. In addition to the preliminary validation of partial components of the SoMeIL conceptual framework, this study may be the first to use only Twitter data in SEM research. The findings show promise for the use of Twitter data in SEM research with proper theoretical frameworks, but there are several limitations. First, the generalizability of this study was limited, since it did not include Twitter users who were excluded from the data or non-Twitter users. Furthermore, the SEM was conducted in a cross-sectional manner, so it offered only a snapshot of the entire pandemic. In the future, longitudinal SEM could be performed. However, unlike surveys, researchers have no control over the frequency of people's tweeting behaviours. Some very active users might tweet daily, whereas others might tweet sporadically. Considerable effort would be required to find enough users with similar tweeting frequencies to conduct a longitudinal SEM study, although this would not be impossible. The quality of the data was another major limitation. For example, users' demographic information, such as sex and gender, was not available to the researchers unless users self-identified their demographic information on their Twitter profiles. Extensive manual identification or complex ML or AI techniques would be required to retrieve or infer users' complete demographic characteristics from Twitter data. 36,37 This could lead to even less representative samples, since the majority of Twitter users do not include demographic information in their profiles. Additionally, there are other methods for calculating sentiment, 8,9 although VADER sentiment analysis has been commonly used. 23 The data transformation via the natural logarithm also limited data quality due to information loss. In general, log transformations are not recommended for count data, despite their common usage in linear models such as regressions and SEM. 34,35 Instead, modelling count data with Poisson or negative binomial distributions is recommended. 35 Nonetheless, Poisson and negative binomial distributions have not been made available in open-source SEM packages, such as the semopy package. 32,33 Finally, ecological fallacy is a disadvantage of an SEM study; in other words, the findings should not be interpreted at the individual level.

Despite these limitations, this study confirmed that Twitter data can be useful for SEM research and partially and preliminarily validated the SoMeIL conceptual framework. 19 That is, parameters retrieved from Twitter as online reaction behaviours can be used to infer Twitter users' self-reported intentions, which can be used as a proxy for users' vaccination behaviours in real life.
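As a pointer to the alternative recommended in the limitations above, a raw count such as a tweet's retweet count could be modelled directly with a Poisson or negative binomial regression rather than log-transformed. The sketch below uses statsmodels with illustrative column names; it is not part of the study's own analysis, and the predictors chosen are assumptions for demonstration only.

```python
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("tweets_labelled.csv")  # assumed file with raw (untransformed) counts

# Model a raw count outcome instead of log-transforming it.
X = sm.add_constant(df[["sentiment_score", "user_followers_count"]])
y = df["retweet_count"]

poisson = sm.GLM(y, X, family=sm.families.Poisson()).fit()
negbin = sm.GLM(y, X, family=sm.families.NegativeBinomial()).fit()
print(poisson.summary())
print(negbin.summary())
```

Unlike the SEM packages discussed above, these GLM families handle zeros and over-dispersion natively, which is why the literature the authors cite recommends them for count data.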
For future research, we plan to apply ML or AI techniques to correctly classify self-reported offline reaction behaviours in order to scale up the data sample. Alternatively, instead of self-reported offline reaction behaviours derived from Twitter, other data on offline reaction behaviours can be collected and analysed to further validate the SoMeIL framework. Different social media data, such as videos and images, could also be collected to study how the SEM approach and the SoMeIL conceptual framework can be applied. For example, as in typical SEM research, future studies could design questionnaires to collect participants' demographic information and request that participants voluntarily give social media posts to researchers, to investigate how participants' online and offline reaction behaviours are associated with their demographic information. However, we acknowledge that collecting social media data has become increasingly difficult for researchers, since social media platforms have started to restrict their API access.

This study provided preliminary validations of parts of the SoMeIL conceptual framework. The results showed that the six variables retrieved from Twitter had statistically significant relationships with the latent variable, which could be used as a proxy for Twitter users' self-reported COVID-19 vaccination uptake. This study also demonstrated that it is feasible to use Twitter data in SEM research. However, further studies are needed to examine other SEM approaches and other social media platforms to further validate the SoMeIL conceptual framework. As social media have been integrated into people's daily lives worldwide, their dominance will increase the impact of health infodemics. As a result, in addition to conventional channels such as surveys or word of mouth, it is crucial to "listen to" public discourse on different social media platforms and address emerging confusion, questions, and even misinformation in a timely manner. 10 As this study illustrates, several indicators on Twitter, such as the numbers of likes and shares, can be used to infer the vaccination behaviours of Toronto and Ottawa Twitter users in real life. Therefore, this approach can be adopted to forecast vaccination coverage for future vaccine-preventable diseases. This approach can also help tailor communication strategies and address specific issues, based on Twitter users' discussions and online behaviours, to effectively reach different groups. 38,39 The SoMeIL conceptual framework can be extended to other areas, such as symptom reports or behavioural patterns, to aid in public health decision-making and resource allocation. 40 By integrating social media platforms such as Twitter into pandemic preparedness, health organizations and government authorities can harness their potential as powerful tools to engage with the public, address health misinformation, and effectively respond to crises, which can ultimately help to mitigate the impact of future pandemics. 38,39 Similar to the WHO's EARS platform, 10 the SoMeIL conceptual framework can be implemented to provide real-time monitoring and surveillance. The literature has shown that social media can be used for the early detection of emerging health threats and to track misinformation trends. 7,10,11
Social media data can also complement traditional surveillance methods and help public health authorities respond quickly to potential outbreaks.

Overall, this study provides a preliminary yet quantifiable method to examine social listening based on components of the SoMeIL conceptual framework. It is recommended that future pandemic preparedness efforts recognize the substantial roles of social media in shaping public perception, disseminating information, and influencing behaviours during a health crisis. Incorporating social media into pandemic preparedness strategies can enhance communication, information sharing, and response efforts.

Contributors
Shu-Feng Tsao conceptualised the study and collected and analysed the data. She also drafted and revised the manuscript. Dr. Chen and Dr. Butt supervised the methodology of the study and reviewed the manuscript draft. All authors reviewed the results and approved the final version of the manuscript. All authors have accessed and verified the data used in the study.

Fig. 1: Proposed social media infodemic listening for public health behaviour conceptual framework using Twitter data. The components are: Sentiment_score (a continuous value normalized between −1, most negative, and +1, most positive), Favourite_log (natural logarithm transformation of favourite counts), Retweet_log (natural logarithm transformation of retweet counts), User_favourites_log (natural logarithm transformation of a user's favourite counts), User_followers_log (natural logarithm transformation of a user's follower counts), User_friends_log (natural logarithm transformation of a user's friend counts), User_listed_log (natural logarithm transformation of the number of public lists the user is a member of), User engagement (a latent variable that represents engagement activities inferred at the user level), Vaccinated (a tweet indicating a user's intention to receive the first dose of the COVID-19 vaccine), and H1-H9 (Hypotheses 1-9).

Fig. 2: Model 1 results according to the proposed conceptual framework. *p < 0.05. The components are defined as in Fig. 1.
Fig. 3: Model 2 results after Model 1 is optimized. *p < 0.05. The components are defined as in Fig. 1.

Table 1: Fit statistics for each latent variable and the full measurement model. CFI: comparative fit index. GFI: goodness of fit. AGFI: adjusted goodness of fit. NFI: normed fit index. TLI: Tucker-Lewis index. RMSEA: root mean square error of approximation. (a) The chi-squared p-value is not recommended for consideration regardless of the SEM because it is heavily influenced by the sample size.

Table 2: Model fit indices for Model 1 and Model 2. CFI: comparative fit index. GFI: goodness of fit. AGFI: adjusted goodness of fit. NFI: normed fit index. TLI: Tucker-Lewis index. RMSEA: root mean square error of approximation. (a) The chi-squared p-value is not recommended for consideration regardless of the SEM because it is heavily influenced by sample size.
LRRK2-NFATc2 Pathway Associated with Neuroinflammation May Be a Potential Therapeutic Target for Parkinson's Disease

Abstract
Neuroinflammation plays an important role in the pathogenesis of Parkinson's disease (PD). However, the molecular mechanisms involved in extracellular α-synuclein-induced proinflammatory microglial responses through Toll-like receptor 2 (TLR2) are unclear. Leucine-rich repeat kinase 2 (LRRK2) is a serine/threonine kinase, and its mutations are closely related to autosomal dominant PD. Recently, Masliah et al. characterized a novel, specific neuroinflammation cascade dependent on LRRK2-NFATc2 in microglia activated by neuron-released α-synuclein. LRRK2 selectively phosphorylated NFATc2 and induced its nuclear translocation to activate a neuroinflammation cascade. In this cascade, LRRK2 kinase is activated in microglia by neuron-released α-synuclein via TLR2. NFATc2, as a kinase substrate of LRRK2, is directly phosphorylated, which accelerates its nuclear translocation; in the nucleus, cytokine/chemokine gene expression, including TNF-α and IL-6, is regulated by NFATc2 transcriptional activity, resulting in a neurotoxic inflammatory environment. Moreover, an abnormal increase of nuclear NFATc2 was observed in the brains of patients and a mouse model of PD. Additionally, the administration of an LRRK2 inhibitor could ameliorate neuroinflammation, prevent neuronal loss, and improve motor function. Therefore, modulation of the LRRK2-NFATc2 signaling cascade might be a potential therapeutic target for the treatment of PD.

Neuroinflammation plays an important role in the pathogenesis of Parkinson's disease (PD); 1 therefore, targeting crucial factors of different neuroinflammatory signaling pathways is a potential therapeutic strategy for PD. It is well known that PD is a chronic neurodegenerative disorder characterized by abnormal accumulation of α-synuclein in the degenerating dopaminergic neurons of the substantia nigra, which can induce cell-autonomous neurotoxicity. 2,3 Notably, a large number of studies have shown that α-synuclein can be secreted by neurons. This extracellular α-synuclein can be directly transferred to neighboring neurons, resulting in neurotoxic α-synuclein deposition, and it can further induce neuroinflammatory responses in microglia, which produce and release pro-inflammatory cytokines such as tumor necrosis factor-α (TNF-α), interleukin-1β (IL-1β), and interleukin-6 (IL-6), which in turn aggravate neuronal degeneration and neuroinflammation. 3 Previous studies have shown that extracellular α-synuclein activates microglia through Toll-like receptor 2 (TLR2), an α-synuclein receptor expressed in neurons and microglia, leading to non-cell-autonomous neurotoxicity. 4,5 The mechanism probably involves the production and release of neurotoxic pro-inflammatory cytokines by activated microglia via the nuclear factor-κB (NF-κB) and p38 mitogen-activated protein kinase (MAPK) downstream signaling pathways. 4 Furthermore, increased expression of TLR2 has been observed in the brains of patients and a mouse model of PD. 6,7 However, the molecular mechanisms involved in extracellular α-synuclein-induced neurotoxic proinflammatory microglial responses through TLR2 are unclear. Leucine-rich repeat kinase 2 (LRRK2) is a serine/threonine kinase, 8 and its mutations are closely related to autosomal dominant PD. 9,10 Mounting evidence indicates that LRRK2 is highly expressed in immune cells and is frequently associated with neuroinflammation in PD. 11,14
It has been reported that the production of the pro-inflammatory cytokines TNF-α and IL-1β decreased in primary microglia with LRRK2 knockdown or kinase inhibition. 12,13 Therefore, an understanding of the interplay between LRRK2-mediated neuroinflammatory responses and PD is essential to clearly define the upstream and downstream factors of this pathway. Although knockdown of LRRK2 can reduce the transcriptional activity of NF-κB in activated microglia, 14 this raises the question of whether there are other LRRK2-mediated neuroinflammatory signaling pathways in microglia activated by extracellular α-synuclein. Recently, Masliah et al. characterized a novel neuroinflammation cascade dependent on LRRK2-nuclear factor of activated T cells, cytoplasmic 2 (LRRK2-NFATc2) in microglia activated by neuron-released α-synuclein 15 (Figure 1). In this study, the level of LRRK2 phosphorylation and activity increased in mouse primary microglia treated with extracellular α-synuclein. However, this increase disappeared when the microglia were pretreated with the TLR2 function-blocking antibody T2.5, suggesting that extracellular α-synuclein activated LRRK2 via TLR2 in microglial cultures. Furthermore, Lrrk2 knockout significantly suppressed α-synuclein-mediated microglial neurotoxicity by decreasing TNF-α and IL-6 expression. It has been reported that TLR2 is a pattern recognition receptor that plays a key role in the innate immune response to micro-organisms. 16,17 Single-nucleotide polymorphisms in Lrrk2 or its adjacent genes confer susceptibility to leprosy infection in humans. 18 Although evidence suggests that TLR2 and LRRK2 are involved in the innate immune response to micro-organisms, the underlying interaction between them is not completely understood. It has been shown that marked phosphorylation of LRRK2 at Ser910 and Ser935 is induced by the canonical IκB kinase family during TLR signaling. 19 Because Masliah et al. did not provide additional information regarding the interactions between TLR2 and LRRK2, further studies are needed to elucidate this mechanism. 15 To address the mechanisms underlying the role of LRRK2 in microglial activation via extracellular α-synuclein, Masliah et al. analyzed transcriptome data (GSE26532) obtained from primary rat microglia exposed to extracellular α-synuclein. Forty-three up-regulated genes involved in TLR signaling and other immune signaling pathways were selected, and a signaling network was reconstructed to describe their interactions using functional gene enrichment analysis and information on protein-protein interactions. 15 Based on the analysis results, Masliah et al. hypothesized that LRRK2 and NFATc2 selectively modulate cytokine and chemokine expression in extracellular α-synuclein-stimulated microglia via the TLR2 signaling pathway. 15 It has been shown that NFATc2, an immune transcription factor, can translocate to the nucleus, and NFAT-responsive elements are located in the promoter regions of multiple immune-related genes, such as TNF-α and IL-6. 20,21 Moreover, the calcineurin/NFAT signaling cascade is frequently associated with neuroinflammation in mouse models of Alzheimer's disease (AD) 22 and PD. 23 Additionally, phosphorylation of NFATc1 is closely related to the transcription of genes associated with neuroinflammation in AD mice. 24 Notably, the NFATc2 isoform is the most highly expressed in murine microglia cultures, and the ability of the microglia to secrete cytokines was attenuated by deletion of NFATc2 in mouse models of AD. 25
The above findings provide strong evidence to support the hypothesis that LRRK2 and NFATc2 selectively modulate pro-inflammatory cytokine expression in extracellular α-synuclein-stimulated microglia via the TLR2 signaling pathway. Interestingly, a direct interaction between NFATc2 and LRRK2 was first identified by automated image analysis in this study. Specifically, LRRK2 selectively phosphorylated NFATc2 and induced its nuclear translocation to activate a neuroinflammation cascade. In this cascade, LRRK2 kinase was activated by neuron-released α-synuclein in microglia via TLR2. NFATc2, as a kinase substrate of LRRK2, was directly phosphorylated, which accelerated its nuclear translocation; in the nucleus, the expression of cytokine/chemokine genes, including IL-6 and TNF-α, is regulated by NFATc2 transcriptional activity, resulting in a neurotoxic inflammatory environment 15 (Figure 1). Moreover, an abnormal increase of nuclear NFATc2 was observed in the brains of patients and mouse models of PD. 15 Additionally, the administration of an LRRK2 inhibitor (HG-10-102-01) ameliorated neuroinflammation, prevented neuronal loss, and improved motor function in a mouse model of PD. 15 Although LRRK2 inhibitor clinical trials provide opportunities to refine our understanding of LRRK2 in human immune function, it is unknown whether LRRK2 inhibition increases the risk of opportunistic infections. 11 In addition, it is unclear whether LRRK2 inhibitors have a disease-modifying role in PD patients with LRRK2 mutations. 11 Although it is difficult to develop specific antibodies targeting NFATc2 because of its multiple phosphorylation residues, modulation of the LRRK2-NFATc2 signaling cascade might be a potential therapeutic target for the treatment of PD. Altogether, this study is the first to report a novel LRRK2-NFATc2 signaling cascade that plays an important role in neuroinflammation in PD, and it provides new clues for the clinical treatment of PD.
Handling Extreme Class Imbalance in Technical Logbook Datasets

Technical logbooks are a challenging and under-explored text type in automated event identification. These texts are typically short and written in non-standard yet technical language, posing challenges to off-the-shelf NLP pipelines. The granularity of issue types described in these datasets additionally leads to class imbalance, making it challenging for models to accurately predict which issue each logbook entry describes. In this paper we focus on the problem of technical issue classification by considering logbook datasets from the automotive, aviation, and facilities maintenance domains. We adapt a feedback strategy from computer vision for handling extreme class imbalance, which resamples the training data based on its error in the prediction process. Our experiments show that, with statistical significance, this feedback strategy provides the best results for four different neural network models trained across a suite of seven different technical logbook datasets from distinct technical domains. The feedback strategy is also generic and could be applied to any learning problem with substantial class imbalances.

Introduction
Predictive maintenance techniques are applied to engineering systems to estimate when maintenance should be performed in order to reduce costs and improve operational efficiency (Carvalho et al., 2019), as well as to mitigate risk and increase safety. Maintenance records are an important source of information for predictive maintenance (McArthur et al., 2018). These records are often stored in the form of technical logbooks in which each entry contains fields that identify and describe a maintenance issue (Akhbardeh et al., 2020a). Being able to classify these technical events is an important step in the development of predictive maintenance systems. In most technical logbooks, issues are manually labeled by domain experts (e.g., mechanics) in free-text fields. This text can then be used to classify or cluster events by semantic similarity. Classifying events in technical logbooks is a challenging problem for the NLP community for several reasons: (a) the technical logbooks are written by various domain experts and contain short text entries with non-standard language, including domain-specific abbreviated words (see Table 1 for examples), which makes them distinct from other short non-standard text corpora (e.g., social media); (b) off-the-shelf NLP tools struggle to perform well on this type of data, as they tend to be trained on standard contemporary corpora such as newspaper texts; (c) outside of the clinical and biomedical sciences, there is a lack of domain-specific, expert-based datasets for studying expert-based event classification, and in particular few resources are available for technical problem domains; and (d) technical logbooks tend to be characterized by a large number of event classes that are highly imbalanced.

Table 1: Original and text-normalized example data instances illustrating that domain-specific terms (baffle), abbreviations (gsk - gasket, eng - engine), and misspellings (seeal - seal) are abundant in logbook data.

Original Entry | Pre-processed Entry
fwd eng baff seeal needs resecured. | forward engine baffle seal needs resecured.
r/h eng #3 intake gsk leaking. | right engine number 3 intake gasket leaking.
bird struck on p/w at twy. bird rmvd. | bird struck on pilot window at taxiway. bird removed.
location rptd as nm from rwy aprch end. | location reported as new mexico from runway approach end.
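As a hedged illustration of the kind of normalization shown in Table 1, a simple dictionary-based pass could expand domain abbreviations and correct common misspellings. The mappings below are illustrative examples drawn from the table, not the actual normalization resources used in this work; a real pipeline would rely on much larger, domain-curated dictionaries such as those distributed with MaintNet.

```python
import re

# Illustrative mappings only, taken from the Table 1 examples.
ABBREVIATIONS = {
    "fwd": "forward", "eng": "engine", "baff": "baffle",
    "gsk": "gasket", "rwy": "runway", "rptd": "reported",
}
MISSPELLINGS = {"seeal": "seal"}

def normalize(entry: str) -> str:
    """Expand abbreviations and fix known misspellings in a logbook entry."""
    tokens = re.findall(r"[a-z0-9#/]+|\S", entry.lower())
    fixed = [MISSPELLINGS.get(t, ABBREVIATIONS.get(t, t)) for t in tokens]
    return " ".join(fixed)

print(normalize("fwd eng baff seeal needs resecured."))
# -> "forward engine baffle seal needs resecured ."
```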
We address the aforementioned challenges with a special focus on exploring strategies to address class imbalance. There is wide variation in the number of instances among the technical event classes examined in this work, as shown in Figure 1 and Table 3. This extreme class imbalance is an obstacle when processing logbooks, as it causes most learning algorithms to become biased and mainly predict the large classes (Kim et al., 2019). To overcome this issue, we introduce a feedback loop strategy, which is a repurposing of a method used to address extreme class imbalance in computer vision (Bowley et al., 2019), and examine it for the classification of textual technical event descriptions. This technique is applied in the training of a suite of common classification models on seven predictive maintenance datasets representing the aviation, automotive, and facility maintenance domains. This paper addresses the following research questions:

RQ1: To what extent do the class granularity and class imbalance present in technical logbooks impact technical event classification performance, and can a feedback loop for training data selection effectively address this issue?

RQ2: Which classification models are better suited to classify technical events for predictive maintenance across logbook datasets representing different technical domains?

The main contributions of this work include:
1. Experimental results showing strong performance of the feedback loop in addressing the class imbalance problem in technical event classification across all datasets and models;
2. A thorough empirical evaluation of the performance of the technical event classifier considering multiple models and seven logbook datasets from three different domains.

Related Work
Most expert-domain datasets containing events have focused on healthcare. For instance, Altuncu et al. (2019) analyzed patient incidents in unstructured electronic health records provided by the U.K. National Health Service. They evaluated a deep artificial neural network model on an expert-annotated textual dataset of safety incidents to identify similar events that occurred. Deléger et al. (2010) proposed a method to deal with unstructured clinical records, using rule-based techniques to extract names of medicines and related information such as prescribed dosage. Savova et al. (2010) considered free-text electronic medical records for information extraction purposes and developed a system to obtain clinical domain knowledge. Patrick and Li (2009) proposed cascade methods for extracting medication records, such as treatment duration or reason, from patients' historical records. Their approach to event extraction includes text normalization, tokenization, and context identification. A system using multiple features outperformed a baseline method using a bag-of-words model. Yetisgen-Yildiz et al. (2013) proposed a lung disease phenotype identification method to avoid the use of a hand-operated identification strategy. They employed NLP pipelines, including text pre-processing and subsequent text classification of the textual reports, to identify the patients with a positive diagnosis for the disease.

Table 2: … (3), automotive safety (4), and facility maintenance (5). Each instance shows how domain-specific terminology, abbreviations (Abbr.), and misspelled words (in bold font) are used by the domain expert, and also illustrates some of the event types covered. More details are provided in Section 3.
Based on the outcome, they achieved notable performance by using n-gram features with the Maximum Entropy (MaxEnt) classifier. There is also relevant research on event classification in social media. For example, Ritter et al. (2012) proposed an open-source event extraction system and a supervised tagger for noisy microblogs. Cherry and Guo (2015) applied word embedding-based modeling for information extraction on newswire and tweets, comparing named entity taggers to improve their method. Hammar et al. (2018) performed experimental work on Instagram text using weakly supervised text classification to extract clothing brands based on user descriptions in posts.

The problem of class imbalance has been studied in recent years for numerous natural language processing tasks. Tayyar Madabushi et al. (2019) studied automatic propaganda event detection from a news dataset using a pre-trained BERT model. They recognized that the BERT model had issues in generalizing, and to overcome this issue, they proposed a cost-weighting method. Al-Azani and El-Alfy (2017) analyzed polarity measurement in imbalanced tweet datasets utilizing features learned with word embeddings. Li and Nenkova (2014) studied the class imbalance problem in the task of discourse relation identification by comparing the accuracy of multiple classifiers. They showed that utilizing a unified method and further downsampling the negative instances can significantly enhance the performance of the prediction model on imbalanced binary and multi-class problems. Dealing with imbalanced classes has also been studied extensively in the sentiment classification task. Li et al. (2012) introduced an active learning method that overcomes the problem of class imbalance by choosing significant samples of the minority class for manual annotation and of the majority class for automatic annotation, lowering the amount of human annotation required. Furthermore, Damaschk et al. (2019) examined techniques to overcome the problem of high class imbalance in classifying a collection of song lyrics. They employed neural network models, including a multi-layer perceptron and a Doc2Vec model, in their experiments; their finding was that undersampling the majority class can be a reasonable approach to remove the data sparsity and further improve classification performance. Other work has also explored the problem of high data imbalance using cross-entropy criteria as well as standard performance metrics, proposing a loss function called Dice loss that assigns equal importance to false negatives and false positives. In computer vision, Bowley et al. (2019) developed an automated feedback loop method for training CNNs to identify and classify wildlife species from Unmanned Aerial Systems imagery, in order to overcome the class imbalance issue. On their expert imagery dataset, the error rate decreased substantially from 0.88 to 0.05. This work adapts this feedback loop strategy to the NLP problem of classifying technical events.

Technical Event Datasets
In this work, we used a set of 7 logbook datasets from the aviation, automotive, and facility domains available at MaintNet (Akhbardeh et al., 2020a). MaintNet is a collaborative open-source platform for predictive maintenance language resources featuring multiple technical logbook datasets and tools. These datasets include: 1) Avi-Main contains seven years of maintenance logbook reports on aircraft maintenance, collected by the University of North Dakota aviation program, that were reported by the mechanic or pilot.
2) Avi-Acc contains four years of aviation accidents and reported damages. 3) Avi-Safe contains eleven years of aviation safety and incident reports; accidents were caused by foreign objects/birds during flights, which led to safety inspection and maintenance, and safety crews indicated the damage (safety) level for further analysis. 4) Auto-Main is a single-year report with maintenance records for cars. 5) Auto-Acc contains twelve years of car accident and crash reports describing the related car maintenance issues and property damaged in the accident. 6) Auto-Safe contains four years of hazards and incidents on the roadway noted by drivers. 7) Faci-Main contains six years of logbook reports collected for building maintenance.

These technical logbooks include short, compact, and descriptive domain-specific English texts; single instances usually contain between 2 and 20 tokens, including abbreviations and domain-specific words. An example instance from Table 2, r/h fwd upper baff seal needs to be resecured, shows how the instances for a specific issue class are composed of specific vocabulary (with less ambiguity) and therefore contain a high level of granularity, i.e., the level of description for an event across multiple words (Mulkar-Mehta et al., 2011). Table 3 presents statistics for each dataset in terms of the number of instances, average instance length, number of classes, and the minimum, average, median, and maximum class sizes, to represent how imbalanced the datasets are. An instance in the logbook can be a complete description of the technical event (such as a safety or maintenance inspection), like #2 & #4 cyl rocker cover gsk are leaking, or it might contain an incomplete description that solely refers to the damaged part/section of the machinery (hyd cap chck eng light on) using a few domain words. In either form of the problem description, the given annotation (label) is at the issue-type level, e.g., baffle damage. Table 2 shows multiple examples with associated instances. Further characteristics of these log entries include compound words (antifreeze, engine-holder, driftangle, dashboard). Many of these words (e.g., the compound word dashboard) essentially represent the items or domain-specific parts used in the descriptions. Additionally, function words (e.g., prepositions) are important, and removing them could alter the meaning of the entry. The logbook datasets also have both shared and distinct characteristics:

Shared Characteristics: Each instance contains a descriptive observation of the issue and/or the suggested action that should be taken (eng inspection panel missing screw). Each instance also refers to a single maintenance event, which means the recognized problem applies to only a single issue type. As an example, the instance cyl #1 baff cracked at screw support & forward baff below #1 includes a combination of sequences that refer to the location and/or a specific part of the machinery.

Distinct Characteristics: In each domain, the terminology, list of terms, and abbreviations are distinct, and an abbreviation can have different expansions depending on the domain context (Sproat et al., 2001); e.g., a/c can mean aircraft in the aviation domain and air conditioner in the automotive domain. However, the abbreviations and acronyms of the domain words (e.g., atc - air traffic control) in these technical datasets should not be approached as a word sense disambiguation problem, as they require character-level expansion.
Handling Class Imbalance

Collecting additional data to augment datasets is a common approach for tackling the problem of skewed class distributions. However, as discussed earlier, technical logbooks are proprietary and very hard to obtain. In addition, each domain captures domain-specific lexical semantics, preventing the use of techniques such as domain adaptation.

Re-sampling

Under- and over-sampling are resampling techniques (Maragoudakis et al., 2006) that were used to create balanced class sizes for model training. For over-sampling, instances of the minority classes are randomly copied so that all classes have the same number of instances as the largest class. For under-sampling, observations are randomly removed from the majority classes, so that all classes have the same number of instances as the smallest class. For both approaches, we first divided our datasets into test and training sets before performing over-sampling, to prevent contamination of the test set by having the same observations in both the training and test data.

Feedback Loop

To address class imbalance in text classification, this work adapts the approach of Bowley et al. (2019) from the computer vision domain. The goal of this approach is not only to alleviate the bias towards majority classes but also to adjust the training data instances such that the model is always being trained on the instances it was performing the worst on. It should be noted that this approach is very similar to adaptive learning strategies, which have been shown to aid in human learning (Kerr, 2015; Midgley, 2014). Algorithm 1 presents pseudocode for the feedback loop. In this process, the active training data (the data used to actually train the models in each iteration of the loop) is continually resampled from the training data. The model is first trained with an undersampled number of random instances from each class, which becomes the initial active training data. The model M then performs inference over the entire training set, and selects MCS instances from each class C_i which had the worst error during inference, where MCS is the minority (smallest) class size. The model is then retrained with this new active training data, and the process of training, inference, and selection of the MCS worst instances repeats for a fixed number of feedback loop iterations, FLI. In this way the model is always being trained on the instances it has classified the worst. To measure the effect of resampling the worst performing instances, the feedback loop approach was also compared to a random downsampling (DS) loop, where instead of evaluating the model over each instance and selecting the worst performing instances, MCS instances from each class are randomly sampled. As performing inference over the entire training set adds overhead, a comparison to the random DS loop method shows whether performing this inference is worth the performance cost over simple random resampling. This approach is the same as Algorithm 1 except that SampleRandom is used instead of Resample in the feedback loop. Section 4.3 describes how the number of training epochs and loop iterations were determined such that all the training data selection methods are given a fair evaluation with the same amount of computational time.
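A minimal sketch of the feedback loop as we read Algorithm 1 from the description above; a scikit-learn classifier stands in for the paper's neural models, and the "worst" instances are taken to be those with the lowest predicted probability for their true class. Both choices are our assumptions for illustration:

# Sketch of the feedback loop training-data selection strategy.
# Each iteration: train on the active set, run inference over the full
# training set, then rebuild the active set from the MCS worst-classified
# instances of every class (MCS = minority class size).
import numpy as np
from sklearn.linear_model import LogisticRegression

def feedback_loop(X, y, mcs, fli, seed=0):
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    model = LogisticRegression(max_iter=1000)
    # Initial active training data: MCS random instances per class.
    active = np.concatenate(
        [rng.choice(np.where(y == c)[0], size=mcs, replace=False) for c in classes]
    )
    for _ in range(fli):  # FLI = number of feedback loop iterations
        model.fit(X[active], y[active])
        proba = model.predict_proba(X)  # inference over the whole training set
        cols = np.searchsorted(model.classes_, y)
        loss = -np.log(proba[np.arange(len(y)), cols] + 1e-12)
        # Resample: keep the MCS highest-loss instances from each class.
        active = np.concatenate(
            [np.where(y == c)[0][np.argsort(-loss[y == c])[:mcs]] for c in classes]
        )
    return model

The random downsampling (DS) loop baseline corresponds to replacing the loss-based selection with another random draw of MCS indices per class.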
Evaluation Metrics

For imbalanced datasets, simply using precision, recall or F1 score metrics over the entire dataset would not accurately reflect how well a model or method performs, as they emphasize the majority classes. To overcome this, alternative evaluation metrics that handle the class imbalance problem were used, as recommended by Banerjee et al. (2019). Specifically, we report the models' performance based on precision, recall, and F1 score using a macro-average over all classes, as this gives every class equal weight, and hence reveals how well the models and training data selection strategies perform.

Model Architecture and Training

Different machine learning methods were considered for technical event/issue classification (e.g., engine failure, turbine failure). Each instance is an individual short logbook entry and contains approximately 2 to 20 tokens (12 words on average per instance, including function words), as shown in Table 3.

Deep Neural Network

A deep artificial neural network (DNN), as described by Dernoncourt et al. (2017), can learn abstract representations and features of the input instances that help to achieve better performance in predicting the issue type in the logbook dataset. The DNN used was a 3-layer, fully connected feed-forward neural network with an input embedding layer of dimension 300 sized to the number of words, followed by 2 dense layers with 512 hidden units and ReLU activation functions, followed by a dropout layer. Finally, we added a fully connected dense layer with size equal to the number of classes, with a SoftMax activation function.

Long Short-Term Memory

An LSTM RNN was also used to perform sequence-to-label classification. As described by Suzgun et al. (2019), LSTM RNNs utilize several vector gates at each state to regulate the passing of data through the sequence, which enhances the modeling of long-term dependencies. We used a 3-layer LSTM model with a word embedding layer of dimension 300 sized to the number of words, followed by an LSTM layer with the number of hidden units set equal to the embedding dimension, followed by a dropout layer. Finally, we added a fully connected layer with size equal to the number of classes, with a SoftMax activation function.

Convolutional Neural Network

Convolutional neural networks (CNNs) have demonstrated exceptional success in NLP tasks such as document classification, language modeling, and machine translation (Lin et al., 2018). As Xu et al. (2020) described, CNN models can produce consistent performance when applied to various text types such as short sequences. We evaluated a CNN architecture (Shen et al., 2018) with a convolutional layer, followed by batch normalization, ReLU, and a dropout layer, which was followed by a max-pooling layer. The model contained 300 convolutional filters of size 1 by the n-gram length, pooling of size 1 by the length of the input sequence, followed by a concatenation layer, finally connected to a fully connected dense layer and an output layer with size equal to the number of dataset classes, using a SoftMax activation function.

Bidirectional Encoder Representations

We also evaluated the pre-trained uncased Bidirectional Encoder Representations (BERT) model for English (Devlin et al., 2019). We fine-tuned the model, and used a WordPiece-based BERT tokenizer for the tokenization process and the RandomSampler and SequentialSampler for training and testing, respectively.
To better optimize this model, a schedule was created for the learning rate that decayed linearly from the initial learning rate set in the optimizer to 0.

Experimental Settings

Datasets and Baselines

First, the technical text pre-processing pipeline developed by Akhbardeh et al. (2020b) was applied, which comprises domain-specific noise entity removal, dictionary-based standardization, lexical normalization, part-of-speech tagging, and domain-specific lemmatization. We divided the datasets by selecting randomly from each class independently to maintain a similar class size distribution, using 80% of the instances for training and 20% of the instances for testing. For feature extraction, two methods were considered: a bag-of-words model (n-grams: 1) (Pedregosa et al., 2011) and pre-trained 300-dimensional GloVe word embeddings (Pennington et al., 2014).

Hyperparameters and Tuning

The coarse-to-fine learning (CFL) approach (Lee et al., 2018) was used to set parameters and hyperparameters for the DNN, LSTM, and CNN models. Experiments considered batch sizes of 32, 64, and 128; an initial learning rate ranging from 0.01 to 0.001 with a learning decay rate of 0.9; and dropout regularization in the range from 0.2 to 0.5 in all models, as well as ReLU and SoftMax activation functions (Nair and Hinton, 2010), categorical cross-entropy (Zhang and Sabuncu, 2018) as the loss function, and the Adam optimizer (Kingma and Ba, 2015) in the DNN, LSTM, CNN and BERT models. Based on experiments and network training accuracy, a batch size of 64 and dropout regularization of 0.3 were selected for model training. Each model with each training data selection strategy was trained 20 times to generate results for each dataset. To ensure each training data selection strategy was fairly compared with a similar computational budget, the number of training epochs and loop iterations (if the strategy had a feedback or random downsampling loop) were adjusted so that the total number of training instance evaluations each model performed was the same. For each dataset, the number of forward and backward passes, T, for 100 epochs of the baseline strategy was used as the standard. As an example, Table 4 shows how many loop iterations, epochs per loop, and inference passes were done for each training data selection strategy on the Auto-Safe dataset. Given the differences between the min and max class sizes it was not possible to get exact matches, but the strategies came as close as possible. We counted each inference pass for the feedback loop the same as a forward and backward training pass, which actually was a slight computational disadvantage for the feedback loop, as a forward and backward pass in training takes approximately one to two times as long as an inference pass. Table 5 shows a comparison between the baseline and the four different class balancing methods (over-sampling, under-sampling, the random downsampling (DS) loop, and the feedback loop). Based on these outcomes, the feedback loop strategy almost always outperforms the other methods over all datasets and models, showing that performing inference over the training set and reselecting the training data from the worst performing instances does provide a benefit to the learning process. A plausible explanation is that this strategy does not introduce bias into the larger classes and also does not affect the minority class size distribution. It also does not waste training time on instances the model has already learned well.
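The linear learning-rate decay described at the start of this subsection can be set up with the Hugging Face scheduler; a minimal sketch, where the initial rate of 2e-5 and the absence of warmup are our illustrative choices rather than values reported in the paper:

# Sketch of a linearly decaying learning-rate schedule for BERT fine-tuning.
# The optimizer's initial rate decays linearly to 0 over all training steps.
import torch
from transformers import get_linear_schedule_with_warmup

def make_optimizer_and_scheduler(model, steps_per_epoch, epochs):
    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)  # illustrative
    total_steps = steps_per_epoch * epochs
    scheduler = get_linear_schedule_with_warmup(
        optimizer, num_warmup_steps=0, num_training_steps=total_steps
    )
    return optimizer, scheduler  # call scheduler.step() after optimizer.step()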
Table 5 also shows the empirical analysis of the four classification models, with the model and training data selection strategy providing the overall best results shown in bold and italics. Using the technical text pre-processing techniques described in Section 4.3 and the feedback loop strategy described in Section 4.1, the precision, recall, and F1 score improved compared to the baseline performance. The CNN model outperformed the other algorithms, with improved precision, recall, and F1 score for almost all datasets, except for Avi-Main, where BERT had similar results, and Auto-Main, where CNN and BERT tied. This is interesting given the current popularity of the BERT model; however, it may be due to the substantial lexical, topical, and structural linguistic differences between the technical logbook data and the English corpus that BERT was pre-trained on.

Results

Furthermore, we conducted the Mann-Whitney U-test of statistical significance using the F1 scores of each of the 20 repeated experiments of the classification models, with the baseline and the feedback loop approach as the two different populations. The outcomes are shown in Table 6, with the differences being highly statistically significant. Regarding the discussion provided in Section 3 about the nature of such datasets, there are key challenges that affect the performance of the employed algorithms. As discussed in Section 1, the extreme class imbalance observed in these technical datasets substantially affects the learning algorithms' performance. To overcome this issue, we first explored oversampling and undersampling, which both result in balanced class sizes. Undersampling removed portions of the dataset that could be important for certain technical events or issues, which resulted in underfitting and weak generalization for important classes. On the other hand, oversampling may introduce overfitting in the minority classes, as some of the event types are very short sequences of tokens containing domain-specific words. Following this, to minimize the possibility of overfitting and underfitting, a random downsampling loop and a feedback loop were investigated to minimize bias in the training process. It was found that the added computational cost of the feedback loop inference was justified by its performance gains over the random downsampling loop. The scarce data available in a dataset such as Auto-Main is certainly an issue for deep learning methods; improving accuracy further with the proposed feedback loop strategy there would require incorporating more instances into the event classes. As with any supervised learning models, we noticed some limitations that could be addressed in future work. As shown in the previous sections (such as Table 2), logbook instances contain short text (ranging from 2 to 20 tokens per instance), and utilizing recurrent deep learning algorithms such as LSTM RNNs, which rely heavily on context, leads to weak performance compared to the other algorithms. One possible explanation is that logbooks with short instances (sequences) do not provide sufficient context for the algorithm to make better predictions. Another is that RNNs are notoriously difficult to train (Pascanu et al., 2013), and the LSTM models may simply require more training time to achieve similar results.
There is some evidence for this: the dataset with the most instances, which also had the second largest number of tokens per instance on average, was Faci-Main, the dataset on which the LSTM model came closest to the performance of the CNN and BERT models, and also the only one on which the LSTM model outperformed the DNN model. The pre-trained BERT model provided reasonable classification performance compared to the other deep learning models; however, as BERT is pre-trained on standard language, its performance when applied to logbook data was not optimal. Training or fine-tuning BERT on technical logbook data is likely to improve performance, as observed in the legal and scientific domains (Chalkidis et al., 2020; Beltagy et al., 2019). As training or fine-tuning BERT requires large amounts of data, a limitation for fine-tuning a domain-specific BERT is the amount of logbook data available.

Conclusion and Future Work

This work focused on predictive maintenance and technical event/issue classification, with a special focus on addressing class imbalance. We acquired seven logbook datasets from three technical domains containing short instances with non-standard grammar and spelling, and many abbreviations. To address RQ1, we evaluated multiple strategies to address the extreme class imbalance in these datasets and showed that the feedback loop strategy performs best, providing the best results for almost all of the 7 different datasets and 4 different models investigated. To address RQ2, we empirically compared different classification algorithms (DNN, LSTM, CNN, and pre-trained BERT). Results show that the CNN model outperforms the other classifiers. The methodology presented in this paper could be applied to other maintenance corpora from a variety of technical domains. The feedback loop approach for selecting training data is generic and could easily be applied to any learning problem with substantial class imbalance. This is useful, as extreme class imbalance is a challenge at the heart of a number of natural language tasks. In future work, we would like to fine-tune BERT using logbook data, as described in Section 6, and extend this work to datasets in other languages. The biggest challenge for these two research directions is the limited availability of logbook datasets. Furthermore, we are exploring various methods of domain adaptation and transfer learning on these datasets to further improve the performance of classification models.
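As a closing illustration of the evaluation protocol (the macro-averaged scores of Section 4.2 and the Mann-Whitney U-test over the 20 repeated runs), here is a small sketch using standard tooling; the F1 score arrays below are synthetic placeholders, not the paper's results:

# Sketch of the evaluation protocol: macro-averaged precision/recall/F1
# for one run, and a Mann-Whitney U-test across repeated runs. The numbers
# generated here are synthetic placeholders, not results from the paper.
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.metrics import precision_recall_fscore_support

def macro_scores(y_true, y_pred):
    # average="macro" weights every class equally, regardless of its size.
    p, r, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="macro", zero_division=0
    )
    return p, r, f1

rng = np.random.default_rng(0)
baseline_f1 = rng.normal(0.60, 0.02, size=20)  # 20 repeated baseline runs
feedback_f1 = rng.normal(0.70, 0.02, size=20)  # 20 repeated feedback-loop runs
stat, p_value = mannwhitneyu(feedback_f1, baseline_f1, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.2e}")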
Audit of the Sydney Local Health District Public Health Unit notification and contact tracing system during the first wave of COVID‐19 Objective To conduct a real‐time audit to assess a Continuous Quality Improvement (CQI) activity to improve the quality of public health data in the Sydney Local Health District (SLHD) Public Health Unit during the first wave of COVID‐19. Methods A real‐time audit of the Notifiable Conditions Information Management System was conducted for positive cases of COVID‐19 and their close contacts from SLHD. After recording missing and inaccurate data, the audit team then corrected the data. Multivariable regression models were used to look for associations with workload and time. Results A total of 293 cases were audited. Variables measuring completeness were associated with improvement over time (p<0.0001), whereas those measuring accuracy reduced with increased workload (p=0.0003). In addition, the audit team achieved 100% data quality by correcting data. Conclusion Utilising a team, separate from operational staff, to conduct a real‐time audit of data quality is an efficient and effective way of improving epidemiological data. Implications for public health Implementation of CQI in a public health unit can improve data quality during times of stress. Auditing teams can also act as an intervention in their own right to achieve high‐quality data at minimal cost. Together, this can result in timely and high‐quality public health data. Due to the potential for the rapid spread of COVID-19 and high mortality rates in at-risk groups, public health measures have been implemented around the world to control the transmission of SARS-COV-2. These measures include education, social distancing and restrictions, cancellation of mass gatherings and isolation or quarantine of at-risk or positive individuals. 9,10 These measures are informed by surveillance data, which is the systematic and continuous collection, analysis, interpretation and notification of health-related data to prevent and control health problems. 11 Surveillance data help describe the burden of the disease, monitor trends and provide data for setting priorities, designing, executing and evaluating programs and policies. [12][13][14][15][16] It is crucial that surveillance data are high quality, timely, simple to understand and representative of the population. 17,18 In New South Wales (NSW), medical practitioners and pathology laboratories are required to report positive cases of COVID-19 under the Public Health Act 2010 (NSW). As per NSW health guidelines, local public health units have the responsibility to follow up on all cases of COVID-19 and record this information in the state's Notifiable Condition Information Management System (NCIMS). 10 NCIMS is a secure, confidential, online database used to collect public health surveillance data in NSW. It contains information on patient demographic characteristics, laboratory results, clinical symptoms, medical management, likely source of infection, close contacts and the outcome of the disease. NCIMS is continuously updated in real-time with information from public health staff across the state, the Ministry of Health and laboratories. In March 2020, there was a rapid increase in the number of COVID-19 cases in the Sydney Local Health District (SLHD). In response to the increased workload of the district's public health unit (PHU), 73 surge staff were seconded to the PHU in March and remained there for a period of 3-4 months. 
This increased the number of staff working at the PHU from 58 to 131. These surge staff came from the district's population health, community health and planning teams. The majority of surge staff had no prior training in communicable disease control, case investigation or contact tracing. PHU staff were responsible for supervising and training the surge staff in interviewing, contact tracing, managing close contacts, data collection and the use of NCIMS. PHU staff were also rostered to work on-call after business hours (24/7), in addition to their usual work hours. It was recognised from the outset that the rapid escalation in activity, combined with changes in the workforce members and responsibilities, could adversely affect the quality of public health data. To initially address this issue, the PHU COVID-19 operations team ran a comprehensive staff training program that included completing HETI (Health Education and Training Institute) modules regarding emergency management, as well as surveillance and outbreak management, contact tracing videos, videos on how to use NCIMS, a review of Communicable Disease Network Australia (CDNA) national guidelines and the PHU-specific standard operating procedures, Zoom tutorials and full case simulation. The training program went through phases of the PDSA (Plan Do Study Act) cycle, and constant real-time feedback from the staff and the audit team was incorporated. In addition to the staff training program, and to ensure that the epidemiological data were of high quality and timely, a continuous quality improvement (CQI) process was commenced. 17,18 Shortell's CQI framework was applied to the CQI processes, as shown in Figure 1. 19 Shortell's CQI framework comprises four dimensions: strategic, cultural, technical and structural. The strategic dimension includes those activities and processes that are most important to the organisation and provide the greatest opportunity for improvement, such as vision, budget priorities and long-term strategy. The cultural dimension represents the organisation's 'beliefs, values, norms and behaviours' that support or inhibit the CQI work. The technical dimension encompasses training and information infrastructure. The structural dimension refers to the ways that knowledge is acquired and dispersed throughout the organisation. The CQI was centred on daily handover meetings that included discussion of cases, management issues, dissemination of decisions and review of processes. Evidence-based presentations on topics of interest were also included. These meetings were instrumental in identifying and understanding changes in guidelines and COVID-19 operational processes, discussing the implications, and implementing rapid changes in processes in the unit. To guide and evaluate changes and the performance of the unit, a real-time audit was initiated. The audit was conducted by a team staffed separately from the core COVID-19 operations team, which comprised two supervisors (a clinical director and epidemiologist) and four other staff (an advanced paediatric trainee, clinical nurse consultant, public health unit trained personnel and registered nurse). In addition to assessing the quality of the data, any information that was identified as missing or incorrect was investigated by the audit team and rectified.

Methods

The audit was performed at the SLHD PHU, located at Royal Prince Alfred Hospital. All COVID-19 cases managed by the SLHD PHU during the first wave (up until 23 May 2020) were included in the audit.
These COVID-19 cases resided within the SLHD, were quarantined in hotels located within the SLHD (including overseas travellers) and had most of their follow up done by the SLHD PHU. COVID-19 cases were excluded if they were diagnosed after death, were subsequently found to have a false-positive result, or their follow up was done by another local health district. Data reviewed included COVID-19 notifications in NCIMS; email communication between the communicable disease team, the Ministry of Health and external agencies; SLHD patient electronic health records; and other electronic information stored by the PHU staff regarding the team's operations in the PHU shared drive and team emails. Where information remained missing or ambiguous, the relevant staff were contacted to clarify the information, and cases were re-interviewed as required to improve the completeness and accuracy of the data. Aggregate ordinal variables for overall data quality, completeness and accuracy were calculated for each case by summing the relevant variables listed above. A measure of PHU workload was also calculated by summing the number of other cases that were reported within three days of each case. Individual logistic regression models were generated for each audit variable, and ordinal logistic regression models were generated for each aggregate variable. The only covariates included in each model were workload and date. These were both normalised using the Ordered Quantile (ORQ) transformation. Data were analysed and visualised using the statistical package R, version 4.0.3. 20 The audit was approved by the SLHD Human Research Ethics Committee (reference LNR X20-0266).

Results

There were 304 COVID-19 cases reported to the SLHD PHU from 3 March 2020 to 23 May 2020 (82 days or 12 weeks). The first case was audited on 24 March 2020 and the last case on 12 June 2020. Of the 304 cases, 11 were excluded from this study for the following reasons: one case had an indeterminate result followed by two negative results and was declared a false positive; three cases had a positive result by RT-PCR with a subsequent negative serological result and were declared to be false positives; one case was diagnosed after death; five cases were excluded due to cross-jurisdictional issues, as most of their follow up was done by other local health districts; and one case was lost to follow up after leaving the country. This left 293 COVID-19 cases that were included in the audit. There were 52 cases (17.7%) reported in the first quarter of the period audited, followed by a sharp increase to 217 cases (74.1%) in the second quarter, and 24 cases (8.2%) audited in the final two quarters (Table 1). The number of cases that were re-interviewed by the audit team due to an unknown source of infection was 41 (14.0%). Overall, there was an improvement in data quality over time, even after adjusting for workload (p<0.0001, ordinal logistic regression). On further examination, variables measuring completeness were associated with improvement over time (p<0.0001, ordinal logistic regression), whereas those measuring accuracy were inversely associated with increased workload (p=0.0003, ordinal logistic regression). Significant associations with individual audit variables are also shown in Figure 2. Of note, completeness of linking was the most strongly associated with improvement over time (p=0.0001, logistic regression).
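As an illustration only (the study used R 4.0.3), a rough Python sketch of the aggregate ordinal model described above; the ORQ normalisation is approximated by a rank-based inverse normal transform, and the file and column names (audit.csv, workload, date_num, quality_score) are hypothetical:

# Hypothetical re-creation of the ordinal logistic regression used in the
# audit analysis; statsmodels' OrderedModel plays the role of R's ordinal
# regression here. This is a sketch, not the study's actual code.
import pandas as pd
from scipy.stats import norm, rankdata
from statsmodels.miscmodels.ordinal_model import OrderedModel

def orq(x):
    # Rank-based inverse normal transform, in the spirit of the ORQ
    # (Ordered Quantile) normalisation applied to both covariates.
    ranks = rankdata(x)
    return norm.ppf(ranks / (len(ranks) + 1))

df = pd.read_csv("audit.csv")  # hypothetical extract of the audit data
exog = pd.DataFrame({"workload": orq(df["workload"]), "date": orq(df["date_num"])})
endog = df["quality_score"].astype(pd.CategoricalDtype(ordered=True))

# Ordinal (proportional-odds) logistic regression of the aggregate
# data-quality score on workload and date.
result = OrderedModel(endog, exog, distr="logit").fit(method="bfgs")
print(result.summary())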
After assessing the data for completeness and accuracy, the audit and public health teams corrected the data, resulting in 100% complete and accurate data (Table 1; the dashed green line in Figure 2).

Discussion

Our study confirms the challenges of achieving high-quality recorded data in a public health unit at the start of a pandemic. High-quality data are required to give an accurate epidemiological picture of cases and contacts to allow for effective and timely decisions to be made regarding controlling the spread of disease. The major challenges faced by the PHU included increased workload on existing staff, changing roles and responsibilities of these staff, new staff not trained in communicable disease control or the existing systems, increased demand on information systems, and evolving guidelines. Realising the importance of accurate epidemiological data early in the pandemic, and anticipating that there could be an issue with the quality of the data, a CQI process was employed early to address this. The audit confirmed that there were issues with initial data quality and allowed a measure of the effectiveness of interventions to improve the epidemiologic data that were guiding the public health response. For example, the recording of clinical symptoms was identified as low early in the pandemic because data were being collected manually by one team member and then handed over for data entry to another team member who prioritised it as low. Information such as this was fed back to the team during the unit's daily meetings to help improve operational processes. Not unexpectedly, our audit showed the lowest quality data at the beginning of the pandemic, when these challenges were at their peak and when the number of reported cases was increasing rapidly. However, overall, there was an improvement in data quality over time, even after adjusting for the workload. This likely reflects changes due to quality processes such as a comprehensive staff training program and ongoing support and supervision, leading to increased staff experience with the contact tracing and operational processes associated with the COVID-19 pandemic. Interestingly, some variables were not associated with improvement over time but were inversely associated with the workload. Variables measuring accuracy (place of acquisition and source of infection) worsened with increased workload. A major challenge for the PHU was the shortage of skilled workforce trained in communicable disease control at the beginning of the pandemic. To limit the impact of this in the future, it would seem sensible for identified surge staff to be trained in public health concepts and the local public health systems prior to the next wave of the pandemic. 16 However, this may not be easy to achieve politically and may come at a high opportunity cost during normal business. Using the PDSA model, the extensive staff training program developed at the PHU helped to train the surge staff continuously to a high standard in COVID-19 operations. The staff training program would likely meet the future need to train surge staff if there were to be another wave of the pandemic. This evaluation is the subject of a separate paper. However, our study results demonstrate that, in addition to this training, having an integrated but separate audit team at the beginning of a pandemic to actively review and correct data and maintain high-quality data allows for appropriate delineation of staff roles and can ensure accurate surveillance data at a time of high activity.
The audit team was able to achieve 100% complete and accurate data in near-real-time, with an investment of only 4% of the unit's staff.

Limitations

There were some limitations to this study. The audit covered a period of unusual activity for a PHU, with a lot of uncertainty, rapidly changing information, and staff without a public health background working for the first time in a public health unit. Therefore, the findings from this study may not be generalisable to times of more normal activity. The study did not report on timeliness, which is an important aspect of data quality. In addition, the audit only reviewed data within one local health district of NSW. While the other health districts in NSW all use the same database system to record notifiable diseases such as COVID-19 according to the same national guidelines, this information was not analysed. Therefore, our findings may not reflect the quality of data in other districts. However, the audit does provide a reference against which to compare other NSW local health districts. The audit was also unable to differentiate between the quality of the data entered by the PHU staff versus the surge staff, as many staff members were involved at various stages of contact tracing and data entry.

Recommendations

CQI processes can result in improved data quality during times of rapid growth and evolving knowledge. The effectiveness of these interventions can be monitored and guided by real-time auditing. Data identified as incomplete or inaccurate by a real-time audit can be corrected with minimal resources, resulting in high-quality and timely data.

Conclusion

The SLHD PHU was able to provide high-quality contact tracing services and good quality public health data on COVID-19 cases in challenging times due to early deployment of surge staff with a comprehensive training program in COVID-19 operational tasks, implementation of CQI processes guided by real-time auditing, and an auditing team that actively corrected missing and inaccurate data. The CQI processes and the real-time audit team enabled us to make more timely decisions regarding case management and contact tracing, contributing to the statewide reduction in COVID-19 cases.
Nonequilibrium Thermodynamics of Amorphous Materials III: Shear-Transformation-Zone Plasticity We use the internal-variable, effective-temperature thermodynamics developed in two preceding papers to reformulate the shear-transformation-zone (STZ) theory of amorphous plasticity. As required by the preceding analysis, we make explicit approximations for the energy and entropy of the STZ internal degrees of freedom. We then show that the second law of thermodynamics constrains the STZ transition rates to have an Eyring form as a function of the effective temperature. Finally, we derive an equation of motion for the effective temperature for the case of STZ dynamics.

I. INTRODUCTION

Understanding the irreversible deformation of amorphous systems remains a major challenge in nonequilibrium statistical physics and materials science [1,2]. Systems of interest include noncrystalline solids below or near their glass temperatures, dense granular materials, and various kinds of soft materials such as foams, colloids, and the like. An ongoing effort to develop a dynamical theory of such systems has been based on the shear-transformation-zone (STZ) model of [3]. Recent work has extended the original model to include an effective disorder temperature as an essential ingredient [4,5,6,7,8,9,10,11]. Our main goal in this paper is to develop an STZ theory that is consistent with the internal-variable, effective-temperature thermodynamics described in two preceding papers [12,13]. In [12] we focused on the role of internal state variables in determining the nonequilibrium dynamics of amorphous, not necessarily glassy, systems. We used the statistical interpretation of the first and second laws of thermodynamics to obtain equations of motion for the internal variables, and we emphasized the need to understand how both energy and entropy are shared between the internal variables and other degrees of freedom. In [13] we extended this development to include an effective disorder temperature. Our basic premise in that paper was that the slow configurational degrees of freedom of such materials are only weakly coupled to the fast kinetic/vibrational degrees of freedom, and therefore that these two subsystems can be described by different temperatures during deformation. Using the tools of nonequilibrium statistical thermodynamics, we derived a general form for the equation of motion for the effective temperature, and obtained a set of second-law constraints on the thermomechanical equations of motion for such systems. We start here in Sec. II by summarizing the major results of [12,13] in a form appropriate for the STZ analysis. In Sec. III, we introduce the STZ degrees of freedom as thermodynamically well defined internal state variables with associated energies and entropies. We then deduce specific forms for the STZ equations of motion based on the thermodynamic analysis. Our most important departure from earlier versions of the theory is that the STZ transition rates are now required to have an Eyring form as a function of the effective temperature rather than the reservoir temperature. In Sec. IV, we discuss the noise strength that determines the STZ annihilation and creation rates, and we derive an equation of motion for the effective temperature. Section V contains a summary of the STZ equations. We conclude in Sec. VI with remarks about the significance and limitations of this theory.

II. THERMODYNAMIC CONSTRAINTS

We consider the deformation of an amorphous material in contact with a thermal reservoir at temperature θ_R. We assume that θ_R is either below or not too far above the glass temperature θ_g, so that the two-temperature theory developed in [13] is applicable. We express temperatures in units of energy, and set Boltzmann's constant k_B equal to unity. For simplicity, we assume from the beginning that the system is spatially uniform and that it undergoes only volume-conserving, pure-shear deformations. The total, extensive, internal energy of this system, including a thermal reservoir, is U_total = U_C(S_C, E_el, {Λ_α}) + U_K(S_K) + U_R(S_R), where U_C and U_K, respectively, are the configurational and kinetic/vibrational internal energies, S_C and S_K are the respective entropies, and {Λ_α} denotes a set of internal state variables, soon to be identified as the STZ variables. U_R is the energy of the thermal reservoir, which we assume to be strongly coupled to the kinetic/vibrational subsystem. E_el is a deviatoric (traceless, symmetric) elastic shear strain. Note that our assumption of volume-conserving, pure-shear deformation allows us to omit any volume dependence in U_K, cf. Eq. (3.1) in [13]. The effective temperature χ and the kinetic/vibrational temperature θ are χ = ∂U_C/∂S_C and θ = ∂U_K/∂S_K. We assume that θ ≈ θ_R, i.e., that the kinetic/vibrational subsystem is always in equilibrium with the thermal reservoir. The shear stress acting on the configurational subsystem is s_C = (1/V) ∂U_C/∂E_el, where V is the fixed total volume. As explained in [13], the kinetic/vibrational subsystem has no shear modulus, but it can support a viscous stress in the presence of shear flow. For further simplicity, we assume that the kinetic/vibrational viscosity vanishes. The total entropy is S_total = S_C + S_K + S_R. The expression for any one of these three entropies can be inverted to obtain the corresponding internal energy function in Eq. (2.1), or vice versa. Without further loss of generality, we specialize to the case of pure, planar shear oriented along fixed axes, say x and y, and define s_C ≡ s_C,xx = −s_C,yy. We assume (for small elastic deformations) that the rate of deformation tensor is the sum of elastic and inelastic parts, D = D_el + D_in, where D_el = Ė_el, and we define D_in ≡ D_in,xx = −D_in,yy. All other elements of these deviatoric tensors vanish; thus, for example, the rate of inelastic work done by the shear stress is s_C : D_in = 2 s_C D_in. The analysis in [13] produced an equation of motion for the effective temperature that is basically a statement of the first law of thermodynamics, i.e., a heat-flow equation. For the present case, this equation has the form C^eff_V χ̇ = W_C + A(χ, θ)(θ − χ). Here C^eff_V χ̇ is the time rate of change of the heat of configurational disorder, and C^eff_V is an effective (extensive) heat capacity at constant volume. As in the preceding papers, the nonnegative dissipation rate W_C (the difference between the rate at which inelastic work is being done on the configurational subsystem and the rate at which energy is being stored in the internal degrees of freedom) is written out in Eq. (2.7). Non-negativity of W_C is an important second-law constraint that plays a central role in the analysis to follow. The second term on the right-hand side of Eq. (2.5), A(χ, θ)(θ − χ), is the rate at which heat is flowing into the configurational subsystem. Here, A(χ, θ) is a non-negative thermal transport coefficient that, as will be seen, depends on other dynamical variables in addition to χ and θ.

III. STZ EQUATIONS OF MOTION

The basic assumptions of the STZ theory have been described in [7].
To the extent possible, the following discussion follows the steps outlined in that paper. The main idea is that deformation of amorphous materials occurs via localized molecular rearrangements that take place at shear transformation zones (STZ's). The STZ's are created and annihilated either by thermal fluctuations or by noise generated by the deformation itself. They are rare, ephemeral fluctuations that are especially important for irreversible deformations because they make stress-driven transitions between two energetically almost degenerate orientations. Thus, the STZ's are two-state systems. There is nothing arbitrary about this two-state picture. The STZ's have the special property of being able to shift between one orientation and another in response to a shear stress. Sites with this property are already statistically unlikely, and higher-order degeneracies are statistically negligible. The difference between what we are doing here and the analysis presented in [7] is that now, on the basis of [12,13], we insist on a proper thermodynamic description of the STZ's as internal degrees of freedom. Such a description requires a specific STZ model. To construct any such model, we must make physical assumptions that may need to be modified in later applications. In particular, as in the earlier work, we assume that there is just a single kind of STZ, with a single characteristic formation energy e_Z, and a single mechanism for making transitions between the two orientational states. For additional simplicity, we go back to the original version of the theory [3] in which the STZ's occur only with orientations either "+" or "−" with respect to the shear direction. A procedure for averaging over STZ orientations and constructing a properly invariant tensorial version of the theory was presented in [7]. That procedure works just as well for the present analysis, but seems unnecessarily complex for present purposes. The internal state variables are the extensive numbers of STZ's in these two different states, N_+ and N_−. As usual, define Λ ≡ (N_+ + N_−)/N and m ≡ (N_+ − N_−)/(N_+ + N_−). Thus, the set of internal state variables {Λ_α} reduces to {Λ, m}. Our arguments in [12,13] tell us that we must include the entropy associated with the internal variables Λ and m in this analysis. If we take the two-state model literally, then we compute this entropy by counting the number of ways in which we can distribute N_+ "+" zones and N_− "−" zones among, say, N available sites in the system. This number is Ω = N!/[N_+! N_−! (N − N_+ − N_−)!], which, after use of Stirling's approximation, reduces to Eq. (3.4); for Λ ≪ 1, S_Z(Λ, m) ≡ ln Ω ≈ N Λ [1 − ln Λ + S_0(m)], with S_0(m) = ln 2 − (1/2)(1 + m) ln(1 + m) − (1/2)(1 − m) ln(1 − m). To use this formula, write S_C = S_Z(Λ, m) + S_1 (3.5), where S_1 and U_1, respectively, are the entropy and energy of all the degrees of freedom of the configurational subsystem apart from those attributable to the STZ's. Accordingly, U_C = N Λ e_Z + U_1 (3.6), where e_Z is the formation energy of an STZ. Equations (3.5) and (3.6) are equivalent to each other once S_1 is regarded as a function of U_1. In terms of these STZ variables, the inequality in Eq. (2.7) becomes the condition written out in Eq. (3.7). To make further progress, go back to the original STZ equations of motion for the N_±: τ_0 Ṅ_± = R(±s_C) N_∓ − R(∓s_C) N_± + Γ̃ (N^eq/2 − N_±) (3.8). Here, τ_0 is a time scale, the factors R(±s_C)/τ_0 are the rates at which STZ's switch back and forth between their two orientations, Γ̃/τ_0 is the rate factor for creation and annihilation of STZ's, and N^eq is an as-yet undetermined "equilibrium" value for the number of STZ's. The superscript "eq" is used here and below to denote steady-state equilibrium. Note that, in Eq. (3.8), we are assuming that the STZ creation rate is the same for both STZ orientations, independent of the orientational state of the system as a whole. The deviatoric, inelastic rate of deformation tensor is τ_0 D_in = (v_0/V) [R(s_C) N_− − R(−s_C) N_+], where v_0 is a molecular-scale volume. As usual, define C(s_C) ≡ (1/2)[R(s_C) + R(−s_C)] and T(s_C) ≡ [R(s_C) − R(−s_C)]/[R(s_C) + R(−s_C)]. Then, τ_0 D_in = (N v_0/V) Λ C(s_C) [T(s_C) − m]. In previous papers, we defined N v_0/V ≡ ǫ_0. We will return to this notation in Sec. V. The equations of motion for Λ and m are τ_0 Λ̇ = Γ̃ (Λ^eq − Λ) (3.11) and τ_0 ṁ = 2 C(s_C) [T(s_C) − m] − Γ̃ m Λ^eq/Λ (3.12), where Λ^eq = N^eq/N. The next step in this analysis is to impose the second-law constraint expressed in Eq. (3.7). We immediately encounter a difference between the present situation and the one described, for example, by Maugin in [14]. Specifically, the inelastic rate of deformation D_in appearing in W_C is not simply proportional to the time derivatives Λ̇ and ṁ. Therefore, we cannot satisfy the inequality in Eq. (3.7) by identifying the coefficients of those time derivatives as thermodynamic forces associated with energy landscapes, and then requiring that Λ and m both relax toward free-energy minima. In fact, our situation is more interesting. It is almost certainly typical of open systems in which external work is being done and energy is being dissipated, and where no variational formulation is relevant. Our strategy is to use Eq. (3.12) to evaluate ṁ in Eq. (3.7), and thereby to write W_C as the sum of two terms, one proportional to Λ̇, and the other proportional to the stress-dependent quantity T(s_C) − m. These two terms must individually be non-negative. The inequality in (3.7) then becomes the condition written out in Eq. (3.13). From Eq. (3.4), we know the sign of the relevant entropic derivative; therefore, the first term in the expression for W_C in Eq. (3.13) is always non-negative, and we can set it aside for the moment. The second term in Eq. (3.13) produces a standard, variational, second-law inequality involving a free energy. Λ^eq in Eq. (3.11) must be the value of Λ at which that free energy is minimized; the result is Eq. (3.19). For χ ≪ e_Z, we expect Λ^eq ≈ Z^eq ≪ 1, which is consistent with the basic idea of a low density of STZ's. We then obtain the expected Boltzmann factor, Λ^eq ≈ exp(−e_Z/χ), with a small modification from the m-dependent entropy. The term proportional to m dS_0/dm in Eq. (3.19) means that Z^eq diverges weakly, and Λ^eq → 1, when m → ±1. However, it is easy to see from the denominator in the equation of motion for m, i.e., either Eq. (4.6) or Eq. (4.8) shown below, that m → ±1 is a dynamically inaccessible limit. Therefore, so long as e_Z is the largest energy scale in the problem (which has always been the case in prior applications) the requirement of small Λ is satisfied. The more interesting result comes from the term proportional to T(s_C) − m in Eq. (3.13). That term must be non-negative for all values of the stress s_C, which means that the two stress-dependent factors, T(s_C) − m and v_0 s_C + χ dS_0/dm, must each be monotonically increasing functions of s_C that change sign at the same point for arbitrary values of m. From Eq. (3.14), we see that this condition can be satisfied only if T(s_C) = tanh(v_0 s_C/χ), which, according to Eq. (3.9), means that R(±s_C) = R_0(s_C) exp(±v_0 s_C/χ) (3.22), where R_0 is a symmetric, non-negative function of s_C. As indicated, R_0 may also depend on the temperatures χ and θ, because the transitions between STZ orientations are very likely to be thermally activated processes. Equation (3.22) indicates a major difference between the present thermodynamic results and the earlier theories. In the latter, we started with physical models for the transition rates R(±s_C), and then assumed that the dependence of the internal energy on the STZ variables would be consistent with these rates.
Here we start with a known internal energy, and must argue in the other direction to make sure that the rates are consistent with thermodynamics. In particular, Eq. (3.22) tells us that the STZ transition rates must have an Eyring form with the effective temperature χ, rather than the reservoir temperature θ, in the exponent.

IV. NOISE STRENGTH AND EQUATION OF MOTION FOR χ

Having used the second law to deduce equations of motion for the STZ variables, our next steps are to go back to the first law in Eq. (2.5) and use the expressions for Λ̇ and ṁ to compute Γ̃, and then to derive the STZ version of an equation of motion for χ. Both of these steps again require going beyond purely thermodynamic arguments and making additional physical assumptions. Equation (2.5) now can be expressed explicitly in terms of the internal variables; this is Eq. (4.1). As in previous STZ papers, we assume that the rate factor Γ̃ is a sum of two independent noise strengths, Γ̃ = Γ(s_C, χ) + ρ(θ). Here Γ(s_C, χ) is the part of the rate factor determined by mechanically generated noise, and ρ(θ) is the super-Arrhenius, thermally generated part. We next invoke Pechenik's hypothesis [15], which identifies Γ as being proportional to the total rate of heat production per STZ, where the proportionality factor s_0 has the dimensions of stress. Inserting this relation into Eq. (4.1) and solving for Γ, we find an explicit expression for the noise strength; the equation of motion for m, Eq. (3.12), then takes the modified forms quoted below as Eqs. (4.6) and (4.8). At this point, it is useful to distinguish between slow and fast processes, as was done in [6,7]. The inelastic deformation rate given in Eq. (3.10) contains a factor Λ, meaning that it is proportional to the density of STZ's and is small. The equation of motion for χ will be seen to be similarly slow. On the other hand, the equations of motion for Λ and m contain no such factors of Λ. These internal state variables respond rapidly to changes in their environments. Therefore, we simplify the analysis by setting Λ = Λ^eq and replacing m by m^eq, the stationary solution of Eq. (4.8). This solution is shown explicitly in Eq. (5.5). These approximations are always valid for steady-state solutions but, as seen in [7], they also work well for transients. In steady state, and at low temperatures where ρ(θ) ≈ 0, Eq. (4.8) exhibits the usual [3,5,16] exchange of stability at a yield stress (minimum flow stress) s_y, determined implicitly by s_y tanh(v_0 s_y/χ_0) = s_0 (4.9), where χ_0 is the steady-state value of χ in the limit of vanishingly small strain rate. According to Eqs. (4.8) and (5.5), for ρ(θ) = 0, m^eq goes through a maximum value of tanh(v_0 s_y/χ_0) at s_C = s_y. At that point, Eqs. (3.18) and (3.19) tell us that the condition of a small STZ density requires that e_Z be much larger than χ_0 and v_0 s_y, which, as noted earlier, is generally true. To complete this development, we need an explicit equation of motion for χ, and again we need to make additional physical assumptions. Use Eqs. (4.1) and (4.2) to write the heat-flow equation in the form of Eq. (4.11). The thermal transport coefficient A(χ, θ) is one of two places in this theory where the weak coupling between the configurational and kinetic/vibrational subsystems must be modeled explicitly. The other place is the noise strength Γ̃ defined in Eq. (4.2), where we argued that mechanically generated noise contributes additively, along with the thermal noise, in creating configurational disorder. Similarly, it seems plausible that the overall heat exchange between the two subsystems is enhanced by mechanical noise. Thus we propose that A have a form similar to that of Γ̃, as written in Eq. (4.12), where κ is a dimensionless parameter, the factor θ has been inserted for dimensional reasons, and a_0 is a dimensionless quantity to be determined as follows. Separate the right-hand side of Eq. (4.11) into parts proportional to Γ and ρ, and then write this equation in the form of Eq. (4.13). In [8], it was argued that athermal (ρ = 0) amorphous systems reach steady state for effective temperatures χ equal to some function χ̂(q), where q is a dimensionless, non-negative measure of the total strain rate. For time-independent stresses, q is the magnitude of τ_0 D_in. This means that the quantity in square brackets in Eq. (4.13) must vanish at χ = χ̂(q), a condition that we satisfy by choosing a_0 as in Eq. (4.14). Thus, Eq. (4.13) becomes Eq. (4.15). Equation (4.15) is essentially the same χ̇ equation that we have used in previous applications. The main difference is the prefactor (χ − θ)^{−1}. Non-negativity of a_0 requires that χ̂(q) > θ, which is a plausible and interesting constraint. The steady-state solution of Eq. (4.15), given in Eq. (4.16), interpolates between two limits. The function Γ(s_C) vanishes in the limit of vanishing strain rate q; therefore, for fixed, nonzero ρ(θ), χ^ss → θ as q → 0. On the other hand, if the strain rate is fixed and ρ(θ) becomes small, then χ^ss → χ̂(q). As pointed out in [8], the crossover between these limiting behaviors takes place at very small strain rates for small ρ(θ), and therefore it can be very difficult to determine whether a glass transition has occurred. At higher temperatures, this crossover occurs at higher strain rates, and the condition χ̂(q) > θ requires that χ̂ be a function of θ in some circumstances. For the moment, we note that physically realistic systems do not probe the extreme limit of vanishingly small strain rate, and we therefore assume that χ̂(q) − θ ≅ χ_0 − θ is a positive constant for situations in which the system is deforming at experimentally accessible rates.

V. SUMMARY OF STZ EQUATIONS

We conclude this part of the paper by summarizing the STZ equations in their most usable versions, that is, in the limit in which the relaxation of the STZ variables Λ and m is much faster than the rates at which plastic deformation and the effective temperature respond to changes in the external driving forces. Many of these equations are the same as the ones that appear, in more general tensorial versions, in [7]. As noted previously, however, there are some differences. The rate of inelastic deformation, given here in Eq. (3.10), is a function of the configurational shear stress s_C (assuming no appreciable contribution from the viscous stress in the kinetic/vibrational subsystem) and the effective temperature χ: D^dev_in = Λ^eq(χ) f(s_C, χ), with f(s_C, χ) = (ǫ_0/τ_0) C(s_C) [T(s_C) − m^eq(s_C, χ)]. Here, we have reverted to the earlier notation, ǫ_0 = N v_0/V, which is the ratio of a molecular volume v_0 associated with STZ transitions to the volume per molecule in the system as a whole, and is of the order of unity. The STZ formation energy e_Z previously was denoted by k_B T_Z. In [7], T_Z was found to be larger than the glass temperature by a factor of about 30 for a metallic glass, and the time constant τ_0 was of the order of a femtosecond. We have abbreviated the functions C and T as follows: C(s_C) = R_0(s_C) cosh(v_0 s_C/χ) and T(s_C) = tanh(v_0 s_C/χ), where R_0(s_C) is an arbitrary, symmetric function of the shear stress s_C. m^eq(s_C, θ) is the stationary solution of Eq. (4.8). The parameter s_0 is a stress that can be determined from the low-temperature yield stress (minimum flow stress) s_y via Eq.
(4.9), where χ_0 is the steady-state value of χ in the limit of vanishingly small strain rate. It is useful to look at the equation of motion for χ, Eq. (4.15), in two special cases. First, consider the parameter range relevant for deformations of ordinary plastic materials such as metallic glasses. The experience gained from the studies reported in [7] and [8] suggests, for temperatures not too far above the glass transition, and for strain rates not extremely small, that we can assume that χ̂(q) ≈ χ_0 remains constant at a value larger than θ, so that the dimensionless quantity χ̂/(χ̂ − θ) is a slowly varying function of θ that can be absorbed into other parameters such as the effective heat capacity and κ. When this is true, Eq. (4.15) can be written in the form of Eq. (5.9), where c̃_0 and κ̃ are dimensionless constants of the order of unity. To use this equation, we need the explicit expression for Γ; m^eq(s_C, χ) is given by Eq. (5.5). Second, consider the athermal limit of Eq. (4.15) by setting θ = 0 and ρ(θ) = 0. In this case, we obtain Eq. (5.10), where now c̃_0 = c^eff_V/ǫ_0, and c^eff_V is the effective heat capacity per unit volume in units of Boltzmann's constant k_B. This limit is appropriate for granular materials, bubble rafts, and the like, where ordinary thermal fluctuations are irrelevant, and the disorder described by the effective temperature is generated only by externally driven deformation. Thus, only states with stresses above the yield stress are relevant, and Eq. (5.5) tells us that m^eq = s_0/s_C (exactly). Moreover, when s_C ≫ s_0, the noise strength Γ is just proportional to the rate at which inelastic work is done on the system. We have used χ̂(q) on the right-hand side of Eq. (5.10), instead of its small-q limit χ_0, because large values of q are more easily attainable for systems in which the intrinsic relaxation time τ_0 is not microscopically small. As shown in [17], χ̂(q) increases rapidly when q grows to values of the order of unity. Thus the restoring term in Eq. (5.10) becomes small, and the resulting rapid growth of χ produces localized shear failure. This mechanism was shown in [18] to provide a plausible explanation of rapid stress drops and localized failure in earthquake faults.

VI. CONCLUDING REMARKS

We have made many simplifying assumptions in developing this thermomechanical version of the STZ theory. Some of these assumptions were needed only to simplify the presentation, and seem to have little if any physical importance. For example, it should not be difficult to rewrite this theory in tensor notation, as in [7], and apply it to spatially nonuniform situations with orientationally varying stress and flow fields. It will be technically more difficult to deal with situations in which both volumetric and shear deformations are occurring and are coupled to each other; but here again there seems to us to be no problem in principle. Yet another example of simplification is that, throughout this series of three papers, we have dropped terms that would have described thermoelasticity or, more pertinently in the context of nonequilibrium phenomena, thermo-viscoelasticity. Here too, we see no intrinsic difficulties. In fact, we see attractive opportunities to use a thermo-viscoelastic version of this theory for studying the behavior of glasses subject to thermal cycling in the neighborhood of the glass temperature.
One of our more problematic simplifications is our assumption that we can distinguish elastic from plastic strains, and use the elastic strain as an independent argument of thermodynamic functions such as the internal energy or the entropy. As we have stated here and in earlier papers, we maintain that the plastic strain, necessarily measured from some reference configuration (possibly evolving), cannot be a physically meaningful variable for determining the current state of the system or predicting its subsequent motion. Thus, we have insisted on expressing our equations of motion in Eulerian coordinates, and using the internal state variables to carry the memory of recent deformations. This self-imposed requirement leaves us with an as-yet unsolved problem regarding elasticity. The problem is compounded here by our recognition of the extended thermodynamic roles played by internal degrees of freedom, which, as we have seen, may store energy in recoverable forms as well as relax irreversibly toward states of equilibrium. In such situations, it is unclear to us whether "elastic" behavior is always the same as "reversible" behavior, or whether the conventional Kroner-Lee [19,20] decomposition of elastic and plastic displacements is generally correct. We have evaded these issues so far by restricting our attention to infinitesimally small elastic displacements. However, we suspect that these questions now require more serious attention.

Our list of topics needing further investigation includes the choice of rate factors in the STZ theory. Our most notable departure from earlier STZ results is the relatively simple, χ-dependent transition rate shown in Eq. (3.22). This formula is primarily a result of our statistical interpretation of the second law of thermodynamics in [12]; it is related to the two-temperature theory only in the sense that it is the effective temperature χ, and not the thermal temperature θ, that governs the configurational subsystem's motion toward statistically more probable states. So far as we can tell, this result does not substantially change previous conclusions, e.g. in [7,8]. In fact, the stronger stress dependence in Eq. (3.22) may be needed in order to understand seismic data [21].

This statistical interpretation of the rate factors is especially difficult for jammed states at low temperatures, where the stress is below the yield stress and ρ(θ) = 0. Our theory predicts that, in this situation, m = tanh(v_0 s_C/χ). This result makes sense for a glass below its glass transition temperature, where thermal fluctuations still can activate transitions between the states of STZ's even if they cannot create new ones. In this case, we can change the inelastic strain by changing the stress, although re-equilibration to a new state of deformation might be very slow. For a granular material, however, the most we can say is that m = tanh(v_0 s_C/χ) is the statistically most likely average orientation of STZ's at the given values of s_C and χ. Such a state might be achieved by tapping the system, i.e. by artificially introducing something like thermal noise. But the way in which such a jammed system responds to changing stresses has to do with whether it forms force chains or bridging structures or the like. Such mechanisms cannot be included in a theory of the kind we are discussing here. Therefore, when talking about granular materials in Sec. V, we have restricted ourselves to unjammed systems that are undergoing deformation.
More generally, this limitation of the STZ theory emphasizes the need for a more thorough investigation of the limits of validity of this theory and of similarly constructed statistical theories of noncrystalline deformation.
2009-03-09T11:03:44.000Z
2009-03-09T00:00:00.000
{ "year": 2009, "sha1": "6cec32c28af728ec38324a350fefc19939865998", "oa_license": null, "oa_url": "http://arxiv.org/pdf/0903.1527", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "6cec32c28af728ec38324a350fefc19939865998", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
29168475
pes2o/s2orc
v3-fos-license
Trends in ageing and ageing-in-place and the future market for institutional care: scenarios and policy implications

*Correspondence to: Peter Alders, PhD, Erasmus School of Health Policy and Management, Erasmus University, P.O. Box 1738, 3000 DR Rotterdam, The Netherlands. Email: alders@eshpm.eur.nl

Abstract
In several OECD countries the percentage of elderly in long-term care institutions has been declining as a result of ageing-in-place. However, due to the rapid ageing of the population in the next decades, future demand for institutional care is likely to increase. In this paper we perform a scenario analysis to examine the potential impact of these two opposite trends on the demand for institutional elderly care in the Netherlands. We find that the demand for institutional care first declines as a result of the expected increase in the number of low-need elderly that age-in-place. This effect is strong at first but then peters out. After this first period the effect of the demographic trend takes over, resulting in an increase in demand for institutional care. We argue that the observed trends are likely to result in a growing mismatch between demand and supply of institutional care. Whereas the current stock of institutional care is primarily focussed on low-need (residential) care, future demand will increasingly consist of high-need (nursing home) care for people with cognitive as well as somatic disabilities. We discuss several policy options to reduce the expected mismatch between supply and demand for institutional care.

Introduction
Faced with an ageing population, in many countries matching future supply and demand (or need) for long-term care (LTC) challenges governments. Key questions include safeguarding an adequate provision of formal and informal care and an adequate supply of institutional and home care facilities that is affordable to those in need of LTC. Forecasting the need for home care and institutional LTC is anything but straightforward, however, due to diverging trends in ageing and ageing-in-place; ageing-in-place can be defined as 'remaining living in the community with some level of independence, rather than in care homes' (Davey et al., 2004). The trend of ageing-in-place seems to be partly driven by technological advancement, changing preferences and culture, and partly by changes in health policy (Alders et al., 2017). Personal emergency systems have given a sense of safety and ease in notifying caregivers to act swiftly when needed (De San Miguel and Lewin, 2008). Preferred alternatives, most notably home-delivered care and assisted living, were likely filling the gap left by declining nursing home use in the United States (Bishop, 1999; Stevenson and Grabowski, 2010); Stevenson and Grabowski (2010) find that a 10% increase in assisted living capacity led to a 1.4% decline in private-pay nursing home occupancy and a 0.2-0.6% increase in patient acuity. The trend of ageing-in-place, with a steadily declining percentage of people aged over 80 years in LTC institutions, is noticeable in several OECD countries (see Table 1). The trend of deinstitutionalization is especially noticeable in countries with traditionally relatively high shares of institutionalized elderly. In countries where traditionally the family has been the main provider of care of the elderly, like in Southern and Eastern Europe or Korea, the level of institutionalized elderly has always been low and in some countries might be increasing. As the number of people over 80 years will double over the next 20 years, however, in the future the declining demand for institutional LTC due to ageing-in-place might be offset by a growing population in need of institutional LTC.
The effect of population ageing on the demand for institutional LTC might be somewhat mitigated by a compression of morbidity (i.e. a reduction of the time spent by elderly in worse health or disability), but the empirical evidence about this is mixed. Chatterji et al. (2015) found in a review of the literature some evidence for compression of morbidity if morbidity was defined as a form of disability or impairment, but evidence for the opposite if morbidity was defined as multiple diseases.

Not only the quantity but also the type and quality of institutional care that will be demanded are likely to change, because of changing ideas, expectations and preferences of how institutional care should be provided. Traditionally, institutional care for impaired elderly was hospital-like and clinically oriented. Many LTC institutions were built in the late 70s and early 80s and were based on hospital design, with semi-private rooms and cafeteria-style dining options (Gerace, 2012). For more than a decade, however, the delivery of institutional LTC has been shifting to a home-like environment in a nursing home (Moise et al., 2004; White-Chu et al., 2009; Koren, 2010; Grabowski et al., 2014; Miller et al., 2016). For dementia care, the trend over the past two decades has been to reduce the size of the nursing 'unit' from 60 beds to households accommodating anywhere between 9 and 24 residents (Calkins, 2009). Smaller units appear to have a number of positive benefits such as higher motor functioning; greater friendship formation; less anxiety, sadness and depression; more positive activity involvement; and greater mobility (Calkins, 2009).

In this paper, we analyse the changing demand for institutional LTC in the Netherlands using three scenarios based on different assumptions about future changes in the need for institutional care. By comparing the future composition of the institutionalized population with the current stock of institutional care facilities we identify the potential gap between future supply and demand. We discuss several policy options to reduce the expected mismatch between supply and demand for institutional care. The Netherlands constitutes an interesting case study since in this country the international trend of deinstitutionalization is especially prominent, given the high level of institutionalization (see Table 1) and recent reforms to reinforce the trend of ageing-in-place (Alders et al., 2015).

LTC in the Netherlands
The Netherlands has a separate tax-based system of social insurance for institutional LTC (abbreviated as WLZ). Access to institutional care is restricted to citizens that need permanent supervision or need a sheltered residence (Ministry of Health, Welfare and Sport, 2014). Once someone is admitted to institutional care, he or she can choose to live in an institution or arrange housing him- or herself. The total cost for a person in a nursing home is on average about 79,000 euro per year (Ministry of Health, Welfare and Sport, 2016). Providers of institutional care covered by the public LTC insurance scheme are non-profit organizations by law.
They are paid on the basis of a bundled payment per client based on an assessment of need by an independent agency (CIZ). Based on this assessment, eligible people are entitled to a certain 'care severity package' (in Dutch abbreviated as ZZP). The bundled payment per ZZP includes a payment for the capital costs (i.e. the normative housing component, NHC), for the provision of care and for 'hotel services' like meals, entertainment and house cleaning. Eligible people can be entitled to one out of eight different ZZPs with increasing care needs (see Table 2). In general, elderly people with relatively lower care needs (ZZP 1-3) reside in residential care homes and those with high care needs (ZZP 5-8), who require more intensive care and treatment, in nursing homes, although this division has blurred during the last decades. People who are entitled to care package ZZP4 ('Assisted living with intensive support and extensive nursing') are at the boundary of residential and nursing home care. Two care packages are suited for people with primarily somatic impairments (ZZP 4 and 6) and three care packages are meant for people with primarily cognitive impairments (ZZP 5, 7 and 8). Elderly people with dementia generally live in nursing homes, often in a form of group-living.

The procurement of care included in the various care packages is carried out by regional care offices, which are separate legal entities operated by the largest health insurer in that region. For this procurement, the Netherlands is divided into 31 regions, each with a legally determined annual budget constraint. Per region, on average 25 organizations are active in providing institutional care (Dutch Healthcare Authority, 2013). Whether clients can choose the LTC institution of their first choice does not only depend on the institution's capacity but also on the amount of care purchased by the regional care office. The regional care office may take into account the quality of care homes and the preferences of the clients in that region, but there is no guarantee that these preferences will be matched. There does not seem to be excess demand: whereas in 2014, 140,350 persons were living in a LTC institution, the estimated total capacity was about 146,000 places (see below).

Admission to a LTC institution is generally related to concerns about safety for a person (for instance a fall) or for his or her environment (for instance a risk of causing a fire accident, or a frail spouse) and the inability to guarantee personal hygiene. Gaugler et al. (2007) suggest that once certain functional or cognitive thresholds are reached, the risk of nursing home admission increases substantially. In the Netherlands, factors such as age, disability, receiving formal care with household tasks or personal care, a hospital visit in the last 6 months and dementia were significant predictors of admission to an LTC institution (Alders et al., 2017). A significant 'time' effect was found as well: in the period 2006-2009 fewer people were admitted to institutional care compared to the period 1995-1999 when they were in a comparable health and personal situation. Elderly with mild disability are more likely to be treated at home than before, whereas severely disabled individuals continue to receive institutional LTC. Overall, the effect of ageing-in-place was dominant in the period 1996-2009 (de Meijer et al., 2015; Alders et al., 2017). The decreasing number of places in institutional care is mirrored by an increasingly more disabled and older population in LTC institutions (de Klerk, 2011).
The percentage of people with severe disabilities increased from slightly more than 40% in 2000 to almost 50% in 2008. As a consequence, the decline in the percentage of elderly living in LTC institutions is coinciding with a shift from residential care to care in psychogeriatric nursing homes (de Klerk, 2011).

A couple of recent policy measures are likely to affect the demand for institutional care in the near future. First, in 2013, stricter admission criteria were implemented. New clients with impairments comparable to the low-need care packages are no longer entitled to institutional care. Next, in 2013 and 2015, co-payments for institutional care increased in absolute terms and relative to home care. Since 2013, co-payments depend on someone's assets, with an exemption of the value of a house. In 2015 a major LTC reform took place, targeted at reinforcing the trend of ageing-in-place. A result of the reforms is that, since 2015, a co-payment is no longer required for home care in the form of district nursing, which may make home care financially more attractive.

Taking into account trends in ageing, socioeconomic factors and health status, Eggink et al. (2017) expect that the main driver of future growth in institutional care will be demography. For the period 2014-2030, they project that residential care increases annually by 2.1%, which is a net effect of demography (2.8%) and uptake per demographic group (−0.8%). The lower uptake per demographic group is partly the result of an expected compression of years with disability (−0.4%). This compression of disability is based on van Duin and Stoeldraijer (2014), who extrapolate trends on life expectancy and self-reported disability from Statistics Netherlands for the period 1983-2012. The finding by van Duin and Stoeldraijer, however, is at variance with findings from other research and when other time periods are used (Statistics Netherlands, 2017b). In the period 1990-2008, prevalence rates of chronic diseases increased in community-living older people, whereas prevalence rates of activity limitations were stable or slightly decreased, depending on the definition (Hoeymans et al., 2012). Other research showed an increase in the prevalence of mild activity limitations, but not in severe activity limitations, in the Dutch older population over the period 1992-2009 (van Gool et al., 2011; Galenkamp et al., 2012).

Supply of care home places
The number of places in institutional care declined from 183,000 in 1995 to 161,000 in 2010 (Statistics Netherlands, 2017c, 2017d). Although exact data on the capacity of institutional care in 2014 are not available, we estimate the institutional capacity at 146,000 places in 2014 by taking the available capacity in 2010 (161,000) and subtracting the net decline of 15,000 places over the period 2010-2014 as derived from the annual reports of care organizations (DIGI MV, 2016). These numbers include a shift to more intensive care. From 1995 to 2010 the number of places in elderly facilities for low-need LTC decreased by about 30%, from 128,000 to 89,000, while the number of places in elderly facilities for high-need care increased by about 30%, from 55,000 to 72,000 (Statistics Netherlands, 2017c, 2017d). However, the former clear-cut distinction between low-need LTC in residential care facilities and high-need LTC in nursing homes has disappeared.
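As a quick consistency check, the capacity bookkeeping reported above can be reproduced in a few lines; all input numbers are taken directly from the text, and only the percentage changes are derived:

# Consistency check on the capacity figures reported above (numbers from the text).
places_2010 = 161_000
net_decline_2010_2014 = 15_000
places_2014 = places_2010 - net_decline_2010_2014
assert places_2014 == 146_000  # the 2014 estimate used later in the scenario analysis

# Shift from low-need to high-need places between 1995 and 2010.
low_need_1995, low_need_2010 = 128_000, 89_000
high_need_1995, high_need_2010 = 55_000, 72_000
print(f"low-need change 1995-2010:  {low_need_2010 / low_need_1995 - 1:+.0%}")   # about -30%
print(f"high-need change 1995-2010: {high_need_2010 / high_need_1995 - 1:+.0%}")  # about +31%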
According to a government website (www.zorgopdekaart.nl), in 2014 there were about 99,000 places in residential care facilities, although in many of these facilities elderly also receive high-need LTC. Of these 99,000 places, 33,650 cannot be reoriented towards high-need LTC while covering costs. The total (potential) supply of high-need institutional care is therefore 112,350 places (the estimated total capacity of 146,000 places minus the 33,650 places that cannot be converted). In 2016, 341 providers of institutional care and home care managed 386 nursing homes, 789 residential facilities and 259 combined locations in the Netherlands (Actiz, 2016a); these figures concern members of the association of LTC providers Actiz, who cover 90-95% of the market for institutional care. 80% of the population in the Netherlands has a nursing home or residential care facility within 5.2 km (Riedel and Kraus, 2011). Actiz concluded from a survey among its members that 150-200 LTC institutions, with a total capacity of about 10,000 elderly, were closed in the period 2013-2016. About half of the responding members are renting out rooms that are idle on the private market (Actiz, 2016b); the revenues from renting rooms to clients entitled to institutional care are usually significantly higher than from renting the same room on the private market (Olde Bijvank, 2015).

Until 2009, the government regulated the supply of institutional care by requiring a permit for building new facilities. When a permit was issued, providers were reimbursed for 100% of the actual capital costs. Since 2012, however, providers of institutional care have become increasingly at risk for the capital costs. In 2012 the reimbursement percentage of capital costs was lowered to 90%, and in subsequent years it was stepwise further reduced, to 0% in 2018. At the same time the percentage based on an NHC was proportionally increased; in 2016 the NHC was about 900 euro per month per client, with a slightly higher NHC for higher ZZPs. From 2018 on, the capital costs will be part of the negotiable price of a ZZP.

Scenario analysis
As argued above, two opposite forces primarily determine the future demand for institutional care. On the one hand, the long-term trend of ageing-in-place and recent measures of the Dutch government are expected to reduce the demand for institutional care. On the other hand, however, the ageing of the population is likely to result in a higher number of potentially frail elderly in need of institutional care. Furthermore, because the disability level is a key predictor of institutional use, the future expansion or compression of the period in which elderly are confronted with disabilities is likely to have a major impact on the length of stay and thereby the future demand for institutional care.

We therefore distinguish the following three primary drivers of change in the use of institutional LTC:
1. Population ageing;
2. Disability levels;
3. Ageing-in-place: later admission to institutional care given a disability level.

The gender-specific care probabilities per age band are based on the use of institutional care severity packages in 2014 as reported by Statistics Netherlands (2017a) (Figure 1). We simulate three scenarios, based on the three drivers of institutional LTC identified above, to calculate future use of institutional care over the period 2014-2035. These scenarios are summarized in Box 1.
Box 1. Scenarios for future demand of institutional long-term care (LTC).
- Scenario 1, 'Ageing': the proportion of older people receiving institutional care remains constant for each sub-group defined by age, gender and care severity package.
- Scenario 2, 'Improved health': admission to an institutional care facility is postponed by the same number of years as the increase in average life expectancy at 65.
- Scenario 3, 'Reinforced ageing-in-place': admission to an institutional care facility is postponed by the same number of years as the increase in average life expectancy at 65, and the average length of stay in an institution decreases linearly, in total by 6 months (from 36 to 30 months) over the period 2014-2035.

The first scenario, labelled 'Ageing', only takes into account the population projections of Statistics Netherlands (2017e) and assumes that the age-related probabilities for admission to an institutional LTC facility stay constant. In 2014, the Netherlands had 16.8 million inhabitants; 2.92 million of them were older than 65 years and 0.72 million were older than 80 years (Statistics Netherlands, 2017c). Of the residents in institutional care, 30% were aged above 90, 48% between 80 and 90, 18.5% between 65 and 80, and 3.5% below 65 years. Using a prognostic model, with yearly updates for trends in births, deaths and migration, Statistics Netherlands forecasts that in 2035, 4.5 million people will be older than 65 years, and 1.4 million older than 80 years (Statistics Netherlands, 2017e).

In the second scenario, labelled 'Improved health', we use the population projections of scenario 1 and, given the mixed evidence on compression and expansion of morbidity, furthermore assume that the probabilities of using institutional care shift by 1 year for every year of life expectancy gained. For instance, the probability of admission to an institutional care facility for a 70-year-old woman is attributed to a 71-year-old woman in the year in which female life expectancy (at the age of 65) has risen by 1 year above base-year female life expectancy: P(X)_t = P(X + ε)_(t+1) if and only if LE_(t+1) − LE_t = ε (see Rothgang, 2003). Because we use age bands, the probability of nursing home use for a person aged 78 years after a 2-year gain in life expectancy is determined for 3/5 by the probability of the age band 75-80 and for 2/5 by the age band 70-75. Drivers of this scenario could be, for instance, better life-styles or improved medical treatment methods.

In the third scenario, labelled 'Reinforced ageing-in-place', we use the trends in population ageing and disability levels of scenarios 1 and 2 and furthermore assume that the trend of ageing-in-place continues and the length of stay shortens. We assume that the average length of stay of people in an LTC institution declines from currently 36 months (3 years) to 30 months in 21 years over the period 2014-2035 (i.e. a reduction of 0.8% per year). To this end we assume that the probability of use of institutional care declines every year by 2/7 month; hence P(X)_(t+i) = (1 − y) × P(X)_t with y = (6/36) × (i/21) and i = 1, 2, …, 21. For reasons of simplicity, we assume a constant decline in length of stay; another option would be to assume a function with an asymptotic decline of 6 months. A minimal implementation sketch of these adjustments is shown below. Both the increased morbidity of residents and the significant decline in the use of institutional LTC by the elderly point to a reduction of the length of stay during the last decades.
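As signposted above, here is a minimal sketch of how the three scenario adjustments could be implemented. The age-band probabilities, the assumed 2-year life-expectancy gain, and the 2035 population vector are hypothetical placeholders (the study's actual inputs come from Statistics Netherlands); only the shift-and-scale logic follows the formulas given in the text.

import numpy as np

# Hypothetical institutional-use probabilities for 5-year age bands
# (65-70, 70-75, 75-80, 80-85, 85-90, 90+).
p_2014 = np.array([0.01, 0.02, 0.04, 0.09, 0.18, 0.35])

def shift_by_le_gain(p, gain_years, band_width=5.0):
    """Scenario 2: postpone admission by the gain in life expectancy at 65.
    With age bands, a gain g gives weight g/band_width to the next-younger
    band, mirroring the 3/5 vs 2/5 example in the text."""
    w = min(gain_years / band_width, 1.0)
    younger = np.concatenate(([0.0], p[:-1]))  # probabilities shifted one band up
    return (1.0 - w) * p + w * younger

def length_of_stay_factor(i):
    """Scenario 3 reduction: P_(t+i) = (1 - y) * P_t with y = (6/36) * (i/21)."""
    return 1.0 - (6.0 / 36.0) * (i / 21.0)

# Hypothetical 2035 population per age band, in thousands.
pop_2035 = np.array([1200.0, 1000.0, 800.0, 600.0, 350.0, 150.0])

demand_s1 = np.sum(pop_2035 * p_2014)                                     # scenario 1
demand_s2 = np.sum(pop_2035 * shift_by_le_gain(p_2014, gain_years=2.0))   # scenario 2
demand_s3 = demand_s2 * length_of_stay_factor(i=21)                       # scenario 3 in 2035
print(demand_s1, demand_s2, demand_s3)  # thousands of institutionalized elderly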
Data of Statistics Netherlands (2017a) show that from 2012 to 2014 the average length of stay (from admission until death) for high-need persons was reduced by 3.8%. Nursing home care providers also draw attention to an increasingly severe caseload and a substantial reduction of the average length of stay (Kiers, 2016). Drivers of this scenario could be further technological changes and policies that facilitate and stimulate ageing-in-place (e.g. domotica, e-health), and stronger preferences for care at home.

As mentioned before, since 2013 new low-need elderly are no longer admitted to institutional LTC facilities. People already admitted to institutional LTC keep their right to receive nursing home care. Hence, the number of low-need elderly in nursing homes will steadily decline. Based on the data on length of stay of institutionalized elderly with low needs in 2012 and 2013 (Statistics Netherlands, 2017f), we assume that each year 35% of the low-need elderly will die or enrol in high-need care.

Results
We use 2014 as the baseline year for the three scenarios to project future demand for institutional care. At the end of 2014 the total excess supply of institutional care facilities was about 5,650 places, being the difference between the estimated available total capacity (146,000 places) and the actual use (140,350 places, see Table 2). Despite the overall excess supply, however, there seems to be a qualitative mismatch: a shortage of about 7,600 places in (small-scale) nursing homes (see Table 3).

In all scenarios the total demand for institutional care is likely to decline in the first years, reaching its lowest level in 2016 (scenario 1), 2018 (scenario 2) and 2020 (scenario 3). As shown in Figure 2, however, in subsequent years this downward trend is reversed and demand will exceed the baseline capacity of institutional care facilities in 2022 (scenario 1), in 2027 (scenario 2), or will just match this capacity in 2035 (scenario 3). Hence, the upward trend in demand for institutional LTC as a result of population ageing may be partially or fully offset by healthy ageing and ageing-in-place, depending on the strength of these trends. The proportion of people over the age of 65 years in institutional care is 4.6% in 2014. In scenario 1 this proportion drops to 4.0% in 2021 and then increases to 4.7% in 2035. In scenario 2 it drops to about 3.6% in 2025 and then remains constant, while in scenario 3 it steadily declines to 3.0% in 2035.

Figure 2 shows that if the trend of ageing-in-place is sufficiently strong, the existing capacity is sufficient to meet future demand if we only look at the number of available places. The limited change in total demand for institutional care, however, hides a more important and sizeable change in the composition of this demand. As shown in Table 3, in all three scenarios there is a substantial shift in demand from low-need to high-need institutional care, particularly for care for people with cognitive impairments. Figure 3 illustrates that the demand for high-need institutional care already exceeded the existing capacity of high-need LTC facilities in 2014, and that in all scenarios this gap will substantially increase over time if the high-need capacity is not expanded. Hence, providers are facing an increasing excess supply of low-need care and an increasing excess demand for high-need care.
Discussion: policy options to counter the mismatch in demand and supply
In this paper we projected the potential impact of ageing, changes in disability levels and changes in the length of stay in institutional care on the demand for institutional care. These changes are driven by changes in demography, population health, technology, social norms, policy reforms and relative prices of care. By distinguishing three scenarios we tried to capture the potential impact of several of these factors (primarily demography, population health, technology and ageing-in-place policies) by a number of crude assumptions. Obviously, for this reason the scenarios provide merely a rough indication rather than a precise estimate of the future demand for institutional care. We compare the projected future demand to current supply, to explore to what extent current supply is both qualitatively and quantitatively sufficient to meet future demand.

At first sight, confronting total demand with the available baseline capacity, capacity problems do not seem to occur during the next 5-10 years, and might be tackled in the longer run by investments in healthy ageing and policies to reinforce ageing-in-place. A closer look, however, reveals a potentially growing gap between supply and demand for low-need and for high-need care, in particular for people with cognitive impairments. As a result of ageing-in-place and policy measures that restrict admission of low-need people to LTC institutions, the demand for institutional care by low-need elderly will decline in the coming years. This will increase the already existing excess capacity of residential care. Because the current stock of institutional care is primarily suited for residential care, we observe a potentially growing qualitative gap between the available stock of institutional care and the needs of the growing number of people with high somatic and cognitive impairments.

Providers of residential care may attract other clients by providing assisted living, caring for people with dementia and intensive nursing home care. Moreover, spare capacity might be used for people who need low-cost housing. However, residential care facilities cannot automatically be used for nursing home care or small-scale dementia care (Heinen et al., 2012). The costs of the required adjustments (e.g. expanding bathrooms, including lifters, broadening doors) are estimated at 15,000-25,000 euro per unit (de Wildt and Neele, 2003; Nouws and Sanders, 2014).

In a competitive market, such a mismatch between supply and demand is likely to be temporary: (new) providers will build new capacity or transform old capacity to address the mismatch. The market for institutional care has many properties of a potentially competitive market: capital costs are relatively low, new providers can enter with few beds, much care can be provided by relatively unskilled labour and the use of specialized equipment is small compared to hospitals (Bishop, 1988). There are several reasons, however, why this market will not clear easily in case of excess supply. Returns on investments in new facilities are highly uncertain, given the long-term nature of these investments (30-35 years), and because of the considerable uncertainties about the future role of the government and public financing, about future demand including the scrap value of a facility, and about the projected labour supply shortages (Joldersma et al., 2017; van Aartsen, 2017).
Moreover, excess supply might deter current providers from replacing older facilities by newer ones that are better suited to accommodate higher need levels, as long as there is excess demand for high-need facilities and providers are not 'punished' for providing less suitable facilities. In the past, the Dutch government basically regulated the capacity of institutional care by issuing permits for new capacity and a 100% reimbursement of capital costs. With the transition to a system where providers decide on the level of capacity and fully bear the risk of vacancies, competition may effectively reduce the potential mismatch between supply and demand. However, regulatory constraints at both the demand and supply side may still hamper the necessary adjustments of institutional LTC capacity.

A first supply-side constraint is that LTC institutions require a license to provide care that is financed by the public LTC insurance scheme (WLZ) and are not allowed to distribute profits to owners/shareholders. This may prevent investors from entering the market and establishing new or adjusting existing facilities. In addition, LTC providers need government approval if they want to sell facilities on the private market. Next, capacity adjustments may be hampered by municipalities, since they decide about land-use planning for societal purposes, including the provision of nursing home and residential home care. A final supply constraint that reduces the attractiveness of investments in adjusting current facilities is the restriction that these facilities may only be rented to low-income persons if the building is the property of a housing corporation. At the demand side, total demand is constrained by regional budgets set by the government. Regional care offices decide how this budget is allocated, and this allocation may not meet the preferences and needs of the residents (i.e. patients follow the money instead of the money following the patients).

Because of the considerable (regulatory) uncertainties and current demand and supply constraints, an adequate and timely adjustment to the projected (growing) quantitative and qualitative mismatch is unlikely without (a mix of) policy measures. At the demand side, the government can increase market competition by providing clients an individual trailing budget (i.e. the money follows the patient) in order to let clients choose their preferred nursing home. At the supply side, the government may pursue policies to reduce the overcapacity of low-need residential care facilities. For instance, the government body responsible for organizing housing for refugee status holders may actively rent or buy buildings of residential care facilities with idle capacity. By taking outdated facilities out of stock, the business case for building new capacity improves, because the new capacity will not result in cannibalization of the revenue on the old facilities. Furthermore, the government may increase incentives for investors to invest in better facilities for the growing high-need population by allowing providers to distribute profits (i.e. by lifting the legal ban on profit distribution). Increasing these incentives, however, may also involve risks. Especially when quality information is poor, for-profit nursing homes may have stronger incentives to skimp on quality. Evidence from the United States shows that non-profit nursing homes may perform better than for-profit ones (Grabowski et al., 2008; Hirth et al., 2014).
Therefore, improving information about the quality of LTC is important when the role of the market is reinforced to meet future consumer preferences about LTC provision. However, the current system of reporting on the quality of LTC providers in the Netherlands, which is based on a so-called consumer-quality index, is not satisfactory and will be abandoned, while a new system still needs to be developed (Ministry of Health, Welfare and Sport, 2015). Werner et al. (2016) conclude that quality reporting via the nursing home star rating system in the United States significantly affected consumer demand for high- and low-rated nursing homes. Zhao (2016) finds that while the effect of competition on nursing home quality is generally rather limited, this effect becomes significantly stronger with increased information transparency. These results suggest that regulations on public quality reporting and market structure are policy complements, and should be considered together to guarantee an adequate future supply of institutional care facilities.

Conclusion
Due to the opposite trends of ageing and ageing-in-place, LTC provision in the Netherlands will likely have to grapple with an increasing mismatch between the demand for and supply of institutional care. Future demand will increasingly consist of care for high-need elderly, whereas the current stock of care homes is better suited for relatively low-need elderly. As a result, both the existing overcapacity of low-need care facilities and the undercapacity of high-need facilities may increase. Hence, high-need elderly may increasingly be forced to use residential homes that are not suitable given their impairments. An option for the government to bring demand and supply more into balance is to abolish current regulatory constraints on demand and supply that appear to make LTC providers and potential entrants reluctant to invest in new capacity or to refurbish the existing capacity. By removing these constraints, however, the government may also increase the risk of market failure, such as moral hazard and quality skimping. Therefore, improving information about the quality of LTC provision is imperative in case of a greater reliance on market forces to meet future preferences for LTC.

Many countries face the same trends of ageing and ageing-in-place and may be confronted with similar challenges in matching future demand and supply of institutional LTC. The Dutch case shows the importance of differentiating care needs in forecasting future demand for institutional care, because the demand for low-need care and for somatic and cognitive high-need care might diverge. Looking at the market for institutional care from an aggregated point of view could miss a potentially severe mismatch between supply and demand, because of the rapidly changing composition of those who need institutional care. Of course, what we see as a potential mismatch now or in the Netherlands might be perceived differently in the future or in other countries. Notwithstanding this, given the rapidly changing composition and preferences of elderly people for institutional LTC, projections of future demand for institutional care may provide useful information for an adequate adjustment of the future supply of publicly financed LTC facilities.
2018-05-25T21:26:16.707Z
2018-05-21T00:00:00.000
{ "year": 2018, "sha1": "2655e0e23850e8d5f3984749036a9ea27c4f05e9", "oa_license": "CCBY", "oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/D7A2655F22B7AEB2EA1D8E1466E6DE4F/S1744133118000129a.pdf/div-class-title-trends-in-ageing-and-ageing-in-place-and-the-future-market-for-institutional-care-scenarios-and-policy-implications-div.pdf", "oa_status": "HYBRID", "pdf_src": "Cambridge", "pdf_hash": "db75dfa5012c8c3d503dd7b8b9f36af469916954", "s2fieldsofstudy": [ "Economics", "Political Science" ], "extfieldsofstudy": [ "Business", "Medicine" ] }
80386613
pes2o/s2orc
v3-fos-license
The Effectiveness of Chronic Gingivitis Treatment in Patients with Non-Removable Orthodontic Apparatus According to the Results of Periodontal Tissues Index Assessment

Background. The risk of chronic gingivitis is increased in patients who undergo orthodontic treatment. It is known that gingivitis is closely correlated with the level of hygiene.
Objective. The study involved 123 orthodontic patients with chronic catarrhal and hypertrophic gingivitis that developed during the first two months of active orthodontic treatment. We chose Vitis ORTHODONTIC (DENTAID INTERNATIONAL, Spain), which contains the active ingredients needed to support a healthy state of the oral cavity. All studied patients were 12-15 years old.
Methods. For the dental examination we used the OHI-S hygiene index according to Greene-Vermillion (1964). Gum bleeding was determined according to the modified SBI index by Muhlemann (1971); inflammation of the gingival margin was assessed by the PMA index of Parma (1960).
Results. The therapeutic treatment consisted of the following: all patients underwent correction of oral hygiene and removal of dental plaque. Vitis ORTHODONTIC was prescribed according to the manufacturer's recommendations: rinse with 15 ml for 30 seconds after the normal procedures of oral hygiene. Eating or drinking is not recommended during the 30 minutes after using this product. The results proved a high anti-inflammatory efficacy of the treatment schemes.
Conclusions. The complex therapy of early manifestations of inflammation in the periodontium had a positive effect on the subjective feelings of patients and on the hygiene performance indices, gum inflammation and bleeding.

Introduction
Dentition abnormalities impair the hygienic condition of the mouth; cariogenic effects exacerbate the situation and increase the risk factors for periodontal diseases. Several researchers indicate a high percentage of periodontal abnormalities accompanying dentoalveolar lesions [4,6,9,10]. Thus, the prevalence of periodontal diseases in patients requiring orthodontic treatment was 81.4%. According to Anne-Marie Bollen, symptoms of periodontal disease were determined in 89.3% of patients. Periodontal tissues were affected in all kinds of bite anomalies [1].
The clinical picture may correspond to varying degrees of severity of periodontal tissue diseases. Some authors [3,5] diagnosed chronic periodontitis in 70% of people and gingivitis in 30%, including chronic catarrhal gingivitis in 15% and hypertrophic gingivitis of medium gravity in 15%.
The aim of the study was to improve the efficiency of treatment of catarrhal and hypertrophic gingivitis in patients with a fixed orthodontic apparatus in the mouth when using a rinse aid.

Methods
The study comprised the examination and treatment of 123 orthodontic patients in the permanent dentition period with symptoms of chronic catarrhal and hypertrophic gingivitis that were determined two months after the beginning of active orthodontic treatment by the technique of permanent braces (the 'direct arc' version). The patients' age was 12-15 years: 59 (47.9%) boys and 64 (52.1%) girls. All patients were practically healthy.
Two weeks before the beginning of orthodontic treatment, all patients were subjected to a professional teeth-cleaning procedure to remove all deposits and external staining, and then the tooth surface was polished. Before the beginning of the study, instructions on oral hygiene were provided to the patients and a standard method of teeth cleaning was recommended.
In the study, patients' OHI-S hygiene indices according to Greene-Vermillion (1964) were used. Gum bleeding was determined by the modified SBI index by Muhlemann (1971), and inflammation of the gingival margin was assessed by the PMA index of Parma (1960) [7]. The therapeutic treatment consisted of the following: all patients underwent correction of oral hygiene and removal of dental plaque. Vitis ORTHODONTIC was prescribed according to the manufacturer's recommendations: rinse with 15 ml for 30 seconds after the normal procedures of oral hygiene. Eating or drinking is not recommended during the 30 minutes after using this product.
After two weeks of using Vitis ORTHODONTIC twice daily, patients were examined again. When assessing the results of the drug, the views of patients about its taste and the convenience of application were surveyed, and we also evaluated the dynamics of the major indices. Statistical data processing was carried out with the Student t-test [8].

Results
Initial data on the periodontal state of the patients before the orthodontic phase of treatment are presented in Table 1. The periodontium was clinically healthy and hygiene was satisfactory.
Within two months from the beginning of orthodontic treatment, due to the lack of oral hygiene and the presence of sites for retention of dental plaque around the brackets, most patients complained of bleeding gums when brushing their teeth, swelling, and halitosis. On objective examination, in 70 patients swelling, cyanosis of the gum and thickening in the area of the gingival papillae were evidenced, which pointed to a mild severity of catarrhal gingivitis. Mechanical irritation was accompanied by bleeding. On the teeth there was an increased amount of soft plaque (patients avoided brushing due to pain and bleeding gums). According to the results of objective examination, in 53 children hypertrophic gingivitis was determined, which was confirmed by the visual assessment of crown size and vertical probing. All of these patients had a mild severity of hypertrophic gingivitis, manifested by proliferation of the gum to 1/3 of the crown. To the touch, the gingival papillae were dense and produced false periodontal pockets; bleeding was absent, which is evidence of the fibrous form of hypertrophic gingivitis. For gingivitis diagnosis we used the classification of periodontal tissue diseases by MF Danilevsky (1994).
The evaluation of the patients' indices two months after the beginning of orthodontic treatment is presented in Table 2. All the indicators increased in both groups of patients: the oral hygiene index increased to 2.36 points, indicating a threefold deterioration; the average plaque index increased to 2.6 points, which verified unsatisfactory oral hygiene; the gingivitis index increased on average by 52%; and the bleeding index by 45%.
After the application of hygiene measures and the Vitis ORTHODONTIC solution, the patients reported within 2-3 days a reduction of bleeding and gum swelling and the disappearance of discomfort. On examination, a decrease of gingival hyperemia was determined. Within 8-10 days of the treatment, hyperemia and gingival swelling disappeared, and the gingival papillae compacted and acquired a normal form. By the 14th day of rinses the state of oral health had improved significantly. Along with a significant improvement of subjective sensations, the patients showed positive dynamics of the indices, and there was no irritation of the oral mucosa. None of them had any allergic reaction or any adverse side effects. All patients had good rinse results. Teeth sensitivity was not changed and the teeth were not stained. The results are presented in Table 3.
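A minimal sketch of the before/after comparison described above (means with standard errors, as in the M±m tables, plus a Student t-test). The index values are hypothetical placeholders, not the study's data, and the paired form of the test is an assumption made because the same patients were re-examined; the paper does not state which variant was used.

# Hedged sketch of the before/after index comparison (hypothetical values).
import numpy as np
from scipy import stats

# Hypothetical OHI-S scores before treatment and after two weeks of rinsing.
ohis_before = np.array([2.4, 2.3, 2.5, 2.2, 2.4, 2.6, 2.3, 2.5])
ohis_after = np.array([1.1, 1.0, 1.2, 0.9, 1.1, 1.3, 1.0, 1.2])

def mean_sem(x):
    """Return M and m as used in the paper's tables (mean and standard error)."""
    return x.mean(), x.std(ddof=1) / np.sqrt(len(x))

m_b, sem_b = mean_sem(ohis_before)
m_a, sem_a = mean_sem(ohis_after)

# Paired Student t-test, since the same patients are measured twice (assumption).
t_stat, p_value = stats.ttest_rel(ohis_before, ohis_after)

print(f"before: {m_b:.2f} ± {sem_b:.2f}, after: {m_a:.2f} ± {sem_a:.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")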
Discussion
The results of epidemiological studies of the dental status of children in some areas of Ukraine prove great variability and frequency of teeth anomalies, which varies in different regions from 30.8% to 85.4% and tends to increase. The treatment of anomalies of the jaw apparatus and teeth using fixed orthodontic structures is a priority in contemporary orthodontics, because it is highly effective and provides reliable retention of the obtained results. However, in the literature it is mentioned that periodontal tissues react to treatment with braces, whereby the share of gingivitis, according to Petrushanko TA (2013), is 33.3% [3,4,5]. Among the causes of inflammatory diseases of periodontal tissues in children, these categories are defined: worsening of the hygienic condition of the oral cavity, microbial factors, hormonal changes and the effect of orthodontic forces. However, the development of complex therapeutic measures for chronic catarrhal and hypertrophic gingivitis in children with non-removable orthodontic apparatus has its own significance and importance that should be considered.
Thus, a comparative analysis of clinical trials revealed that the developed complex of therapeutic measures for patients with braces helps to improve the clinical course, which made it possible to achieve stable remission.

Conclusions
Inclusion in the complex therapy of early manifestations of inflammation in the periodontium had a positive effect on the subjective feelings of patients and on the hygiene performance indices, gum inflammation and bleeding. It is necessary to emphasize that the use of Vitis ORTHODONTIC must be preceded by correction of oral hygiene and improvement of hygiene practices in patients.

Table 1. Initial data evaluation of the studied patients (M±m). Note: statistical significance of differences between the relevant groups of girls and boys.
Table 2. Evaluation of the patients' examination two months after the beginning of orthodontic treatment (M±m). Note: p1, reliability of the difference compared with the initial data.
2018-12-29T13:35:58.601Z
2017-07-12T00:00:00.000
{ "year": 2017, "sha1": "44a91e8a354d8b24c1f1d8c01d9423ca8ac3808c", "oa_license": "CCBYNC", "oa_url": "https://ojs.tdmu.edu.ua/index.php/ijmr/article/download/7105/7345", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "44a91e8a354d8b24c1f1d8c01d9423ca8ac3808c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
248921742
pes2o/s2orc
v3-fos-license
Editorial: Immunoglobulin Glycosylation Analysis: State-of-the-Art Methods and Applications in Immunology

Editorial on the Research Topic: Immunoglobulin Glycosylation Analysis: State-of-the-Art Methods and Applications in Immunology

Immunoglobulins (Igs) are critical in the ability of our immune system to recognize and eliminate pathogenic antigens. Igs are co- and post-translationally modified by oligosaccharides (glycans) that significantly impact their structural and functional properties. All five human Ig classes (IgG, IgA, IgM, IgD, and IgE) are glycosylated; and the glycosylation of IgG, in particular, has been studied in depth. Although IgG glycosylation varies between individuals, it is stable within an individual under homeostasis.
During disease, IgG glycans can dramatically change, making them promising early diagnostic and prognostic biomarkers for several inflammatory, infectious, and autoimmune conditions. The goal of this Research Topic is to cover 1) advancements in analytical approaches for Ig glycosylation analysis; 2) applications of state-of-the-art methodology/technology in exploring aberrant glycosylation patterns during illness/disease; and 3) recent findings on the functional relevance of Ig glycosylation in immunity during different physiological states.

Development of new analytical approaches facilitates the expansion of our knowledge of Ig glycosylation, from its regulation and functional effects in healthy conditions to changes in glycosylation before/during disease, their connection with disease progression, and the success of therapeutic interventions. Sensitivity, simplicity, and throughput are three key aims of current methodological development. The GlycoFibroTyper platform of Scott et al. exemplifies this quest for robust methods using minute amounts of sample to tackle large numbers of patients and/or longitudinal samples. The authors combine an antibody capture slide array with direct detection by matrix-assisted laser desorption/ionization (MALDI) (imaging) mass spectrometry (IMS) of PNGase F-released N-glycans; both microarrays and MALDI-MS are well established in high-throughput glycomics. The minimal sample preparation deviates from the dominant liquid chromatography (LC)-MS or LC-fluorescence approaches (1). Though much method development is still focused on IgG, the current advances in sensitivity will hopefully make the analysis of the remaining Ig isotypes more commonplace in the future (2).

IgG glycosylation depends on several demographic, genetic, and environmental factors (such as age, sex, ethnicity, and exercise). Genetic studies provide information on glycosylation regulation. We still know little, due to the lack of a direct genetic template and the complexity of glycosylation. Li et al. aim at filling this gap by providing an atlas of genetic regulatory loci related to IgG N-glycosylation with their target genes within functionally relevant tissues. This work implies that the IgG N-glycome is specific for individual tissues. Thereby, it consistently goes beyond general genome-wide association studies (GWAS) (3) in recognizing the impact of the B cell microenvironment (4). The latter also seems (indirectly) affected by estrogen. For example, Mijakovac et al. have explored estrogen's impact on IgG glycosylation in the context of menopausal changes.

Glycosylation of Igs is shown to change in various pathological states and has been studied in different diseases. Nevertheless, the mechanisms of these changes are still largely unexplored. The main question of whether glycosylation changes are a cause or consequence (or both) of disease mostly remains unanswered. IgG glycosylation has recently been studied in the context of thyroid autoimmunity, Hashimoto's thyroiditis and Graves' disease, by Trzos et al., connecting IgG glycosylation and Hashimoto's thyroiditis severity as well as demonstrating an impact of immunosuppressive methimazole therapy on the IgG N-glycome in Graves' disease. A further study in the parasitic disease lymphatic filariasis by Adjobimey and Hoerauf demonstrates the broad relevance of the IgG N-glycome. However, this fuels the need to control for common comorbidities, for example with endemic controls.
Distinct glycan profiles have been observed in endemic normal, asymptomatic individuals bearing microfilaria and patients with chronic pathology. Most notably, agalactosylated and afucosylated IgG distinguished chronic from asymptomatic patients. Linking immune competence and IgG N-glycome composition, detailed characterization on the level of individual subclasses, or of antigen-specific antibodies, would provide even more insight into these diseases and their underlying mechanisms. The IgG subclass-specific glycosylation signatures of liver fibrosis stages obtained by Scott et al. may serve as an example. Demonstrating a greater diagnostic performance than other non-invasive liver fibrosis tests (e.g. APRI, FIB4), their comparably simple workflow highlights the potential attractiveness of IgG glycome analysis for future clinical practice.

The functional relevance of IgG glycosylation is well established, and current knowledge of IgG glycosylation in different physiological states is reviewed by Zlatina and Galuska. One of the most established examples of functional effects of glycosylation is core fucosylation at Asn297 of the IgG Fc fragment, which results in lower affinity toward Fcγ receptor (FcγR) IIIa, decreasing antibody-dependent cell-mediated cytotoxicity (ADCC) (Sun et al.). Various functions and pathways of Ig sialylation, another important regulator of Ig effector functions, have been highlighted by Vattepu et al. Consequently, glycoengineering has been developed to improve the anti-inflammatory properties of intravenous immunoglobulin (IVIg), often used as a treatment for different inflammatory and autoimmune diseases (Vattepu et al.). IVIg likely has many mechanisms of action, although it is often reduced to sialylated IgG glycans that diminish the affinity of IgG for activating FcγRs and promote recognition by DC-SIGN, leading to the expression of inhibitory FcγRIIb. Research by Mimura et al. demonstrates that galactosylated non-fucosylated IgG, which has a high affinity for FcγRIIIa, has a 20 times higher potency to inhibit ADCC compared to native IgG. These findings indicate that IVIg anti-inflammatory activity is partially mediated by blocking FcγRs on immune cells, preventing activation, for example by autoantibodies. Glycoengineered recombinant Fc proteins may be a more efficient alternative to plasma-derived IVIg in inflammatory conditions, although binding to the low-affinity FcγRs is not entirely independent of the antigen-binding fragment. Effects of antibody glycoengineering could be tested on, e.g., rhesus macaques, a common non-human primate model. The work of Crowley et al. expands our knowledge on the preferences of macaque FcγRs for specific human IgG1 glycovariants and demonstrates this model species' suitability for evaluating the FcγR affinity of glycoforms. While Ig(G) glycosylation and its functionality are mostly studied in humans and mice, analogous studies in other animals are still lacking, leaving this area largely unexplored. Current knowledge on Ig glycosylation in farm animals is reviewed by Zlatina and Galuska, revealing that there are notable differences in Ig glycosylation between animals.

AUTHOR CONTRIBUTIONS
IT-A wrote the initial draft, which all authors critically revised and edited. All authors approved the final version.
The sexual and reproductive healthcare challenges when dealing with female migrants and refugees in low and middle-income countries (a qualitative evidence synthesis)

Background: Migrants and refugees face unprecedented inequalities in accessing sexual and reproductive health (SRH) care in developed and developing countries. Most attention has focused on the rich-world perspective, while huge numbers of migrants and refugees are moving towards less developed countries. This article synthesizes the barriers to proper SRH care from the perspective of low and middle-income countries.

Methods: We performed a systematic review of articles containing primary-source qualitative and quantitative studies with thick qualitative descriptions. Articles from various databases, including PubMed, Science Direct, HINARI, and Google Scholar, published between 2012 and 2022 were included. Because the context differed, we excluded articles dealing with migrants and refugees from low- and middle-income countries living in high-income countries. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) were used to select articles. The articles' quality was assessed using the standard CASP checklist. We used a socio-ecological model to investigate barriers at various levels, and thematic analysis was used to identify the strongest themes at each level of the model. This synthesis is registered under PROSPERO number CRD42022341460.

Results: We selected fifteen articles from a total of 985 for the final analysis. The results show that despite the diversity of the participants' host countries and countries of origin, their experiences using SRH services were quite similar. Most female migrants and refugees claimed to have encountered discrimination from service providers, and linguistic and cultural obstacles played a significant role in their experiences. In nations lacking universal healthcare coverage, the cost of care was a barrier to the use of SRH services. Other main obstacles to using SRH services were a lack of knowledge about these programs, worries about privacy, inadequate communication, stigma in the community, and gender-related power imbalances.

Conclusion: To enhance the use of SRH services by female migrants and refugees, it is vital to provide person-centered care and involve husbands, parents, in-laws, and communities in SRH coproduction. Training on cultural competency, compassion, and respect must be provided to healthcare personnel. Increasing financial access to healthcare for migrants and refugees is crucial, as is meeting their basic requirements.

Supplementary Information: The online version contains supplementary material available at 10.1186/s12889-024-17916-0.
Introduction
Women and girls accounted for 48% (134.9 million) of all international migrants worldwide in 2020, making up nearly half of all migrants and refugees, many of them in vulnerable humanitarian settings [1]. They usually cannot find a job, do not receive any education, and cannot consult a physician for sexual and reproductive health (SRH) issues [2,3]. According to the World Health Organization (WHO), migrant and refugee women are at a higher risk of rape, unwanted pregnancy, and unsafe abortion [4]. Because of these circumstances, they also suffer from depression and social isolation more often [5-9]. One of the fundamental rights enshrined in Agenda 2030 for Sustainable Development is access to universal health coverage (UHC), including SRH services, for everyone, including migrants and refugees [10]. Sexual and reproductive health and rights are inalienable human rights under international conventions [11], and services encompass the provision of information and assistance in preventing, diagnosing, counseling, treating, and caring for individuals in all matters related to SRH [12]. SRH is a prerequisite for achieving gender equality as well as good health and well-being. Hence, there is an increasing global need to address the sexual and reproductive health of migrants and refugees [3,13,14].

According to evidence from systematic reviews conducted in the developed world, being a migrant or refugee often results in power imbalances in SRH decision-making [15,16]. Refugees and migrants traveling to developed countries face sociocultural barriers to obtaining and using SRH services [17-21]. These impediments are compounded by legal and other traditional power dynamics. Furthermore, they face language barriers, which result in inaccurate information about SRH issues, communication barriers, and feelings of embarrassment when discussing SRH issues [22-24].

Nearly 45 million women were displaced by the end of 2021, and 83% of them were hosted in low- and middle-income countries experiencing humanitarian crises with urgent SRH needs [25]. Understanding the barriers and needs is critical for improving migrants' SRH and for the development of new interventions [13,26,27]. Individual studies have identified barriers in African [28,29], Asian [30-32], and Latin American [33,34] countries. However, there is no systematic qualitative evidence synthesis on SRH utilization barriers for migrants and refugees. This systematic analysis of the existing studies provides a holistic picture of the barriers and can inform targeted interventions at international or local levels. The findings will help program managers, policymakers, and healthcare providers develop appropriate tools to address barriers to SRH utilization in low and middle-income countries.

Methods
The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) were used to select the articles [35] (Fig. 1).
All qualitative and quantitative studies with sufficient qualitative descriptions published between 2012 and 2022 were included from various databases such as PubMed, Science Direct, HINARI, and Google Scholar. We applied both text words and MeSH terms when searching to increase the chance of retrieving potentially relevant articles. The four main categories of the search were related to (1) barriers/facilitators, (2) SRH services, (3) migrants/refugees, and (4) women/girls. For example, we used text words to search for barriers (e.g. "barriers" OR "facilitators" OR "problems" OR "qualitative research" OR "exploration") AND sexual and reproductive health services ("reproductive health" OR "reproductive health service*" OR "reproductive health utilization" OR "sexual health service*" OR "contraceptives*" OR "antenatal utilization" OR "obstetric care utilization") AND the study population (e.g. "migrant*" OR "refugee*" OR "transients" OR "refugee camps") AND females (e.g. "women" OR "female" OR "young women" OR "pregnant women") (Appendix 1). We searched broadly to capture recognized or unrecognized, resettled, registered, or unregistered migrants and refugees.

Inclusion and exclusion criteria
In this study, only peer-reviewed articles describing studies in low and middle-income countries (LMIC) that were published in the last ten years, from 2012 up to 2022, were included. The main phenomenon we were interested in was migrant women's and girls' experiences related to the utilization of sexual and reproductive services. The publication language had to be English. Articles covering migrants described as refugees, unregistered migrants, or asylum seekers were part of this study. The perspectives of care providers were also included. Qualitative and quantitative studies with thick descriptions of the qualitative findings were included.

We excluded articles that dealt with migrants or refugees from low and middle-income countries moving to or residing in high-income countries, since the context is expected to be different. We also excluded purely quantitative studies, letters, case reports, reviews, commentaries, books, protocols, theses, and editorials. Internal migration was also beyond the scope of this study. Relevant references were also searched to include potential studies in this synthesis.
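For illustration, boolean strings like those above can be assembled programmatically before they are pasted into a database interface. The following Python sketch is hypothetical and not part of the review's actual workflow; the term lists mirror the four search categories in the text, and the or_block helper is an invented name.

def or_block(terms):
    # Join one concept's synonyms with OR, quoting each phrase.
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

barrier_terms = ["barriers", "facilitators", "problems",
                 "qualitative research", "exploration"]
srh_terms = ["reproductive health", "reproductive health service*",
             "reproductive health utilization", "sexual health service*",
             "contraceptives*", "antenatal utilization",
             "obstetric care utilization"]
population_terms = ["migrant*", "refugee*", "transients", "refugee camps"]
female_terms = ["women", "female", "young women", "pregnant women"]

# AND-combine the four concept blocks into one query string.
query = " AND ".join(or_block(t) for t in
                     [barrier_terms, srh_terms, population_terms, female_terms])
print(query)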
Study selection and data extraction
Titles and abstracts were screened against the inclusion and exclusion criteria by two researchers independently (TDD and ASB). Then, double-screening of the full text of potentially relevant sources was done. Finally, team members discussed any disagreements concerning eligibility. All qualitative data related to women's experiences of reproductive health and barriers to healthcare utilization were extracted using a standardized form. The quality of the articles was checked using the Critical Appraisal Skills Programme (CASP) checklist by Oxford University [36]. The Mendeley reference manager software was used to record all articles, and duplicates were removed. We did a full-text review of the texts that passed the initial screening. The ENTREQ statement [37], Enhancing Transparency in Reporting the Synthesis of Qualitative Research, was used to extract data. This is a 21-item tool capturing characteristics of the studies included in the analysis. It contains information such as synthesis methodology, approach to searching, and other required parameters. All articles included in the study provided the relevant information (Appendix 2). This synthesis is registered under PROSPERO number CRD42022341460.

Data analysis process
The findings were analyzed using thematic analysis. The data were first placed in an Excel sheet to review the contents of the studies, including study population characteristics, region, publication year, and other context such as socio-demographic and cultural aspects of the study area. We developed initial codes, then sub-themes, and themes under each level of the socio-ecological model. We chose to use this model because it demonstrates the complex interplay between individual (e.g., behaviors), social and community (e.g., norms), institutional and systemic (e.g., health services, education), and structural (e.g., laws, protection mechanisms) factors that affect the health and wellbeing of migrants and refugees [38,39]. For some sub-themes that needed further elaboration, quotations were also extracted from the primary studies. OpenCode 4.02 was used for data management.

Individual level barriers
The strongest themes among the individual-level barriers relate to communication and to SRH knowledge and perception. Subthemes within the former category encompass patient-provider communication, communication between spouses, and parent-adolescent communication, while the latter category includes subthemes such as awareness of the availability and use of services, misinformation, low perceived risk of vulnerability, poor self-perception, reluctance to use services due to shame, and fear of side effects.
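As a minimal sketch of how the coded material could be organized under the socio-ecological model, the nested dictionary below uses level and theme names taken from this synthesis; the data structure itself is purely illustrative and was not part of the authors' analysis.

codebook = {
    "individual": {
        "communication": ["patient-provider", "spousal", "parent-adolescent"],
        "knowledge_and_perception": ["service awareness", "misinformation",
                                     "low risk perception", "shame",
                                     "fear of side effects"],
    },
    "social_and_community": {
        "gender_based_violence": ["discrimination in care seeking"],
        "decision_making": ["lack of male involvement",
                            "traditional power dynamics"],
    },
    "institutional_and_health_system": {
        "service_quality": ["cost of care", "effective access",
                            "24/7 availability"],
        "professional_competency": ["compassion", "policy knowledge"],
    },
    "structural": {
        "legal_and_policy": ["documentation requirements",
                             "health insurance", "discriminatory policies"],
    },
}

# Example lookup: sub-themes coded under individual-level communication.
print(codebook["individual"]["communication"])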
Communication is the strongest aspect of the individual-level barriers. It is an important aspect of establishing a connection between service providers and clients, and in the absence of effective communication, service provision becomes almost impossible [15]. It is through communication that health complaints, symptoms, diagnosis, treatment, follow-up, and prognosis are established. Refugees and migrants face more challenges because they come to a new host country that may speak a different language [15,52]. Communication is not only difficult for them, but it is also one of the most problematic tasks for medical personnel. Both clients and providers present it as one of the main barriers to utilizing SRH services [44,51]. Parent-daughter and couples' communication is another barrier to SRH utilization among refugees and migrants in low and middle-income countries [30,34,44,46].

A study on reproductive health among Venezuelan migrant women in Brazil identified communication as the main barrier to maternal antenatal and childbirth services. The women expressed that not being fluent in Portuguese resulted in discrimination in the healthcare system and placed them in a position where they were unable to get enough attention from the healthcare providers. Because of the communication gap, they faced the further challenge of long waits, contributing to service dissatisfaction and posing future barriers to seeking the service [34].

Parent-adolescent communication is another barrier to SRH utilization, especially among youth. In many Arab countries, communication about SRH remains taboo. Most parents reserve such information until marriage, and it is usually incomplete [7,30,53]. A study of healthcare provider and educator perspectives on adolescent SRH among Syrian refugees in Lebanon [23] identified inadequate SRH-related communication between parents and daughters. Parents usually block such communication out of stress, shame, discomfort, and stigma, which creates barriers to dialogue and service utilization.

"Parents don't bring up these topics (with their children). They'll tell you: 'You're opening up my daughter's eyes to something bad!' Or they'll say: 'She won't be thinking about these things until she hears about them.' But who says she's not thinking about this? Maybe she is thinking about these topics but she is too afraid (to discuss with her parents)? If a mother cannot educate her daughter (on this issue) then she should ensure that her daughter is receiving the correct information elsewhere." (High-school teacher, IDI).

A similar study in Bihar, India [44] identified poor spousal communication as a reason for not using family planning among migrants. The study found poor couple communication regarding contraceptive use; couples placed the topic in the backseat, while other issues like household needs, children, and family matters dominated the discussions. Migrant women perceive that contraceptive use is rarely on the table, and husbands have little or no interest in bringing it forth. In such cases, women remain either silent or in fear of provoking marital conflict by bringing the issue to the front.

"We do not bring up FP issues when husbands are at home, this might bring unnecessary conflict. We only do what they ask us to do." (Woman, age 20).
Other individual-level barriers include SRH knowledge-related factors such as lack of awareness about the availability and use of services [28,30,44,48], misinformation [28,34], low perceived risk of vulnerability [30,45,46], poor self-perception [28,45], shame about using the service, and fear of side effects [30,44]. A study on the use of family planning services by Syrian women in a refugee camp in Jordan by West et al. [48] identified misinformation and poor SRH knowledge among refugees as hindering the use of the services. Although family planning is good for their health and the wellbeing of their children, most mothers tend not to use the service because of negative thoughts and misinformation. Participants expressed concern that by using modern contraceptives they might lose their fertility. As expressed by one participant in that study,

"We believe that after the first child it's preferable… not to have (unspecified) contraception methods because we think… maybe we won't be able to have more children… Some women have been sterile after they used contraception." (Participant 7, FGD).

"They said it (the OCP) might cause me not to have children anymore." (Participant 6, FGD).

Another woman expressed a lack of knowledge of where to get the service. She stated,

"The most important (problem) is that people don't know about the contraceptive methods and where to get them… I don't know if there are any kinds of (FP) services here and where… no one told me… nobody cares." (Participant 9, FGD).

Another study, among migrants on the Guatemala-Mexico border, identified poor information about health systems regarding SRH. Factors such as contraceptive misinformation, lack of information on access to barrier and non-barrier contraceptives and on the presence of cervical screening services, and lack of sex education acted as barriers to SRH utilization [47]. Conversely, knowledge of the availability and accessibility of services played a key role in utilizing them. Refugees in Uganda likewise mentioned a lack of awareness about the availability of services, low self-perception, fear, shame, and anxiety as barriers to service utilization [28]. One participant stated, "I have never gone for contraceptives at the health facility" (16-year-old, IDI). Furthermore, disability, poor life skills, and school dropout are barriers to SRH utilization, particularly among young immigrants and refugees [30,41,42,54].

Social and community-level barriers
The second category of barriers in the socio-ecological model deals with social and community-related factors. Gender-based violence and decision-making are the main themes in this category. Under the first theme, discrimination against women seeking care and gender-based violence play an enormous role; under the decision-making theme, lack of male involvement in seeking SRH care and gender-related traditional power dynamics do so. This has been observed among young girls in Uganda, Syrian migrants in Lebanon, Venezuelan migrants in Brazil, and migrant women in Malaysia, Cambodia, Laos, Thailand, and Vietnam [28,30,31,34,49]. This shows that regardless of geographical variation and cultural differences, gender-based violence and power imbalances still play a key role as barriers to using SRH services among migrants, immigrants, and refugees in LMICs. A study by Fahme et al.
among Syrian refugee girls in Lebanon [30] found that men have overwhelming power to influence women on whether to take up or end family planning methods based on their (mis)conceptions. Women often have no choice but to comply with the man's opinion, without question. A healthcare provider stated,

"We have had women coming (to the clinic) several days after getting Implanon requesting that it be removed because their husbands have heard that it causes cancer, or that it can migrate under the skin and embolize to the heart. Even if their husbands were initially accepting of the contraception, they may have heard from their friends or others that it poses health risks to the woman." (Midwife, FGD).

Decision-making about the utilization of sexual and reproductive services is another challenge. This includes denial of services, negative attitudes to abortion care, husbands' sole decision-making, and lack of self-determination, all of which act as barriers to SRH service utilization. When husbands alone decide, exercising a male structure of power, the result is poor maternal and child health outcomes, including morbidity and mortality [30,45]. In places where a woman's decision-making is severely limited, adverse health outcomes are inevitable. The number of children and the timing of pregnancy are often determined by the husbands. A healthcare provider noted,

"They (Rohingya) want more children, their husbands want more children. He wouldn't allow these things (family planning). And their religious mindset. And they are totally illiterate, they do not know about family planning." (Paramedic, KII).

A young migrant in a refugee camp in Uganda [28] also stated,

"I have never gone for contraceptives at the health facility. I only use the natural method my husband has told me. But I have plans of using one of the family planning methods." (16 years old, IDI).

Institutional and health system-level barriers
At the institutional and health system level, service quality (lack of effective access and high cost of services) and professional competency (compassion and poor policy knowledge) are the main barriers to utilizing SRH services. The main problems under service quality are lack of effective access related to the high costs of services, the absence of 24-hour/7-day services, distance to the facility, lack of timely service, lack of healthcare resources, lack of health insurance, unavailability of suitable spaces to learn about SRH, limited service options, poor satisfaction, and sustainability-related problems [44-46]. Refugees and migrants who find themselves in impoverished conditions cannot afford to pay for services, including medical costs. Different studies in Lebanon, Jordan, Malaysia, and Thailand have indicated that the cost of services was the main challenge in accessing maternal or other reproductive health services [9,23,37,38,40,41].

A study on undocumented Myanmar migrants in Thailand found that migrants who need emergency services face unprecedented challenges related to the cost of a cesarean section delivery [51]. One of the healthcare providers working in the hospital stated:

"In case of critical patients transferred to us who need to have emergency operations to give birth, the cost will be high. Even if we have a few cases, the expenses of the obstetric and newborn sections will be the highest amount when compared with other sections of the hospital" (HCP, IDI).
Another study, on Syrian migrant girls in Lebanon [30], identified high medical costs as a barrier to accessing and using the service.

"I went to a hospital here, but no one helped us. I spent three days in the hospital in Saida (Lebanon), and no one helped us. The medical expenses were very high, and you are aware of our situation here. I went back to Syria to be treated." (young girl, IDI).

Besides medical costs, poor accessibility and service quality play a major role. Poor service quality, including a lack of needed resources and the absence of 24/7 services, was the main challenge among migrants and refugees in Uganda [28], Bangladesh [45], Jordan [48], Cambodia, Laos, Thailand, and Vietnam [49]. In places where service quality and accessibility are not ensured, migrants and refugees face SRH problems that put their lives and futures at risk, on top of their generally precarious living conditions [8,9,31,42,50].

Healthcare professionals are the persons who should provide appropriate and effective healthcare. However, a lack of compassion can deter access to and use of services. For refugees, their role is even more important, since options for different healthcare services and professionals are very limited. Lack of female healthcare workers, denial of service based on marital status, discrimination, inhumane treatment, lack of confidence to provide the service, language barriers, and poor skills and policy knowledge were common barriers. Healthcare providers, in turn, often complain about burnout and work overload [28,30,43,44,48,50,51].

A study on the needs and priorities of Syrian refugees in Jordan by Al-Rousan et al. identified highly disrespectful and humiliating treatment from healthcare providers, which is against human rights [50], and on the Mexico-Guatemala border, discriminatory treatment of foreign immigrants was observed [47]. In such places, migrants often end up paying high costs at private clinics in search of better attention and treatment, which puts them under greater financial strain [47,50].

In most studies, refugees and migrants complain about the absence of female healthcare professionals, which blocks service utilization. Women and girls, particularly in Middle Eastern countries where Syrians have sought refuge, are embarrassed to be treated by male doctors [30,50]. The behavior of healthcare professionals and the gender of the care provider play a key role in service utilization. Refugees and immigrants also question the skills and confidence of healthcare providers in emergency settings. They compare them with the physicians who treated them in their home countries, where they received more attention and could visit preferred medical personnel [21,49,51,55].

Sometimes healthcare professionals ignore the needs of unmarried women and girls who need SRH services. One migrant woman on the Thailand-Myanmar border [46] complained about the denial of service because of her marital status:

"They seem to only give medications and condoms to married couples. If some of them are still in school, they wouldn't be given anything to prevent pregnancy." (pregnant refugee, IDI).

Healthcare practitioners in many studies complain about work overload and burnout because of the high number of clients in their settings. They believe this also acts as a barrier to providing quality SRH services to refugees [45,50].
Structural level barriers
Structural barriers are those that operate at the macro level of the socio-ecological model. Legal and policy-related barriers are the main challenges blocking migrants from accessing services: lack of health insurance, mandatory documentation requirements before services are provided, and policy knowledge gaps among healthcare providers. These factors operate alone or in combination in many studies in this synthesis. In some countries, SRH services are not accessible to migrants merely because of their migration status, whereas other countries require completed documentation before rendering care that is essential for survival. For example, women were denied comprehensive abortion care because of their legal status in humanitarian settings in Bangladesh [45] and were stripped of their right to SRH services in Malaysia because of employment contract clauses [56]. The challenges also include discriminatory health policies, discriminatory prohibition of pregnancy for migrant women, compulsory health screening, denial of marriage for low-skilled professionals, and denial of family planning services under the pretext of preventing promiscuity [31,56,57]. Such barriers forced women to seek care from illegal and unsafe sources, putting their health at risk [9]. A medical doctor described the condition of migrant pregnant women as follows:

"They will automatically be illegal migrants because the moment they are pregnant, they will lose their visa and if they lose their visa, they become illegal migrants. But somehow, many of them do deliver locally." (private GP, KII).

In Lebanon, husbands were prohibited from accompanying their pregnant wives because of clinic policies, deterring them from attending the service [30]. Such policies not only discourage women from attending future services but also create mistrust and negative attitudes towards care and medical personnel. The study in Malaysia identified the worst case: women being reported to the legal authorities for custody when seeking emergency lifesaving services [31].

Discussion
This synthesis provides evidence about the socio-ecological determinants that preclude women in humanitarian settings from accessing and utilizing services to realize their sexual and reproductive health and their right to the enjoyment of the highest standard of health. According to our findings, migration and refugee status are existing problems in many developing countries, and despite geographic and cultural differences, migrants and refugees face similar barriers to service utilization from the individual to the policy level. These primarily include communication-related barriers, gender-based violence and decision-making, care quality and compassion, and legal barriers in the host countries. The findings confirm other main causes of poor SRH among refugees in various countries [12,17,58,59]. Our synthesis adds knowledge on how cross-cutting barriers such as basic needs and person-centeredness affect the utilization of SRH services and the well-being of women and girls. Training on SRHR, improving access to care, and compassion and communication are the cross-cutting facilitators of SRH service utilization for female refugees in developing countries. Hence, the discussion will focus on those facilitators of SRH service utilization.
Availing person-centered care (PCC) for migrants and refugees in LMICs
Every woman, including those in humanitarian contexts, has the right to quality SRH care [10,60]. The WHO quality of care framework for SRH places particular emphasis on the experience of care, which includes aspects such as communication, respect and dignity, and emotional support in the specific cultural context [4,61,62]. These person-centered factors often influence patients' opinions about the value of the care they receive and their satisfaction with services. Perceptions of the quality of care also reflect the effectiveness of health systems in meeting clients' expectations and clients' level of trust. These person-centered attributes furthermore have an impact on treatment outcomes and on future demand for services [61].

In the context of person-centered healthcare, communication is regarded as a critical starting point for establishing trust between the service provider and the client. Communication and linguistic barriers make it problematic for migrants to navigate the healthcare system and restrict healthcare personnel from providing proper services to migrants, which reduces the effectiveness of health promotion initiatives aimed at them [15,63]. For example, migrants' inability to effectively explain illness signs and symptoms may reduce the likelihood of syndromic infection detection, resulting in insufficient HIV and STI treatment [64,65]. Numerous studies have shown that in order to improve the experience and usage of services for migrants and refugees, health practitioners must be culturally competent, including language proficiency [17,66,67]. These skills can reduce communication hurdles caused by linguistic and cultural barriers, enable sensitive engagement with various community values, and address perceived and/or experienced discrimination against migrants and refugees by service providers. We advise that curricula for on-the-job continuous professional development or regular training of healthcare professionals should include cultural competency. In this context, cultural competency includes understanding of language, cultural safety, cultural awareness, and cultural sensitivity among health workers, in addition to honoring cultural values.
Ensuring rights to healthcare resources and financial means
Out-of-pocket payment requirements create direct financial barriers, particularly for female refugees and migrants in LMICs [5,31,41,68]. Securing SRH services for undocumented migrants continues to pose a challenge to achieving Universal Health Coverage (UHC), particularly in many low and middle-income countries. Undocumented women in Thailand, Myanmar, and Turkey encounter additional obstacles in obtaining SRH information, family planning services, antenatal care, and emergency obstetric services within those contexts [51,69,70]. Indirect financial barriers may include transportation and housing costs [47]. Moreover, in both developing and wealthy countries, lack of human resources and financial constraints have been recognized as barriers to improving access to and use of SRH services [60,71,72]. Governmental and nongovernmental organizations have often raised concerns about financial issues. Lack of funding for SRH among migrants and refugees leads to delays in necessary diagnosis and treatment [2,12]. For example, low levels of HIV testing among migrants from LMICs are caused by a lack of funding for migrant health, especially for preventive care [73]. Administrative requirements can also be a barrier to accessing care, for both refugees and providers. For example, criteria such as proof of residence raise questions about the eligibility of refugees and service providers [74]. Furthermore, it is a challenge for professionals to decide what services can be provided, as different categories of refugees have different entitlements. Even when legally permitted, administrative and financial barriers may limit access to care [31,66]. Ensuring financial access and support could increase the uptake of services by migrant and refugee women in low- and middle-income countries. Policy guidelines should also take into account any administrative barriers imposed by the host country. In most countries, this gap exists. Countries need to respect laws in humanitarian settings in order to define a common ground that works for all [10,60,75].

The need for promoting awareness and education among men and boys regarding sexual and reproductive health and gender equity
Key decisions about SRH use are made by husbands and in-laws, leaving women with no choice but to consent in order to avoid punishment and social stigma [44]. Many policies and regulations in LMICs fail to address the different forms of violence that people may face in their destination countries, as opposed to their countries of origin.
Studies show that sexual assault, forced sex, transactional sex, and other forms of sexual exploitation are very common. Some research suggests that partners or family members may be the initiators of physical or sexual abuse [74,76]. Assessing sexual exploitation by family members or close relatives can be challenging, as girls may be reluctant to come forward and report such incidents. However, we believe that in addition to sexual exploitation and harassment by their partners, a significant proportion of girls are also victimized by family members and close relatives. This finding has been reported in other, non-humanitarian settings [10,77]. Therefore, we suggest that promoting awareness and education among men and boys regarding sexual and reproductive health and gender equity could contribute significantly to mitigating those challenges. In addition to addressing the needs of women and girls regarding gender-based violence as part of the minimum initial service package, an in-depth exploration of the problem is required.

Availing basic services and inclusive cultural contexts
Addressing the needs of migrants and refugees goes beyond the health aspect, and other stakeholders should be involved in meeting basic needs. A minimum initial service package is outlined by UNHCR; however, in most humanitarian settings, people suffer from a lack of basic services, which may push SRH down their list of priorities [60].

Cultural differences between migrants and refugees and members of host communities influence the ease of access to and use of services. Studies have shown that uptake of SRH services is significantly affected by stigma and prejudice based on gender, migration status, and other environmental factors [17,20]. In addition, numerous studies have shown that migrant and refugee communities often stigmatize young people seeking SRH services [17,78,79]. In Sweden, a culturally tailored SRH education programme for refugee family members, including husbands, was provided during the settlement phase, and an evaluation found that it increased their understanding of sexual health and gave them the confidence to use the health system [22]. Low- and middle-income countries should benefit from such initiatives for migrant women and their families, ensuring sensitivity to the diversity of local values and attitudes.

A study among undocumented immigrant women in Turkey revealed that some women do not go to the hospital even during childbirth due to fear of deportation [70]. Such conditions could lead to maternal and fetal morbidity and mortality. For this reason, it is important to draw lessons from commendable approaches taken by authorities in the United States and Northern European countries, where undocumented immigrant women benefit from preventive reproductive health programs without being reported [75,80,81]. Furthermore, to develop initiatives that destigmatize sexual health issues and the use of services by young migrants, health system interventions should focus on community members, religious and faith leaders, and multicultural groups [82,83]. Mechanisms that engage community members in the co-production of healthy SRH should be put in place to improve the well-being of migrants and refugees in LMICs.
Strengths and weaknesses
The use of the socio-ecological model provides a better understanding of the barriers across countries, including institutional and structural barriers. This synthesis critically appraised the primary articles, and we included participants both inside and outside refugee camps. The synthesis identified priority areas for service packages in the health sector and beyond, recognizing the need for a multi-stakeholder approach in low- and middle-income countries. Its weaknesses are that, despite our best efforts to conduct a comprehensive search, some studies may not have been included; publications were limited to the English language; and the restriction to the last ten years may have missed some relevant qualitative data. The focus of the current study is on cis-straight women. We also acknowledge that the included studies may not capture all issues, as many women and girls are reluctant to disclose some of the challenges they face due to shame and fear, which is common in many low and middle-income countries.

Conclusion and recommendations
Optimizing person-centered care, ensuring access to health resources and financing, educating husbands and communities on gender equality, and providing basic services in an inclusive context are the four areas that need intervention to improve SRH uptake among female migrants and refugees, including unregistered ones, in low and middle-income countries.

Evidence-based SRH services should be made available to promote person-centered care, provide appropriate language support, respect women's dignity, and maintain privacy and confidentiality. In particular, husbands and opinion leaders such as religious leaders and family members should be educated about sexual and reproductive health and rights. Further research is also needed to identify the impact of structural inequalities and of rights-based approaches to improving SRH for refugees and migrants. In most situations, research into different norms, power dynamics, and political prioritization is also important to understand why SRHR remains a deprioritized issue among refugees and migrants.

Nations should establish and communicate healthcare accessibility measures to attain Universal Health Coverage (UHC), extending the right to health to undocumented individuals. This emphasizes the principle that everyone, irrespective of their migration status, should be able to avail themselves of the necessary services.

Non-health sectors need to help overcome the significant structural barriers to SRH. In addition, SRHR policies for migrants need to be broadened to cover incidents such as sexual assault and to challenge the culture of gender-based violence. Responses to SRHR should be based on the recognition that refugees and migrants need adequate health systems and legal protections, as they are vulnerable in their countries of origin, while travelling, and at their final destination. If the right to health is to be maintained, preserved, and fully realized in times of need, curative and preventive SRH services for migrants and refugees must be adequately resourced.

Fig. 1 Flow diagram for selecting articles for the systematic synthesis to identify the sexual and reproductive health challenges of female migrants in low-income settings
Heat stress responses and population genetics of the kelp Laminaria digitata (Phaeophyceae) across latitudes reveal differentiation among North Atlantic populations

Abstract: To understand the thermal plasticity of a coastal foundation species across its latitudinal distribution, we assess physiological responses to high temperature stress in the kelp Laminaria digitata in combination with population genetic characteristics and relate heat resilience to genetic features and phylogeography. We hypothesize that populations from Arctic and cold-temperate locations are less heat resilient than populations from warm distributional edges. Using meristems of natural L. digitata populations from six locations ranging between Kongsfjorden, Spitsbergen (79°N), and Quiberon, France (47°N), we performed a common-garden heat stress experiment applying 15°C to 23°C over eight days. We assessed growth, photosynthetic quantum yield, carbon and nitrogen storage, and xanthophyll pigment contents as response traits. Population connectivity and genetic diversity were analyzed with microsatellite markers. Results from the heat stress experiment suggest that the upper temperature limit of L. digitata is nearly identical across its distribution range, but subtle differences in growth and stress responses were revealed for three populations from the species' ecological range margins. Two populations at the species' warm distribution limit showed higher temperature tolerance compared to other populations in growth at 19°C and recovery from 21°C (Quiberon, France), and in photosynthetic quantum yield and xanthophyll pigment responses at 23°C (Helgoland, Germany). In L. digitata from the northernmost population (Spitsbergen, Norway), quantum yield indicated the highest heat sensitivity. Microsatellite genotyping revealed all sampled populations to be genetically distinct, with a strong hierarchical structure between southern and northern clades. Genetic diversity was lowest in the isolated population of the North Sea island of Helgoland and highest in Roscoff in the English Channel. Altogether, these results support the hypothesis of moderate local differentiation across L. digitata's European distribution, whereas effects are likely too weak to ameliorate the species' capacity to withstand ocean warming and marine heatwaves at the southern range edge.

capacity and strong spatial structuring (King, McKeown, et al., 2018; Miller et al., 2019). Studies on local adaptation in L. digitata suggest that differentiation between populations could have occurred due to their geographic position (range-central and marginal as well as southern and northern). King et al. (2019) investigated the expression of genes coding for heat shock proteins (HSP) in response to an hour-long heat shock in L. digitata from Scotland (range center) and Southern England (trailing edge). In this short-term study, the maximum HSP response occurred at 4-8°C higher temperatures in the southern populations, despite comparably low genetic diversity. The reduced genetic diversity and altered reproductive strategy in a southern marginal population in Brittany, France, also suggest that local differentiation has taken place (Oppliger et al., 2014; Valero et al., 2011). Overall, research on integrative responses such as growth is lacking when assessing the intraspecific thermal variation of L. digitata.
Additionally, few studies on thermal responses of kelps incorporate physiology and population genetics over large geographic scales, although this combination may help to better predict climate change effects (Nepper-Davidsen, Andersen, & Pedersen, 2019). The main objective of this study was thus to assess differentiation in heat stress responses among populations of Laminaria digitata present along the entire Northeast Atlantic distribution zone through a mechanistic, common-garden experiment. We hypothesized that an increasing thermal selection pressure toward the southern distribution limit has increased the heat resilience of sporophytes from southern L. digitata populations. Because of the high similarity of thermal characteristics across regions reported in previous comparative studies (Bolton & Lüning, 1982; tom Dieck, 1992), we expected local differentiation in response to heat to be of small extent and to occur mainly toward the upper temperature limit (see also King et al., 2019). We further expected phenotypic differentiation to occur more prominently in populations experiencing low amounts of gene flow, while we expected low genetic diversity to be associated with reduced heat resilience as a result of genetic drift and possible maladaptation, which we investigated by the use of neutral microsatellite markers.

Sample collection and preparation
We collected 30-35 fertile L. digitata sporophytes per location (Figure 1a). Entire sporophytes were stored in ambient seawater for up to two days before processing. At the sampling locations, clean material from the meristematic region was preserved in silica gel for microsatellite genotyping. For the heat stress experiment, six disks (Ø 20 mm) were cut from the meristematic region of each sporophyte (i.e., 180 disks per population) at a distance of 5-10 cm from the stipe-blade transition zone. Disks were stored moist in cool boxes (<15°C) and transported to the laboratory within 30 hr. All experiments were performed at the Alfred Wegener Institute in Bremerhaven, Germany.

Experimental design
We designed the experiment (Figure 2) as a mechanistic short-term exposure to heat stress around the upper survival temperature of L. digitata sporophytes (21°C for a two-week exposure; tom Dieck, 1992). A temperature of 19°C was considered a sublethal treatment for all populations, 21°C a threshold treatment (lethal over a longer exposure time; tom Dieck, 1992; Wilson, Kay, Schmidt, & Lotze, 2015), and 23°C a critical stress treatment (Bolton & Lüning, 1982), which also surpassed the mean daily maximum temperatures of all sampled populations in 2018 (Figure 1c). We exposed all samples to the same temperatures, irrespective of the ecological significance for local populations, to investigate the thermal plasticity and potential of L. digitata across its entire distribution range. The heat stress experiment was conducted in independent runs under common-garden conditions with material from Spitsbergen, Tromsø, Helgoland, Roscoff, and Quiberon. Due to logistic constraints, Bodø had to be excluded, and Spitsbergen material was only tested for growth and fluorescence characteristics and not for biochemistry and pigments. For each population, five replicate pools each contained all meristem disks of six distinct sporophytes to prevent pseudoreplication. Irradiance ranged between 30 and 35 µmol photons m−2 s−1 at the bottom of the beakers in a 16:8-hr light:dark (L:D) cycle (ProfiLux 3 with LED Mitras daylight 150, GHL Advanced Technology, Kaiserslautern, Germany).
Beakers were aerated gently to ensure motion of disks and even light and nutrient availability. To allow recovery from sampling stress, disks were cultivated at 10°C for two (Tromsø) or nine days (Spitsbergen, due to logistic issues), or at 15°C for four (Roscoff, Quiberon) or three days (Helgoland) before the acclimation phase of the experiment. From each replicate pool, eight disks were then randomly assigned to one replicate 2 L glass beaker in each of the four temperature treatment groups (15, 19, 21, and 23°C; n = 5). Six disks per replicate were marked by punching a small hole in the outer rim with a Pasteur pipette, to be frozen for biochemical and pigment analysis during the experiment. The two unmarked disks were used for growth and fluorometric measurements over the course of the experiment.

At the beginning of the experiment, disks were acclimated at 15°C for five days to obtain a similar metabolic state (day −5 to day 0; Figure 2). Although the northern populations Spitsbergen and Tromsø do not usually experience temperatures this high (Figure 1c), 15°C is a temperature within the growth optimum of L. digitata (Bolton & Lüning, 1982; tom Dieck, 1992), which is considered to be stable (Wiencke, Bartsch, Bischoff, Peters, & Breeman, 1994), even for the Spitsbergen population (Franke, 2019). Starting the heat stress treatment on day 0, temperature was increased in increments of 2°C day−1 until the desired temperature was reached. The maximum temperature of 23°C was applied for five days, while 21°C and 19°C were applied for six and seven days, respectively, according to the acclimation scheme (Figure 2). On day 8, temperature was set to 15°C for all treatment groups to initiate a recovery period of seven days. Measurements took place at the beginning of the experiment (day −5; Figure 2), at the beginning of the heat treatment (day 0), before applying the maximum temperature of 23°C (day 3), in the middle of the heat treatment (day 6), at the end of the heat treatment (day 8), and after the recovery period (day 15).

FIGURE 2 Timeline of the heat stress experiment of Laminaria digitata. Dotted lines separate experimental phases of acclimation at 15°C (days −5-0), heat treatment (days 0-8), and recovery at 15°C (days 8-15). Growth and Fv/Fm were measured on days −5, 0, 3, 6, 8, and 15. On days 0 and 8, rapid light curves were performed and samples were frozen for biochemical and pigment analyses.

Relative growth rates
Two disks per replicate were repeatedly measured for growth over the course of the experiment (n = 5). Disks were blotted dry and weighed for growth analyses. Relative growth rates (RGR) were calculated as

$$\mathrm{RGR} = \frac{\ln x_2 - \ln x_1}{t_2 - t_1},$$

where x1 = weight (g) at time 1, x2 = weight at time 2, t1 = time 1 in days, and t2 = time 2 in days.

PAM fluorometry
Fluorescence parameters were assessed to estimate photoacclimation reactions in response to temperature (Davison, Greene, & Podolak, 1991; Machalek, Davison, & Falkowski, 1996) and were all measured using a PAM-2100 chlorophyll fluorometer (Walz, Effeltrich, Germany). Maximum quantum yield of photosystem II (Fv/Fm) was repeatedly measured in two disks per replicate over the course of the experiment following 5 min of dark acclimation (n = 5). Before and after the heat treatment (day 0 and day 8), rapid light curves (RLC) were conducted after the Fv/Fm measurements on one disk (n = 3). RLC irradiance steps ranged from 0 to 511 µmol photons m−2 s−1.
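As a minimal sketch of the RGR formula above, the following Python function computes the rate from two fresh weights measured a known number of days apart; the function name and example values are illustrative only and not taken from the study.

import numpy as np

def relative_growth_rate(x1, x2, t1, t2):
    # RGR in g g^-1 day^-1 from fresh weights x1, x2 (g) at days t1, t2.
    return (np.log(x2) - np.log(x1)) / (t2 - t1)

# Example: a disk growing from 0.50 g on day 0 to 0.55 g on day 8
# yields an RGR of about 0.012 g g^-1 day^-1.
print(relative_growth_rate(0.50, 0.55, 0, 8))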
Based on the photon flux density (PFD) and the effective quantum yield, relative electron transport rates (rETR) in photosystem II were calculated following Hanelt (2018) as

$$\mathrm{rETR} = \Delta F / F_m' \times \mathrm{PFD}.$$

rETR was plotted against PFD, and the resulting curves were fitted following the model of Jassby and Platt (1976) to calculate the maximum relative electron transport rate rETRmax, the saturation irradiance Ik, and the photosynthetic efficiency α of each curve. Nonphotochemical quenching was calculated following Serôdio and Lavaud (2011) as

$$\mathrm{NPQ} = \frac{F_m - F_m'}{F_m'},$$

where Fm = maximum fluorescence of a dark-adapted sample, and Fm′ = maximum fluorescence of a light-adapted sample. NPQ versus irradiance curves were fitted following the model of Serôdio and Lavaud (2011) to calculate the maximum nonphotochemical quenching NPQmax, the saturation irradiance E50, and the sigmoidicity coefficient n.

Biochemistry
Biochemical and pigment analyses were conducted with material from Tromsø, Helgoland, Roscoff, and Quiberon. We assessed the early photosynthetic product mannitol, which is accumulated during summer (Schiener, Black, Stanley, & Green, 2015), and elemental carbon and nitrogen to estimate carbon assimilation and nutrient storage in response to temperature. In wild sporophytes, assimilated mannitol is metabolized into the long-term storage polysaccharide laminarin and translocated into the distal thallus (Gómez & Huovinen, 2012; Yamaguchi, Ikawa, & Nisizawa, 1966). As the meristematic region only contains minimal amounts of laminarin in wild sporophytes (Black, 1954), and as maximum laminarin contents occur with a seasonal delay of 1-2 months in late autumn (Haug & Jensen, 1954; Schiener et al., 2015), we did not assess laminarin storage in our short-term experiment on isolated meristematic disks.

Before the start and at the end of the heat treatment (day 0 and day 8), marked disks were sampled and frozen for biochemical and pigment analyses. They were ground under dim light conditions, weighed to 50-80 mg, and extracted in 90% aqueous acetone in darkness for 24 hr at 7°C. From the pigment contents, the xanthophyll pool size per chlorophyll a and the de-epoxidation state of the xanthophyll cycle pigments violaxanthin (V), antheraxanthin (A), and zeaxanthin (Z) were calculated as

$$\mathrm{VAZ{:}Chl}~a~\text{ratio} = \frac{V + A + Z}{\mathrm{Chl}~a} \qquad \text{de-epoxidation ratio} = \frac{Z + 0.5A}{V + A + Z}.$$

Statistical analyses of physiological parameters
As we measured two disks per replicate, we calculated growth rates and Fv/Fm from mean values per replicate. One disk was removed from the Spitsbergen 23°C treatment due to bleaching during the heating ramp. Despite identification efforts in the field, almost none of the microsatellite markers amplified in two samples from Spitsbergen (see also 2.3.2). This led to the assumption that the two samples were of Hedophyllum nigripes, which is morphologically very similar to L. digitata (Dankworth, Heinrich, Fredriksen, & Bartsch, 2020; Longtin & Saunders, 2015). One replicate pool probably containing meristem disks from both species was therefore removed from the experiment. Because the mannitol extraction was performed in triplicate, means of the three subsamples of each mannitol replicate were analyzed. In the carbon and nitrogen analyses, four data points were deleted due to a measuring error on day 0. In the xanthophyll pool and de-epoxidation analyses, one outlier was deleted due to implausibly high zeaxanthin contents, about four times higher than the next highest value. All analyses of the heat stress experiment were performed in the R statistical environment version 3.6.0 (R Core Team, 2019). We fitted generalized least squares models for all parameters and tested for significance using analyses of variance (ANOVA).
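To make the curve-fitting step concrete, here is a minimal Python sketch of fitting a rapid light curve with the hyperbolic tangent model of Jassby and Platt (1976), rETR = rETRmax · tanh(α · PFD / rETRmax). The example data are invented; only the parameter names (rETRmax, α, Ik) follow the text, and the study itself did not necessarily use this code.

import numpy as np
from scipy.optimize import curve_fit

def jassby_platt(pfd, retr_max, alpha):
    # rETR saturating with photon flux density (PFD); Jassby & Platt (1976).
    return retr_max * np.tanh(alpha * pfd / retr_max)

pfd = np.array([0.0, 25, 55, 110, 180, 260, 360, 511])
retr = np.array([0.0, 6.1, 12.4, 20.3, 25.1, 27.0, 28.2, 28.9])

popt, _ = curve_fit(jassby_platt, pfd, retr, p0=[30.0, 0.3])
retr_max, alpha = popt
i_k = retr_max / alpha  # saturation irradiance I_k = rETR_max / alpha
print(retr_max, alpha, i_k)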
All models were fitted using the "gls" function from the R package "nlme" (Pinheiro, Bates, DebRoy, & Sarkar, 2019), with weights arguments to counteract heterogeneity of variance of normalized model residuals (Zuur, Ieno, Walker, Saveliev, & Smith, 2009). For the repeated-measures parameters (growth and Fv/Fm), population, temperature, and exposure time were modeled as interactive fixed effects, and a compound symmetry correlation structure was incorporated using a time covariate and replicate as grouping factor (Pekár & Brabec, 2016; Zuur et al., 2009). Analyses of variance were then performed on the models with the "anova" function to assess the effects of the fixed effects temperature, population, and exposure time, and their interactions. For all biochemical, pigment, and fluorometric analyses, initial contents at day 0 were incorporated in the models as covariates to account for baseline differences, and temperature and population were modeled as fixed effects. Analyses of variance were performed to assess the effects of the initial-value covariate and the fixed effects temperature and population, and their interaction. Pairwise comparisons were performed using the R package "emmeans" (Lenth, 2019) with the "Satterthwaite" mode for calculation of degrees of freedom and Tukey adjustment of p-values for multiple comparisons between independent groups. For pairwise comparisons in the repeated-measures analyses (growth and Fv/Fm), the "df.error" mode for calculation of degrees of freedom was applied. Because of the repeated-measures design, and because the "df.error" mode overestimates the degrees of freedom (Lenth, 2019), p-values were adjusted by means of the conservative Bonferroni correction for multiple testing to reduce the probability of type I errors. Correlation analyses (Kendall's rank correlation) were conducted between all parameters measured after the heat treatment (relative growth rates calculated between day 0 and day 8) using the "cor.test" function from the default R package "stats" (R Core Team, 2019).

Amplification was faulty for the population of Helgoland sampled in 2018, which could be linked to poor preservation or insufficient dehydration. Therefore, the dataset of the same population sampled at the same site in 2016 was used in the genetic analysis instead. In total, 190 individuals were initially genotyped for twelve microsatellite markers, and 179 were retained.

Genetic diversity
Prior to genetic analysis, the presence of null alleles was tested using the ENA method in FreeNa (Chapuis & Estoup, 2007). Single- and multilocus estimates of genetic diversity were calculated for each population as the mean number of alleles per locus (Na), unbiased expected heterozygosity (He, sensu Nei, 1978), observed heterozygosity (Ho), and number of private alleles (Pa) using GenAlEx 6.5 (Peakall & Smouse, 2006). In addition, allelic richness (AR) was computed for each locus using FSTAT 2.9.3 (Goudet, 2001) with the rarefaction method. Linkage disequilibrium between pairs of loci and single estimates of deviation from random mating (FIS) were calculated according to Weir and Cockerham (1984), and statistical significance was computed using FSTAT based on 7,920 permutations for linkage disequilibrium and 10,000 for FIS.
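For illustration, Nei's (1978) unbiased expected heterozygosity at a single locus can be computed from a sample of allele copies as He = 2n/(2n − 1) × (1 − Σ p_i²), where n is the number of diploid individuals and p_i the allele frequencies. The Python sketch below is illustrative only; the study itself used GenAlEx and FSTAT for these statistics.

from collections import Counter

def unbiased_expected_heterozygosity(allele_copies):
    # allele_copies: list of 2n allele calls for n diploid individuals.
    counts = Counter(allele_copies)
    total = len(allele_copies)  # 2n allele copies
    sum_p2 = sum((c / total) ** 2 for c in counts.values())
    return total / (total - 1) * (1.0 - sum_p2)

# Example: one microsatellite locus scored in five diploid individuals.
print(unbiased_expected_heterozygosity(
    [142, 142, 146, 146, 146, 150, 150, 154, 154, 158]))  # ~0.867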
To test the null hypothesis that populations did not differ in genetic diversity, a one-way ANOVA was performed for AR, P a , and H e in R (R Core Team, 2019).

| Population structure

Population structure was investigated first by the analysis of the pairwise estimates of F ST (Weir & Cockerham, 1984), and their significance was computed using FSTAT (Goudet, 2001). Second, a Bayesian clustering method as implemented in Structure 2.3.4 (Pritchard, Stephens, & Donnelly, 2000) was used to determine the existence of differentiated genetic groups within L. digitata populations, categorizing them into K subpopulations. A range of clusters (K) from one to six was tested with 100 iterations, a burn-in period of 100,000, and a Markov chain Monte Carlo of 500,000 (Gilbert et al., 2012). The most likely value of K was determined using the Evanno ΔK (Evanno, Regnaut, & Goudet, 2005) obtained using Structure Harvester (Earl & vonHoldt, 2012). Replicates of Structure runs were combined using the CLUMPP software (Jakobsson & Rosenberg, 2007).

| Heat stress experiment

The significant main effects of independent factors are only reported in the absence of significant interactive effects. Therefore, in the presence of significant interactive effects, the simultaneous effects of two or more independent variables on a given dependent variable are given more emphasis than significant main effects.

| Growth

The significant population × temperature × time interaction for relative growth rates (Figure 3; Table 1) indicates that growth in the temperature treatments differed significantly between populations over exposure time. However, there were differences in general growth activity between populations already during acclimation at 15°C. Over the recovery period at 15°C (Figure 3c), specimens from all populations showed significantly decreased growth after exposure to 23°C compared to lower temperature treatments (Bonferroni tests, p < .05). Spitsbergen and Tromsø essentially ceased growth (RGR < 0.001 and 0.002 g g −1 day −1 , respectively), while Helgoland, Roscoff, and Quiberon maintained slow growth (0.006, 0.004, and 0.01 g g −1 day −1 , respectively). However, during recovery after exposure to 23°C, there were no significant differences between growth rates of the different populations (Bonferroni tests, p > .05). Quiberon material recovered best, in that there were no significant differences between the 15 and 21°C treatments, while disks in these treatments simultaneously grew significantly faster than those from the former 23°C treatment (Bonferroni tests, p < .01). In the more detailed time course of growth rates (Figure A1), it became evident that all populations showed a trend of recovery from 21°C, as growth rates increased between day 8 and day 15 (Figure A1), which was significant only for Quiberon (RM ANOVA; Table A1).

| Photoacclimative responses

Maximum quantum yield of photosystem II (F v /F m ) in the temperature treatments differed between populations over time, which is represented by the significant population × temperature × time interaction (Figure 4, Table 1). After acclimation, all samples showed no signs of stress, with F v /F m ranging between 0.7 and 0.8 (Figure 4a). At the end of the heat treatment (Figure 4b), temperature effects on quantum yield contrasted between the populations. At higher temporal resolution (Figure A2), a general difference between southern and northern populations became more pronounced.
While the significant decrease in quantum yield at 23°C took place between day 6 and day 8 for Helgoland, Roscoff, and Quiberon (RM ANOVA; Table A1; Bonferroni tests, p < .05), this decrease already started between day 3 and day 6 in Spitsbergen and Tromsø material (Bonferroni tests, p < .001). Only specimens from Spitsbergen, as the most susceptible population, significantly decreased quantum yield also at 21°C, between day 6 and day 8 (Bonferroni test, p < .01). The stronger heat susceptibility of Spitsbergen material became evident also following the recovery period (Figure 4c): while all other populations recovered from 23°C, in that there were no significant differences remaining between temperature treatments, quantum yield of Spitsbergen material remained reduced.

FIGURE 3 Relative growth rates of Laminaria digitata disks over the experimental phases of (a) acclimation at 15°C, (b) heat treatment, and (c) recovery at 15°C. Mean values ± SD (n = 5, for Spitsbergen n = 4).

Contrary to quantum yield, the photoacclimation parameters obtained from rapid light curves at the end of the heat treatment, maximum relative electron transport rate rETR max (Figure A3a), saturation irradiance I k (Figure A3b), and photosynthetic efficiency α (Figure A3c), did not show significant effects or interactions of temperature and population (Table A2). In contrast, nonphotochemical quenching (NPQ) parameters showed no significant interaction effects, but significant effects of population on maximum nonphotochemical quenching NPQ max and saturation irradiance E 50 , and of temperature on the sigmoidicity coefficient n (Figure A4; Table A3). Mean NPQ max differed significantly between populations (Figure A4a).

| Biochemistry

Tissue mannitol and carbon contents were not significantly affected by interactive effects of population and temperature (Figure 5; Table 2). Carbon contents were not affected by temperature, but differed significantly only between populations (Figure 5b; Table 2). As with mannitol, Tromsø material maintained a higher carbon content, in that the means were significantly (7%-9%) higher in Tromsø and Helgoland material than in Roscoff and Quiberon material ((TRO = HLG) > (ROS = QUI); Tukey tests, p < .001). Nitrogen contents were significantly affected by interactive effects of population and temperature (Figure 5c; Table 2).

| Pigments

Chlorophyll a content was not significantly affected by interactive effects of population and temperature, but differed significantly between populations (Figure 6a; Table 3). The mass ratio of xanthophyll pigments per chlorophyll a (VAZ : Chl a ratio) was affected significantly by initial values, and by interactive effects of population and temperature (Figure 6b; Table 3). De-epoxidation ratios of xanthophyll cycle pigments were affected significantly by initial values, and by interactive effects of population and temperature (Figure 6c, Table 3). The significant differences between populations in mean de-epoxidation ratios held over all temperatures (Table 3).

TABLE 1 Results of generalized least squares models to examine variability of relative growth rates (RGR) and maximum quantum yield (F v /F m ) of Laminaria digitata disks in the heat stress experiment.

TABLE 2 Note: Molar mannitol content, carbon content, nitrogen content, and C:N ratio were tested against initial values as covariate and interactive effects of population and heat stress temperature treatment. n = 5, n = 4 for Quiberon in carbon, nitrogen, and C:N ratio. numDF, numerator degrees of freedom; denDF, denominator degrees of freedom. denDF = 59 for carbon, nitrogen, and C:N ratio. Statistically significant values are indicated in bold text.
FIGURE 6 Pigment characteristics of Laminaria digitata disks after acclimation (day 0, empty circles) and after the heat treatment (day 8, colored points). (a) Chlorophyll a contents, (b) mass ratio of xanthophyll pigments per chlorophyll a (VAZ : Chl a ratio), (c) de-epoxidation ratio of xanthophyll pigments. Mean values ± SD (n = 5, n = 4 for Tromsø 23°C in VAZ : Chl a ratio and de-epoxidation ratio). Significant differences between mean population responses are indicated by lowercase letters (Tukey tests, p < .05). Significant differences between temperature treatments within populations are indicated by dashed lines (Tukey tests, p < .05). Significance levels are given in the text.

| Microsatellite amplification

Null alleles were present in every population for at least two markers (Table A5). However, differences between F ST values in the pairwise comparison were never greater than 10^-3 (data not shown). Therefore, we concluded that the frequency of null alleles was negligible, and our dataset was analyzed without taking into account correction for null alleles. No significant linkage disequilibrium was observed in any of the populations (Table A6). We thus considered all of the markers as independent. The number of alleles per locus ranged from 2 to 22 (Lo454-27 and Ld371, respectively).

| Genetic diversity

Values of genetic diversity averaged over the 12 loci are provided in Table 4 (for values locus by locus, see Table A7). Most quantities varied by a factor of 1.5 among populations; the lowest genetic diversity was always observed in Helgoland and the highest in Roscoff. Variation was the highest for the mean number of private alleles (P a ), which ranged from 0.083 to 0.583. The differences between populations were not significant when each parameter was tested independently (one-way ANOVA, data not shown). However, a Fisher test of pairwise differences between means revealed that AR and P a were significantly lower in Helgoland compared to Roscoff (data not shown). In addition, three of the twelve loci were monomorphic in Helgoland, compared to the other populations, in which a maximum of one monomorphic locus was observed (Table A7).

TABLE 4 legend: Year: year of the samples used for genetic analysis (except for Helgoland, the genotyped individuals are the same as those analyzed for the heat stress experiment); n, number of individuals for which at least 11 markers amplified; N a , mean number of observed alleles; AR, allelic richness standardized for equal sample size (21 individuals); P a , mean number of private alleles per locus; H e , expected heterozygosity; H o , observed heterozygosity; F IS , fixation index (inbreeding coefficient) of individuals with respect to local subpopulation. All parameters are expressed as means over all markers ± standard error. *, significant departure from random mating after correction for multiple testing (p < .0069, FSTAT).

| Genetic structure

Genetic differentiation was significant for each pairwise population comparison (p = .003 for all pairs; FSTAT), with an average F ST value of 0.3795 (Table A8). This differentiation is also visible at the locus Lo454-27 (Table A7), where one allele is fixed for all southern populations.

| Reproductive system

L. digitata from Tromsø and Helgoland did not show any significant departure from random mating (F IS ).
We identified F IS > 0.1 for Spitsbergen, Bodø, Roscoff, and Quiberon (Table 4).

| DISCUSSION

We identified a uniform growth limit across European Laminaria digitata populations following a short-term application of 23°C, which conforms with previous studies (Bolton & Lüning, 1982; tom Dieck, 1992).

| Similarities in growth and biochemical responses along the latitudinal gradient

Growth responses among our tested populations suggest that the upper temperature tolerance limit of Laminaria digitata is uniform along its European latitudinal distribution. Growth is an integrative parameter of all metabolic processes and can thus be interpreted as a proxy for organismal stress response. We observed that growth almost completely ceased in the 23°C treatment for all populations (Figure 3), while all populations showed signs of recovery from 21°C when transferred to 15°C (Figure A1). The populations of Tromsø and Spitsbergen showed significantly lower overall growth rates than the southern populations. The lower growth rates of the Arctic populations might be related to prevailing local environmental conditions during sampling (e.g., long day lengths, cold temperature) which may influence growth rates and circannual rhythmicity in kelps (Olischläger & Wiencke, 2013; Schaffelke & Lüning, 1994). Still, results of our study using isolated meristematic disks are in line with previous studies using laboratory-cultivated whole juvenile L. digitata sporophytes, which also showed uniform upper temperature limits on both sides of the Atlantic and Spitsbergen (Bolton & Lüning, 1982; Franke, 2019; tom Dieck, 1992).

In addition to the strong similarities in the upper thermal limits of growth in our study, carbon contents (Figure 5b) and chlorophyll a contents (Figure 6a) did not differ between temperature treatments at all. In contrast, the overall trend of increasing mannitol contents at high temperatures (Figure 5a) has been described for Saccharina latissima (Davison & Davison, 1987) and might be linked to the seasonal increase in kelp mannitol storage in summer during the period of slow growth (Haug & Jensen, 1954; Schiener et al., 2015), which, in wild sporophytes, is followed by a peak of the long-term storage compound laminarin in autumn (Haug & Jensen, 1954; Schiener et al., 2015). The consistent responses of growth and biochemical contents across populations reported here indicate a strong acclimation potential of L. digitata's metabolism to high temperature. Acclimation to wide temperature ranges would reduce the selective pressure of temperature in the wild and might explain the small magnitude of local differentiation observed in this study.

| Differences in growth and photosynthetic parameters among marginal populations

Despite the stability of the upper thermal growth limit, we observed subtle physiological differences in the common-garden heat stress experiment, mainly in the marginal populations of Spitsbergen, Helgoland, and Quiberon. Maximum quantum yield of photosystem II was most sensitive to thermal stress at 21°C and 23°C in Spitsbergen material (Figure 4; Figure A2). This is concordant with the subarctic to Arctic regional climate and provides first evidence for a loss of function in a leading-edge L. digitata population, but whether this represents an adaptive trait is yet unknown.
Generally, very few cold-temperate algae occurring in the Arctic show true adaptations to the Arctic climate compared to their Atlantic populations (Bischoff & Wiencke, 1993; Wiencke et al., 1994), possibly because the Arctic did not provide a sufficiently stable environment for adaptive evolutionary processes to occur (Wiencke et al., 1994). At the southern range edge, a slight advantage of Quiberon material to grow at elevated temperatures became evident in the growth response at 19°C during the heat treatment, and in the full recovery from the 21°C treatment (Figure 3; Figure A1). In contrast, photoacclimative responses suggest that the marginal population on the island of Helgoland was most resistant to heat stress. Photosystem II of Helgoland material was minimally impaired by 23°C (Figure 4). Additionally, reactions of xanthophyll pigments (Figure 6b,c) were significantly weaker in Helgoland material than in other populations. Increased xanthophyll contents may indicate a photoprotective acclimation reaction (Latowski, Kuczyńska, & Strzałka, 2011; Pfündel & Bilger, 1994; Uhrmacher et al., 1995), while the de-epoxidation ratio of xanthophyll cycle pigments represents the current capacity to quench excessive energy from the photosystem (Pfündel & Bilger, 1994). Helgoland material did not show a significant increase in xanthophyll pigments and presented significantly lower de-epoxidation ratios, and therefore lower nonphotochemical quenching (NPQ max , Figure A4), than all other populations. Therefore, the two populations growing in the warmest of the tested locations, which may experience >4-week-long periods of mean in situ temperatures of 18°C to 19°C in summer (Helgoland: Bartsch, Vogt, Pehlke, & Hanelt, 2013; Wiltshire et al., 2008; Quiberon: Oppliger et al., 2014; Valero, unpubl.), showed slight physiological advantages under short-term heat exposure in growth and stress responses. Curiously, the southernmost populations of Quiberon and Roscoff were the only populations with significantly reduced tissue nitrogen contents in the heat treatments (Figure 5c). A variety of factors, including temperature, affect nutrient uptake and consequently tissue nitrogen contents, and these effects can be species-specific (Roleda & Hurd, 2019). Accordingly, published studies on the impacts of heat stress on nitrogen uptake and storage in kelps differ in their reports of decreased (Gerard, 1997), unaffected (Nepper-Davidsen et al., 2019), or increased nitrogen contents (Wilson et al., 2015). Whether the underlying cause of reduced nitrogen during heat in our study is adaptive, maladaptive, or neutral toward heat resilience in the southern populations remains unclear until further investigation.

| Population genetics in relation to physiological thermal responses

Population genetics suggest that the slight phenotypic divergence of L. digitata might have been facilitated through phylogeographic separation into two clades and low genetic connectivity between populations. The hierarchical division into a northern and a southern clade in the Northeast Atlantic (Figure 7a) is likely due to postglacial recolonization by two distinct genetic groups located in refugia proposed for the Armorican/Celtic Sea (Brittany and South West UK) and a potential northern refugium at the west coast of Ireland and/or Scotland (Neiva et al., 2020; see also King et al., 2020). Currently, the highest genetic diversity (H e ≥ 0.6) published for L.
digitata populations was observed in Scotland (King et al., 2019), Northwest Ireland (Neiva et al., 2020), and Northeast Ireland (Brennan et al., 2014), which all exceeded the genetic diversity of the populations investigated in this study. Due to a lack of data, it remains unclear whether a potential glacial refugium of L. digitata also corresponds to the well-described Southwest Ireland refugium proposed for many marine species (Kettle, Morales-Muñiz, Roselló-Izquierdo, Heinrich, & Vøllestad, 2011; Provan & Bennett, 2008). Populations at the "leading edge" (high latitude) are said to be associated with low genetic diversity due to recolonization processes following the Last Glacial Maximum (Hampe & Petit, 2005; for marine seaweeds of the North Atlantic see Assis, Serrão, Claro, Perrin, & Pearson, 2014; Neiva et al., 2016; Provan & Maggs, 2012). Therefore, effects of genetic drift (e.g., depleted genetic diversity, increased inbreeding) may be expected to reduce physiological function in these populations. Here, genetic diversity characteristics of L. digitata at its northern range limit (i.e., Spitsbergen) were not significantly lower compared to the other populations in this study and were similar to other Northern Norwegian populations (Neiva et al., 2020). A similar pattern was observed for another Arctic to cold-temperate kelp species, Saccharina latissima (Guzinski, Mauger, Cock, & Valero, 2016). Therefore, rather than effects of genetic drift, a lack of selection pressure in the Arctic might have led to a potential reduction of heat tolerance at the northern distribution limit (i.e., relaxed selection; Lahti et al., 2009; Zhen & Ungerer, 2008). Probably due to the continuous rocky substrata along the Brittany coast, connectivity may be maintained between Quiberon and neighboring populations, which may explain a certain level of gene flow between Roscoff and Quiberon via stepping stone habitats (Figure 7c). Low gene flow can reduce inbreeding depression and associated deleterious effects and may facilitate local adaptation at this southern range edge (Fitzpatrick & Reid, 2019; Sanford & Kelly, 2011). Genetic diversity characteristics for Brittany L. digitata populations in this study comply with previous reports (Oppliger et al., 2014; Robuchon et al., 2014). Compared to Roscoff, genetic diversity of L. digitata from the island of Helgoland was significantly lower. The population's reduced genetic diversity can be partly explained by genetic isolation due to habitat discontinuity, as Helgoland is a rocky substrate surrounded by continuous sandy seafloor (Reichert, Buchholz, & Giménez, 2008). This may rather suggest maladaptation due to less effective selection (such as in Fucus serratus; Pearson et al., 2009). However, samples from Helgoland presented the weakest heat stress response in this study. Therefore, we can hypothesize either that historically greater diversity/connectivity was reduced via isolation and drift after resilience to local conditions was established, or that strong selective forces toward the upper thermal limit of L. digitata have counterbalanced the effect of genetic drift. Significant departures from random mating were only observed for the populations of Bodø and Roscoff (F IS ; Table 4) and match the magnitude of recent descriptions for L. digitata populations (Neiva et al., 2020). The higher F IS values in Roscoff L.
digitata in our study compared to the nearby population of Santec might be explained by the distance of >1 km between sites, which may already cause substantial variation in F IS (Billot, Engel, Rousvoal, Kloareg, & Valero, 2003). In contrast, the higher F IS values of Quiberon L. digitata in our study compared to Oppliger et al. (2014), who sampled at the same location (Pointe de Conguel North), may be an artifact of differing microsatellite markers or might indicate a change in the reproductive system over time (Oppliger et al., 2014; Valero et al., 2011). In all cases, in the absence of data on reproductive ecology, the underlying causes remain speculative.

| Outlook

The mechanistic temperature treatments applied in this study do not represent realistic temperature scenarios for all tested populations, especially not for the northern clade. However, during our sampling period in August 2018, acute heat spikes surpassed 20°C on twelve days on Helgoland, and on nine days in Quiberon in the shallow sublittoral (in situ data; Bartsch, unpubl.; Valero, unpubl.). Also in southern England, L. digitata already encounters marine heatwaves reaching 20°C (Burdett, Wright, & Smale, 2019; Joint & Smale, 2017). According to predictions of ocean warming (Müller et al., 2009) and marine heatwaves (Oliver et al., 2018), L. digitata will possibly encounter prolonged summer periods of 21°C-23°C at its warm distribution limit until the end of the century.

CONFLICT OF INTEREST

All authors declare that they are free of competing interests.

TABLE A1 Notes: Fresh weight relative growth rates and maximum quantum yield F v /F m over all time points (T-5 (only F v /F m ), T0, T3, T6, T8, T15) were tested against interactive effects of heat treatment and time for each population separately. Generalized least squares models were performed as described in the methods section, but without the fixed effect for population. Tested values are means of the two disks per replicate (n = 5, n = 4 for Spitsbergen). numDF, numerator degrees of freedom; denDF, denominator degrees of freedom. Statistically significant values are indicated in bold text.

TABLE A2 Results of generalized least squares models to examine variability of photoacclimation parameters of Laminaria digitata disks obtained via rapid light curves in the heat stress experiment (Figure A3). Notes: Maximum relative electron transport rate rETR max , saturation irradiance I k , and photosynthetic efficiency α were tested against initial values as covariate and interactive effects of population and heat stress temperature treatment. n = 3, n = 2 for Spitsbergen. numDF, numerator degrees of freedom; denDF, denominator degrees of freedom. Statistically significant values are indicated in bold text.

TABLE A3 Results of generalized least squares models to examine variability of nonphotochemical quenching parameters of Laminaria digitata disks obtained via rapid light curves in the heat stress experiment (Figure A4). Notes: Maximum nonphotochemical quenching NPQ max , saturation irradiance E 50 , and sigmoidicity coefficient n were tested against initial values as covariate and interactive effects of population and heat stress temperature treatment. n = 3, n = 2 for Spitsbergen. numDF, numerator degrees of freedom; denDF, denominator degrees of freedom. Statistically significant values are indicated in bold text.
TABLE A4 Correlation coefficients (Kendall's rank correlation tau) and p-values in parentheses between relative growth rates (RGR), maximum quantum yield (F v /F m ), biochemical, and pigment characteristics of Laminaria digitata during/after the heat treatment.

Microsatellite markers used in the genetic analyses were developed for Laminaria digitata (Ld; Billot et al., 1998) and Laminaria ochroleuca (Lo; Coelho et al., 2014). The p-value after multiple testing correction for 5% nominal level is 0.000126. No linkage disequilibrium is significant in the dataset.

TABLE A7 Estimates of genetic diversity and deviation from random mating for each locus and each population of Laminaria digitata tested in this study. Markers were developed for Laminaria digitata (Ld; Billot et al., 1998) and Laminaria ochroleuca (Lo; Coelho et al., 2014); n, number of individuals for which the marker amplified; N a , number of observed alleles; AR, allelic richness standardized for equal sample size (21 individuals); P a , number of private alleles per locus; H e , expected heterozygosity; H o , observed heterozygosity; F IS , fixation index (inbreeding coefficient) of individuals with respect to local subpopulation; #NV, no calculation of F IS in monomorphic loci. Note that in Helgoland, Roscoff, and Quiberon, the locus Lo454-27 is fixed while it is polymorphic for Spitsbergen, Tromsø, and Bodø. This explains why this locus was not included in the study of Robuchon et al. (2014).

Notes: All p-values obtained with 300 permutations using FSTAT were 0.003 and therefore significant (the p-value corrected for multiple testing is .003).

FIGURE A1 Relative growth rates (RGR) of Laminaria digitata disks from (a) Spitsbergen, (b) Tromsø, (c) Helgoland, (d) Roscoff, and (e) Quiberon over the heat stress experiment. Points represent growth rates between subsequent measuring days. Mean values ± SD (n = 5, for Spitsbergen n = 4). Points at day 0 represent growth over acclimation at 15°C, the end of the heat treatment at day 8 is marked with a vertical dotted line, and zero growth is marked with a horizontal dotted line. For statistical analysis, see Table A1.

FIGURE A2 Maximum quantum yield (F v /F m ) of Laminaria digitata disks from (a) Spitsbergen, (b) Tromsø, (c) Helgoland, (d) Roscoff, and (e) Quiberon over the heat stress experiment. Mean values ± SD (n = 5, for Spitsbergen n = 4). End of the acclimation at 15°C and end of the heat treatment are marked with dotted lines. For statistical analysis, see Table A1.

FIGURE A3 Photoacclimation parameters of Laminaria digitata disks obtained via rapid light curves after acclimation at 15°C (day 0, empty circles) and after the heat treatment (day 8, colored points). (a) Maximum relative electron transport rate rETR max (relative unit), (b) saturation irradiance I k (µmol photons m −2 s −1 ), (c) photosynthetic efficiency α (rETR/µmol photons m −2 s −1 ). Mean values ± SD (n = 3, for Spitsbergen n = 2). Analyses of variance returned no significant differences between populations (indicated by lowercase letters) and temperatures (Table A2).

FIGURE A4 Nonphotochemical quenching parameters of Laminaria digitata disks obtained via rapid light curves after acclimation at 15°C (day 0, empty circles) and after the heat treatment (day 8, colored points). (a) Maximum nonphotochemical quenching NPQ max (relative unit), (b) saturation irradiance E 50 (µmol photons m −2 s −1 ), (c) sigmoidicity coefficient n (unitless). Mean values ± SD (n = 3, for Spitsbergen n = 2).
Significant differences between mean population responses are indicated by lowercase letters (Table A3; Tukey tests, p < .05).

[Figure caption] ΔK (Evanno et al., 2005) plotted against K, associated with K = 2 to K = 5, obtained with Structure Harvester during the analysis of genetic structure of Laminaria digitata populations.
2020-09-03T09:06:26.629Z
2020-08-17T00:00:00.000
{ "year": 2020, "sha1": "8eea45842dbe4a23a07bb033172a821d9ae9bd8f", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/ece3.6569", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "845984b2185dab4396f16d9af07f5d4f2c46cb72", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
249288388
pes2o/s2orc
v3-fos-license
The Burden of a Violent Past: Formative Experiences of Repression and Support for Secession in Catalonia

Abstract This letter studies the impact of past violence and repression on current territorial preferences in a contemporary democracy. Does a violent past lay the grounds for pro-secessionist preferences, or does it lead individuals to cling on to the territorial status quo? We study whether exposure to the events of the Spanish Civil War and its immediate aftermath made people more or less likely to support Catalan secession from Spain. Our analysis employs a dataset that combines a large N of individual-level survey data with historical data about repression and violence in each Catalan municipality. Findings indicate that current preferences for secession tend to diminish among the oldest Catalan generation that was exposed to higher levels of violence in their municipality. Most crucially, we show that exposure to violence created a sense of apathy towards politics among the oldest cohort, which eventually leads to a lower predisposition to support secession, a feeling that was not transmitted to subsequent generations. Our findings qualify some of the existing knowledge on the effects of past political violence on present political attitudes.

Rozenas and Zhukov (2019) show that the effect of Stalin's 'terror by hunger' is time varying and, in some circumstances, leads Ukrainians to behave more loyally towards Moscow. In a cross-national observational study, Torcal and Montero (2006) show that people living in countries with a recent authoritarian past, such as Spain, Portugal or Greece, are more likely to abstain from voting. Finally, and despite finding an anti-regime-bias logic in voting, Balcells (2012) concludes that younger generations are more influenced by past violent experiences than older generations, and that self-reported victimization does not have an effect on identities articulated around the centre-periphery cleavage. As the previous summary illustrates, the question is far from settled. Thus, most previous research has focused on the long-term effect of violence and repression on people's political behaviour, mainly vote choice but also abstention (Lupu and Peisakhin 2017). Others look at attitudes, with a special focus on trust (Valencia and Tur-Prats 2020). Yet, previous works have largely overlooked an important outcome, namely, people's support for or rejection of the territorial status quo. Being more left-wing or right-wing, or having more or less trust, though undoubtedly important, might constitute a mild sign of rejection (or of bias). However, the willingness to change the territorial status quo is a clear indication of dissidence and can even constitute a better proxy of people's support for the current political system. In this research note, we precisely complement this vivid debate by examining to what extent, and in which direction, violence and repression experienced during the Spanish Civil War and its immediate aftermath left an imprint on the Catalans' willingness to support the creation of a new state. We examine whether the effect of a violent past on current secessionist preferences, if any, is circumscribed to the old cohort or has travelled over time. In addition, we explore whether exposure to violence leads people to avoid involvement in new conflicts or fosters a sense of political apathy that may trigger a lower willingness to support independence. The coup d'état against the Spanish Second Republic was designed, among other things, as a reaction against conceding power to the Catalans.
As historians have documented (Solé i Sabaté and Villarroya i Font 1989), the centre-periphery cleavage was a salient topic before and during the war, and has remained salient ever since, at varying degrees and together with the left-right divide. For instance, one of the first measures implemented by the Francoist army after occupying Catalonia was to ban the Catalan language and impose a unitary idea of the Spanish nation, as illustrated in the indoctrination of children on Spanish nationalism in school curricula. If this created a reaction along the lines of the anti-regime-bias expectation, then cohorts who were exposed to more violence during their formative years should have been more likely to embrace the secessionist claim. Yet, one might need to consider that while repression from the Francoist regime was more intense, violence also came from the Republican side. Thus, on the one hand, it can be argued that violence from both the Francoists and the Republican supporters created the feeling, among cohorts exposed to it, that the situation was irreconcilable and that an alternative solution to the current state of affairs was Catalan independence. On the other hand, it can also be sustained that cohorts exposed to violence from both sides were more likely to develop a feeling of apathy or disaffection towards politics (Torcal and Montero 2006), which eventually made them less likely to support the secessionist claim. Experiencing violence and repression at first hand could have fostered feelings based on the awareness of the risks that political conflicts may entail. It might have increased the perception that radical political confrontation and polarization can break social harmony and represent a risk for personal integrity. Supporting secession from Spain implies favouring a rupture of the status quo. Even within the framework of a democratic regime, it may recall the sort of hazardous political stances that cohorts with formative memories of political violence fear. Yet, to add to the mixed theoretical expectations, Spain is often portrayed as an example of a country that was capable of following a successful democratic transition and hence able to heal the wounds of the past (Higley and Gunther 1992). If this is true, we should not observe any difference in support for secession across different cohorts, as the scars of the conflict should have been healed. Methodologically, we test each of the previous expectations, utilizing fixed effects (FE) and random effects (RE) models applied to a large-N dataset that combines individual- and municipality-level data from Catalonia. The individual-level data come from repeated cross-sectional surveys that cover 27 years. The municipality-level information has been obtained from historical records about repression and violence during and after the Spanish Civil War. Our main strategy consists of considering this combined multilevel dataset as a single cross-section in order to attain a representative sample of individuals within municipalities for Catalonia. Then, we perform multilevel random-effects models and specify cross-level interactions between cohort groups at the individual level and the degree of violence and repression at the municipality level, while including control variables at both levels to exclude potential observed confounders.
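As an illustration of this setup (the letter does not publish its estimation code; the data object and all variable names below are hypothetical, and the R package chosen here is only one of several possibilities), a two-level random-effects logit with a cross-level interaction of this kind could be specified as:

library(lme4)

# Hypothetical sketch: random-effects logit of secession support with a
# cross-level interaction between the old-cohort dummy (individual level)
# and net repression (municipality level), plus controls at both levels.
fit <- glmer(
  secession ~ old_cohort * net_repression +
    left_right + language +                            # individual-level controls
    pct_born_cat + pct_over64 + pct_men + log(pop) +   # municipality-level controls
    (1 + old_cohort | municipality),                   # random slope for the cohort dummy
  family = binomial(link = "logit"),
  data = d
)
summary(fit)  # the coefficient of interest is old_cohort:net_repression

The random slope for the lower-level variable of the interaction follows the general advice for cross-level interactions (Heisig and Schaffer 2019) that the letter itself cites; province dummies and further controls are omitted here for brevity.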
We additionally rely on a fixed-effects dummy-variable approach and an extension to deal with cross-level interactions that adjusts for contextual unobserved heterogeneity (Giesselmann and Schmidt-Catran 2020), as well as on the more comprehensive hybrid framework of the 'within and between random effects' (REWB) models (Bell, Fairbrother and Jones 2019). We find that cohorts that lived in contexts where repression and violence were more intense during their formative period are also less likely to support Catalan independence today. Therefore, our results show that when it comes to secessionist attitudes, the anti-regime-bias logic is not supported. In the second part of this letter, we explore why this is the case, finding support for two different logics: first, we show that older cohorts were more likely to develop a feeling of political apathy and fatalism, which might have alienated them from a high-stakes issue like Catalan independence; and, secondly, findings point to the idea that the bias against independence among older cohorts is circumscribed to them and was not transmitted to other generations.

Research Design

Our empirical analysis exploits a pooled dataset of repeated cross-sectional surveys that spans over 27 years. 1 All samples are representative of the Catalan voting-age population and have wide geographic coverage; municipalities included in the pooled sample cover 96 per cent of the Catalan population. Our main outcome is the question: 'With regard to the Spanish state, do you think Catalonia should be…?' Respondents could choose one of the following options: 'a region', 'an autonomous community', 'a state within a federal Spain' or 'an independent state'. We created a binary indicator that captures whether the respondent wants Catalonia to become an independent state versus the rest. Although this question aims at measuring a respondent's preferred territorial option and only indirectly captures their position on an independent Catalonia, it has been consistently asked over a long period, and, most crucially, it correlates strongly with other direct questions (Guinjoan and Rodon 2016). Our main explanatory variable is the percentage of people repressed during and after the Civil War by the Francoist regime (up until 1950) minus the percentage of people executed by the Left during the Civil War. The former comes from Solé i Sabaté and Villarroya i Font (1989); the latter comes from 'The list of juridical repair of victims of Franco's regime', published in 2015 by the Catalan government (Generalitat de Catalunya 2017). Percentages are based on the 1936 population as compiled in the census right before the Civil War. Both datasets were collected by historians, who thoroughly gathered information from several archives, cross-checking the data from different sources. Left-wing repression only took place during the Civil War, while Francoist repression continued after the conflict (mainly up until 1949). While left-wing repression essentially captures the number of killings during the conflict, right-wing repression includes the number of executions, people imprisoned during and after the war (up until 1945) and other repressive measures (that is, sanctions). This implies that the indicator is mostly positive and hence right-skewed.

1 The Online Appendix includes the summary statistics, sources, additional analyses and several robustness checks.
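A back-of-the-envelope sketch of how such an indicator can be built (hypothetical column names; the letter does not show its dataset construction):

# Hypothetical sketch: net repression per municipality, as a percentage of
# the 1936 census population (Francoist repression minus left-wing executions).
muni$franco_pct     <- 100 * muni$n_franco_repressed / muni$pop_1936
muni$left_pct       <- 100 * muni$n_left_executed    / muni$pop_1936
muni$net_repression <- muni$franco_pct - muni$left_pct  # mostly positive, right-skewed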
All in all, this indicator allows us to distinguish between both types of repression, as they are qualitatively different and might generate different effects (Balcells 2012). Francoist repression was perpetrated against the other ideological side (the Left), but it was also justified in stopping any national dissidence (Catalan identity or others) and Catalan secession. In contrast, left-wing repression mainly targeted wealthy, religious and conservative individuals in a pattern largely unrelated to their anti- or pro-secession stances. By taking the difference between both indicators, we are taking into consideration both dynamics and that one type of violence can potentially cancel out the other. For instance, if a municipality only experienced Francoist repression, the effect on secessionist preferences could arguably be higher compared to a municipality that experienced both Francoist repression and left-wing violence.

We focus on studying the generation likely to have experienced the violent events at first hand (those turning 18 years old between 1917 and 1949 in our sample) and consider the rest of the age groups altogether. 2 Our approach implies a within-cohort analysis, that is, a comparison between senior citizens with formative experiences dating back to the Civil War and the first years of the Francoist regime who experienced dire violence and repression in their municipalities, and equivalent elders who were less exposed to such dreadful events in their villages. We also perform a between-cohort comparison, looking at the difference in the effect of violence at the municipal level in the oldest generation compared to its impact in the rest of the cohort groups. The latter allows us to detect an eventual intergenerational transmission of the effects of violence.

2 Separate results for each cohort are also provided in Section E.F of the Online Appendix, confirming that the effect of violence is circumscribed to the oldest generation.

Our modelling strategy mainly relies on RE models where individuals are nested within municipalities, as well as on FE models with municipality dummies. We hypothesize that the effect of past violence on current political attitudes is channelled through the cohort's formative experiences of being closely exposed to that violence. 3

3 With such a design, it is not necessary to deal with the age-period-cohort identification problem as we do not need to identify each of these separate effects. We are interested in estimating not current period effects (from 1991 to 2016), but the period effects of the past in the form of the formative experiences of the oldest cohort. Nor do we need to account for age effects, as they are adjusted for when comparing individuals from the oldest cohort with different degrees of exposure to violence. Yet, we provide an age-period-cohort analysis as a robustness check in Section E.C of the Online Appendix.

We performed two types of RE models. In our first RE approach, we implement two-level models where individuals are nested within municipalities. In doing so, we consider our dataset as one large cross-sectional sample from a single time point. By pooling different datasets, we can obtain a good sample for most Catalan municipalities. 4 It is also important to bear in mind that our period of analysis crucially captures changes in the saliency of the independence debate by covering several contemporary contextual circumstances, which implies various governments, both at the regional and the national levels, and different economic periods.

4 Our data can be viewed as neither a panel of individuals, nor a panel of municipalities. We are able to reach an adequate representative sample of individuals across municipalities thanks to pooling the repeated cross-section surveys over time (following the procedure also employed in Rodon and Guinjoan [2018]). With such a design, we attain a satisfactory cross-section dataset at two levels. In this case, we control for time by dividing the sample into two periods in the main models, as well as including year dummies or defining a third level for year in age-period-cohort (APC) RE models (see Sections E.C and E.F in the Online Appendix). Furthermore, current period effects are not of much interest here, since we are investigating a period effect of the long distant past that does not change in our dataset over time. It can be considered constant, as linked to the formative experiences of the oldest generation.

In these RE models, we include several important controls that adjust for potential observed confounders, helping to rule out that other observed factors related to the individual characteristics of respondents or the composition of municipalities confound the relationship between violence and support for secession through the formative experiences of older citizens. At the individual level, we include a respondent's Left-Right ideology, as Catalonia's political
competition follows a bidimensional structure (Left-Right and pro-/anti-independence dimensions) and the weight of the repressive past might be different between individuals ideologically aligned with the Francoist dictatorship and the rest (Galais and Serrano 2020). As a proxy for an individual's political identity, an important factor shaping political attitudes and vote choice in the Catalan case (Rodon and Guinjoan 2018; Serrano 2013), we control for the language a respondent speaks at home. We also include several indicators at the municipality level that help us tackle different contextual circumstances related to independence support (Rodon and Guinjoan 2018). We incorporate: (1) the percentage of people born in Catalonia; (2) the percentage of older people (64 years old or more); (3) the percentage of men; (4) the population of the municipality; and (5) the electoral district (province) a respondent belongs to. Finally, all models are based on individuals that were born in Catalonia. We do not know where respondents born outside Catalonia lived before; therefore, it is not possible to capture the intensity of repression (if any) that they were exposed to when they were young. 5

5 Sections E.A and F.F in the Online Appendix discuss the null influence of an individual's language and their Left-Right position, which can be considered post-treatment, on our estimates. Section C in the Online Appendix deals with population movements and selective sorting. Sections E and F in the Online Appendix include, respectively, a control for time that divides the sample into two periods in the main models, and year dummies or defining a third level for year in APC RE models.

In our second RE approach, we explicitly include the time dimension in the analysis. We perform a REWB model, a hybrid approach explained by Bell, Fairbrother and Jones (2019; see also Fairbrother 2014; Tormos 2019). In this three-level model, individuals are nested within municipality-year units, which, in turn, are considered observations of each municipality (a panel of municipalities). This hybrid modelling strategy allows estimating the effects of both time-invariant and time-varying contextual covariates. It combines the between-effects estimator of RE models with the within-effects estimator of FE models. In addition, we also run FE models to control for unobserved heterogeneity at the context level by specifying municipality dummies. We extend this approach in two different ways.
On the one hand, we perform further models that include year dummies in addition to municipality dummies. On the other hand, we follow Giesselmann and Schmidt-Catran's (2020) advice to properly specify cross-level interactions in FE models by including dummies of the interaction of the group variable with the variables of the interaction. In our case, this means including dummies of the interaction of municipalities with municipal violence and with an individual's generation. This type of model is called 'country fixed effects and slopes' (cFES) and it is aimed at controlling for heterogeneity in the two variables interacted.

Results

Table 1 shows the results of several logistic regressions with the aforementioned specifications. Models 1 to 5 are RE models with individuals nested within municipalities while considering the whole dataset as a large single cross-section. Models 6 and 7 are hybrid REWB models with individuals nested within municipality-year units and then within municipalities. Finally, Models 8 to 10 are FE models with municipality dummies (8), municipality and year dummies (9), and a cFES model for the slopes of the interacted variables (10). Most of the models only include the independent variables of interest (1, 2, 6, 8 and 9), while the remaining ones also add controls at both the individual and the context levels (3, 4, 5 and 7). The coefficients of interest come from the interaction between the cohort dummy (old versus the rest) and the municipality-level repression indicator (the difference in violence/repression by the two ideological sides). The RE models are specified with random slopes for the lower-level variable involved in the cross-level interaction (cohort), as advised by Heisig and Schaffer (2019) to obtain robust estimates. 6

TABLE 1 Notes: Effects of the difference in the percentage of the population repressed by both sides in each municipality during and after the Civil War in the formative experiences of senior citizens. * p < 0.1; ** p < 0.05; *** p < 0.01.

In all the models in Table 1, the interaction is statistically significant, negative and similar in strength across specifications. 7 This means that support for secession is lower among the old cohort that lives in places that experienced a relatively high level of repression by the Francoist regime during and after the Civil War. The rest of the cohorts, who could not have experienced such violence directly, do not seem to have become affected. Interestingly, the old cohort is not per se less pro-independence when the cross-level interaction is specified (see Model 2). Old cohorts are only less prone to independence if they live in municipalities where repression by the Francoist dictatorship was more intense. As Model 4 shows, the effect is particularly pronounced when we restrict the sample to the 1991-99 period. 8
During these years, when secession was not very popular and the older cohort was still demographically relevant, Catalans who were exposed to the Civil War and the subsequent repressive period and lived in a more repressed locality were particularly anti-secession. Figure 1 graphically portrays the cross-level interaction of the RE models. In Figure D1 in Section D of the Online Appendix, we show that the magnitude of the interaction effects across RE and FE specifications is similar.

6 An alternative modelling strategy (see Section F.E in the Online Appendix) uses a linear probability model with pooled individual- and aggregate-level variables together and clustered standard errors at the aggregate level of municipalities (Wang, 2021).

7 The exception is Model 5, run on the subsample of the period 2000-16. During this period, the majority of individuals in the oldest cohort had died. The remaining individuals might be unrepresentative of the original cohort due to possible biases in its composition (e.g., the differential survival of those well-off).

8 In order to avoid an eventual model dependency of our analysis, we replicated the interactions using linear RE models instead of logistic ones on support for secession (see Section F.E in the Online Appendix), as well as with territorial preferences as the dependent variable (an ordinal scale with four response options) separately (see Section F.C in the Online Appendix).

We performed an equivalent analysis but using Left-Right ideology as a dependent variable (see Section E.A in the Online Appendix). In contrast to studies examining vote choice (Balcells 2012; Villamil 2020), our results indicate that being exposed to higher levels of violence did not affect an individual's Left-Right position, the other relevant dimension of competition in Catalonia. Finally, in additional models, we examine the effect of the percentage of people that experienced violence/repression at the municipality level, regardless of the perpetrators, and we unpack the effect of both types (see Section E.B in the Online Appendix). Results show that total violence is associated with lower support for independence among cohorts that experienced it, essentially because these were places where the Francoist repression was more intense. In contrast, left-wing violence, adjusted for the intensity of Francoist repression, does not have an effect on secessionist attitudes.

The Persistence of the Gap

Our empirical analysis shows that cohorts exposed to violence during their formative years are less likely to embrace the secessionist project. Findings, therefore, go in a different direction than the anti-regime-bias logic. When it comes to being opposed to or in favour of the creation of a new state, our results show that people exposed to more violence are not more likely to develop a bias against one of the ideological pillars of the perpetrators of violence. Why do we observe such effects? We explore two different mechanisms. First, our intuition, aligned with previous work (Torcal and Montero, 2006), is that people who were exposed to a higher intensity of violence developed an anti-politics feeling, which also made them more likely to take a stance against secessionism. The political issue of Catalan independence has the capacity to polarize society, and this might not sit well among the old cohorts that were exposed to violent events.
Attitudes against politics can be expressed through a variety of forms, including apathy, apoliticism, cynicism and distrust towards politics. If the effects of violence operate through the development of anti-politics feelings, we would spot a spread of such political attitudes among the older generation that was exposed to violence in their municipality during their formative years. To that end, we exploit a series of indicators measuring this type of attitude, as well as feelings of political efficacy, using agree-disagree scales that have been consistently included in the dataset. Figure 2 shows that there are indeed differences in two of those indicators. Senior citizens in municipalities that experienced higher levels of violence are more likely to believe that 'elections are not really useful because the same people always rule' than the same cohort group in municipalities with less violence, a feeling of fatalism towards the outcome of democracy. The cohort group exposed to higher violence in their formative years is also more inclined to think that 'it is better not to get involved in politics', an indication of political apathy. These people may connect their traumatic experiences of violence during and after the Spanish Civil War to the political polarization and ideological confrontation of those times. As a result, they might think that getting involved in politics and taking sides can be dangerous, even in peaceful times. All in all, dictator Franco's cynical saying, 'Do like me, don't get involved in politics', constitutes a metaphor for how this cohort may feel towards the independence project.

Secondly, part of the transmission of traumatic memory hinges upon the idea that new cohorts take up the values of older ones. If the relationship between violence and support for secession is circumscribed to older cohorts, and intergenerational transmission does not occur, the automatic process of generational replacement would progressively dilute the attitudinal legacy of the past (Tormos 2019). Was the intergenerational transmission of the memory of violence more likely to happen in places that experienced more of this violence? The absence of such a main contextual effect of violence on support for secession in most models of Table 1 (Models 2-7 and 10) constitutes an indication that memories live as long as those who hold them. There are no signs of intergenerational transmission, or of attitudinal transmission related to the municipal level. The significance and shape of the interaction indicate that younger cohorts who were not directly exposed to violence, but lived in municipalities that experienced it, do not become affected. 9 Perhaps the lack of such contextual effects is due to an overt will to forget the past. The Francoist dictatorship silenced those memories. Forgetfulness might have endured well into the present. Explicit policies of collective memory were implemented in Catalonia just a decade ago.

9 More detailed results where the rest of the cohorts are decomposed into ten-year groups yield equivalent results, as can be seen in Section E.D in the Online Appendix. Only the oldest cohort with formative experiences potentially linked to violence is affected.

Conclusions

Findings in this letter stand in contrast with the most recent empirical evidence on the long-term effects of violence and repression on political attitudes and political behaviour. Contrary to the anti-regime-bias hypothesis, which would have predicted higher support for independence among the generation that experienced more violence perpetrated by the Francoist regime, but in line with other recent works (Wang 2021; Zhukov and Talibova 2018), we show that cohorts
Contrary to the anti-regime-bias hypothesis, which would have predicted higher support for independence among the generation that experienced more violence perpetrated by the Francoist regime, but in line with other recent works (Wang 2021;Zhukov and Talibova 2018), we show that cohorts More detailed results where the rest of the cohorts are decomposed into ten-year groups yield equivalent results, as can be seen in Section E.D in the Online Appendix. Only the oldest cohort with formative experiences potentially linked to violence is affected. exposed to violence during their formative years are less likely to embrace the secessionist project today. In line with the disaffection argument sustained by some previous works (Torcal and Montero 2006;Zhukov and Talibova 2018), we find that the oldest cohort was more likely to develop a feeling of apathy towards politics and that the generational transmission of the antiindependence feeling did not occur. Our results are also in line with previous works showing the heterogeneous effects of violence (Rozenas and Zhukov 2019) and how certain conditions mediate the long-term effects of violence on behaviour and attitudes (Villamil 2020). A first potential explanation of these different findings is that polarization in the pro-or antiindependence debate activates the memory differently than polarization along the left-right dimension. It might be that supporting secession implies a more disruptive event than conflict over Left and Right, and the former makes cohorts with formative memories of political violence more concerned (or even afraid) about the outcome than the latter. Another explanation may be that the long-term effect of violence travels differently over time in democracies or even in contexts with two dimensions of political competition. Overall, we show that cohorts exposed to violence are less likely to support a disruptive event, such as the independence project, or, in other words, they develop a bias in favour of the status quo, in this case, being supportive of territorial stability. Thus, our findings qualify those of Balcells (2012, 311), who concluded that 'civil war victimization experiences did not have a major influence on identities articulated around the centre-periphery cleavage'. In addition, results point to the lack of intergenerational transmission, while other works do (Lupu and Peisakhin 2017). One reason may be that, compared to other studies, certain political events in Spain, such as the transition to democracy or economic growth, might have broken the transmission of these values. All in all, this letter helps us understand how the burden of the past travels to the present day and, especially, shows that violence does not always create an attitudinal 'bonus' against the perpetrators. On some occasions, like the one we have analysed here, it entails a negative burden that the cohort that experienced it is not able to overcome.
2022-06-03T15:17:34.272Z
2022-06-01T00:00:00.000
{ "year": 2022, "sha1": "1fa4f9fcf0cb048fb7b64d0d73f06c9fc369c001", "oa_license": "CCBY", "oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/6649C5A56B4EF3A9FBD6483B6E87DC4B/S0007123422000035a.pdf/div-class-title-the-burden-of-a-violent-past-formative-experiences-of-repression-and-support-for-secession-in-catalonia-div.pdf", "oa_status": "HYBRID", "pdf_src": "Cambridge", "pdf_hash": "064dc610db6472e39f04b4c6a134a975d56446c7", "s2fieldsofstudy": [ "Political Science" ], "extfieldsofstudy": [] }
236372134
pes2o/s2orc
v3-fos-license
Is Traditional Masculinity Still Valued? Men’s Perceptions of How Different Reference Groups Value Traditional Masculinity Norms Traditional masculinity norms are generally defined as hegemonic because they contribute to maintaining men’s favorable position in the gender hierarchy. Nevertheless, many observers argue that traditional masculinity norms are fading away under the pressure of feminist movements and are being replaced by more progressive, non-hegemonic masculinity norms. The present research examines men’s perceptions of how traditional masculinity norms are viewed by three reference groups: society as a whole, other men, and women. We assessed these perceptions via two experiments based on the self-presentation paradigm and involving American (N = 161) or British (N = 160) men. Participants in both experiments perceived traditional masculinity as being valued by other men but not by society as a whole or by women. We discuss the implications of these findings in the light of current changes in masculinity norms. In October 2018 the English journalist Piers Morgan mocked James Bond actor Daniel Craig for carrying his daughter in a baby sling. Morgan's comment that Daniel Craig was an "emasculated Bond" (Heritage, 2018) conforms to the so-called traditional masculinity idea of how men should behave. Although masculinity norms take different forms in different cultural and social contexts (Arciniega et al., 2008;Doss & Hopkins, 1998;Janey et al., 2013), Western traditional norms are usually considered hegemonic because they contribute to maintaining men's favorable position in the gender hierarchy (Connell 1995;Connell & Messerschmidt, 2005;Messerschmidt, 2019). At the same time, these norms have detrimental effects for men. For example, men's tendency to conform to these norms means they are more likely than women to be victims of violent crimes, be imprisoned, or die from traffic accidents (see American Psychological Association Boys and Men Guidelines Group, 2018). Many scholars argue that traditional masculinity norms are dying out (Thompson & Bennett, 2015;Wade, 2015) as a result of feminist movements advocating greater equality between men and women and challenging traditional views of gender (Bohan, 1993). In other words, the traditional masculinity norms prevalent before the emergence of feminism would be replaced by more "progressive" masculinity norms (Anderson & McCormack, 2018;Buschmeyer, 2013;Flecha et al., 2013;Padgett, 2017). Reactions to Piers Morgan's comment about James Bond support this point of view, as some people, both men and women, derided the idea that wearing a baby sling is incompatible with masculinity and claimed that it is something "James Bond would do." Further evidence for the decline of traditional masculinity norms is provided by the relatively low scores on traditional masculinity scales reported by many studies, which suggest that men (and women) tend to disagree with items reflecting traditional masculinity (Smiler, 2004;Thompson & Bennett, 2015). However, most research has focused on inter-individual differences in the endorsement of traditional masculinity norms and has tended to overlook perceptions of how they are viewed by others (i.e., whether men perceive other groups as valuing traditional masculinity; for a commentary on this issue, see Wong et al., 2013). 
This issue is of great importance, as men's propensity to comply with traditional expectations will depend on whether they perceive traditional masculinity to be valued by society as a whole. If society is no longer perceived as valuing these norms, men have no reason to follow them. Hence, the present study's first aim was to determine whether or not men feel that society still values traditional masculinity norms. However, what is perceived to be valued by society as a whole is not necessarily seen as being valued by all subgroups of people. Because local groups (e.g., family, university peers, religious peers, work colleagues) may be perceived as favoring different kinds of masculinity (Wong et al., 2013), we also examined men's perceptions of how these norms are valued by two socially relevant reference groups, that is, groups whose perspective is used as a frame of reference by the actor (Shibutani, 1955): other men (ingroup) and women (outgroup). Although the issue of whether men perceive traditional masculinity as valued by the two gender groups has never been studied, to the best of our knowledge, there are reasons to believe that men do not perceive women and men as valuing traditional masculinity to the same extent. For example, many adolescent males report being particularly pressured to endorse traditional masculinity when they are with other boys (Duckworth & Trautner, 2019), and men feel greater discomfort when they imagine themselves performing a behavior typical of women (which is proscribed by traditional masculinity norms) in front of other men than when they do so in front of women (Bosson et al., 2006). In addition, traditional forms of masculinity are less valued by women than by men (Levant & Richmond, 2007;Maltby & Day, 2001). Hence, it would seem reasonable to infer that men will tend to perceive both society as a whole and women as not valuing traditional masculinity, but perceive other men as valuing it.

Present Research

Across two experiments, the present research examined whether men perceive traditional masculinity to be valued by three reference groups: society as a whole, women, and other men. We based our methodology on the self-presentation paradigm (Jellison & Green, 1981), which can be used to examine perceptions of societal norms (i.e., norms prevalent in society as a whole; Félonneau & Becker, 2008;Jellison & Green, 1981) and local norms (i.e., norms specific to a group; Dubois & Beauvois, 2005;Pillaud et al., 2013), and to compare norms attributed to different reference groups (Iacoviello & Spears, 2018). Studies using the self-presentation paradigm typically ask participants to provide three sets of responses to a scale of items referring to a certain issue, with each set of responses given in accordance with a specific instruction. First, participants are asked to give their personal opinion (standard instruction). They are then asked to respond in such a way as to generate a positive impression of themselves in the eyes of a reference group (self-enhancement instruction). Finally, they are asked to respond in such a way as to generate a negative impression of themselves in the eyes of the same reference group (self-depreciation instruction). A higher score under the self-enhancement instruction than under the self-depreciation instruction shows that an individual perceives the reference group as valuing the issue (i.e., it is normative). The items in our two experiments referred to traditional masculinity.
Before completing the items, each participant was assigned to one of the three experimental conditions, which differed in terms of the reference group the participant had to think of when giving a positive and negative impression of himself. The three reference groups were society as a whole (society condition), other men (ingroup condition), and women (outgroup condition). The difference between each participant's score under the self-enhancement instruction (self-enhancement score) and under the self-depreciation instruction (self-depreciation score) indicated the extent to which that participant perceived traditional masculinity as valued by his assigned reference group. We hypothesized that men would perceive traditional masculinity as not being valued either by society as a whole or by women (outgroup), but as being valued by other men (ingroup). Hence, we expected self-enhancement scores to be similar to self-depreciation scores in the society and outgroup conditions but higher than self-depreciation scores in the ingroup condition. We tested this hypothesis for overall perceptions of traditional masculinity. We also examined perceptions of ten specific traditional masculinity norms, but without advancing any hypotheses regarding them. In addition to testing our main hypothesis, we used participants' responses under the standard instruction to assess the extent to which they endorsed traditional masculinity. Doing so enabled us to examine the extent to which men's self-descriptions are influenced by their perceptions of how each reference group (society as a whole, men, women) values traditional masculinity. However, we did not formulate any specific hypothesis in this case, since previous studies suggest two possibilities. According to self-categorization theory (Turner et al., 1987) and a number of supporting studies (e.g., Van Knippenberg & Wilke, 1992), men's self-descriptions should be influenced primarily by the ingroup. Conversely, given that interdependencies between individuals define opportunities and constraints, interacting individuals often influence each other (Kelley & Thibaut, 1978;Rusbult & Arriaga, 1997). Therefore, the social interdependence between men and women (Glick & Fiske, 2001), created by the fact that men interact with women (and not just with other men) every day, may motivate men to respond to norms promoted by women and by society as a whole, as well as to prescriptions from the ingroup. Consequently, all three reference groups may be equally influential.

Experiment 1

Experiment 1 used the self-presentation paradigm to determine whether a sample of American men perceived traditional masculinity as valued by society as a whole, by men (ingroup), and by women (outgroup). We also examined the relationship between participants' self-descriptions and their perceptions of the extent to which each reference group values traditional masculinity.

Method

Participants. We used Amazon's Mechanical Turk to recruit a sample of American men to take part in an online survey. Participants received US$0.80 in compensation for their time. As recommended by Simmons et al. (2013), we aimed to recruit approximately 50 participants per experimental condition. In the end, 173 participants completed the online questionnaire.
Because heterosexuality is one of the most relevant dimensions of masculinity (e.g., Bosson et al., 2006;Falomir-Pichastor et al., 2019;Herek, 1986), and because gay men (as compared with heterosexual men) are likely to interpret and react very differently to items regarding the 'negativity toward sexual minorities' and 'disdain for homosexuals' dimensions of traditional masculinity, the present research focuses on heterosexual male participants. Consequently, we excluded participants who said they were not heterosexual (n = 10).1 Excluding a further two participants who reported not being sincere and/or focused when answering the questionnaire gave us a final sample of 161 American heterosexual male participants (M age = 40.53 years, SD age = 12.30). Most of our participants identified as non-Hispanic/non-Latino White (70.8%). The others reported being Asian American (9.3%), Hispanic/Latino White (9.3%), African American (6.8%), Native American (2.5%), or Multiracial (0.6%). One participant did not report his race. We assigned each participant randomly to one of three experimental conditions in a between-participants design (reference group: ingroup vs. outgroup vs. society). A sensitivity analysis in G*Power indicated that the final sample size gives us an 80% probability of detecting effects with a size of ηp² = .058 (d = 0.49) or greater. Thus, our sample was large enough to detect medium-size effects.

Procedure. In line with the self-presentation paradigm, we began by asking participants to give their personal opinions on a 20-item scale describing ten traditional masculinity norms (standard instruction). Responses to all the items were given on 7-point scales ranging from one (completely disagree) to seven (completely agree).2 The following two pages asked participants to respond to the same items in such a way as to make a positive impression on another group (self-enhancement instruction) and then in such a way as to make a negative impression on the same group (self-depreciation instruction). After responding to these three sets of twenty items, participants provided demographic information and answered two yes-no questions to check whether they had been sincere and focused when completing the questionnaire. Before answering these last two questions, participants were told that they would receive the US$0.80 compensation regardless of their answers. Finally, we debriefed participants about the purpose of the experiment and asked them to consent to their data being used.

Dependent variables. We used traditional masculinity scores under the standard instruction as a measure of personal endorsement of traditional masculinity and assessed participants' perceptions of how the different reference groups value traditional masculinity by subtracting self-depreciation scores from self-enhancement scores.

Self-descriptions. The standard instruction asked participants to indicate the extent to which they agreed with 20 items describing 10 traditional masculinity norms (two items per norm; see Appendix A for a full list of the items).3 We created the items specifically for our study and expressed them in the first person in order to measure self-descriptions. Seven of the norms corresponded to norms in Levant et al.'s (2013) Male Role Norms Inventory-Short Form (MRNI-SF; i.e., Restrictive Emotionality, Self-Reliance through Mechanical Skills, Negativity toward Sexual Minorities, Avoidance of Femininity, Importance of Sex, Dominance, Toughness).
The remaining three norms corresponded to norms within Mahalik et al.'s (2003) Conformity to Masculine Norms Inventory (CMNI; i.e., Disdain for Homosexuals, Self-Reliance, Pursuit of Status). After recoding, we computed a personal endorsement of traditional masculinity index (α = .86, M = 4.35, SD = 0.91). Table 1 shows means, standard deviations, and inter-item correlations for each norm; correlations between the norms are shown in Table 2.

Perceptions of Whether Reference Groups Value Traditional Masculinity. Participants completed the same 20 items under the self-enhancement and self-depreciation instructions with reference either to society as a whole (society condition), to other men (ingroup condition), or to women (outgroup condition), depending on the experimental condition. In the society condition [instructions for the ingroup and outgroup conditions are shown in brackets], the self-enhancement instruction was: "We would like you to complete the questionnaire in such a way to generate a good image of yourself in the eyes of people in general [other men; women]. More specifically, we ask you to answer as if you were attempting to get people in general [other men; women] to like and approve of you." The self-depreciation instruction was: "We would like you to complete the questionnaire in such a way to generate a bad image of yourself in the eyes of people in general [other men; women]. More specifically, we ask you to answer as if you were attempting to get people in general [other men; women] to dislike and disapprove of you." After recoding, we calculated a perception of traditional masculinity index for each participant by averaging his self-enhancement responses for each item (α = .90, M = 4.86, SD = 1.09) and his self-depreciation responses for each item (α = .93, M = 3.49, SD = 1.58), and then subtracting his mean self-depreciation score from his mean self-enhancement score (Iacoviello & Spears, 2018). Thus, a positive perception of traditional masculinity index indicated that the participant perceived his reference group as valuing traditional masculinity (i.e., endorsing traditional masculinity is normative), whereas a negative perception of traditional masculinity index indicated that the participant perceived his reference group as valuing the rejection of traditional masculinity (i.e., endorsing traditional masculinity is counter-normative). A perception of traditional masculinity index of 0 indicated that the participant perceived his reference group as not valuing traditional masculinity (i.e., endorsing traditional masculinity is non-normative). Next, we used a similar procedure to examine participants' perceptions of the extent to which their reference group valued each of the ten traditional masculinity norms. Because all the inter-item correlations for the ten norms under the standard instruction were significant (see Table 1), we averaged each participant's two responses for each norm under the self-enhancement instruction and under the self-depreciation instruction. We then computed a perception index for each norm by subtracting each mean self-depreciation score from each mean self-enhancement score.4

Results

We tested our hypothesis by first analyzing results for perceptions of the extent to which the three reference groups value traditional masculinity; a computational sketch of the scoring just described is shown below.
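As an illustration of the scoring procedure described above, the following Python sketch computes the overall and per-norm perception indices from wide-format response data. The data frame layout and column names (enh_1 ... enh_20, dep_1 ... dep_20, with items 2k-1 and 2k belonging to norm k) are hypothetical, not taken from the authors' materials.

```python
import pandas as pd

def perception_indices(df: pd.DataFrame, n_norms: int = 10) -> pd.DataFrame:
    """Per-participant normative indices: self-enhancement minus self-depreciation.

    Assumes two items per norm and hypothetical columns enh_i / dep_i for
    i = 1..2*n_norms (7-point agree-disagree responses, already recoded).
    """
    out = pd.DataFrame(index=df.index)
    enh_cols = [f"enh_{i}" for i in range(1, 2 * n_norms + 1)]
    dep_cols = [f"dep_{i}" for i in range(1, 2 * n_norms + 1)]
    # Overall index: positive = normative, ~0 = non-normative,
    # negative = counter-normative for the assigned reference group.
    out["overall"] = df[enh_cols].mean(axis=1) - df[dep_cols].mean(axis=1)
    for k in range(1, n_norms + 1):
        pair = (2 * k - 1, 2 * k)
        enh = df[[f"enh_{i}" for i in pair]].mean(axis=1)
        dep = df[[f"dep_{i}" for i in pair]].mean(axis=1)
        out[f"norm_{k}"] = enh - dep   # per-norm perception index
    return out
```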
We then examined the relationship between these perceptions and self-descriptions (i.e., endorsement of traditional masculinity) in order to assess the extent to which participants' self-descriptions conformed to their perceptions of how their reference group valued traditional masculinity.

Perception of Specific Traditional Masculinity Norms. We performed full-factorial ANOVAs on the perception indices for each of the ten traditional masculinity norms, with reference group (society vs. ingroup vs. outgroup) as a between-participants factor. Participants in the society condition perceived society as a whole as valuing only two of the ten norms (Self-Reliance through Mechanical Skills, Toughness), whereas they perceived society as considering seven of the norms to be non-normative (Self-Reliance, Pursuit of Status, Avoidance of Femininity, Importance of Sex, Dominance, Disdain for Homosexuals, Restrictive Emotionality) and one norm (Negativity toward Sexual Minorities) to be counter-normative (Table 3). We obtained similar results for participants in the outgroup condition, except that they perceived women as also valuing Pursuit of Status. Participants in the ingroup condition perceived all the norms as being valued by men.

Self-Descriptions. In order to examine conformity dynamics, we looked at the relationship between the perception of the reference groups' attitudes toward traditional masculinity and self-descriptions. Thus, we investigated whether this relationship depended on the reference group (i.e., whether participants' self-descriptions matched a specific reference group's perceived norm). We first computed two Helmert contrasts with reference group as the contrasted variable. In line with our main hypothesis, results showed that the ingroup was perceived as having a different norm to both the outgroup and society as a whole. Therefore, the first contrast (C1) compared the ingroup condition (coded +2) with the outgroup and society conditions (both coded -1). This contrast would indicate whether self-descriptions differ between the ingroup condition on the one hand and both the outgroup and society conditions on the other. The second contrast (C2) compared the outgroup condition (coded +1) with the society condition (coded -1); the ingroup condition was coded 0. This second contrast would indicate whether participants report different self-descriptions when the reference group is the outgroup than when the reference group is society as a whole. We then performed a linear regression analysis on self-descriptions with the reference group's perceived masculinity norm (centered continuous variable), C1, C2, and their interactions (except those including the two orthogonal contrasts) as predictors. Perceived masculinity norm was positively linked to personal endorsement of this norm, β = 0.15, t(155) = 4.29, p < .001, 95% CI [0.08, 0.21]. This effect was not moderated by either contrast.

Discussion

The results of Experiment 1 supported our hypothesis by showing that male participants perceived traditional masculinity as not being valued by society but as being valued by other men. Moreover, they perceived women's attitudes toward traditional masculinity to be more similar to those of society as a whole than to those of men, even though there was a weak but significant tendency to perceive women as valuing traditional masculinity.
This was mainly due to participants perceiving women as valuing three specific traditional masculinity norms (Self-Reliance through Mechanical Skills, Toughness, Pursuit of Status), even though they were not perceived as valuing the other seven norms we measured. Finally, participants' endorsements of traditional masculinity (i.e., self-descriptions) were related to their perceptions of the extent to which their reference group values this norm, and this was the case for all three reference groups. This last finding could be interpreted as preliminary evidence for perceived ingroup norm, perceived outgroup norm, and perceived society norm playing a combined role in shaping men's self-views. Before interpreting these findings further, we replicated our experiment in a different cultural setting: the United Kingdom.

Experiment 2

Method

Participants. We recruited 175 British men to take part in an online experiment via the Prolific crowdsourcing platform. As in Experiment 1, our aim was to have approximately 50 participants per experimental condition. Each participant received £1.40 in exchange for their time. Excluding participants who reported being non-heterosexual (n = 12) and those who reported not being sincere and/or focused when answering the questionnaire (n = 3) gave us a final sample of 160 participants (M age = 36.85 years, SD age = 12.64), all of whom were British, although four said they were currently living outside the UK (in Finland, Hungary, New Zealand, and Spain).5 A sensitivity analysis in G*Power indicated that a sample of this size gives us an 80% probability of detecting effects with a size of ηp² = .053 or greater. Because the effect size of differences in perceptions of the reference groups' attitudes toward traditional masculinity norms obtained in Experiment 1 (ηp² = .25) was greater than this threshold, our sample had sufficient power to detect the investigated effect.

Procedure. We followed an identical procedure to Experiment 1, with the same items assessing traditional masculinity under the self-description, self-enhancement and self-depreciation instructions as a within-subjects factor. Each participant was randomly assigned to one of the three independent conditions (i.e., reference group: society, ingroup, outgroup). Once they had completed the items under the three instructions, they provided demographic information and answered the two questions assessing whether they had been sincere and focused when answering the questionnaire. Finally, they were debriefed about the purpose of the experiment and asked to consent to their data being used. Answers to all the items were given on 7-point scales ranging from 1 ("Completely disagree") to 7 ("Completely agree").

Dependent variables

Self-descriptions. After recoding, we used each participant's responses to the items under the standard instruction to compute a traditional masculinity index (α = .79, M = 4.05, SD = 0.70).6

Perception of masculinity norms. We also computed indices for overall perceptions of traditional masculinity and perceptions of each of the ten masculinity norms. For overall perception of traditional masculinity, we subtracted each participant's mean score under the self-depreciation instruction (α = .90, M = 3.95, SD = 1.46) from his mean score under the self-enhancement instruction (α = .90, M = 4.50, SD = 1.01), as in Experiment 1.
Thus, a positive index shows that masculinity is perceived as normative, an index of zero shows that masculinity is perceived as non-normative, and a negative index shows that masculinity is perceived as counter-normative. For the ten specific traditional masculinity norms, we began by calculating mean scores for each norm (two items per norm) under the self-enhancement instruction and under the self-depreciation instruction. Because all the inter-item correlations were significant under the standard instruction, we computed a perception score for each norm by subtracting the mean score under the self-depreciation instruction from the mean score under the self-enhancement instruction. Means, standard deviations, and inter-item correlations for each norm are shown in Table 4. Correlations between the norms are shown in Table 5.

Results

As in Experiment 1, we first looked at perceptions of the three reference groups' attitudes toward traditional masculinity and then examined the relationship between these perceptions and participants' self-descriptions (i.e., personal endorsement of masculinity) in order to assess whether these self-descriptions were influenced by participants' perceptions of the extent to which their reference group values traditional masculinity.

Perceptions of Traditional Masculinity as a Whole. An ANOVA on the perception of traditional masculinity index, with reference group (ingroup vs. outgroup vs. society) as a between-participants factor, showed a significant main effect of reference group, F(2,157) = 36.97, p < .001, ηp² = .32, indicating that the three reference groups were perceived as having different attitudes toward traditional masculinity. In line with our hypothesis, participants perceived traditional masculinity as being valued by other men but not by society as a whole or by women (Table 6).

Perceptions of Specific Masculinity Norms. We also performed a series of full-factorial ANOVAs on the perception indices for each masculinity norm, with reference group (society vs. ingroup vs. outgroup) as a between-participants factor. As Table 6 shows, participants perceived society as valuing only three of the ten norms (Self-Reliance through Mechanical Skills, Toughness, Self-Reliance), as considering three norms to be non-normative (i.e., Pursuit of Status, Importance of Sex, Restrictive Emotionality), and as considering four norms to be counter-normative (Avoidance of Femininity, Dominance, Disdain for Homosexuals, Negativity toward Sexual Minorities). Participants in the outgroup condition had similar perceptions of the ten norms, except for Self-Reliance, which was perceived to be non-normative, and Restrictive Emotionality, which was perceived to be normative. Participants in the ingroup condition perceived all of the norms to be normative, except for Negativity toward Sexual Minorities, which was perceived to be non-normative.

Self-Descriptions. As in Experiment 1, we computed two Helmert contrasts with the reference groups. The first contrast (C1) compared the ingroup condition (coded +2) to the outgroup and society conditions (both coded -1). The second contrast (C2) compared the outgroup condition (coded +1) to the society condition (coded -1), with the ingroup condition being coded 0. We then performed a linear regression analysis on participants' self-descriptions, with perception of value attributed to traditional masculinity (centered continuous variable), C1, C2 and their interactions (except those including the two orthogonal contrasts) as predictors; a sketch of this model specification is given below.
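For concreteness, here is a minimal Python sketch of the contrast-coded regression used in both experiments. The data frame and its column names (self_desc, norm_perc, group) are hypothetical stand-ins for the authors' variables, and statsmodels is assumed to be available.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Helmert contrast codes from the text: C1 opposes the ingroup to the other
# two conditions; C2 opposes the outgroup to society (ingroup coded 0).
C1 = {"ingroup": 2, "outgroup": -1, "society": -1}
C2 = {"ingroup": 0, "outgroup": 1, "society": -1}

def fit_conformity_model(df: pd.DataFrame):
    d = df.copy()
    d["c1"] = d["group"].map(C1)
    d["c2"] = d["group"].map(C2)
    d["norm_c"] = d["norm_perc"] - d["norm_perc"].mean()  # centered predictor
    # Self-descriptions regressed on the centered perception index, the two
    # orthogonal contrasts, and the perception-by-contrast interactions.
    model = smf.ols("self_desc ~ norm_c + c1 + c2 + norm_c:c1 + norm_c:c2",
                    data=d)
    return model.fit()
```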
Results showed a positive link between participants' self-descriptions and their perceptions of the value attributed to traditional masculinity, β = .13, t(154) = 4.26, p < .001, 95% CI [0.07, 0.19]. This effect was not moderated either by C1, β = −.02, t(154) = −1.08, p = .280, 95% CI [−0.06, 0.02], or by C2, β = .04, t(154) = 0.97, p = .334, 95% CI [−0.04, 0.12]. These findings are in line with those observed in Experiment 1 and suggest that endorsement of traditional masculinity is positively associated with perceptions of the extent to which the reference group values traditional masculinity, and this is the case for all three reference groups.

Discussion

In line with our hypothesis, participants in Experiment 2 perceived traditional masculinity as being valued by men, but not by society as a whole or by women. Moreover, as in Experiment 1, participants' endorsements of traditional masculinity were related to their perceptions of the extent to which their reference group valued traditional masculinity. This was the case for all three reference groups, suggesting that participants' self-descriptions are influenced by the norms they perceive among a reference group.

Note (Table 6). Different subscripts indicate significant differences between conditions (p < .05). SRMS = self-reliance through mechanical skills; T = toughness; SR = self-reliance; PS = pursuit of status; IS = importance of sex; RE = restrictive emotionality; AF = avoidance of femininity; Do = dominance; DH = disdain for homosexuals; NSM = negativity toward sexual minorities. *p < .05.

General Discussion

Because gender roles are evolving towards equality, some scholars have suggested that traditional masculinity norms may be eclipsed by more progressive masculinity norms (e.g., Anderson & McCormack, 2018;Padgett, 2017;Wade, 2015; see Thompson & Bennett, 2015, for a discussion). We postulated that men still perceive traditional forms of masculinity as normative, but that this norm is perceived as emanating specifically from the gender ingroup. In order to test this hypothesis, we investigated the idea that men perceive both society as a whole and women as not valuing traditional masculinity, but other men as valuing this traditional norm. Results for samples of American (Experiment 1) and British (Experiment 2) heterosexual men supported this hypothesis.

Traditional Masculinity Norms

Despite the general similarity in perceptions of traditional masculinity as a whole, perceptions of the value different reference groups accord to specific masculinity norms varied slightly. Thus, other men were perceived as valuing most of the ten masculinity norms we tested (e.g., anti-femininity, disdain for homosexuals), but the only norm perceived as being valued by all three reference groups was Self-Reliance through Mechanical Skills. This difference in perceptions may be due to this specific norm being viewed as consistent with promoting diversity and equality for all social groups (particularly women and sexual minorities), whereas the other norms are seen as incompatible with this ideal (McDermott et al., 2019). If so, this suggests that Self-Reliance through Mechanical Skills is no longer considered a hegemonic norm. For example, being able to do small household repairs may now be seen as a valued skill for everyone, regardless of their gender. Future studies should explore this issue more thoroughly. We also noted differences between the results for American and British men.
For example, our American participants perceived both men and women as valuing Pursuit of Status, but the British participants perceived only men as valuing this norm. This may be due to a cultural difference between the US and the UK. However, given that the literature classifies the US and the UK as two of the world's most individualistic countries (Hofstede, 2001), it may also reflect a difference between our specific samples.

Is Traditional Masculinity Still Valued?

Participants in both of our experiments perceived other men as valuing traditional masculinity. Even though we believe this perception stems from what men actually value (indicated by men's everyday experiences with other men), there may be a disparity between the extent to which men perceive traditional masculinity as valued and the extent to which it is valued. Indeed, people sometimes privately reject a norm but incorrectly believe that others endorse it (Allport, 1924). This phenomenon, known as pluralistic ignorance, has been reported in groups of men. For example, male workers were found to believe that male coworkers endorse traditional masculinity norms more strongly than they do (Munsch et al., 2018), and male undergraduates were found to believe that other male undergraduates endorse sexist beliefs more strongly than they do (Kilmartin et al., 2008). Hence, future studies should examine whether this disparity exists in the case of traditional masculinity norms and, if it does, why. More research is also needed to provide a better understanding of men's reactions to changes in masculinity norms, both in society as a whole and among men (Falomir-Pichastor et al., 2019;Iacoviello et al., 2020).

Norm Perceptions and Endorsement of Traditional Masculinity

In both of our experiments, participants' self-descriptions were linked to their perceptions of how others valued traditional masculinity norms, whichever reference group they were asked to consider. While correlational findings must be interpreted with caution, the fact that norm perceptions were measured before self-descriptions could suggest that the latter is influenced by the former. It would then appear that men's attitudes and behaviors relating to traditional masculinity are influenced by all three reference groups (men, women, society as a whole). The fact that men seem to conform to masculinity norms that are valued both by other men and by women means that their attitudes and behaviors are compromises based on what they perceive to be normative in different socialization contexts. At first sight, this may appear inconsistent with people's tendency to conform mostly to ingroup norms (Van Knippenberg & Wilke, 1992). However, the small number of studies to examine the impact of outgroup norms has shown that these norms can also influence people's behavior (Jetten et al., 1996). Thus, when intergroup contact and interdependence are high (as in the case of relations between men and women; Glick & Fiske, 2001), men may wish to give good impressions of themselves to both groups and therefore conform to both men's and women's expectations. This raises the question of how men can simultaneously conform to men's and women's attitudes toward traditional masculinity when these attitudes are perceived to be contradictory, as we found in our studies. One possibility is that men simply try to find the right balance between what they believe is expected of them by other men and by women.
If this is the case, men's behaviors will be a compromise between what they perceive to be normative in different socialization contexts. Another possibility is that men adapt their behavior to the norm they perceive as applying in each situation (Kallgren et al., 2000). For example, men may endorse traditional masculinity norms when they are exclusively with other men (e.g., in all-male settings, such as some sport activities; see also the Trump "locker-room talk" controversy) and more progressive norms when they are with women (e.g., at work). It is also worth noting that male participants in our experiments perceived both society as a whole and women as having the same attitudes towards traditional masculinity, so they would be expected to align their behavior with these (more progressive) attitudes in contexts involving both men and women. Thus, men may freely express the traditional masculinity norm only in all-male contexts and mostly hide expressions of this norm in other situations (see also Bosson et al., 2006). As Messerschmidt (2019) noted, "the newest research confirms the omnipresent nature of hegemonic masculinities-locally, regionally, and globally-yet simultaneously demonstrate how these complex, specific masculinities are essentially hidden in plain sight" (p. 89). More research is needed to better understand how men deal with these perceived antagonistic norms.

Conclusion

At a time when gender roles and norms are increasingly being challenged, we feel it is important to know what men and women believe people of their gender ingroup and outgroup expect of them. The present findings shed light on the dynamics that shape men's behaviors by showing that men perceive traditional masculinity to be normative only in ingroup contexts.

Declaration of Conflicting Interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The author(s) received no financial support for the research, authorship, and/or publication of this article.

Supplemental Material

Supplemental material for this article is available online.

Notes

1. Results did not differ significantly when including non-heterosexual participants.
2. Research materials and databases for Experiments 1 and 2 can be found online on the Open Science Framework, https://doi.org/10.17605/OSF.IO/WJH26. We followed the APA's ethical guidelines for both experiments, which were approved by the authors' institution's ethical committee.
3. We also assessed endorsement of four femininity norms as complementary material. For the sake of clarity and transparency, we describe results for these norms in Appendix B.
4. Both of our experiments also measured valorization of conformity, need to belong, and identification with men. Because these variables were not directly relevant to our hypothesis, analyses of these variables are not described here.
5. Neither excluding these four participants, nor including non-heterosexual participants had any significant effect on the results.
6. As in Experiment 1, we also included items assessing the femininity norm.

Islam Borinca holds a doctorate in social psychology from the University of Geneva. His research focuses on the misunderstanding of prosocial behavior outside a group via indirect contact and the moderating role of prejudice, the intergroup context and social norms. It also examines gender roles, norms and behaviors as well as gender diversity.

Juan M.
Falomir-Pichastor is professor in social psychology at the University of Geneva. His research focuses on social influence and intergroup relations, and addresses issues such as the influence of social norms on prejudice and discrimination, the promotion of health and proenvironmental behaviors, or judgments related to social justice.
2021-07-27T00:05:26.158Z
2021-05-27T00:00:00.000
{ "year": 2021, "sha1": "3534faf2649d73a34077d499f6912d3cd6a0c6d0", "oa_license": "CCBYNC", "oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/10608265211018803", "oa_status": "HYBRID", "pdf_src": "Sage", "pdf_hash": "84d874ee475a4604234765ed5fd02549d2d055df", "s2fieldsofstudy": [ "Sociology" ], "extfieldsofstudy": [ "Psychology" ] }
214189858
pes2o/s2orc
v3-fos-license
A modified penalty function method for treating multi freedom constraints in finite element analysis of frames

In finite element analysis, treating boundary conditions that involve multi freedom constraints requires producing a modified system of equations, based on the master stiffness equation, that takes the multi freedom constraints into account. Generally, the operation of imposing multi freedom constraints can be developed using the Master-Slave Elimination, Penalty Augmentation or Lagrange Multiplier Adjunction methods. The master-slave method is useful only for simple cases and exhibits serious shortcomings for treating arbitrary constraints. The Penalty Augmentation and Lagrange Multiplier Adjunction methods are better in many applications, but they are not free of disadvantages. The penalty method suffers from the difficulty of choosing weight values that balance solution accuracy against the violation of the constraint conditions. The multiplier method is sensitive to the degree of linear independence of the constraints, and the bordered stiffness is singular in the case of dependent constraints. To reduce the disadvantages of these methods, this paper presents a new method for treating multi freedom constraints. The proposed method is similar to the penalty function method and is called the "modified penalty function method". Its concept is based on constructing an equivalent solver system of equations from the traditional modified system using regulatory parameters, which are selected so as to reduce the violation of the modified system. To solve the equivalent system, the authors propose an iterative algorithm. Calculation programs were established based on the proposed algorithm, and the results obtained with the proposed method matched the results obtained with other methods.

Introduction

In the finite element method, it is necessary to modify the system of master stiffness equations when imposing multi freedom constraints in order to obtain a solvable system. The modification can be carried out using basically three methods: Master-Slave Elimination, Penalty Augmentation and Lagrange Multiplier Adjunction [1]. The master-slave method is easy to explain and is useful when a few simple linear constraints are imposed by hand, but it exhibits serious shortcomings for treating arbitrary constraints; therefore it is not widely utilized for treating multi freedom constraints. The penalty augmentation and Lagrange multiplier adjunction methods are better suited to general implementations of the finite element method, whether linear or nonlinear, and both techniques are widely used in boundary treatment. But they are not free of disadvantages. The main disadvantage of the penalty method is the difficulty of choosing weight values that balance solution accuracy against the violation of the constraint conditions. The multiplier method is sensitive to the degree of linear independence of the constraints, and the bordered stiffness is singular in the case of dependent constraints. This paper introduces a new method for treating multi freedom constraints in the static analysis of frame systems using the finite element method. The proposed method is similar to the penalty function method and is called the modified penalty function method.
Boundary conditions and multi freedom constraints

In finite element structural analysis [1,2], separating known and unknown components is needed when developing the equations for a linear solver. The necessary step is applying the physical support conditions as displacement boundary conditions, to eliminate rigid body motions and render the system non-singular. The constraint conditions can be single freedom constraints or multi freedom constraints. Single freedom constraints are those mathematically expressible as constraints on individual degrees of freedom; they can be linear homogeneous (u = 0) or linear non-homogeneous (u = prescribed value). Multi freedom constraints are functional equations that connect two or more displacement components; they can be linear homogeneous, linear non-homogeneous or nonlinear homogeneous. For the frame structures shown in Fig 1, the constraint conditions are multi freedom constraints.

Imposition of multi freedom constraints

The set of multi freedom constraints may be collected into the single matrix relation

A u = b, (1)

where u contains all degrees of freedom and, for a single constraint, A is a row vector with the same length as u. Accounting for multi freedom constraints is done by changing the assembled master equations

K u = f (2)

to produce a modified system of equations

K̂ û = f̂. (3)

The modified system (3) incorporates the multi freedom constraints into the finite element model. It is the modified system that is submitted to the equation solver, which returns û, from which u is recovered where necessary.

Penalty function method

The concept of the penalty function method is that each multi freedom constraint is viewed as the presence of a fictitious elastic structural element, called a penalty element, that enforces the constraint approximately. This element is parameterized by a numerical weight w; the exact constraint is recovered as the weight goes to infinity. The multi freedom constraints are imposed by augmenting the finite element model with the penalty elements. The penalty-augmented system can be developed by minimization of the augmented potential energy function and expressed as

(K + A^T W A) u = f_p + A^T W b, (4)

where W is a diagonal matrix of penalty weights and f_p is the nodal load vector. The important advantage of the penalty function method is its lack of sensitivity with respect to whether the constraints are linearly dependent. Its main disadvantage is the choice of the weight values W, which must balance solution accuracy against the violation of the constraint conditions. Selecting appropriate weights is difficult: it requires knowledge of the magnitudes of the stiffness coefficients and extensive numerical experimentation.

Lagrange multiplier method

The concept of this method is based on the technique of variational calculus. The potential energy of the unconstrained finite element model is

Π = (1/2) u^T K u - u^T f.

To impose the constraint, Lagrange multipliers collected in a vector λ (unknowns) are adjoined, and the Lagrangian is formed with condition (1):

L = Π + λ^T (A u - b).

The multiplier-augmented system of equations, obtained by extremizing L with respect to u and λ, is expressed as

[ K   A^T ] [ u ]   [ f ]
[ A    0  ] [ λ ] = [ b ].

The main advantage of this method is that it gives the exact solution. But it has the important disadvantage of sensitivity with respect to whether the constraints are linearly dependent; in such cases the bordered stiffness can be singular.
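To make the two classical treatments concrete, here is a small self-contained Python sketch that imposes a single multi freedom constraint on a toy stiffness system by both penalty augmentation and Lagrange multiplier adjunction. The 3-DOF matrix and the constraint u1 - u3 = 0 are illustrative inventions, not the paper's frame example.

```python
import numpy as np

# Toy assembled master system K u = f (a 3-DOF spring chain, illustrative only).
K = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])
f = np.array([0.0, 0.0, 1.0])

# Single MFC  A u = b :  u1 - u3 = 0.
A = np.array([[1.0, 0.0, -1.0]])
b = np.array([0.0])

# --- Penalty augmentation:  (K + A^T W A) u = f + A^T W b ---
w = 1e8                        # penalty weight; its choice is the method's weak point
W = np.diag([w])
u_pen = np.linalg.solve(K + A.T @ W @ A, f + A.T @ W @ b)

# --- Lagrange multiplier adjunction: bordered system [[K, A^T], [A, 0]] ---
m = A.shape[0]
bordered = np.block([[K, A.T],
                     [A, np.zeros((m, m))]])
sol = np.linalg.solve(bordered, np.concatenate([f, b]))
u_lag, lam = sol[:3], sol[3:]  # lam is the constraint (reaction) force

print("penalty :", u_pen)      # approximate: constraint violated by O(1/w)
print("lagrange:", u_lag)      # exact (here u = [1, 1, 1])
```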
Modified penalty function method

To escape the disadvantages of the penalty function method and the Lagrange multiplier method, the authors propose a modified penalty function method for treating multi freedom constraints. First, the load vector is resolved into two components and the master stiffness equations are written as

K u = f_p + f_r, (9)

where f_p is the nodal load vector and f_r is the nodal reaction load vector. From equations (1) and (9) a combined system of equations (10) is formed, from which the solver system (11) is constructed using regulatory parameters α1 and α2 that satisfy a prescribed condition; with this condition, systems (11) and (9) are equivalent. The following step is seeking the solution of the equivalent system (11). An iterative method [4] is used to solve the linear system; the proposed iterative algorithm for seeking the solution proceeds as follows. Let u* and f_r* be the roots of system (11), and construct the iterative sequences (12). When u = u* and f_r = f_r*, system (11) is satisfied exactly (13). Subtracting (13) from (12) and changing the index from "k" to "k+1" yields (14.1) and (14.2). Equating the left-hand sides of (14.1) and (14.2) gives (15); multiplying both sides of (15) by the appropriate factor gives (16), and applying the norm ||.|| to both sides of (16) gives (17), from which the convergence condition (18) is obtained. The remaining difficulty is selecting the regulatory parameters. In this research a new approach is proposed: the regulatory parameters are selected so that the conditions below are satisfied, where "n" is the number of nodal displacement unknowns. With this choice, the convergence of the proposed method is easy to prove.

Examples and results

For the purpose of this research, the analysis of a frame having multi freedom constraints was implemented. The calculation programs were established based on the algorithms of the penalty function method, the Lagrange multiplier method and the modified penalty function method.

Example formulation

The frame is composed of three bars made of the same material and having the same geometrical properties (the frame is shown in Fig 4), and the geometric, material and loading parameters are given there.

Figure 5. Nodal numbers, element numbers and nodal displacements of the examined frame.

Results

The results of the frame analysis using the penalty function method with variable weight values W are shown in Table 1; the correct solution is written in bold. The results of the frame analysis using the Lagrange multiplier method are shown in Table 2. The results of the frame analysis using the proposed modified penalty function method are shown in Tables 3 and 4.

Table 4. Displacement and reaction results using the modified penalty function model with convergence tolerance.
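The extracted text does not preserve the authors' update formulas (12)-(18) or their parameter-selection conditions, so the sketch below shows a closely related, standard iteration instead: the augmented-Lagrangian (Uzawa) scheme, which likewise alternates a penalty-type solve with an update of the estimated constraint forces and recovers the exact constrained solution with only a moderate weight. It is offered as an illustration of this family of iterative MFC treatments, not as the paper's algorithm.

```python
import numpy as np

def uzawa_mfc(K, f, A, b, w=1e4, tol=1e-12, max_iter=200):
    """Iteratively enforce A u = b on K u = f (augmented-Lagrangian/Uzawa,
    shown as a stand-in for the paper's modified penalty iteration)."""
    Kw = K + w * A.T @ A            # augmented matrix, reused every iteration
    lam = np.zeros(A.shape[0])      # running estimate of the constraint forces
    u = np.zeros(K.shape[0])
    for _ in range(max_iter):
        # Penalty-type solve with the current constraint-force estimate.
        u = np.linalg.solve(Kw, f + w * A.T @ b - A.T @ lam)
        r = A @ u - b               # constraint violation
        if np.linalg.norm(r) < tol:
            break
        lam += w * r                # multiplier (reaction) update
    return u, lam

# Reusing the toy system from the previous sketch, the iteration converges to
# the exact constrained solution even for moderate w:
# u_it, lam_it = uzawa_mfc(K, f, A, b)
```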
2020-01-09T09:15:28.259Z
2019-12-01T00:00:00.000
{ "year": 2019, "sha1": "707c07d74040e3ef66353df926e194d90039e651", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1425/1/012097", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "0604570be49867e7d463411cbca0bddd53e75f4d", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Computer Science", "Physics" ] }
30863695
pes2o/s2orc
v3-fos-license
Forest Fragments with Larger Core Areas Better Sustain Diverse Orchid Bee Faunas (Hymenoptera: Apidae: Euglossina)

Male orchid bees were attracted to chemical baits and collected in nine Atlantic Forest fragments in southeastern Brazil. Fragments differed in size and shape. Three additional sites were also sampled in a nearby large fragment. Three hypothetical core areas of each fragment were measured as the total area minus the area of a 50-, 100-, or 200-m-wide perimeter strip. Abundance and richness were not correlated with either fragment size or the area/perimeter ratio, but were positively correlated with the size of core areas. These results suggest that orchid bee conservation requires the preservation of the fragments with the largest possible core areas. Neither size nor shape alone (area/perimeter ratio) seemed to be a good indicator of the value of a given fragment for sustaining diverse and abundant faunas of orchid bees.

Deforestation almost always results in fragmentation of the original forest into isolated patches of tall trees embedded in a modified matrix (Tocher et al 1997). Species richness and population sizes of forest-dependent animals and plants usually decline as a result of forest loss and fragmentation (Franklin & Forman 1987, Collinge 1996, but see Cane 2001). Because there are no methods to determine the minimum areas of reserves with reference only to ecosystem properties (see Soulé & Simberloff 1986, Beier 1993), biologists have been forced to conduct viability analyses for a few "indicator" or "umbrella" species as an efficient way to address the viability of the whole system (Soulé 1987, Noss 1991). These analyses, however, have focused on large vertebrates, which require large areas (e.g. Picton 1979, Freemark & Merriam 1986, Dodd 1990, Laurance 1990, 1994, Beier 1993, Lankester et al 1991, Newmark 1991, Opdam 1991, Herkert 1994, Brooks et al 1999, Chiarello 1999), but little is known about the effects of fragmentation on faunas of invertebrates (see Hopkins & Webb 1984, Klein 1989, Daily & Ehrlich 1995).

The few studies involving fragmentation and orchid bees were carried out in a recently fragmented landscape in the Amazon Basin (Biological Dynamics of Forest Fragments Project; Powell & Powell 1987, Becker et al 1991) or in Atlantic Forest areas in which fragmentation took place over a century ago (Bezerra & Martins 2001, Tonhasca Jr et al 2002, Souza et al 2005, Nemésio & Silveira 2007, Aguiar & Gaglianone 2008, Farias et al 2008). These studies involved few fragments and only related fragment size to bee diversity. Although orchid bees are able to fly several kilometers each day in search of food and aromatic compounds (Janzen 1971), there is evidence that some species are unable to cross open spaces only a few dozen meters wide between two forest fragments (Powell & Powell 1987, Becker et al 1991). This suggests they are strongly dependent on forest environments.
Male orchid bees are the main (and frequently the only) pollinators of about 650 species of orchids (Ackerman 1989), as well as of other plant species (reviewed by Dressler 1982). For this reason, their conservation is a matter of concern. Most euglossine species are found only in forest habitats (Roubik & Hanson 2004), which have been widely destroyed in the Neotropics. To effectively conserve orchid bees in remnant forests, it is necessary to understand all the effects of habitat fragmentation on these bees. The main goal of this paper was to assess the influence of the size and shape of fragments on their orchid bee community structure.

Material and Methods

Study sites. Data were collected in nine forest fragments near the urban areas of the Belo Horizonte metropolitan region (state of Minas Gerais, Brazil), with more than three million inhabitants. Belo Horizonte (19º58'-20º06'S, 43º55'-44º04'W, elevation: 800-1,100 m) is at the border of two major Brazilian biomes, the Atlantic Forest and the "Cerrado" (Brazilian savanna). The dominant forest in the region is the semideciduous forest, called "low mountain rain forest" by Rizzini (1979), at elevations of 300-800 m. The forest canopy reaches 15-25 m and trunks vary from 40 cm to 60 cm in diameter. There are relatively few epiphytes and lianas, but the understory is well developed. Denser stands of larger trees grow in the humid ravines. The forest gets sparser and shorter as the altitude increases, being substituted at the top of the tallest hills by patches of Cerrado or (above 1,000 m) by "campos rupestres" (rocky fields). The regional climate is the Aw of Köppen (tropical with rainy summers and a dry winter, with a mean annual temperature of 18°C). These forest fragments are surrounded by areas with different degrees of anthropogenic disturbance. Locations, sizes, elevations and shape parameters of the nine sampled fragments are given in Table 1.
Sampling. Male orchid bees were captured monthly at a single fixed spot in each site, between 10:00h and 16:00h, during one year, between May 1999 and April 2000. Five chemicals (benzyl acetate, 1,8-cineole, eugenol, methyl trans-cinnamate, and vanillin) were used to attract the bees. They were soaked into cotton wads hanging from branches at about 1.5 m above the soil surface and at least 2.0 m distant from each other. Each month, the three sites at RSC were always sampled on the same day, as were the three sites at Barreiro (Barreiro large, median, and small) and the two sites of Catarina (Catarina large and small). This practice was adopted to avoid bias from possibly different climatic conditions had sites of the same area been sampled on different days. All collected specimens were pinned, identified and deposited at the entomological collection of the Taxonomic Collections of the Universidade Federal de Minas Gerais. Taxonomy follows Nemésio (2009).

Data analysis. Sizes and "shapes" of forest fragments were correlated with the abundance and species richness of their orchid bee faunas through the Spearman rank correlation test, considering a 5% significance level. The shape of a forest fragment was characterized as: (i) the area/perimeter ratio and (ii) the core area size. The core area is the area resulting from the exclusion of a uniform border of a given width from the fragment. Three core areas were estimated for each fragment, excluding borders 50 m, 100 m, and 200 m wide (respectively, CA50, CA100 and CA200) measured from the forest edge. When, after excluding a given border, a fragment was split into two or more core areas, only the largest one was used for analysis. When no area was left after the exclusion of a border strip of a given width, we tried two analyses: (i) the fragment was not considered in the correlation test for that category; (ii) the area of the fragment was considered to be zero and it was included in the correlation test. The sites in RSC were not used for core area analyses, since they are not fragments, but sites situated in the same continuous large area.
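A minimal sketch of the core-area measure just described: erode a fragment polygon inward by the border width and keep the largest remaining piece. The rectangular fragment coordinates are hypothetical; shapely's buffer with a negative distance performs the erosion.

```python
from shapely.geometry import Polygon, MultiPolygon

def core_area_m2(fragment: Polygon, border_m: float) -> float:
    """Area left after removing a uniform border of width border_m (meters)."""
    core = fragment.buffer(-border_m)
    if core.is_empty:
        return 0.0                        # the whole fragment is edge
    if isinstance(core, MultiPolygon):    # split cores: keep only the largest
        core = max(core.geoms, key=lambda g: g.area)
    return core.area

# Toy 40-ha rectangular fragment (800 m x 500 m), coordinates in meters.
frag = Polygon([(0, 0), (800, 0), (800, 500), (0, 500)])
for width in (50, 100, 200):
    print(f"CA{width} = {core_area_m2(frag, width) / 1e4:.1f} ha")
# -> CA50 = 28.0 ha, CA100 = 18.0 ha, CA200 = 4.0 ha
```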
Table 1. Sampled sites in the Belo Horizonte metropolitan region and some important features. Areas: SM = Serra da Moeda; BS = Área de Proteção Especial (APE) do Barreiro (small fragment); BM = APE do Barreiro (medium-sized fragment); BL = APE do Barreiro (large fragment); CS = APE do Catarina (small fragment); CL = APE do Catarina (large fragment); FCH = APE de Fechos; PM = Parque das Mangabeiras; TAB = APE de Taboões. Different fragments in a same area are named after their relative sizes: s = small; m = median; l = large. Campo = campo rupestre (rocky field); Cerrado = Brazilian savanna. CAs are estimates of core area, obtained by subtracting, from the total area of the fragment, the corresponding area of 50, 100, and 200 m of edge. Ratio a/p = ratio area/perimeter. Notes: (5) split into four areas of 1.5 ha, 3.0 ha, 7.5 ha and 12.0 ha, respectively; (6) split into two areas of 10.0 ha and 11.0 ha, respectively; (7) split into two areas of 91.1 ha and 95.7 ha, respectively.

For the same reason, the RSC sites were considered as the largest fragments when the effect of fragment size was analyzed. To avoid bias, when the nine fragments of the Belo Horizonte region were analyzed with respect to their core areas, two sets of data were generated: the first including all nine fragments and the second excluding the two sites situated at the highest elevations (FCH and SM, Table 1), leaving only the seven sites situated at approximately the same elevation. Data and analyses focusing on elevation were published elsewhere (Nemésio 2008).

The similarity in faunistic composition among the twelve sites was estimated by the percent similarity index of Renkonen, recommended by Wolda (1981) for small samples. Based on those similarities, the areas were grouped using UPGMA (Sneath & Sokal 1973). The resulting similarity matrix was correlated to a matrix of geographic distances among the sites. Nevertheless, since the elements of these matrices are not independent (Fortin & Gurevitch 1993), the Mantel permutation test was used for these correlations (Douglas & Endler 1982, Manly 1994, Sokal & Rohlf 1995). For calculating the Z statistics, 1,000 permutations were used, as recommended by Fortin & Gurevitch (1993).

Results

A total of 2,381 male orchid bees belonging to at least 14 species were collected at the nine areas in Belo Horizonte and the three sites at RSC (Table 2). Abundance and species richness were not correlated with fragment size, independent of the data set employed. No correlation was found between core area size and abundance or richness considering the nine fragments. Nonetheless, when the two sites situated at the highest elevations (Serra da Moeda and Fechos, both above 1,300 m) were excluded and only the seven fragments situated approximately at the same altitude (900-1,100 m) were considered, both abundance (CA100 and CA200: rs = 0.90, n = 5, P < 0.05) and richness (CA100 and CA200: rs = 0.98, n = 5, P < 0.05) were correlated with the core area of fragments for the widest perimeter categories (CA100 and CA200). This result was also achieved when the fragments with CA100 = 0 were considered to have area zero and included in the analysis (for abundance, CA100 and CA200: rs = 0.69, n = 7, P < 0.05; for richness, CA100 and CA200: rs = 0.72, n = 7, P < 0.05). Abundance and species richness were not correlated with the area/perimeter ratio in any analysis.
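The similarity and Mantel analyses reported above can be sketched as follows in Python; the abundance and distance matrices here are hypothetical placeholders for the data in Table 2, and scipy's 'average' linkage corresponds to UPGMA.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage      # method='average' is UPGMA
from scipy.spatial.distance import squareform

def renkonen(counts_a: np.ndarray, counts_b: np.ndarray) -> float:
    """Percent similarity: sum of the minimum relative abundances per species."""
    pa = counts_a / counts_a.sum()
    pb = counts_b / counts_b.sum()
    return float(np.minimum(pa, pb).sum())       # 1 = identical faunas

def mantel(m1: np.ndarray, m2: np.ndarray, n_perm: int = 1000, seed: int = 0):
    """Permutation test for the correlation between two square matrices."""
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(m1, k=1)           # off-diagonal elements only
    obs = np.corrcoef(m1[iu], m2[iu])[0, 1]
    n, hits = m1.shape[0], 0
    for _ in range(n_perm):
        p = rng.permutation(n)                   # permute rows/columns together
        if abs(np.corrcoef(m1[iu], m2[np.ix_(p, p)][iu])[0, 1]) >= abs(obs):
            hits += 1
    return obs, hits / n_perm                    # statistic and permutation p

# UPGMA dendrogram from a similarity matrix `sim` (diagonal = 1):
# Z = linkage(squareform(1.0 - sim, checks=False), method='average')
```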
The ordination of the sites according to their faunas (Fig. 1) shows a great overall similarity among the sites, with the most distinctive of them (RSC-1) still sharing more than 40% similarity with the others. The seven sites at approximately the same altitude in the Belo Horizonte region were more similar to each other than to the two sites at the highest elevations or to the RSC sites. When similarity was correlated with geographic distance through the Mantel test, a significant correlation was obtained (r = -0.494; t = -2.68; n = 12; P = 0.004), i.e., the shorter the distance, the greater the similarity among sites.

Discussion. Orchid bee species richness and composition. The species collected in the present study are essentially the same as those collected in a previous study carried out in the same region (Nemésio & Silveira 2007), although only the fragment Parque das Mangabeiras was sampled in both studies. The only species recorded in the present study and not collected by Nemésio & Silveira (2007) was Eufriesea violacea (Blanchard); nonetheless, this species was collected at RSC and not in the Belo Horizonte region (see Table 2). The same is true for species composition; details on the currently known distribution of these species were also presented by Nemésio & Faria Jr (2004) and by Nemésio (2009). The high similarity among the orchid bee faunas of all sampled sites may reflect the connections among the fragments and also their obvious common biogeographic history. The correlation between geographic distance and faunal similarity revealed by the Mantel test, and clearly seen in Fig. 1, should be pointed out. Besides, it is noticeable that fragments situated at approximately the same elevation grouped together (FCH and RSC-2; SM and RSC-3; and the seven fragments situated at approximately the same elevation in Belo Horizonte: [BS + (CL + CS)] + [(BL + BM) + TAB + PM]). Interestingly, BS grouped first with the two sites of the Catarina reserve (CL + CS) instead of grouping with BL and BM, its neighboring sites. This is due to the strong influence of Eulaema nigrita Lepeletier (see Table 2), a species regarded as typical of open and/or disturbed areas (see Tonhasca Jr et al 2002 and Nemésio & Silveira 2006b; for an alternative hypothesis, see Bezerra & Martins 2001 and Nemésio & Silveira 2006a). The four largest fragments of the Belo Horizonte area situated at similar elevations (BM, BL, TAB, and PM), where the dominance of El. nigrita is weaker than in BS, CL and CS, grouped together (see Fig. 1).

Fragment size. Nemésio & Silveira (2007) found a positive correlation between fragment size and abundance (but not species richness, though it was suggested that fragment size could influence species richness). Nevertheless, the data presented here do not corroborate such a correlation. The difference between those results may be due, primarily, to the areas sampled in each study. All four fragments sampled by Nemésio & Silveira (2007) lie at similar elevations (850-1,100 m), whereas in this study fragments between 900 m and 1,400 m were sampled. Moreover, the four fragments of the first study were quite distant from each other (2.3-12.8 km) while, in the present study, fragments with different degrees of connection were selected.
Barreiro-small (2 ha) presented high abundance and species richness, most probably because it lies between two larger fragments (45-180 ha) and only a few tens of meters away from both of them. Thus, many of the bees collected there may have been attracted from the larger fragments nearby. This surely contributed to the weakening of the correlations, since high values of abundance and richness were attributed to one of the smallest areas. It should also be noted that the orchid bee faunas of neighboring fragments tended to be the most similar to each other when the distance between fragments was smaller than 100 m (Barreiro large, median, and small; Catarina large and small; distances between all other fragments were larger than 1,000 m). This also suggests that some migration between close-by fragments does occur. Moreover, fragment sizes and shapes were more homogeneous among the forest patches studied by Nemésio & Silveira (2007), probably promoting similar environmental conditions among fragments.

Table 2. Number (N) and relative abundance (%) of male orchid bees according to species collected in the twelve sampling sites: Barreiro-small (BS), Barreiro-median (BM), Barreiro-large (BL), Catarina-small (CS), Catarina-large (CL), Fechos (FCH), Serra da Moeda (SM), Taboões (TAB), Parque das Mangabeiras (PM), and the three sites at RSC (RSC-1, 2, and 3).

Fragment shape. It is known that two patches of the same size but with different amounts of edge may have different population dynamics (Fahrig & Merriam 1994), and it is common sense that, among fragments of the same size, the one with the shape most closely approaching a circle should be preferred for conservation purposes, since it would best reduce edge effects (e.g. Diamond 1975, Begon et al 2006). Although Game (1980) argued that "in certain circumstances the optimal shape may be other than circular", she recognized that, if the extinction rate is highly dependent on shape, then the optimal shape is circular (Game 1980:631). In relatively isolated fragments or sets of fragments immersed in an urban matrix, as in the present study, the main challenge is to reduce the extinction rate and not to increase the immigration rate. The area/perimeter ratio is suggested as a practical way to assess the "shape quality" of fragments: the higher its value, the more similar to a circle a fragment will be; the lower the value, the greater the edge effects will be. However, this ratio is of limited use when fragments of different sizes are compared, since the area/perimeter ratios of two areas of the same shape but different sizes are not equivalent, the larger area also presenting the larger area/perimeter ratio (for a circle of radius r, for instance, the ratio equals r/2 and thus grows with size; see the numerical sketch below).
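A quick numerical illustration of this scale dependence, with arbitrary toy values:

```python
# Two squares of identical shape but different size get different a/p values,
# whereas a core-area criterion treats edge width in absolute terms.
for side in (100.0, 1000.0):              # metres
    area, perim = side**2, 4 * side
    print(f"square side {side:6.0f} m -> a/p = {area / perim:6.1f} m")
# Output: 25.0 m vs 250.0 m, although both fragments are exactly square.
```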
Given that it is not shape itself that matters for organisms dependent on deep-forest environments, but the actual area that is isolated from edge effects within the fragment, the use of core areas should be preferred as a tool to evaluate fragment quality for conservation. Moreover, studying the correlation between core areas and population abundance and species richness is a practical way of estimating the absolute distance below which edge effects are important for different kinds of organisms. Thus, our data, combined with those presented in a previous study (Nemésio & Silveira 2006b), suggest that at 50 m from the edge the orchid bee community is still heavily affected by edge effects. Data on the orchid bee fauna of a large fragment of Atlantic Forest (36,000 ha) showed that the orchid bee faunas at 400 m and 500 m from the forest edge are more similar to those at 2,000 m and 4,000 m from the edge than to that at 50 m (Nemésio & Silveira 2006b). The data presented here suggest that the orchid bee faunas of the fragments with the largest core areas at least 100 m from the closest edge are richer and more abundant than those occurring in areas with limited core areas. Thus, large but narrowly linear reserves will not be effective in conserving orchid bees. Our data do not allow us to estimate the minimum area for effective conservation of euglossine species dependent on deep-forest environments. However, considering that orchid bees are believed to fly from a few to several thousand meters in search of the resources they need (Janzen 1971), it can be expected that forest species demanding well-preserved environments will need reserves of several hundred to a few thousand hectares for long-term conservation. Small fragments, however, remain important for conserving less restrictive species. This study suggests that the best areas to be preserved in the Atlantic Forest domain, as far as orchid bee conservation is concerned, are those still holding well-preserved core areas at least 100 m from the closest forest edge. Complementary studies involving a larger number of fragments are nevertheless necessary to define what the minimum size of such core areas would be for each species. The use of core areas thus seems to be a useful tool for conservation policy, since areas to be preserved can be objectively selected. Since different organisms respond differently to specific distances from the forest edge, combinations of minimum edge distances for core areas estimated for several taxa should be employed in selecting the best areas to be preserved.
A New Paradigm of Multiheme Cytochrome Evolution by Grafting and Pruning Protein Modules

Abstract. Multiheme cytochromes play key roles in diverse biogeochemical cycles, but understanding the origin and evolution of these proteins is a challenge due to their ancient origin and complex structure. Up until now, the evolution of multiheme cytochromes composed of multiple redox modules in a single polypeptide chain was proposed to occur by gene fusion events. In this context, the pentaheme nitrite reductase NrfA and the tetraheme cytochrome c554 were previously proposed to be at the origin, through a gene fusion event, of the extant octa- and nonaheme cytochromes c involved in metabolic pathways that contribute to the nitrogen, sulfur, and iron biogeochemical cycles. Here, we combine structural and character-based phylogenetic analysis with an unbiased root placement method to refine the evolutionary relationships between these multiheme cytochromes. The evidence shows that NrfA and cytochrome c554 belong to different clades, which suggests that these two multiheme cytochromes are products of truncation of ancestral octaheme cytochromes related to the extant octaheme nitrite reductase and MccA, respectively. From our phylogenetic analysis, the last common ancestor is predicted to be an octaheme cytochrome with nitrite reduction ability. Evolution from this octaheme framework led to the great diversity of extant multiheme cytochromes analyzed here, by pruning and grafting of protein modules and hemes. By shedding light on the evolution of multiheme cytochromes that intervene in different biogeochemical cycles, this work contributes to our understanding of the interplay between biology and geochemistry across large time scales in the history of Earth.

Introduction. Multiheme cytochromes c (MHCs) catalyze diverse chemical reactions in prokaryotes, providing them with a remarkable metabolic versatility (Bewley et al. 2013; Deng et al. 2018). These metalloproteins exploit the redox, spin, and acid-base properties of the heme cofactors to perform chemical reactions that are pivotal to many biogeochemical cycles, including those of nitrogen, sulfur, and iron (Pereira and Xavier 2006; Mayfield et al. 2011; Paquete et al. 2019). Some MHC families share sequence and structural similarities that unequivocally reflect a common ancestral origin (Sharma et al. 2010). Understanding their evolution has been a challenge due to low amino acid sequence conservation and the limited number of available 3D structures, even though structures are better preserved than sequences across large timescales (Chothia and Lesk 1986; Illergård et al. 2009; Ingles-Prieto et al. 2013). Nevertheless, the current paradigm for the emergence of the different known MHCs is the proposed fusion of redox modules, going from simple MHCs to more complex ones containing more heme cofactors per polypeptide chain. This comes from the observation of repetitive redox modules in MHCs, which has made it possible to relate different MHC families (Roldán et al. 1998; Carrondo et al. 2006; Santos-Silva et al. 2007; Clarke et al. 2011; Pokkuluri et al. 2011; Pereira et al. 2017; Edwards et al. 2020). In the case of the pentaheme nitrite reductase (NrfA) and the octaheme hydroxylamine oxidoreductase (HAO), structural similarities of the heme core and of the three interface-forming helices have long been pointed out as a sign of a common origin between these two MHCs, which have distinct functions in the nitrogen cycle (Igarashi et al. 1997; Einsle et al. 1999).
Octaheme nitrite reductases (ONRs), discovered later, showed even more similarities with NrfA with respect to structure and function (Polyakov et al. 2009; Tikhonova et al. 2012). Cytochrome c554 (cyt c554), which is unrelated to NrfA, also showed similarities with HAO regarding the heme-core structure (Iverson et al. 1998), suggesting that diverse evolutionary mechanisms took place in the emergence of these proteins. In all of these proteins, catalysis takes place at a heme with one open coordination site at the iron (penta-coordinated) for access of the substrate. Using phylogenetics and placing NrfA at the root, an ancestral ONR was proposed to be an intermediary in the evolution from NrfA to the presumably more recent HAO family of proteins (Klotz et al. 2008). In that proposal, an unknown triheme cytochrome fused with NrfA to originate ONR. That simple evolutionary scheme required revision as more homologous MHCs were characterized. For example, the structure of the copper-containing sulfite reductase MccA (Hermann et al. 2015), which also reduces nitrite, and that of the octaheme nitrite reductase IhOCC (Parey et al. 2016) revealed similarities with NrfA, ONR, and HAO. However, unlike in ONR and HAO, the penta-coordinated heme of MccA is located in the N-terminal region (Hermann et al. 2015). This feature is also found in the octaheme tetrathionate reductase (OTR; Mowat et al. 2004), which was proposed to have diverged from NrfA via a different route, based on its lack of the interface-forming helices characteristic of HAO and ONR (Klotz et al. 2008). More recently, the structure of the cell-surface nonaheme OcwA from the electroactive bacterium Thermincola potens JR was determined (Costa et al. 2019). This protein, which has iron-oxide reductase activity in vitro in agreement with its proposed physiological role, did not reveal structural similarities with other structurally characterized cell-surface iron reductases (Edwards et al. 2012; Liu et al. 2014; Edwards et al. 2015; Wang et al. 2019; Edwards et al. 2020). In contrast, OcwA showed similarities with NrfA at the C-terminus and with cyt c554 at the N-terminus, conserving the penta-coordinated hemes of both proteins. These observations led to the proposal that an ancestral OcwA-like protein could represent the evolutionary link between NrfA and cytochrome c554 and the different octaheme cytochromes (Costa et al. 2019). In this scenario, OcwA would have originated from a gene fusion event between an ancestral NrfA and cyt c554; subsequent loss of one of the two penta-coordinated hemes and diversification would then have led to the different extant octaheme cytochromes (Costa et al. 2019). However, these previous studies (Klotz et al. 2008; Costa et al. 2019) lacked a phylogenetic analysis combining structural and character (sequence) information with an unbiased root placement method. Here, we used these criteria, combined with a minimal functional-site characterization of the hemes, to refine the previous evolutionary proposals. Our phylogenetic analysis revealed that cyt c554 and NrfA are products of truncation events from different octaheme cytochromes and that the last common ancestor (LCA) is inferred to be an octaheme cytochrome able to reduce nitrite.

Identification of the Homologous Group of MHCs. The MHCs analyzed in this work (table 1) are involved in diverse metabolic pathways. NrfA, ONR, IhOCC, MccA, and OTR are involved in dissimilatory nitrite and/or sulfite and/or tetrathionate reduction.
Cyt c554, HAO, and the hydrazine dehydrogenase (HDH) are involved in ammonia oxidation reactions: HAO oxidizes hydroxylamine within the aerobic ammonia oxidation metabolism; HDH oxidizes hydrazine in the anaerobic ammonia oxidation (anammox) metabolism; and cyt c554 functions as an electron transfer protein from HAO to the membrane-bound cytochrome c552 (for a review, see Paquete et al. 2019). OcwA is involved in dissimilatory iron reduction by extracellular electron transfer (Carlson et al. 2012; Costa et al. 2019). Nevertheless, all of them display nitrite or nitric oxide reductase activity in vitro (Costa et al. 2019; Paquete et al. 2019). Sequence and structural similarities allowed the assignment of the recently identified undecaheme OmhA (Gavrilov et al. 2021) to this homologous group (table 1). This protein was isolated from the S-layer of the thermophilic and Gram-positive bacterium Carboxydothermus ferrireducens (Gavrilov et al. 2012). OmhA shares 29.5% semi-global sequence identity with the group, and its structure superimposes on those of the other MHCs with a low Cα root mean square deviation (fig. 1). The exception is cyt c554 and NrfA, which lack similarity to each other, as these two cytochromes align at opposing ends (N- and C-terminal, respectively) of the octa-, nona-, and undecaheme cytochromes. Based on this observation, cyt c554 and NrfA are used to define the N- and C-terminal modules, respectively, of the MHCs analyzed in this work (fig. 1). All other MHCs share a similar heme-core arrangement, with three or more hemes within these modules displaying good structural superimposition. One key difference is the presence of one or two penta-coordinated hemes: NrfA, ONR, HAO, HDH, and IhOCC contain the penta-coordinated heme within the "C-terminal module", whereas cyt c554, MccA, and OTR contain it within the "N-terminal module" (fig. 1). OcwA and OmhA are currently the only proteins that possess both penta-coordinated hemes. Another notable observation is the presence or absence of the three C-terminal oligomerization-forming helices, which are absent in cyt c554 (which lacks the C-terminal module) and in OTR.

Comparison of the Backbone Trace and Heme-Core Architecture. In order to assess the structural distances among these MHCs, the backbone and heme-core arrangements were compared. As shown by the network representation in figure 2 (the corresponding distance matrices are available in supplementary tables S1-S4, Supplementary Material online), OTR is significantly different from all the other MHCs; this protein appears as an outlier with respect to both the backbone and the heme-core structural comparisons (fig. 2). Based on the backbone trace and heme-core architecture, the remaining MHCs clustered into three distinct clades: HAO, HDH, and IhOCC (clade 1); NrfA and ONR (clade 2); and cyt c554, MccA, OcwA, and OmhA (clade 3). In clades 1 and 2, intracluster structural distances are globally shorter than in clade 3, which did not always meet the selected threshold for node (structure) connection (gray lines in fig. 2). Nevertheless, clade 3 MHCs were consistently found together when compared with the rest of the MHCs.

Comparison of the Heme Centers as Minimal Functional Sites. Minimal functional sites (MFSs) in metalloproteins are portions of the 3D structure that focus on the region around the metal site(s). MFSs of heme centers were extracted from representative structures of the MHCs listed in table 1, compared all versus all with the MetalS2 tool (Andreini et al. 2013), and clustered into groups of structurally similar MFSs.
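Before turning to the results, this grouping step can be sketched as follows; the sketch assumes a symmetric all-versus-all score matrix like the MetalS2 output and uses the 2.25 average-linkage cutoff quoted in Materials and Methods. The three-site matrix below is fabricated.

```python
# Average-linkage clustering of MFS structural-similarity scores with scipy.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

scores = np.array([[0.0, 1.1, 3.0],
                   [1.1, 0.0, 2.9],
                   [3.0, 2.9, 0.0]])      # symmetric, zero diagonal

condensed = squareform(scores)            # condensed distance vector
tree = linkage(condensed, method="average")
groups = fcluster(tree, t=2.25, criterion="distance")
print(groups)                             # e.g. [1 1 2]: sites 1 and 2 cluster
```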
This analysis resulted in 10 groups, comprising 72 of the 77 MFSs included in the analysis. Table 2 shows the group to which each heme center in the MHCs belongs. As MFSs describe the local environment around a metal cofactor, each group of MFSs identifies a set of heme centers in the MHCs with common structural features, which for the sake of simplicity we refer to as a heme "type". Observation of table 2 confirms that (1) OTR represents an outlier with respect to all the other MHCs, as only its heme centers at positions 3 and 11 are of a type shared with other MHCs; and (2) HAO, HDH, and IhOCC represent a clear subgroup (clade 1 above), as all of them have the same pattern of heme types, which includes three centers.

FIG. 2. Distance network analysis using structural information. Structural comparisons were performed using distance comparisons of the backbone structure at the N-terminal (A) and C-terminal (B) and of the heme core at the N-terminal (C) and C-terminal (D). Each node (black dot) represents one structure. Gray lines connect structures with distances ≤3.5 Å and ≤0.75 Å for the backbone and heme-core structures, respectively. For each MHC, the PDB code of the 3D structure, the position of the hemes in the sequence (numbered according to the OmhA numbering in fig. 1), and the group of MFSs to which each heme center belongs are indicated.

Overall, the results of the analysis of MFSs are consistent with those obtained from the comparison of the backbone trace and heme-core architecture, which found HAO and IhOCC clustered in clade 1 and the remaining MHCs divided into clades 2 and 3 (fig. 2).

Character-Based Phylogeny. A total of 5,700 sequences (accession codes and sequences are provided in supplementary files, Supplementary Material online) were collected from the NCBI RefSeq database (supplementary table S5, Supplementary Material online). These correspond to sequences highly homologous to at least one of the reference MHCs listed in table 1, with global identities ranging from 23.1% to 100%. Globally, the diversity of prokaryotes harboring homologous MHCs was greatly expanded relative to those whose MHCs have been structurally characterized (table 1): a total of 22 phyla and two domains (Bacteria and Archaea) were found in this analysis (supplementary table S5, Supplementary Material online). NrfA was the most represented family within the database in terms of number of sequences and diversity, whereas the cell-surface MHCs (OcwA and OmhA) were very poorly represented. As OTR was shown to be an outlier in the structural comparisons (see the backbone trace, heme-core architecture, and minimal functional-site comparisons above), it was not included in this analysis. HAO and HDH homologs overlapped extensively, as HAO and HDH from Kuenenia stuttgartiensis are phylogenetically closer to each other than either is to HAO from Nitrosomonas europaea; these two groups were therefore merged from here on (HAO/HDH). After redundancy reduction (see Sequence Collection and Clustering), phylogenetic inference was performed using multiple sequence alignments (MSAs) containing either NrfA (NrfA+) or cyt c554 (cyt c554+) together with the rest of the MHCs, as these two cytochromes align at opposing ends of the sequences of the remaining MHCs. Mismatches between the phylogenetic position of some sequences and the taxonomic classification of the organisms from which they derive were observed across all clades analyzed (supplementary figs. S4-S6, Supplementary Material online), suggesting widespread horizontal gene transfer (HGT) of these MHCs.
Notably, OcwA and OmhA sequences presented only vertical transfer in the current data set. However, this may be a consequence of the poor representation of these two MHCs in the public database analyzed, which contains only two and four highly homologous sequences, respectively, all from bacteria belonging to a single genus (Thermincola and Carboxydothermus, respectively; supplementary table S5, Supplementary Material online).

Conservation of Critical Residues for Function. In order to gain further insight into the emergence of the unique features that differentiate these MHCs, the conservation of residues critical for function was examined within each MHC clade (fig. 4). The ability to oxidize hydroxylamine or hydrazine appears to have a relatively recent origin within clade 1, judging by the branching points of the homologous sequences that conserve the tyrosine residue at the crosslink position (supplementary fig. S7, Supplementary Material online). In clade 2, all ONRs and NrfAs contain the typical known features for nitrite reduction, namely the lysine proximal ligand of the penta-coordinated heme and the catalytic tyrosine, histidine, and arginine, corresponding to K131, Y217, H282, and R113 in Sulfurospirillum deleyianum (PDB entry 1QDB_A), respectively. The only exceptions are the NrfA variants without the lysine proximal ligand of the penta-coordinated heme (supplementary fig. S8, Supplementary Material online), which appear to have emerged recently within clade 2. Clade 3 is more diversified in terms of function and structure than clades 1 and 2. Nevertheless, the identified residues important for catalysis are all conserved in OcwA and MccA, which have catalytic residues assigned in their structures. All OcwA and OmhA sequences conserve the non-bonding histidines localized on the distal side of the penta-coordinated hemes, corresponding to H282 and H381 in T. potens JR (PDB entry 6I5B_A). OcwA and OmhA also conserve two extra histidines near the first and second penta-coordinated hemes (positions 4 and 7 in table 2), H281 and H380 in T. potens JR (PDB entry 6I5B_A). T. potens JR H380 is conserved in all OcwA and OmhA sequences, whereas T. potens JR H281 is conserved in all OcwA sequences and in half of the OmhA sequences. All MccA sequences conserve the known residues important for catalysis, namely Y123, K208, Y285, Y301, R366, K393, C399, and C495 in Wolinella succinogenes (PDB entry 4RKM_A). The eighth heme (position 11 in table 2) has the uncommon CX15CH motif (Hermann et al. 2015). In addition, we also found the CX17CH heme-binding motif for the eighth heme of the MccA sequences in our data set.

Discussion. The dissection of the origin and evolution of MHCs has attracted significant scientific interest but has also faced numerous challenges due to the low sequence similarity preserved among these proteins over time. Nevertheless, a great number of MHCs have been identified as homologs based on structural similarities. The timeline for their emergence is typically thought to progress by fusion of redox modules towards more complex MHCs. These can reach up to 16 hemes among structurally characterized MHCs, but genes coding for putative MHCs containing as many as 113 hemes per polypeptide chain have been reported (Roldán et al. 1998; Carrondo et al. 2006; Santos-Silva et al. 2007; Clarke et al. 2011; Pokkuluri et al. 2011; Pereira et al. 2017; Edwards et al. 2020; Leu et al. 2020).
Towards understanding the evolution of MHCs, we focused our attention on a group of MHCs involved in the biogeochemical cycles of iron, nitrogen, and sulfur. In addition to gene fusion, gene fission is also a well-established mechanism generating diversity of domain combinations in all domains of life (Kummerfeld and Teichmann 2005; Pasek et al. 2006; Weiner and Bornberg-Bauer 2006). However, only gene fusion was considered when proposing the origin of the octa- and nonaheme cytochromes from NrfA and cyt c554 (Klotz et al. 2008; Costa et al. 2019); indeed, the previous proposals for MHC evolution were biased towards a fusion event (Klotz et al. 2008; Costa et al. 2019). By performing a thorough phylogenetic inference with both structural and sequence data, we now show that NrfA and cyt c554 were likely formed by truncation of octaheme cytochromes from different clades. Phylogenetic analysis revealed three main clades composed of HAO/HDH and IhOCC (clade 1), NrfA and ONR (clade 2), and OcwA, OmhA, MccA, and cyt c554 (clade 3). NrfA and cyt c554 belong to different clades and, consequently, the MHCs phylogenetically closest to NrfA are not the same as those closest to cyt c554: the closest homolog of NrfA is ONR, whereas for cyt c554 the closest homolog is MccA. This implies that a fusion event between NrfA and cyt c554 was not responsible for the origin of OcwA. Although previous studies presented unrooted trees, NrfA was taken as the implicit root of those phylogenetic analyses, based on the premise that nitrate/nitrite ammonification is more ancestral than the ammonia oxidation reactions.

FIG. 3. Character-based phylogenetic analysis of the homologous MHCs. Maximum likelihood phylogenetic trees based on the cyt c554+ (A) and NrfA+ (B) multiple sequence alignments. Bayesian phylogenetic trees based on the cyt c554+ (C) and NrfA+ (D) multiple sequence alignments. Protein sequences fell into three main clades according to their tree positioning and are represented in different colors (red, blue, and green) and labelled 1, 2, and 3. Asterisks indicate the root placement according to the minimal ancestor deviation method (Tria et al. 2017). Bootstrap/SH-aLRT (ML trees) or posterior probability (Bayesian trees) confidence percentages are given near each node for the major splits. The tree scale represents the number of substitutions per site.

FIG. 4. Cladogram with the most parsimonious solution for the evolution of the MHCs after phylogenetic analysis. Circles and rectangles represent the hemes and catalytic activities, respectively, for each MHC. Dark circles and rectangles represent hemes and activities inherited from the LCA, whereas gray and white represent heme or activity gain and loss, respectively. Gains and losses of hemes and activities from the LCA to the extant MHCs are depicted near each respective position in the cladogram. Penta-coordinated hemes are marked within black rectangles. ND, no data.

However, from a global cycle dating perspective, isotope signatures cannot at present discriminate between all of the different reactions within the nitrogen cycle (Stüeken et al. 2016). Moreover, since those studies, other MHCs capable of nitrite reduction besides NrfA have been identified (e.g., IhOCC; Parey et al. 2016), and dissimilatory iron reduction is also considered to be a very early form of bioenergetic metabolism (Vargas et al. 1998; Johnson et al. 2008).
Attempts at a rigorous assessment of relative dating using taxonomic information in order to settle the root placement are complicated by the evidence that most of these MHCs have undergone considerable HGT, which was detected in the present analysis. Likewise, HGT of MHCs has been extensively reported in the literature, including for NrfA (Welsh et al. 2014), HAO (Bergmann et al. 2005), and several MHCs involved in extracellular electron transfer, such as MtrCAB and OmcA from Shewanella spp. (Zhong et al. 2018; Baker et al. 2022). Outgroup rooting is by far the most commonly used method to infer the root position of a phylogenetic tree, but it is unsuitable when the ingroup is already very divergent and no appropriate outgroup is available. Considering this, we used the minimal ancestor deviation method to root our trees (Tria et al. 2017). This method is an updated version of midpoint rooting that allows for perturbations of the molecular clock, and therefore for heterotachy. This analysis placed the root between clade 1 and the junction of clades 2 and 3. Using this information, we conclude that clade 1 is the most ancient, as it represents the basal clade relative to the root position, followed by clades 2 and 3. Using the most parsimonious solution for inferring ancestral states, the LCA is inferred to be an octaheme cytochrome that was able to reduce nitrite. In the timeline that we propose (fig. 4), clade 1 was the first to branch, splitting into HAO/HDH and IhOCC, both of which retain the heme-core architecture of the LCA. The penta-coordinated heme of HAO/HDH is a P460 heme, and most of the sequences collected for HAO/HDH conserve the tyrosine residue at the crosslinking position of the representative structures (supplementary fig. S7, Supplementary Material online). In contrast, the P460 heme is absent in IhOCC and is predicted to be absent also in the LCA. In this scenario, the P460 heme evolved de novo in the HAO/HDH subclade. Indeed, an HAO-family protein lacking the tyrosine crosslink (HAOr) was recently isolated from K. stuttgartiensis and showed no hydroxylamine oxidation activity, while reducing nitrite to nitric oxide (Ferousi et al. 2021). The second branching clade was clade 2, which later diversified into ONR, which conserved the overall heme-core arrangement of the LCA, and NrfA, which further lost the three N-terminal hemes. The lysine residue as proximal ligand of the penta-coordinated heme appears to be an innovation of this clade, conserved in all ONR sequences. In contrast, not all of the sequences collected for NrfA conserve this lysine residue, although no structure is available for these variants. Clade 3 appeared more recently, with the addition of a second penta-coordinated heme (position 4 in table 2). OcwA and OmhA still conserve the first penta-coordinated heme (position 7 in table 2) that is common to the two more ancient clades (1 and 2), and OmhA gained two extra hemes at the N-terminus. In contrast, MccA and cyt c554 have lost the first penta-coordinated heme (position 7 in table 2), and cyt c554 further lost the last four C-terminal hemes. The inference that the LCA of the MHCs involved in the nitrogen and sulfur cycles was an octaheme cytochrome, and that loss of hemes took place, was previously proposed and discussed (Kern et al. 2011; Simon and Klotz 2013). However, figure 1 shows that the sequence alignments used by Kern et al. (2011) are not congruent with the corresponding 3D structure alignments. Kern et al.
(2011) aligned the penta-coordinated hemes of MccA and cyt c554 (position 4 in table 2) with those of NrfA, ONR, and HAO (position 7 in table 2), which results in a 180° y-axis misalignment relative to the aligned protein structures of these two groups presented in figure 1. Two consequences arise from this: the tree topology artificially exacerbates the distance between these two groups, and, by proposing an alignment of NrfA with cyt c554, Kern et al. (2011) failed to recognize the existence of the redox modules that are clearly apparent in figure 1. Since Kern et al. (2011), many MHC structures have been solved, which facilitates the correct identification of these redox modules. The cladogram of figure 4 provides additional support to the analysis performed here, given that it recapitulates the patterns of heme "types", as described by the MFSs, observed in extant MHCs (supplementary fig. S9, Supplementary Material online). In this scenario, clade 1 conserved the local heme environments of the LCA, whereas clades 2 and 3 underwent successive modifications. The common branching of clades 2 and 3 from clade 1 involved the modification of the first two hemes (positions 3 and 5 in table 2). The subsequent origin of clade 3 involved the addition of a new penta-coordinated heme (position 4 in table 2) with characteristics similar to those of the first penta-coordinated heme (position 7 in table 2), and further modification of the heme at position 5 (table 2).

Conclusion. MHCs were abundantly employed by nature in the development of multiple biogeochemical cycles across large time scales, and were of high importance for the colonization of all extant ecological niches of life on Earth. Previous proposals for the evolution of MHCs have been biased towards gene fusion events (Roldán et al. 1998; Carrondo et al. 2006; Santos-Silva et al. 2007; Clarke et al. 2011; Pokkuluri et al. 2011; Pereira et al. 2017; Edwards et al. 2020). Our study changes this perspective by showing that fission of heterogeneous redox modules also drives the evolution of MHCs and should a priori be equally considered. It shows that NrfA and cyt c554 likely resulted from truncation events of an ancestral ONR and MccA, respectively, and that the common ancestor of the MHCs analyzed here was likely an octaheme cytochrome similar to extant IhOCC with nitrite reductase activity. Evolution from this ancestral octaheme cytochrome included pruning and grafting of heme-binding polypeptide modules, which led to the emergence of extant MHCs that catalyze very distinct reactions within the nitrogen, sulfur, and iron biogeochemical cycles. Altogether, this work opens a new perspective on our understanding of the evolution of MHCs and of their changing role in the interplay between biology and geochemistry across large timescales.

Materials and Methods. Backbone and Heme-Core Comparisons. Protein structures were retrieved from the PDB database (table 1). Chain A of each protein was selected for the structural comparisons. The data were divided into two subsets for analysis, one including all MHCs except NrfA and another including all MHCs except cyt c554, as these two align at opposing ends of the octa-, nona-, and undecaheme MHCs. Heme-core comparisons were performed using the PyMOL Molecular Graphics System (version 2.0, Schrödinger, LLC). The structural positions of the 33 non-mobile atoms of the porphyrin ring were selected for each heme. For the analysis including cyt c554, the three N-terminal hemes common to all MHCs except NrfA were selected (positions 3, 5, and 6 in table 2); for the analysis including NrfA, the four C-terminal hemes common to all MHCs except cyt c554 were selected (positions 8, 9, 10, and 11 in table 2). The pair-fit function was used to superimpose the corresponding positions of each pair, and the pairwise RMSD values were used to construct a distance matrix (the superposition step is sketched below).
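PyMOL's pair-fit performs a least-squares superposition of paired atoms; the following is a minimal numpy sketch of the same computation (the Kabsch algorithm) followed by an RMSD readout. The coordinate arrays stand in for the 33 porphyrin-ring positions and are fabricated for illustration, not data from the study.

```python
# Least-squares superposition (Kabsch) of two matched coordinate sets + RMSD.
import numpy as np

def kabsch_rmsd(P: np.ndarray, Q: np.ndarray) -> float:
    """RMSD between point sets P and Q (n x 3) after optimal superposition."""
    P = P - P.mean(axis=0)                  # center both sets
    Q = Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(P.T @ Q)       # SVD of the covariance matrix
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against improper rotations
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    diff = P @ R.T - Q
    return float(np.sqrt((diff ** 2).sum() / len(P)))

rng = np.random.default_rng(1)
heme_a = rng.normal(size=(33, 3))           # fabricated coordinates
heme_b = heme_a + rng.normal(scale=0.2, size=(33, 3))
print(f"RMSD = {kabsch_rmsd(heme_a, heme_b):.2f} Å")
```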
The distance matrix was then fed into Cytoscape 3.7.1 to construct networks based on structural distance (Shannon et al. 2003). The representation was generated using the prefuse force-directed layout algorithm with normalized weight values and 1,000 iterations. The backbone structures of the MHCs were compared using mTM-align (Dong, Pan, et al. 2018). From the multiple structure alignment generated, pairwise RMSDs were retrieved and used to build a matrix that was then used as input for network representation in Cytoscape, following the same procedure as for the heme-core structural comparisons.

Minimal Functional Sites Analysis. MFSs were extracted for each heme in the PDB structures listed in figure 1 and compared using the MetalS2 tool (Andreini et al. 2013). The MFSs were then clustered by average-linkage clustering using a threshold value of 2.25, which has previously been shown to indicate significant structural similarity between sites (Rosato et al. 2016). This procedure resulted in the clustering of 69 out of a total of 77 MFSs. A further three MFSs were included in clusters because all of their similarity scores below 2.50 (Valasatava et al. 2015) involved MFSs belonging to the same cluster. The residue numbers for each heme in the reference PDB structures are listed in the Supplementary Material online.

Sequence Collection and Clustering. Homologous sequences were collected from the NCBI RefSeq database using BLAST (Altschul et al. 1990). The sequences of all the proteins whose structures have been determined were used as queries (access date: September 15, 2021). An e-value of 1e-50 was selected as the threshold. Protein sequences were filtered according to the expected number of heme-binding motifs (e.g., five heme-binding motifs for NrfAs). The common heme-binding motif (CX2CH) as well as other, less common heme-binding motifs (CX2CK, CX3CH, CX4CH, CX11CH, CX15CH, and CX17CH) were used in the filtering steps; a minimal sketch of this filter is given below.
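```python
# Sketch of the motif-based filtering step: keep a sequence only if it carries
# the expected number of heme-binding motifs. The regular expression encodes
# CX2CH/CX2CK plus the rarer variants listed above; the example sequence is an
# artificial fragment, not a real protein.
import re

HEME_MOTIF = re.compile(
    r"C.{2}C[HK]|C.{3}CH|C.{4}CH|C.{11}CH|C.{15}CH|C.{17}CH"
)

def count_heme_motifs(seq: str) -> int:
    return len(HEME_MOTIF.findall(seq))

def keep(seq: str, expected: int) -> bool:
    """e.g. expected = 5 for NrfA candidates, 8 for octaheme cytochromes."""
    return count_heme_motifs(seq) == expected

toy = "MKKCAACHAAAACAACHGGGCAACK"   # fabricated test string
print(count_heme_motifs(toy))       # -> 3
```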
The collected sequences for each protein group were aligned using MAFFT 7 (Katoh and Standley 2013). Using the MEGA 7 platform (Kumar et al. 2016), overall mean distances were computed individually using the p-distance model (Nei and Kumar 2000), uniform rates, and pairwise deletion. Standard errors were calculated using the bootstrap method with 1,000 replications (supplementary table S5, Supplementary Material online). CD-HIT (Li et al. 2001; Huang et al. 2010) was used to cluster highly homologous sequences in order to reduce the size of the data set while maintaining its overall diversity. A total of 25 representative sequences, not counting the reference sequences previously used as queries, were selected for each group; the reference sequences with 3D structures were added separately.

Sequence Alignments. For the phylogenetic analyses, two MSAs were built: one containing protein sequences for all targeted MHCs except NrfA (cyt c554+ MSA) and a second containing all MHCs except cyt c554 (NrfA+ MSA). The MSAs are provided in supplementary files, Supplementary Material online. The protein sequences with reported 3D structures were first aligned using PROMALS3D (Pei et al. 2008) with default parameters. This program integrates sequence information derived from predicted secondary structure and profile-profile hidden Markov models with structural information derived from sequence-structure and structure-structure alignments as constraints for consistency-based progressive alignment. In addition, user-defined constraints of alignable heme-binding motifs and other identified structural elements of importance, for example distal axial ligands and active-site residues, were used as anchors for the alignment. This MSA was then used as a structural reference alignment in MAFFT 7 (Katoh and Standley 2013) to align the sequences retrieved from NCBI (see Sequence Collection and Clustering). The L-INS-i algorithm was used with the "leave gappy regions" option. The resulting MSA was inspected in MEGA 7 and manually refined where necessary. Low-quality positions containing >75% gaps were filtered out with trimAl version 1.3 (Capella-Gutiérrez et al. 2009) within the Phylemon 2.0 platform (Sánchez et al. 2011); this trimming step is sketched below. For the pairwise sequence alignment of OcwA and OmhA, the Needleman-Wunsch algorithm (Needleman and Wunsch 1970) was used and the alignment was rendered with ESPript 3.0 (Robert and Gouet 2014).
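```python
# A minimal re-implementation of the trimAl column-filtering rule used above:
# alignment columns containing more than 75% gaps are discarded.
# The three aligned toy sequences are placeholders.
def trim_gappy_columns(msa, max_gap_frac=0.75):
    n = len(msa)
    keep = [i for i in range(len(msa[0]))
            if sum(s[i] == "-" for s in msa) / n <= max_gap_frac]
    return ["".join(s[i] for i in keep) for s in msa]

msa = ["MK-LC--H",
       "MKALC--H",
       "MK-LCA-H"]
print(trim_gappy_columns(msa))   # the all-gap column (>75% gaps) is removed
```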
Phylogenetic Analysis. As NrfA and cyt c554 align at opposing ends of the octa-, nona-, and undecaheme MHCs, phylogenetic inference was performed separately on the two MSAs (cyt c554+ and NrfA+; see Sequence Alignments), generating two phylogenetic trees. Each tree was reconstructed using maximum likelihood and Bayesian methods. The cyt c554+ MSA contained 749 positions and the NrfA+ MSA contained 793 positions. For maximum likelihood inference, phylogeny was reconstructed using IQ-TREE (version 2.1.2, COVID edition built October 22, 2020; Minh et al. 2020) on the XSEDE (Towns et al. 2014) and CIPRES (Miller et al. 2010) platforms. For model selection, the ModelFinder method (Kalyaanamoorthy et al. 2017) was used; the best models according to the Bayesian information criterion (BIC) were WAG + R8 and WAG + R7 for the cyt c554+ and NrfA+ MSAs, respectively. For the generation of branch support values, the ultrafast bootstrap (Hoang et al. 2018) and SH-aLRT (Guindon et al. 2010) statistical methods were used, with confidence values based on 1,000 replications for each method. For Bayesian inference, phylogeny was reconstructed using MrBayes (version 3.2.7a x86_64; Ronquist et al. 2012) on the XSEDE and CIPRES platforms. As MrBayes does not include the FreeRate model of among-site rate heterogeneity available in ModelFinder (Kalyaanamoorthy et al. 2017; Minh et al. 2020), the best model was selected using Smart Model Selection (Lefort et al. 2017) at the ATGC Montpellier Bioinformatics Platform (http://www.atgc-montpellier.fr/sms/). The best model according to BIC was WAG + G + I for both the cyt c554+ and NrfA+ MSAs. The best-fit parameters for the cyt c554+ MSA were: proportion of invariable sites, 0.035; number of substitution rate categories, 4; gamma shape parameter, 1.738. For the NrfA+ MSA, the best-fit parameters were: proportion of invariable sites, 0.030; number of substitution rate categories, 4; gamma shape parameter, 1.993. The analysis was run using two independent runs of 12 Metropolis-coupled Markov chain Monte Carlo chains each, for 2 million generations, sampling from the posterior distribution every 5,000 generations. Chain convergence was assessed using Tracer version 1.7.2 (Rambaut et al. 2018). The minimum effective sample size (ESS) and potential scale reduction factor (PSRF; Gelman and Rubin 1992) values were within acceptable ranges: ESS values were higher than 200 and PSRF values were between 1.000 and 1.001. A majority-rule consensus tree with all compatible groups and the posterior probabilities of the bipartitions was used to reconstruct the MHC phylogeny, after discarding the first 50% of the sampled trees as burn-in. Tree rooting was performed with the minimal ancestor deviation method (Tria et al. 2017). HGT was assessed by comparing the phylogenetic position of the different protein sequences with the classification of the corresponding organisms based on the NCBI taxonomy database (Federhen 2012); mismatches at the phylum or class level were considered evidence of HGT. Taxonomic classes were assigned to each tip of the maximum likelihood and Bayesian trees constructed above. Tree visualization and representation were performed using the Interactive Tree Of Life tool (Letunic and Bork 2019).

Supplementary Material. Supplementary data are available at Molecular Biology and Evolution online.
DEVELOPING LIFE SKILLS IN PRE-SERVICE TEACHER TRAINING. This article is focused on studying the importance of developing life skills in pre-service teacher training and on identifying strategies for implementing them. The notion of 'life skills' has been defined, and the importance of life skills development has been substantiated. The four major skills for life identified by the Partnership for 21st Century Skills (creativity and innovation, critical thinking and problem-solving, communication and collaboration) are the focus of the article. Pre-service teachers' readiness to develop these skills in their prospective students is highlighted, and strategies and examples of possible activities for implementing life skills in the educational process are provided. The PRESETT Curriculum on Methodology (2020) was thoroughly analysed in order to see how well pre-service teachers of English are trained to develop life skills in their prospective students. The Curriculum is concluded to be effective in preparing pre-service teachers of English for life skills development.

Introduction. The world we live in is marked by rapid changes, to which education systems need to adapt. Therefore, the New Ukrainian School reform puts forward new requirements for the educational system. A key change concerns approaches to learning and to educational content. According to the reform, a 21st-century education is to be focused on developing the skills students need to succeed in this new world. Teachers should create conditions in their lessons that enable school children to collaborate and communicate in the classroom, use their creativity, think critically and take responsibility for their own learning results [9]. Thus, the development of core skills has become of utmost importance in modern education. The development of these skills is crucial not only for entering the job market and contributing to regional development but also for enhancing individuals' well-being and mental health. Therefore, developing life skills has become one of the major aspects of pre-service teacher training.

Main text. This article is focused on studying the importance of developing life skills in pre-service teacher training and identifying strategies for implementing them. The study used a descriptive research design to obtain information concerning the ways life skills are implemented in the PRESETT Curriculum. A literature review on the research problem has brought us to a better understanding of life skills. The World Health Organization (WHO) defined life skills as the abilities for adaptive and positive behaviour that enable individuals to deal effectively with the demands and challenges of everyday life [7]. Danish, S. et al. (2004) consider life skills to be the skills allowing individuals to succeed in the different environments in which they live, such as school, home, and their wider environment [4]. Meanwhile, Cronin, L. D. & Allen, J. (2017) define life skills as the skills needed to deal with the demands and challenges of everyday life [3]. There is a large volume of published studies describing the importance and effectiveness of life skills education. According to the New Ukrainian School (2017), "it is not enough to only feed a child with knowledge. It is also necessary to teach how to use that knowledge. Knowledge and skills, linked to the pupil's value system, form their life competencies that are essential for successful self-fulfillment in life, education and work" [9]. Prajapati, R. et al.
(2016) mention that life skills education bridges the gap between basic functioning and capabilities and strengthens the ability of an individual to meet the needs and demands of present-day society [8]. Gupta, P. (2015) emphasizes that life skills education can play a vital role in helping students to lead a healthy and productive life and to contribute positively to society [5]. In the introduction to their article on the development of life skills through school sport activities, Agustin, N. M. & Oktriani, S. (2021) point out that life skills are one of the formulas that can be applied to facilitate and develop all forms of potential of the younger generation during the learning process in formal and informal classes [1]. There is no definitive list of life skills: certain skills may be more or less relevant depending on the individual's life circumstances, culture, age, geographical location, etc. However, in 1999 the World Health Organisation identified six key areas of life skills:
- Communication and interpersonal skills. This broadly describes the skills needed to get on and work with other people, and particularly to transfer and receive messages either in writing or verbally.
- Decision-making and problem-solving. This describes the skills required to understand problems, find solutions to them, alone or with others, and then take action to address them.
- Creative thinking and critical thinking. This describes the ability to think in different and unusual ways about problems and find new solutions, or generate new ideas, coupled with the ability to assess information carefully and understand its relevance.
- Self-awareness and empathy, two key parts of emotional intelligence. They describe understanding yourself and being able to feel for other people as if their experiences were happening to you.
- Assertiveness and equanimity, or self-control. These describe the skills needed to stand up for yourself and other people, and to remain calm even in the face of considerable provocation.
- Resilience and the ability to cope with problems, which describes the ability to recover from setbacks and treat them as opportunities to learn, or simply as experiences [6].
The Partnership for 21st Century Skills has played a key role in promoting the development of life skills. The organization has identified four major skills for life: creativity and innovation, critical thinking and problem-solving, communication and collaboration [7]. They are often called the Four Cs of the 21st century. We consider these skills to be a part of every lesson in the same way as literacy and numeracy, so pre-service teachers must be ready to develop them in their prospective students. In order to do this well, they need to be knowledgeable about strategies for implementing life skills in the educational process. Having reviewed the literature on the research problem [10], we designed a table providing both strategies and examples of their implementation in the classroom (Table 1). Table 1 provides strategies for implementing life skills both in class and out of class. The given examples suggest a better understanding of the possible activities a teacher can use, thus giving teachers room for creativity in designing activities relevant to the subject they teach.
In order to see how well pre-service teachers of English are trained to develop life skills in their prospective students, the PRESETT Curriculum on Methodology was thoroughly analysed [2]. As a result, we can state that, being based on the constructivist or 'to theory through practice' approach, the Curriculum enhances the development of life skills in future teachers of English. This means that Methodology classes do not rely on traditional lectures; on the contrary, the theory is taught through practice. Under this approach students are actively involved in a dynamic teaching and learning process. This active involvement is facilitated by working in small groups and pairs, brainstorming, solving problems, using case studies, simulations, group projects, etc. All of the methods suggested by the Curriculum are intended to promote a high level of interaction and students' involvement in their own learning processes, and thus they develop life skills.

Summary and conclusions. Summing up, we may state that this research confirmed the importance of developing life skills in pre-service teachers. Life skills have been shown to help students build confidence in both communication and collaboration, provide them with tools important for development, and find new ways of thinking and problem-solving, thus equipping them with what they need to live a more creative life and to cope with the challenges of modern society. In this article we emphasized the necessity of pre-service teachers' readiness to develop life skills in their prospective students. The Four Cs of the 21st century (communication, collaboration, critical thinking, and creativity) were considered, and strategies for implementing these life skills in the educational process were developed. In order to see how well pre-service teachers of English are trained to develop life skills in their prospective students, the PRESETT Curriculum on Methodology was analyzed. It was concluded that the Curriculum is based on the constructivist or 'to theory through practice' approach and provides life-skills-oriented activities regularly practiced in Methodology classes. So, we can state that pre-service teachers are well trained for life skills development.

Examples of activities from Table 1:
- a project assigned to a team;
- a research document prepared using Google Docs;
- a quiz competition between teams;
- a real-life problem to be solved;
- a case study assigned to a group of students;
- seminars, webinars, conferences;
- creation of study clubs;
- creation of an investigation team;
- breakout rooms on Zoom or splitting into small groups.
On Free Field Realizations of Strings in BTZ. We discuss realizations of the SL(2,R) current algebra in the hyperbolic basis using free scalar fields. It has previously been shown by Satoh how such a realization can be used to describe the principal continuous representations of SL(2,R). We extend this work by introducing another realization that corresponds to the principal discrete representations of SL(2,R). We show that in these realizations spectral flow can be interpreted as twisting of a free scalar field. Finally, we discuss how these realizations can be obtained from the BTZ Lagrangian.

1 Introduction. 2+1 dimensional anti-de Sitter space (AdS3) is one of the recently studied examples of string theory in non-trivial backgrounds. Since AdS3 is the group manifold of special linear transformations, it can be studied using an SL(2,R) Wess-Zumino-Witten model. However, the physical spectrum of the SL(2,R) WZW model seemed to contain states of negative norm. A resolution of this problem is to utilize an additional symmetry of the theory, known as spectral flow. In the context of the SL(2,R) WZW model, this symmetry was first discussed in [1,2], where it was noted that spectral flow is necessary for the modular invariance of the theory. In [3], it was shown how the states of negative norm can be removed from the physical spectrum using spectral flow. One possible extension of [3] is to study orbifolds of AdS3 space-time. These include the AdS3/Z_N orbifolds, which have been examined in [4,5]. Another interesting case is the Bañados-Teitelboim-Zanelli (BTZ) black hole [6], which can be obtained by quotienting AdS3 by a boost. As opposed to pure AdS3, specific features of the BTZ black hole include the appearance of space-time horizons and a region that is analogous to a space-like singularity. String theory on BTZ black holes based on the WZW model has been described in [7,8,9,10]. In the context of spectral flow, the model has been analyzed in [11,12]; spectral flow in the fermionic sector has recently been discussed in [13]. In this letter, we consider a realization of the SL(2,R) current algebra using free scalar fields. The realization is given in the hyperbolic basis, which is the appropriate choice for a BTZ black hole. A similar investigation has been made by Satoh [14] (see also [15]), but the discussion was limited to the principal continuous representations only. We also clarify the role of spectral flow in this setup: spectral flow can be formulated as twisting of a free field in a basis where the generator of the Cartan subalgebra is diagonal. We will also discuss how the scalar field realization can be derived starting from the classical BTZ Lagrangian, and show that the values of the spectral flow parameters obtained in this way coincide with the results of a previous study [11]. We wish to construct a realization of (1) using free fields. For the description of spectral flow, it is useful to work in a basis where the Cartan current J^2 is diagonal. A suitable realization is obtained by introducing three bosonic scalar fields X^a (a = 0, 1, 2) obeying the operator product expansion X^a(z) X^b(w) ∼ −η^{ab} ln(z − w), where the signature of the metric η is chosen to be η = diag(−1, 1, 1). The currents satisfying (1) can then be constructed as in [14,16], in terms of the fields X^± = X^0 ± X^1 and X^2, where the notation k′ = k − 2 has been introduced. The energy-momentum tensor is obtained from the currents by the Sugawara construction, and the resulting central charge of the system is c = 3k/k′.
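As a quick consistency check on this value (a sketch of the counting rather than a derivation in the paper's own conventions): under the common normalization in which a free boson coupled to a background charge Q contributes c = 1 + 12Q², and using the value Q = −1/√(2k′) identified in the next paragraph, the three scalars give

c = 2 + (1 + 12Q²) = 3 + 12 · 1/(2k′) = 3 + 6/k′ = 3(k′ + 2)/k′ = 3k/k′,

in agreement with the central charge quoted above.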
We see that the energy-momentum tensor represents a system of three free fields, where the space-like field $X^2$ is coupled to the background charge $Q = -1/\sqrt{2k'}$. It turns out that this realization corresponds to the principal continuous representations of SL(2,R).

Next, one would like to know what the vertex operators are in this realization. One finds the vertex operators $V_{j,J}$ of [14,16], whose conformal weight is related to the second Casimir $c_2 = -j(j+1)$ of SL(2,R). In the SL(2,R) current algebra, there exist operators that contain only regular terms in their operator products with the currents. These so-called screening operators $\eta_\pm \propto \exp(\pm\sqrt{k/2}\,\cdots)$ are given in [14]; their expressions are determined up to constant factors and total derivatives.

This construction of a realization of the SL(2,R) current algebra using three scalar fields is not unique. We need another realization that corresponds to the principal discrete representations in order to fully describe the physical spectrum of the SL(2,R) current algebra. In this letter, we point out that this realization is obtained by making the substitutions $X^2 \to -iX^0$, $X^0 \to iX^2$ in the previous formulae, which transform the currents and the stress tensor accordingly. It should be noted that the field coupling to the background charge is now the time-like field $X^0$. The connection between this realization and the discrete representations will be established in the next section. The vertex operator corresponding to this realization has the same dimension and satisfies the same OPEs as in the previous realization (6), (7).

Representations of SL(2,R)

If a holomorphic field $X(z)$ is coupled to a background charge $Q$, the mode expansion for $X(z)$ and the corresponding Virasoro generators follow in the standard way. The SL(2,R) invariant vacuum state $|0\rangle$ is invariant under the global conformal transformations, $L_{0,\pm 1}|0\rangle = 0$. Using the realization (4), the vacuum state is found to be labeled by the background charge. The states corresponding to the vertex operators (5), i.e. the primary states, then follow, and from this expression we can read off the relation between $j$ and the momentum component $p^2$.

Let us compare this with the $j$-values of the principal continuous representations $\hat{C}$ of SL(2,R). From the unitarity condition of $\hat{C}$ we conclude that the realization (5) actually belongs to $\hat{C}$, since the momentum component $p^2$ is real. However, we introduced another realization (10) where the background charge is coupled to a time-like field. This realization leads to a different set of vacuum and primary states, and hence to a different $j$-value, which is to be compared with the $j$-values and the unitarity condition of the discrete representations $\hat{D}^\pm$ of SL(2,R).

There exists a problem with the Hilbert space of the discrete representations. Namely, their spectrum is afflicted by ghosts (states with negative norm). The ghosts appear in the discrete representations when $j < -k/2$. The spectrum could be truncated by hand, but doing so would create a lower bound on the second Casimir $c_2 = -j(j+1)$, which is related to the mass of a state. In string theory, such a bound would be considered artificial. A way out of this conundrum was given in [3] using the spectral flow symmetry of the theory.

The fact that $J^\pm$ shifts $J$ by $\mp i$, as in equation (6), might at first appear puzzling, but this is actually a feature of the hyperbolic basis [17,10,11]. The operator $J^2$ now corresponds to a non-compact direction of the target space, and has a continuous spectrum.
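For reference, in the conventions we assume here (the standard ones for the SL(2,R) WZW model, cf. Maldacena and Ooguri), the unitarity conditions and the weight formula used in the comparison read:

    % Standard SL(2,R) conditions (assumed conventions, not the paper's
    % exact displays):
    \begin{align}
    \hat{C}_j &: \quad j = -\tfrac{1}{2} + i\lambda,\ \lambda \in \mathbb{R}
      \quad\Longrightarrow\quad c_2 = -j(j+1) = \tfrac{1}{4} + \lambda^2 , \\
    \hat{D}^{\pm}_j &: \quad j \in \mathbb{R},\ j < -\tfrac{1}{2}
      \quad\Longrightarrow\quad c_2 = -j(j+1) < \tfrac{1}{4} , \\
    \Delta(V_{j,J}) &= \frac{c_2}{k'} = -\frac{j(j+1)}{k-2} .
    \end{align}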
Therefore the states in the Hilbert space must be defined over the continuous spectrum of $J$. The action of the currents $J^2$, $J^\pm$ on the states involves functions $f(J)$, $g(J)$ that correspond to the matrix elements of $J^+_0$, $J^-_0$.

Twisting and Spectral Flow

We now turn to discuss spectral flow in the free field realization. Our goal is to show how spectral flow can be interpreted as twisting of the field $X^1$ in the realizations introduced in section 2.1. The BTZ space-time has the topology of a solid cylinder with one compact space-like direction. Twisted sectors of the theory are generated by winding the string around the compact dimension. It will be shown in section 3 that the winding of the string can be accomplished by twisting the field $X^1$.

In the presence of rotation, the mode expansion (23) holds for a scalar field compactified on a circle of radius $R = \operatorname{Re}\mathcal{R} = \tfrac{1}{2}(\mathcal{R} + \bar{\mathcal{R}})$. The imaginary part of $\mathcal{R}$ is related to the intrinsic angular momentum of the rotating target space; if the system is non-rotating, $\bar{\mathcal{R}} = \mathcal{R}$. The integer $m$ measures the winding of the field $X^1$ around the compact dimension. Also, the momentum becomes quantized, $p^1 = n/R$. The mode expansion (23) shows that twisting acts as a shift of the momentum $p^1$; effectively, twisting is the transformation (24), where we have introduced $w = -m\mathcal{R}/\sqrt{2k}$ and $\bar{w} = -m\bar{\mathcal{R}}/\sqrt{2k}$ for convenience. Consequently, the currents and the energy-momentum tensor transform under (24), and this transformation is independent of the representations $\hat{C}$ and $\hat{D}^\pm$. The transformed currents $\tilde{J}^a$ satisfy the SL(2,R) algebra (1) and the Virasoro algebra. Thus the transformation (24) generates a new representation of the current algebra, which should be taken into account when considering the Hilbert space of the theory. Written in terms of modes, the transformation is in exact agreement with [11]. Hence we identify the transformation (24) as spectral flow in the free field realization. Because the transformation (24) is nothing but twisting, we conclude that spectral flow in the free field realization is twisting of the field $X^1$. A remaining task is to find out what the compactification radius $R$ is; we will discuss this in section 3.

The importance of spectral flow comes from the fact that it can be used to eliminate the appearance of ghosts in the discrete representations of SL(2,R). In general, the spectral-flowed discrete representations $\hat{D}^\pm$ are related to one another. It was shown by Maldacena and Ooguri [3] that spectral flow naturally imposes the limit $-1/2 > j > -(k-1)/2$ on the values of $j$. Within this range of $j$, ghosts do not appear in the spectrum of the discrete representations. If spectral flow is taken to be a symmetry of the theory, the theory will be free of ghosts.

BTZ Coordinates

The aim here is to show explicitly how the free fields $X^a$ introduced in the previous section are related to the fields appearing in the sigma model form of the BTZ Lagrangian. For a correct description of the SL(2,R) current algebra at the quantum level, one needs to resolve some subtleties arising from the transformation of the classical Lagrangian. For the AdS$_3$ case, a similar discussion has been given in [18,19,20]. The BTZ metric [6] for a 2+1-dimensional black hole is equation (29), written in terms of the inner and outer horizons $r_-$ and $r_+$. The angular coordinate has periodicity $\theta \sim \theta + 2\pi$. The time coordinate is usually taken to be non-compact, $-\infty < t < \infty$.
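For concreteness, the standard form of the BTZ line element, in the units suggested by the mass and angular momentum formulas quoted below ($\ell = 1$, $8G = 1$; an assumption on our part), is:

    % Standard BTZ metric (assumed units: AdS radius l = 1, 8G = 1).
    \begin{equation}
    ds^2 = -\frac{(r^2 - r_+^2)(r^2 - r_-^2)}{r^2}\,dt^2
         + \frac{r^2\,dr^2}{(r^2 - r_+^2)(r^2 - r_-^2)}
         + r^2\left(d\theta - \frac{r_+ r_-}{r^2}\,dt\right)^{2} .
    \end{equation}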
The mass and the angular momentum of the black hole are given in terms of the inner and outer horizons, $M_{BH} = r_+^2 + r_-^2$ and $J_{BH} = 2r_+ r_-$. For a non-rotating black hole, the inner horizon goes to zero: $r_- = 0$. To proceed with the analysis, we transform the metric (29) into a more convenient form. We make an analytic continuation to Euclidean time, $t_E = it$, and make a coordinate transformation (30); the resulting metric is the Poincaré patch of Euclidean AdS$_3$. The Lagrangian describing string propagation on this manifold can be written in the sigma model form (31), where the level number $k$ has been included for the correct normalization of the path integral. The periodicity of the BTZ angular coordinate $\theta$ translates into the periodic identifications (32) for $\gamma$, $\bar{\gamma}$, $\phi$, where $\Delta_\pm$ is given in terms of the inner and outer horizons as $\Delta_\pm = r_+ \pm r_-$.

A common trick is to rewrite the Lagrangian by introducing new fields $\beta$, $\bar{\beta}$. The original Lagrangian (31) can then be recovered from (33) by integrating over the fields $\beta$, $\bar{\beta}$. At the quantum level, however, the above transformation leads to an anomalous factor in the functional measure. After evaluation of this factor (for details, see [19]), one obtains the effective Lagrangian (34), in which $R^{(2)}$ is the scalar curvature in two dimensions. It is useful to think of the interaction term $L_{int} = -\frac{1}{k}\beta\bar{\beta}e^{-2\phi}$ as a screening current; in the limit $\phi \to \infty$, it can be treated perturbatively.

Currents

The free part of the Lagrangian (34) leads to the Wakimoto free field realization [21] of the SL(2,R) current algebra. In this realization, the coordinates $\beta, \gamma$ ($\bar{\beta}, \bar{\gamma}$) constitute a holomorphic (antiholomorphic) system of bosonic ghosts with the standard first-order OPE; a similar relation holds for the antiholomorphic fields. In the hyperbolic basis, the holomorphic currents of the SL(2,R) current algebra include $iJ^2(z) = (\gamma\beta)(z) + \sqrt{k'/2}\,\partial\phi(z)$.

Now, we introduce three scalar fields $X^a$ and (re-)bosonize the $(\beta, \gamma)$ system. The OPEs for the fields $X^a$ have been defined in (2). Under the transformation between $\beta, \gamma, \phi$ and the $X^a$, the resulting currents and the energy-momentum tensor, not surprisingly, coincide with the ones given in the previous section, (3) and (4).

Next we consider the periodicity of the coordinates, which follows from (32). In the spirit of the previous section, we require that the twisted sectors of the system are generated by twisting of the field $X^1$; the respective periodicities for the holomorphic and antiholomorphic parts of the coordinate $X^1$ then follow. In this setup, $X^0$ and $X^2$ are not periodic. The same periodicities are obtained for the discrete representations. Comparing the mode expansion (23) and the spectral flow transformation (24), we find $w = m\Delta_-$ and $\bar{w} = m\Delta_+$. A similar relation was found in [11] using classical considerations.

Summary

We have examined a realization of the SL(2,R) current algebra in the hyperbolic basis using free scalar fields. The energy-momentum tensor of the model has a simple form, where one of the scalar fields couples to a gravitational background charge. We have shown that the realizations belong to the principal continuous representations $\hat{C}$ if the field coupling to the background charge is space-like, and to the principal discrete representations $\hat{D}^\pm$ if the coupled field is time-like.
This has a nice analogy in [3], where it was found that long string states generated by spectral flow of space-like geodesics correspond to principal continuous representations, and short string states generated by spectral flow of time-like geodesics correspond to principal discrete representations. We demonstrated how spectral flow can be interpreted as twisting of a free scalar field in this model. Using the BTZ Lagrangian as a starting point, we reviewed the construction of the free field realization of the SL(2,R) current algebra. The periodicity conditions of the BTZ coordinates imply that only a discrete set of values is allowed for the spectral flow parameters. These values also agree with [11].
2014-10-01T00:00:00.000Z
2003-04-02T00:00:00.000
{ "year": 2003, "sha1": "f3ae04bf8146d2cb3cc775108db7f02838bddd23", "oa_license": null, "oa_url": "http://arxiv.org/pdf/hep-th/0304009", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "f9a0f9d7ee97ccd94079f82f65d040d61d06c008", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Physics" ] }
16786793
pes2o/s2orc
v3-fos-license
Serum opsonic deficiency produced by Streptococcus pneumoniae and by capsular polysaccharide antigens.

The opsonic requirements for phagocytosis of S. pneumoniae types 6, 7, 18, and 23 were determined in normal and C2 deficient serum, and in normal serum chelated with magnesium ethyleneglycoltetraacetic acid. All four strains were effectively opsonized via the alternative complement pathway, a finding suggesting that the capsular polysaccharides of these strains activated complement via the alternative pathway. Since bacteremic pneumococcal disease is often associated with circulating capsular polysaccharide, it was considered that this cellular component may activate complement in vivo and impair host defenses by producing an opsonic defect for pneumococci. To examine this hypothesis, serum was incubated with suspensions of whole S. pneumoniae types 6, 7, 18, or 23 or with purified capsular polysaccharide from each of these types, and residual complement activity and opsonic capacity were measured. Hemolytic C3–9 complement activity and opsonic capacity for ³H-thymidine labeled Salmonella typhimurium, a species effectively opsonized via the alternative pathway, were reduced in serum following incubation. Polysaccharide concentrations as low as 1 μg/ml inhibited serum opsonic capacity for salmonella. Whole pneumococci and pneumococcal capsular polysaccharide also inhibited the opsonic activity of human C2 deficient serum for salmonella, further evidence for activation of complement via the alternative pathway. Pneumococcal capsular polysaccharide markedly inhibited the opsonic capacity of normal serum for the homologous pneumococcal type. Thus, amounts of pneumococcal capsular polysaccharide similar to those found in the serum of patients with pneumococcal disease bring about decomplementation of serum via activation of the alternative pathway and inhibit pneumococcal opsonization.

To test this hypothesis, pooled normal human serum or C2 deficient human serum was incubated with whole S. pneumoniae (Pn) types 6, 7, 18, or 23 or with purified PCP from these types. Serum opsonic capacity was measured after incubation with whole bacteria or PCP by a functional assay employing neutrophil phagocytosis of methyl-³H-thymidine labeled Salmonella typhimurium. This species is effectively opsonized via the alternative pathway, thereby permitting evaluation of opsonic activity mediated by the alternative pathway [11]. Both whole pneumococci and PCP of the types tested caused a reduction of serum opsonic capacity and depletion of hemolytic C activity, although inhibition of opsonization varied among the serotypes. Both whole pneumococci and PCP of two representative serotypes also reduced the opsonic capacity of C2 deficient serum, evidence for activation of the alternative pathway.

MATERIAL AND METHODS

Bacteria and PCP. S. pneumoniae types 18 and 23 were isolated from children with bacteremia; type 6 was obtained from the American Type Culture Collection (Rockville, MD); and type 7 was kindly provided by Dr. Robert Austrian (University of Pennsylvania, Philadelphia, PA). Serotypes were confirmed with group-specific antisera (Statens Seruminstitut, Copenhagen, Denmark), and encapsulation was maintained by monthly mouse passage. Purified PCP of types 6A, 7F, 18C, and 23F were kindly supplied by Dr. James Hill (Development and Applications Branch, National Institute of Allergy and Infectious Disease, National Institutes of Health, Bethesda, MD).
These highly purified polysaccharides were in lyophilized form without preservative and have been shown to contain less than 0.001 μg endotoxin by both an in vivo rabbit assay and by the Limulus lysate test; nitrogen content was less than 3% and protein content was less than 1% of their dry weight (Dr. James Hill, personal communication). They have been prepared in accordance with procedures for the manufacture of pneumococcal vaccines by the Lilly Research Laboratories, Indianapolis, Indiana. The strain of Salmonella typhimurium employed has been used in our laboratory for several years.

Bacterial preparation and labeling. This procedure is described in detail elsewhere [5]. Briefly, S. pneumoniae were grown in Mueller-Hinton broth (Difco, Detroit, MI) enriched with 3% bovine albumin, and with 1% D-glucose added during the final two hours of incubation at 37°C. A total of 0.05 mCi methyl-³H-thymidine (specific activity 6.7 Ci/mmol, New England Nuclear, Boston, MA) was added to 10 ml of the growth medium. After overnight incubation at 37°C, the bacteria were harvested by centrifugation and washed three times with isotonic phosphate-buffered saline, pH 7.4 (PBS). Pneumococci were killed by heating at 65°C for 45 min, counted in a Petroff-Hausser chamber, suspended in PBS to 2.5 × 10⁸ bacteria/ml and stored at 4°C for no longer than 7 days. For experiments employing suspensions of whole, viable S. pneumoniae, the bacteria were prepared by identical methods except that they were not heat-killed and were used on the day of preparation. S. typhimurium was grown in Mueller-Hinton broth enriched with 0.1 mg/ml 2′-deoxyadenosine (Sigma, St. Louis, MO) and containing 0.04 mCi methyl-³H-thymidine. The labeled bacteria were washed three times in PBS, suspended to 5 × 10⁸/ml and used the same day.

Leukocytes. Leukocytes were separated from heparinized blood of healthy adults by dextran sedimentation, washed and suspended in Hanks' Balanced Salt Solution (Grand Island Biological Co., Grand Island, NY) with 0.1% gelatin (gel-HBSS) at a concentration of 1 × 10⁷ neutrophils per ml.

Opsonins. Sera from three healthy adult donors were pooled, divided into 1 ml aliquots, and used throughout these experiments. Once thawed, unused serum was discarded. Type-specific pneumococcal antibody was kindly determined by Dr. Gerald Schiffman, Downstate Medical Center, Brooklyn, NY, using a sensitive radioimmunoassay method with results expressed in ng of antibody nitrogen (ngAbN) per ml [12]. The serum pool contained normal levels of antibody to types 7, 18, and 23 (214, 668, and 363 ngAbN/ml, respectively) but a lower concentration of type 6 antibody (92 ngAbN/ml). To study opsonization in the absence of the classical complement pathway, undiluted serum was chelated by the addition of 0.1 M ethyleneglycoltetraacetic acid in the presence of 0.1 M MgCl₂ (MgEGTA) to each 1 ml of serum for a final chelator concentration of 0.01 M [13]. Calcium- and magnesium-free gel-HBSS was used to dilute the chelated serum. In addition, serum was obtained from a patient with inherited complete C2 deficiency. This serum contained normal levels of all other classical and alternative complement components [14], and had a normal capacity for alternative pathway opsonization of Staphylococcus epidermidis and of S. aureus, Wood 46 [15]. All sera were divided into aliquots and stored at −70°C. Heat-inactivated serum was prepared by incubation at 56°C for 30 min.

Bacterial opsonization.
Bacteria were opsonized by adding 0.4 ml of the desired opsonin to a plastic tube (12 × 75 mm, Falcon, Oxnard, CA) containing 0.1 ml of radiolabeled bacteria. After 30 min incubation at 37°C, the mixture was centrifuged at 1600 g for 15 min. The supernatant was discarded, and the bacterial pellet was resuspended in 0.5 ml gel-HBSS.

Determination of phagocytosis. The phagocytosis mixtures consisted of the opsonized bacterial suspension and 0.5 ml leukocytes suspended in gel-HBSS. The bacteria:neutrophil ratio was 5:1 for experiments with labeled pneumococci and 20:1 for studies with salmonella. Immediately following addition of leukocytes, the mixtures were aliquoted in 250 μl volumes to four polypropylene vials (Bio-vials, Beckman, Chicago, IL), which were agitated in a shaking incubator (Model 25, New Brunswick Scientific, New Brunswick, NJ) at 37°C. After 30 min incubation, 2 ml cold PBS was added to each of two vials to arrest phagocytosis. To determine neutrophil-associated bacteria, vials were centrifuged for 5 min at 160 g at 4°C, the pellets were washed twice in 2 ml cold PBS and were resuspended in 2.5 ml scintillation liquid (toluene with 20% Bio-Solve-3 in Fluoralloy, Beckman). The total bacterial population in the phagocytosis mixture was determined by adding 2 ml cold distilled water to each of the two remaining vials, centrifuging at 1600 g for 15 min and resuspending the pellet in 2.5 ml scintillation liquid. The samples were counted in a liquid beta scintillation counter (Beckman, LS-250). Chemical quenching was similar for all samples. Duplicate values were within 5% agreement, and the average of the duplicate values was used in all calculations. Bacterial uptake by the PMN leukocytes (% phagocytosis) at a given sampling time was calculated according to the formula:

% phagocytosis = (cpm in 160 g pellet / cpm in 1600 g pellet) × 100

The cpm in the denominator was always at least 50 times greater than background radioactivity. Pneumococcal and salmonella opsonic requirements were defined using dilutions of normal pooled serum (NPS), MgEGTA chelated NPS, C2 deficient serum and NPS heated at 56°C for 30 min (ΔNPS). Bacteria were incubated with each opsonin for 30 min, and results were expressed as percent bacterial phagocytosis following an additional 30 min incubation of opsonized bacteria with leukocytes. Each value was the mean of at least four experiments, except when C2 deficient serum was used and duplicate studies were performed. Differences were calculated by Student's t-test for unpaired observations.

The effect of Pn and PCP on serum opsonic activity was studied by preincubating serum with Pn, PCP, or an equal volume of saline. Residual opsonic activity, which was reflected by the percent bacterial phagocytosis, was expressed as a percentage of the saline control. Normal serum was diluted to 2.5% and C2 deficient serum was diluted to 5% for the salmonella opsonic assay. Serum was diluted to 10% for the pneumococcal opsonic assay. Each value was the mean of at least four experiments unless otherwise indicated. The significance of opsonic inhibition was calculated using the difference in percent phagocytosis between serum preincubated with saline and with Pn or PCP. Student's t-test for paired observations was employed. Experiments with 5 and 1 μg/ml of PCP were performed once and were not analyzed for significance.

Hemolytic C3–9 titration. The assay for hemolytic C3–9 activity was carried out as previously reported [16,17].
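As a simple illustration of the uptake formula above, the calculation with duplicate-vial averaging looks like this (the counts are hypothetical values, not data from the study):

    # Sketch of the % phagocytosis calculation described above.
    # Counts (cpm) are hypothetical; duplicate vials are averaged first.

    def percent_phagocytosis(pellet_160g_cpm, total_1600g_cpm, background_cpm=0.0):
        """Percent uptake = (neutrophil-associated cpm / total cpm) * 100."""
        pellet = sum(pellet_160g_cpm) / len(pellet_160g_cpm) - background_cpm
        total = sum(total_1600g_cpm) / len(total_1600g_cpm) - background_cpm
        return 100.0 * pellet / total

    # Duplicate vials for one phagocytosis mixture (hypothetical cpm values).
    print(percent_phagocytosis([4120.0, 4050.0], [9800.0, 9950.0]))  # ~41%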
Test samples were serially diluted in EDTA buffer. To 1.0 ml of each dilution of test sample was added 1.0 ml of EDTA buffer. EAC1(guinea pig)4(human)2(guinea pig) was prepared at T-max, and 1.0 ml of EAC142 (5 × 10⁷/ml) was added to each tube. After 60 min incubation at 37°C, 4.5 ml of cold normal saline was added, and the tubes were centrifuged. The optical density (412 nm) of the supernatant fluid was determined and results were expressed as 50% lysis. Appropriate controls and complete blanks were run simultaneously.

RESULTS

Determination of opsonic requirements. The opsonic requirements of pneumococcal types 6, 7, 18, and 23 were investigated by incubating bacteria in dilutions of normal and MgEGTA chelated serum and comparing the kinetics of neutrophil phagocytosis of the opsonized bacteria. The opsonic requirements of these serotypes are summarized in Table 1. All four types were opsonized equally well in normal and chelated serum at the 90% and 40% serum concentrations. Types 6 and 23 were also opsonized equally well in normal and chelated 20% serum. Types 7 and 18 were opsonized less effectively in concentrations of chelated serum less than 40%, although type 18 was opsonized more efficiently than the other three types in 10% and 5% chelated serum. Heat-stable factors provided negligible opsonic activity for Pn 6, 7, and 23, and slight activity for Pn 18. Therefore, these pneumococcal strains require complement for optimal opsonization and utilize the alternative complement pathway.

S. typhimurium was effectively opsonized in normal, MgEGTA chelated and C2 deficient serum, as illustrated in Table 2. There was equal opsonic activity in 20% and 10% chelated and normal serum, while 5% and 2.5% chelated serum provided less opsonic activity. C2 deficient serum provided less opsonic activity than either normal or chelated serum. However, at 5% and 2.5% concentrations the opsonic capacities of chelated and C2 deficient serum were similar. Therefore, this strain of S. typhimurium is effectively opsonized via the alternative complement pathway.

Effect of whole pneumococci on serum hemolytic C3–9 activity and opsonic capacity. Whole, viable S. pneumoniae were suspended to a concentration of 5 × 10⁸/ml in normal and C2 deficient serum. The mixtures were incubated at 37°C for 60 min, and aliquots of serum were withdrawn at 0 and 60 min; organisms were removed by centrifugation, and the serum was frozen at −70°C for hemolytic C3–9 titrations. All four pneumococcal types decomplemented normal serum; hemolytic C3–9 activity was reduced to 28% to 53% of the time 0 titer in the single experiment illustrated by Table 3. Pn 6 was somewhat more active, and Pn 23 was least active, in reducing serum hemolytic complement activity. To demonstrate the effect whole pneumococci might have on serum opsonic capacity, pneumococci were separated by centrifugation (1600 g × 15 min), and the serum was assayed for opsonic activity with salmonella. All four pneumococcal types reduced the opsonic capacity of normal serum (Fig. 1). There was a reduction in both the initial rate of phagocytosis and in the maximal number of salmonella phagocytized. There was a greater reduction of serum opsonic capacity after incubation with Pn 18 than with Pn 6, 7, or 23 (Table 4). Differences between percent phagocytosis in serum preincubated with saline and with whole pneumococci were all significant at a level of P < .05.
It was also demonstrated in a single experiment that Pn 7 and Pn 18 reduced the opsonic capacity of C2 deficient serum for salmonella (Table 5). While Pn 18 reduced opsonic capacity to the same extent in C2 deficient and normal serum, Pn 7 reduced opsonic capacity to a slightly greater extent in normal serum, suggesting enhancement of the Pn 7 inhibition by the classical complement pathway. No free PCP could be detected in the supernatant of the washed pneumococcal suspensions measured by counterimmunoelectrophoresis, which in our laboratory detects as little as 0.15 μg/ml of polysaccharide using a standard assay [18]. (Saline incubation with serum reduced the C3–9 titer by 2%.) This finding suggests that the complement activating properties of whole pneumococci were not due to soluble polysaccharide in these experiments.

Effect of PCP on serum hemolytic C3–9 activity and opsonic capacity. Purified PCP of each serotype was suspended in normal and C2 deficient serum. The mixtures were incubated at 37°C for 60 min, and aliquots of serum were withdrawn at 0 and 60 min and immediately frozen at −70°C for hemolytic C3–9 titrations. Concentrations of polysaccharide between 10 and 100 μg/ml reduced the hemolytic C3–9 titer of normal serum in the single experiment illustrated by Table 3. Serum C3–9 activity was reduced to 53% to 73% of time 0 by polysaccharide of types 6, 7, and 18. These three polysaccharides were incubated with serum, and the serum was assayed for opsonic capacity with salmonella.

Fig. 1: Effect of whole pneumococci (Pn) on serum opsonic capacity. Serum opsonic capacity was directly related to the percent phagocytosis of salmonella using normal serum preincubated for 60 min with types 6, 7, 18, and 23 pneumococci, and serum preincubated with an equal volume of saline. Following preincubation, serum was diluted to 2.5% and used as an opsonin for salmonella. Opsonized salmonella were incubated with leukocytes, and percent phagocytosis was determined at 15 and 30 min. Each value represents the mean of four experiments. Significance of these differences is shown in Table 4.

Polysaccharide of types 6, 7, and 18 partially inhibited serum opsonization while PCP 23 had little effect (Fig. 2). Lower concentrations of PCP were also tested for their effect on serum. Polysaccharide concentrations of types 6 and 18 as low as 1 μg/ml were as active as 10 μg/ml in reducing serum opsonic capacity (Table 4). Type 7 polysaccharide showed optimum activity at a concentration of 50 μg/ml, while type 23 polysaccharide produced the greatest inhibition when used in lower concentrations. Reduction of serum opsonic capacity was directly related to the degree of serum decomplementation produced by each PCP (Tables 3 and 4). Polysaccharide of types 7 and 18 also reduced the opsonic capacity of C2 deficient serum for salmonella in a single experiment (Table 5). As with the whole organisms, type 7 polysaccharide reduced the opsonic capacity of normal serum to a slightly greater extent than of C2 deficient serum, while PCP 18 was equally active in both sera. To further characterize the effect of capsular polysaccharide on pneumococcal opsonic capacity, serum was preincubated with each of two polysaccharides (PCP 7 and 23) and opsonic capacity for the homologous pneumococcal type was determined. Table 6 illustrates the results of a representative experiment.
Fig. 2: Effect of PCP on serum opsonic capacity. Serum opsonic capacity was directly related to the percent phagocytosis of salmonella using normal serum preincubated for 60 min with 50 μg/ml polysaccharide from types 6, 7, 18, and 23 pneumococci, and serum preincubated with an equal volume of saline. Following preincubation, serum was diluted to 2.5% and used as an opsonin for salmonella. Opsonized salmonella were incubated with leukocytes, and percent phagocytosis was determined at 15 and 30 min. Each value represents the mean of four experiments. Significance of these differences is shown in Table 4.

Type 7 polysaccharide at concentrations as low as 5 μg/ml produced a striking reduction in serum opsonic capacity for type 7 pneumococci. Type 23 polysaccharide concentrations as low as 1 μg/ml produced a marked reduction in serum opsonic capacity for type 23 pneumococci.

DISCUSSION

Streptococcus pneumoniae is a clinically significant pathogen. Although antibiotics have reduced mortality, the case-fatality rate for bacteremic pneumococcal pneumonia has remained between 20% and 30% over the past decade [19,20]. The frequency of pneumococcal disease has also not changed appreciably. Nielsen reported in 1945 that the majority (48%) of bacteria isolated from the middle ears of children with acute otitis media were S. pneumoniae [21]. Three decades later, Howie found a similar frequency of pneumococcal isolates (35%) from children with this condition, and also observed that first episodes of pneumococcal otitis media during infancy predispose children to recurrent acute otitis media [22,23]. S. pneumoniae is also the most frequent cause of occult bacteremia in febrile children and in patients with post-splenectomy sepsis [24,25]. Eighty-three distinct pneumococcal serotypes have been described, but only a few types are commonly encountered in patients with pneumococcal infection. These types have remained remarkably constant over the past decade: the majority (78–84%) of isolates from patients with pneumococcal bacteremia, meningitis, and otitis media are types 1, 3, 4, 6, 7, 8, 9, 12, 14, 18, 19, and 23 [19,26,27]. Why certain serotypes predominate and how S. pneumoniae injures the host are questions that remain unanswered.

Fine observed that pneumococcal serotypes vary in their specific opsonic requirements and described three patterns of pneumococcal opsonization [7]. Several types (7, 12, 14, and 25) decomplemented serum by activating the alternative pathway of complement in the absence of specific antibody, other types (3, 4, and 8) reduced C3 levels by the interaction of antibody with alternative pathway components, and type 1 did not activate C3 via the alternative pathway even in the presence of specific antibody. These differences in complement fixing ability may give selective advantages to certain serotypes for host invasion and injury. Patients lacking critical pneumococcal opsonins, either because of primary deficiency or secondary to acute infection, may be unable to phagocytize and kill these microbes. Dorff, Coonrod, and Rytel, using a rapid immunoprecipitin method, counterimmunoelectrophoresis, have detected from 0.1 to 64 μg/ml of PCP in the serum of patients with bacteremia, meningitis, and pneumonia due to types 1, 3, 4, 6, 7, 9, 12, 15, 22, 23, and 34 [1,2,3,4,18,28]. They noted a progressive decline in the level of serum PCP after initiation of therapy, but antigenemia persisted for longer than one month in 5 of 19 patients [1]. Others have speculated that PCP might be a subcellular virulence factor causing host injury. Pillemer et al.
demonstrated that large amounts (3,000 μg) of types 4 and 14 PCP decomplemented human serum by removal or inactivation of properdin [29]. More recently, PCP of types 1, 4, and 25 in concentrations as low as 10 μg/ml has been shown to reduce C3 levels in C4 deficient guinea pig serum, while PCP of types 2, 3, 14, and 19 did not [10]. Others have observed patients with severe pneumococcal disease due to a variety of serotypes, including types 3, 4, and 19, with low levels of factor B of the alternative pathway and normal levels of two early components (C1q, C4) of the classical pathway [30]. Coonrod and Rylko-Bauer demonstrated that the functional activity of the alternative pathway was markedly depressed during the acute phase of pneumococcal pneumonia, especially in those patients with bacteremia [31]. Thus, we considered that patients with pneumococcal infection and antigenemia may have an acquired alternative pathway defect affecting opsonization of pneumococci.

The pneumococcal serotypes selected for this investigation were all opsonized via the alternative pathway and included both lower and higher numbered types commonly encountered in human disease. Alternative pathway opsonization of Pn 18 was more efficient, most likely due to higher levels of type 18 pneumococcal antibody in the serum pool employed. The opsonic requirements of types 6, 18, and 23 have been more completely defined in a separate report [5]. It is important to note that our observations relate only to these single strains, and that different strains of a given serotype may vary in their ability to activate complement.

Serum incubated with suspensions of whole, viable S. pneumoniae and with purified PCP from each type had reduced biologic activity of C3, measured by the hemolytic C3–9 assay and by opsonic capacity for Salmonella typhimurium. Opsonization of salmonella was selected as a measure of serum opsonic capacity since the organism is opsonized effectively, and independently of type-specific pneumococcal antibody, via the alternative pathway. Suspensions of whole S. pneumoniae showed considerably greater activity than the homologous PCP in reducing hemolytic C3–9 activity. While the quantity of PCP contained in the whole cell preparations was not measured, others have shown for types 1 and 2 S. pneumoniae that 5 × 10⁸ bacteria yielded 1.5 and 2.0 μg of polysaccharide, respectively [32]. This finding suggests that PCP might not be the most active subcellular factor in reducing levels of complement. However, the same suspensions of whole types 6, 7, and 23 S. pneumoniae were no more active than the highest concentrations of the homologous PCP in reducing serum opsonic capacity. Whole type 18 S. pneumoniae were slightly more active than type 18 PCP. It is possible that PCP aggregated on the surface of this strain was more active or that there was simply more PCP in that particular whole cell preparation. We were unable to explain the different results, confirmed in repeated experiments, obtained with the two assays. While these assays measured different endpoints along the complement cascade, the measurement of opsonic capacity would more closely reflect changes in the functional process of opsonization. Capsular polysaccharide in concentrations as low as 1 μg/ml significantly reduced the serum opsonic capacity. For at least two types (6 and 18), maximum opsonic inhibition was present at the lowest concentration of PCP studied, and inhibition produced by PCP 7 and 23 was maximal in the mid-range of the PCP concentrations used.
Repeated experiments confirmed the greater activity of PCP 7 and 23 in this mid-range. Suspensions of whole S. pneumoniae and PCP of types 7 and 18 also reduced the opsonic capacity of C2 deficient serum, providing additional evidence that these polysaccharides primarily activated C3 via the alternative pathway. The effect of PCP on serum opsonic activity for the homologous pneumococcus was greater than its effect on salmonella opsonization. The greater inhibition of pneumococcal opsonization may have been due to binding of type-specific antibody by PCP. We have shown in previous experiments that antibody enhances pneumococcal opsonization in the presence of complement [5]. Some patients with bacteremic pneumococcal disease show absent or reduced type-specific antibody production associated with high serum levels of PCP [3]. We observed that bacteremic patients with PCP antigenemia have reduced serum complement levels and reduced pneumococcal opsonic activity [33]. Testing these patient sera with the salmonella opsonic assay showed that the opsonic defect affected the alternative pathway. Opsonic activity and serum complement increased during convalescence with the disappearance of PCP antigen. These in vivo observations support the in vitro experiments with PCP antigen. Thus, PCP antigen can activate complement and reduce serum opsonic activity. An alternative pathway opsonic defect might be expected to have its greatest impact on host defenses during the early phase of infection, prior to the development of the specific antibody required for utilization of the more efficient classical pathway of opsonization. However, PCP may not be unique in its ability to activate complement, since purified pneumococcal cell wall also activates the alternative pathway [34] and inhibits neutrophil phagocytosis [35]. Pneumococcal M protein [36] and C polysaccharide [37] may also activate complement, since these elements may in part determine opsonic requirements. It is possible that several or all of these subcellular elements accounted for the greater effect of whole pneumococci compared with PCP on hemolytic complement activity in our experiments. The association of increased PCP levels and increased mortality in bacteremic patients [3] may simply reflect a greater concentration of invading bacteria and, as such, a greater concentration of all the subcellular elements, including PCP. To define the relative activities of these elements in inhibition of alternative pathway opsonization, studies are required to compare cell wall, cell membrane, capsular polysaccharide, and whole cell preparations from several strains and serotypes of S. pneumoniae in human serum.
2014-10-01T00:00:00.000Z
1978-09-01T00:00:00.000
{ "year": 1978, "sha1": "4ebc8347902e7d2df3c0361747085ffa439d0d05", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "4ebc8347902e7d2df3c0361747085ffa439d0d05", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
86754785
pes2o/s2orc
v3-fos-license
Integrating Backstepping Control of Outdoor Quadrotor UAVs

In this paper an improved approach is presented for integrating backstepping control of outdoor quadrotor UAVs. The controller uses the approximated nonlinear dynamic model, while for simulation or test purposes the quadrotor can be modeled either with the precise or with the simplified model. A hierarchical integrating backstepping control algorithm was constructed that is capable of handling every effect in the dynamic model while successfully ignoring realistic measurement noises. The hierarchical control structure consists of position, attitude and rotor control, extended with path design with continuous acceleration and/or continuous jerk. The state estimation is based on sensor fusion. The control parameters can be easily tuned. Adaptive laws are elaborated for mass and vertical disturbance force estimation. The tracking algorithm is able to follow the prescribed path with small error. The sensory system and the state estimation are prepared for outdoor applications. The embedded control system contains a HIL extension to test the control algorithms under real time conditions before the first flight.

Introduction

The research field of unmanned aerial vehicles (UAVs) is highly prospering these days. Although the field was mainly motivated by possible military applications, civilian usage is emerging quickly. Several military and civilian professionals are interested in developing an autonomous mini unmanned outdoor quadrotor helicopter. The benefits of such a system are significant; in the near future the quadrotor may be able to precisely follow a predefined path while performing a measurement series, such as surveillance above a predefined territory. For the control of quadrotor helicopters several control algorithms can be considered, including linear and nonlinear algorithms [1] and soft-computing algorithms [2]. In the domain of nonlinear control algorithms, the most popular technique is the backstepping approach, although several other techniques can be found in this field, including the sliding mode technique [3] and feedback linearization control algorithms [4]. A full state backstepping algorithm is presented in [5].

The paper is organized as follows. Section 2 presents the kinematic and dynamic models of quadrotors. Section 3 describes the standard backstepping control for single variable systems. Section 4 presents the hierarchical backstepping control of UAVs and the cascade controller structure. Section 5 describes the path design and tracking methods. Section 6 presents the adaptive extension for mass and vertical force identification. Section 7 illustrates the efficiency of the developed control approaches using simulation. Section 8 deals with the embedded control realization and the HIL test. Section 9 concludes the paper.

Dynamic model of quadrotors

2.1 Kinematic model of aerial vehicles

It can be assumed that a coordinate system (frame) $K_E$ is fixed to the Earth and may therefore be considered an inertial frame of reference; in this paper the frame considered fixed is the NED frame. Another coordinate system, fixed to the center of gravity of the quadrotor, is $K_H$; it can be described by its position $\xi = (x, y, z)^T$ and orientation (RPY angles) $\eta = (\Phi, \Theta, \Psi)^T$ relative to $K_E$. The translational and angular velocities $v$ and $\omega$ of the helicopter are given in $K_H$.
The orientation may be described by the (orthonormal) matrix $R_t$ as follows (with the shorthand $C_x = \cos x$, $S_x = \sin x$):

$R_t = \begin{pmatrix} C_\Psi C_\Theta & C_\Psi S_\Theta S_\Phi - S_\Psi C_\Phi & C_\Psi S_\Theta C_\Phi + S_\Psi S_\Phi \\ S_\Psi C_\Theta & S_\Psi S_\Theta S_\Phi + C_\Psi C_\Phi & S_\Psi S_\Theta C_\Phi - C_\Psi S_\Phi \\ -S_\Theta & C_\Theta S_\Phi & C_\Theta C_\Phi \end{pmatrix}$

The relationship between $\dot\xi$, $\dot\eta$ and the translational and angular velocities $v$ and $\omega$ of the helicopter can be described as $\dot\xi = R_t v$ and $\dot\eta = R_r \omega$, from which the time derivative of $\omega$ can also be written. Notice that the kinematic model is similar for both fixed wing and quadrotor UAVs.

Dynamic model

The concept of the quadrotor helicopter is shown in Fig. 1. Each of the helicopter's four actuators exerts a lift force which is proportional to the square of the angular velocity $\Omega_i$ of the actuator. The BLDC motors' reference signals can be programmed in $\Omega_i$. The resulting torque and lift force can be described in terms of the helicopter and rotor constants $l$, $b$, $d$ (arm length, lift and drag coefficients), and the force $F$ can then be rewritten accordingly. The gravitational force points along the negative $z$-axis. The gyroscopic effect can be represented using the rotor inertia $I_r$ and the third unit vector $k$. The aerodynamic friction at low speeds can be well approximated by the linear formulas $F_a = -K_t v$ and $T_a = -K_r \omega$. Using the equations above, the motion equations of the quadrotor can be derived.

Simplified dynamic model

A simplified dynamic model of the quadrotor can be described by disregarding certain effects and applying the corresponding approximations. For slow helicopter motion all the aerodynamic effects can be neglected; in practice this means that $K_t$ and $K_r$ may be approximated by zero matrices. Slow motion in the lateral directions means little roll and pitch angle change, therefore $R_r$ can be approximated by a 3-by-3 unit matrix. Such a simplification cannot be applied to $R_t$. The six resulting equations are the ones that can be found in [3,6]; because of their importance for later use in the paper, they are repeated there.

Rotor dynamics

Each of the four brushless DC motors' dynamics can be represented by the first-order model (14).

Standard integral backstepping

In order to compensate disturbance effects in steady state, the usual way is to add the error integrals to the control components. Hence we first present a standard integral backstepping control (IBC) approach which will be used many times in the sequel. To our knowledge the first publication in this field appeared in [7] for motor control, and the approach later became popular also in quadrotor helicopter control. Let us consider the simple single variable system $\dot x_1 = x_2$, $\dot x_2 = a + bu$ ($a$ and $b$ may be nonlinear) and define the errors and their derivatives in the standard way. The virtual control $x_{2d}$ is chosen so that, for a suitable Lyapunov function $V$, the derivative $\dot V$ becomes negative semidefinite. If the gains are chosen taking into account the sampling time and the integral control, the closed-loop characteristic polynomial (21) can be computed.

Hierarchical backstepping control

A full state backstepping algorithm is presented in [5] without error integral action. There the control law is obtained step-by-step through the stabilization of three virtual subsystems, and high order derivatives of the path are needed, which can cause numerical problems. The helicopter's dynamic model that was shown in Section 2 is comparable to that in the cited paper. In [3,6,8], a backstepping method is applied to the simplified dynamic model of the quadrotor. These serve as the basis of the method presented in this section; a minimal single-variable sketch is given below. The next subsections focus on the construction of an algorithm that is capable of explicitly handling all the effects appearing in Eq. (9), while ignoring realistic measurement noises and tolerating disturbances.
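To make the standard IBC design above concrete, here is a minimal simulation sketch for the single-variable system $\dot x_1 = x_2$, $\dot x_2 = a + bu$. The gains, the plant terms and the reference signal are illustrative assumptions of ours, not values from the paper:

    # Minimal integral-backstepping (IBC) sketch for x1' = x2, x2' = a + b*u.
    # Gains and plant terms are illustrative assumptions, not the paper's values.
    import math

    c1, c2, lam = 4.0, 4.0, 1.0       # positive design gains
    b = 1.0                            # input gain (assumed known, nonzero)
    a = lambda x1, x2: -0.5 * x2       # example drift term (assumption)

    x1, x2, chi = 0.0, 0.0, 0.0        # state and integral of tracking error
    dt, T = 0.001, 5.0

    t = 0.0
    while t < T:
        # Reference and its derivatives (here a sinusoid).
        x1d, dx1d, ddx1d = math.sin(t), math.cos(t), -math.sin(t)

        e1 = x1d - x1                  # tracking error
        de1 = dx1d - x2
        x2d = dx1d + c1 * e1 + lam * chi   # virtual control with integral term
        e2 = x2d - x2

        # Control law enforcing  e2' = -e1 - c2*e2  (Lyapunov-based choice).
        u = (ddx1d + c1 * de1 + lam * e1 + e1 + c2 * e2 - a(x1, x2)) / b

        # Euler integration of the plant and the error integral.
        dx1, dx2 = x2, a(x1, x2) + b * u
        x1 += dt * dx1
        x2 += dt * dx2
        chi += dt * e1
        t += dt

    print(f"final tracking error: {abs(math.sin(T) - x1):.2e}")

With $V = \tfrac{1}{2}\lambda\chi^2 + \tfrac{1}{2}e_1^2 + \tfrac{1}{2}e_2^2$ and the choices above, $\dot V = -c_1 e_1^2 - c_2 e_2^2 \le 0$, which is the Lyapunov argument behind the standard IBC form.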
Cascade controller structure

Since the helicopter is underactuated, the concept is that the helicopter is required to track a path defined by its $x, y, z, \Psi$ coordinates. The helicopter's roll and pitch angles are stabilized to 0 internally. The control algorithm can be divided into three main parts. At first, the translational part of the vehicle dynamics is controlled, which then produces the two missing reference signals for the attitude control system. The third part is responsible for generating the input signals of the BLDC motors. The cascade structure of the controller is shown in Fig. 2, where the indexes $d$ and $m$ denote desired and measured values, respectively. Kalman filters can tolerate the difference between the measurement frequencies of the position and orientation (GPS or vision system) and of the acceleration and velocity (IMU). The sampling time of the motor control is set to 10 ms.

Position control

Using the approximating dynamic model (neglecting aerodynamic friction etc.), the translational motion equations can be brought to the standard form, so the standard IBC can be applied to every component of the position vector; for the z-component this follows directly. It is evident to manage the errors and virtual controls as usual in the standard form; however, $a$ cannot be expressed from the standard form in the lateral channels because there $b = 0$. Therefore we can keep the content of the brackets until the desired accelerations are obtained. The reason why these signals can be considered as reference signals is that, as the helicopter approaches the desired coordinates, they converge to zero. Notice that the errors, error integrals and virtual controls have to be determined separately in both the x and y directions according to the standard IBC form.

Attitude control

The design is similar to the standard form and can be applied component-wise. Only the rotation around the x-axis will be considered; the other rotations can be managed similarly.

Rotor control

The calculation of $u_m$ differs a little from the method used for the other inputs. Let us consider one of the rotors and, for simplicity, suppress the index.

State estimation

The control algorithms need the state variables, which may be unmeasured or noisy; hence they have to be estimated from the available sensor measurements. For outdoor applications the state estimation is based on sensor fusion of GPS, IMU and magnetometer. The approach was presented in our earlier paper [9]. The quaternion and 3xEKF (Extended Kalman Filter) based technique can well tolerate the large difference between IMU and GPS sampling frequencies and can be applied to any type of outdoor vehicle. The efficiency of the method was demonstrated for real flight data of a fixed wing propeller driven UAV; however, the method can also be applied to the outdoor quadrotor UAVs considered in this paper. Besides the unit quaternion, the orientation (attitude) is also presented in the form of the Euler roll, pitch, yaw angles $(\Phi, \Theta, \Psi)$. The biases of the sensors are corrected online.

Parameter tuning of the controller

The tuning of the controller parameters is very simple because only positivity has to be guaranteed. The numerical values are tightly related to the desired speed and to the order of magnitude of the forces and torques influencing the power consumption. It should also be taken into consideration that increasing the speed of the control may cause saturation in the actuators. Simulation experiments can help in the parameter choice based on Eq. (21).
Path design and tracking

The typical motion of the quadrotor helicopter can be pieced together from takeoff, hovering, attitude change in fixed position and motion along a straight line in a fixed direction. These sections must be connected with continuous acceleration or possibly with smooth acceleration (continuous jerk). In order to spare power, the goal is to design the path in Cartesian space with continuous/smooth linear and angular accelerations. It can be assumed that the prescribed information for the path is given in the form of the sequences $\{\xi_i\}_{i=1}^{n}$ and $\{\Psi_i\}_{i=1}^{n}$; the path information is therefore a sequence of 4-dimensional vectors with scalar components. Hence, the path design problem can be reduced to the path design of a fictitious robot with the joint vector $q = (x, y, z, \Psi)^T$, or a subset of it, and can then be solved by repeating the path design in a single scalar variable with bounded and continuous/smooth second order derivative. For the two cases different algorithms will be presented.

Path design with continuous acceleration

The path can be divided into segments, and in order to obtain a smooth solution, continuity of position, velocity and acceleration is required at the segment boundaries (a minimal polynomial sketch of such a segment is given before the simulation results below).

Path design with continuous jerk

In the case of motion along a straight line in a fixed direction, the yaw angle $\Psi_d$ must be constant on the traveling portion instead of linear as above, while the acceleration has to be smooth, i.e. the jerk is continuous. For this purpose a special path design is suggested in which the scalar path variable is shaped accordingly. Since the path evaluations are performed in normalized time, a precise technique was elaborated to convert paths obtained for different $\tau$ and $\tau_0$ values to absolute time, taking into consideration also the sampling time, such that the desired paths remain continuous/smooth. Notice that small spikes in the acceleration could cause large torque/force signals.

The tracking algorithm with filtering and multiple differentiation

The purpose of the control design is to track a predefined trajectory with the smallest possible tracking error. In practice a navigation point must be approximated with a predefined accuracy considering the positions and orientations. The conditions for position tracking ensure that the helicopter will remain in the proximity of the navigation points while keeping the motion continuous, which is supported by the path design. On the other hand, the orientation control algorithm needs the time derivatives of $\Phi_d$ and $\Theta_d$. For robust filtering and differentiation a fictitious control system (integrator plant $1/s$ with a first order serial compensator) is used, where $r$ is the input signal to be differentiated, $y_1$ is the filtered signal and $y_2$ is the numerically differentiated input. The obtained term can be cascaded, considering $y_2$ as the input for the next term. In the state equation of the composite member, $y_1$ is the filtered input $r_f$, $y_2$ is the first derivative $\dot r$ and $y_3$ is the second derivative $\ddot r$.

Adaptive control

The standard IBC can be extended in the direction of parameter and disturbance force identification. The parameter changes are modeled, and the adaptation laws are derived using Lyapunov theory.

Simulation tests

The mechanical parameters of the helicopter and of the BLDC motors with the rotors are based on the planned dimensions and the masses of the purchased elements. These values are summarized in Table 1.
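Before turning to the simulation results: as a minimal illustration of the rest-to-rest scalar segment with continuous acceleration described in the path design section, a quintic polynomial in normalized time suffices. This is a generic construction under boundary conditions we chose ourselves, not necessarily the paper's exact formulas:

    # Quintic rest-to-rest segment in normalized time tau in [0, 1]:
    # s(0)=s0, s(1)=s1, with zero velocity and acceleration at both ends,
    # so position, velocity and acceleration are continuous across segments.
    # Illustrative sketch only (not the paper's exact path-design formulas).

    def quintic_segment(s0: float, s1: float):
        d = s1 - s0
        def s(tau: float):
            # 10 tau^3 - 15 tau^4 + 6 tau^5 satisfies all six boundary conditions.
            pos = s0 + d * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)
            vel = d * (30 * tau**2 - 60 * tau**3 + 30 * tau**4)
            acc = d * (60 * tau - 180 * tau**2 + 120 * tau**3)
            return pos, vel, acc
        return s

    seg = quintic_segment(0.0, 2.0)       # e.g. a 2 m climb in z
    for tau in (0.0, 0.5, 1.0):
        print(tau, seg(tau))              # acceleration is 0 at both endpoints

For the continuous-jerk case a seventh-order polynomial would analogously enforce zero jerk at the endpoints.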
IBC with known system parameters

As the final result of the integration of all the components of the integrating backstepping control (IBC), the tracking of a complex trajectory consisting of a general 3D line followed by a pentagon in the horizontal plane is presented in Fig. 3 for known helicopter mass.

Estimation of the initial helicopter mass

In some cases the initial mass of the helicopter is unknown, for example because it was changed after the last flight. Hence the parameter estimation can help to obtain reliable values of the mass for the IBC control. The estimation results are shown in Fig. 4 for unknown helicopter mass; the starting value was $1.5\,m_{real}$. Besides the mass estimation, the position trajectory and the tracking errors are also shown.

Estimation of varying mass and vertical disturbance force

The estimation results for the varying mass and the vertical disturbance force are shown in Fig. 5, together with the position and the integral errors. The error integration is switched out if $e_{1z}$ is larger than $0.01\,\mathrm{m}$, which is typical during the initial phase of parameter estimation.

Attitude and Rotor Control

The desired $\Phi_d$ and $\Theta_d$ are produced by the position control system and have to be differentiated several times by the cascaded fictitious control systems for use in the attitude IBC control. The rotor controllers also follow the IBC principle, but some modifications were necessary because of the (only) first order dynamics. Controller parameter design for the IBC rotor control was discussed earlier. The attitude control transients and the angular velocities and driving torques of the motors are shown in Fig. 6.

Embedded Control Realization

The hierarchical structure of the controller has already been shown in Fig. 2.

Control architecture

For the embedded realization a hardware architecture was developed that is shown in Fig. 7. The control loop of the helicopter requires accurate position and orientation information. A primary sensor for this is the Crossbow MNAV100CA containing a GPS and an inertial measurement unit (IMU), which provides 3D acceleration, angular velocity and magnetometer measurements together with pressure and temperature information. It contains also 9 servo PWM outputs (not used here). Crossbow's MICRO-VIEW software is also included to assist users with calibration, control, data collection and overall system development. Part of the architecture is the motor control unit; the rotors are driven by brushless DC (BLDC) motors. The on-board computer is a phyCORE-MPC555 equipped with a floating point unit, and the controller can be developed at MATLAB/Simulink level. The internal bus is CAN 2.0. The hardware architecture contains an RF channel providing bidirectional communication between the quadrotor and the ground station, including also a camera unit. The ground station sends commands and reference path information to the CPU. The helicopter sends status information to the ground.

Hardware-in-the-loop test

Because of the complexity of flying systems, it is inevitable to verify their control systems thoroughly before flight. Before implementing the control algorithm on the embedded target, it was tested using the hardware-in-the-loop (HIL) method. The tests were aided by a dSPACE DS1103 board on which the helicopter model and the sensory system's measurements were emulated, while further experiments included the real control architecture and software realizing the control algorithm and the 3xEKF based state estimation. The scheme of the HIL test can be found in Chapter 6 of [10].
Conclusions

The paper has dealt with the problem of modeling and control of outdoor quadrotor helicopters (UAVs). The novelties of the paper are the following:

1. A hierarchical integrating backstepping based nonlinear algorithm was elaborated for stabilization and path tracking, including controller parameter design.
2. This paper's results differ from earlier ones in the tuning of the stabilizing controllers and in that they are integrated with a novel quaternion based approach to state estimation usable for any type of vehicle.
3. Adaptation laws are presented for mass and disturbance force estimation.
4. A technique was elaborated for switching the integrators in/out in the several IBC controllers based on the norm of the error, which can increase especially during parameter estimation.
5. An embedded control architecture was suggested which is general enough for many types of vehicles.

Future research will deal with the control of fixed wing UAVs based on the developed control architecture and sensory system.

Acknowledgement

The work of Zs. Bodó in this paper was supported by the Higher Education Excellence Program of the Ministry of Human Capacities in the frame of the Artificial Intelligence research area of Budapest University of Technology and Economics (BME FIKP-MI/FM).
Rapid runtime learning by curating small datasets of high-quality items obtained from memory

We propose the "runtime learning" hypothesis, which states that people quickly learn to perform unfamiliar tasks as the tasks arise by using task-relevant instances of concepts stored in memory during mental training. To make learning rapid, the hypothesis claims that only a few class instances are used, but these instances are especially valuable for training. The paper motivates the hypothesis by describing related ideas from the cognitive science and machine learning literatures. Using computer simulation, we show that deep neural networks (DNNs) can learn effectively from small, curated training sets, and that valuable training items tend to lie toward the centers of data item clusters in an abstract feature space. In a series of three behavioral experiments, we show that people can also learn effectively from small, curated training sets. Critically, we find that participant reaction times and fitted drift rates are best accounted for by the confidences of DNNs trained on small datasets of highly valuable items. We conclude that the runtime learning hypothesis is a novel conjecture about the relationship between learning and memory with the potential for explaining a wide variety of cognitive phenomena.

S3 Appendix: Modeling fitted drift rates with network confidences

The assumption we made in the main body of the paper, namely that only drift rate changes from exemplar to exemplar, seems like a safe one, but we can eliminate it and more directly estimate the drift for individual exemplars by fitting an appropriate model to participant reaction time data. In consideration of the relatively small number of data points we have for each individual exemplar, we use a simple model based on the shifted Wald distribution. The Wald distribution, also (somewhat misleadingly) called the inverse-Gaussian distribution, describes the distribution of times at which a stochastic process with positive drift γ crosses some defined threshold α. The shifted Wald distribution simply adds an offset θ to the start of the process. The probability density function of the shifted Wald distribution is

$$f(x \mid \gamma, \alpha, \theta) = \frac{\alpha}{\sqrt{2\pi (x-\theta)^3}}\, \exp\!\left[-\frac{\left(\alpha - \gamma\,(x-\theta)\right)^2}{2\,(x-\theta)}\right]$$

for x > θ, where x denotes a reaction time.

Anders et al. [1] recommend the shifted Wald distribution as a simple, interpretable cognitive process model of response times that can be used in situations with relatively few data points. While it most directly applies to decisions with only a single possible response, such as go/no-go tasks, they also note that it can be applied to more complicated tasks to produce a relatively abstract aggregate description of the process producing the entire reaction time distribution. The entropy of neural network responses is a similarly aggregate measure of drift, so this is useful even if, as Matzke and Wagenmakers [2] found, the fitted shifted Wald drift only imperfectly corresponds to the drifts of the individual components for each possible decision in a more complicated model.
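The density above is straightforward to implement; a minimal sketch follows (the function name and vectorised handling are our own, while the parameterisation in terms of drift γ, threshold α, and shift θ is exactly the one defined above):

    import numpy as np

    def shifted_wald_pdf(x, gamma, alpha, theta):
        """PDF of the shifted Wald distribution with drift gamma,
        threshold alpha, and shift theta; defined for x > theta."""
        t = np.asarray(x, dtype=float) - theta
        pdf = np.zeros_like(t)
        pos = t > 0
        pdf[pos] = (alpha / np.sqrt(2.0 * np.pi * t[pos] ** 3)
                    * np.exp(-(alpha - gamma * t[pos]) ** 2 / (2.0 * t[pos])))
        return pdf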
Thus, we can abstractly interpret our task as having only a single outcome that represents any decision being made, without any other change in our earlier logic concerning the interpretation of the parameters. Therefore, for each exemplar, we performed maximum likelihood estimation using the Broyden-Fletcher-Goldfarb-Shanno optimization algorithm to fit a shifted Wald distribution to the human reaction times (normalized as before) induced by the exemplar, where the main parameter of interest is the drift rate γ for each exemplar. We then found the Spearman's rank correlation coefficient between the drift for an exemplar and the mean normalized confidence, as well as the individual confidences, induced by the same exemplar in one hundred neural networks, set up as before and trained on each of the subsets, for both MNIST and Devanagari. Here, we expect a good fit to produce a negative correlation, as low entropy should correspond to a high drift rate.

As seen in Fig A (right graph), for both MNIST and Devanagari, we found a moderate but highly significant negative correlation for networks trained on the good set, lesser (and less significant) negative correlations for random sets and the full dataset, and a positive correlation for networks trained on the bad set. The correlations produced by the good set were also significantly different from the others. Thus, based on the direction, magnitude, and significance of the correlations, networks trained on the good training set again provide the best account of participant data.

Fig A: Spearman rank correlation coefficients between the mean of networks' confidences and fitted drift values for MNIST (left) and Devanagari (right), based on training one hundred randomly-initialized networks on each training set. The left, blue bar is the correlation between the mean network confidence for each exemplar and mean human RT for the same exemplar; the right, orange bar is the mean of the correlations between each network's confidence and mean human RT, with error bars representing standard errors of the means. Asterisks above bars indicate the level at which the correlation was significantly different from zero; asterisks on brackets indicate the level at which pairs of sets of correlations are different from each other: ***: p < 0.001; **: p < 0.01; *: p < 0.05; n.s.: p > 0.05.
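The exemplar-level fitting procedure described above can be sketched as follows (a non-authoritative sketch: the starting point and validity guard are our own assumptions, while the BFGS optimiser and the Spearman correlation step follow the text):

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import spearmanr

    def neg_log_likelihood(params, rts):
        gamma, alpha, theta = params
        if gamma <= 0 or alpha <= 0 or theta >= rts.min():
            return np.inf                  # keep parameters in a valid region
        t = rts - theta
        return -np.sum(np.log(alpha) - 0.5 * np.log(2 * np.pi * t ** 3)
                       - (alpha - gamma * t) ** 2 / (2 * t))

    def fit_drift(rts):
        """Fit a shifted Wald to one exemplar's RTs; return the drift gamma."""
        x0 = np.array([1.0, 1.0, 0.5 * rts.min()])   # crude starting point
        res = minimize(neg_log_likelihood, x0, args=(rts,), method='BFGS')
        return res.x[0]

    # drifts = np.array([fit_drift(rts) for rts in rts_per_exemplar])
    # rho, p = spearmanr(drifts, mean_network_confidences)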
A wind-blown bubble in the Central Molecular Zone cloud G0.253+0.016

G0.253+0.016, commonly referred to as "the Brick" and located within the Central Molecular Zone, is one of the densest ($\approx10^{3-4}$ cm$^{-3}$) molecular clouds in the Galaxy to lack signatures of widespread star formation. We set out to constrain the origins of an arc-shaped molecular line emission feature located within the cloud. We determine that the arc, centred on $\{l_{0},b_{0}\}=\{0.248^{\circ}, 0.018^{\circ}\}$, has a radius of $1.3$ pc and kinematics indicative of the presence of a shell expanding at $5.2^{+2.7}_{-1.9}$ km s$^{-1}$. Extended radio continuum emission fills the arc cavity and recombination line emission peaks at a similar velocity to the arc, implying that the molecular and ionised gas are physically related. The inferred Lyman continuum photon rate is $N_{\rm LyC}=10^{46.0}-10^{47.9}$ photons s$^{-1}$, consistent with a star of spectral type B1-O8.5, corresponding to a mass of $\approx12-20$ M$_{\odot}$. We explore two scenarios for the origin of the arc: i) a partial shell swept up by the wind of an interloper high-mass star; ii) a partial shell swept up by stellar feedback resulting from in-situ star formation. We favour the latter scenario, finding reasonable (factor of a few) agreement between its morphology, dynamics, and energetics and those predicted for an expanding bubble driven by the wind from a high-mass star. The immediate implication is that G0.253+0.016 may not be as quiescent as is commonly accepted. We speculate that the cloud may have produced a $\lesssim10^{3}$ M$_{\odot}$ star cluster $\gtrsim0.4$ Myr ago, and demonstrate that the high extinction and stellar crowding observed towards G0.253+0.016 may help to obscure such a star cluster from detection.

INTRODUCTION

The Central Molecular Zone (hereafter, CMZ), i.e. the inner few hundred parsecs of the Milky Way, hosts some of the Galaxy's densest molecular clouds (Lis & Carlstrom 1994; Bally et al. 2010; Longmore et al. 2012, 2013b; Walker et al. 2015; Mills et al. 2018) and star clusters (known as the Arches and Quintuplet; Figer et al. 1999).

Until recently, the only direct evidence for star formation within G0.253+0.016 was a single water maser (see also Lu et al. 2019b). This evidence has been strengthened considerably by recent high-angular resolution Atacama Large Millimeter/submillimeter Array (ALMA) observations of the maser source, which reveal a small cluster of low-to-intermediate mass protostars, 50% of which are driving bi-polar outflows (Walker et al. 2021). Deep radio continuum observations and additional searches for maser emission have not revealed any further star formation activity (Immer et al. 2012; Mills et al. 2015; Rodríguez & Zapata 2013; Lu et al. 2019a), and all other evidence for star formation comes from indirect energy balance arguments. Lis et al. (2001) model the far-infrared/sub-millimetre spectral energy distribution of G0.253+0.016, and infer that the cloud's luminosity is conceivably generated by four B0 zero-age main-sequence stars. Marsh et al. (2016) report evidence of heated dust emission that follows a tadpole-shaped ridge, which they suggest may result from a chain of embedded protostars. Clouds with the physical characteristics of G0.253+0.016, but which are not already prodigiously forming stars, do not exist within the Milky Way disc (Ginsburg et al. 2012; Urquhart et al. 2018).
Consequently, G0.253+0.016 presents a unique opportunity to study the early phases of high-mass star and cluster formation under the extreme conditions found in the Galactic Centre (Longmore et al. 2012, 2013b; Rathborne et al. 2014a). Recent observational work has set out to categorise G0.253+0.016's internal structure and dynamics in order to better understand its star formation potential. The internal structure of the cloud is complex (Kauffmann et al. 2013; Henshaw et al. 2019). Dust continuum and molecular line observations reveal significant sub-structure, with a few dozen compact cores and filaments detected in both emission and absorption (Johnston et al. 2014; Rathborne et al. 2014b, 2015; Federrath et al. 2016; Battersby et al. 2020; Hatchfield et al. 2020). Gas motions measured on $\sim 0.1$ pc scales are highly supersonic (Henshaw et al. 2020), resulting in widespread shocked gas emission (Kauffmann et al. 2013; Johnston et al. 2014). Federrath et al. (2016) inferred that the internal turbulence in G0.253+0.016 is dominated by solenoidal motion, likely resulting from the strong shear induced by its eccentric orbit around the Galactic Centre. The shear resulting from the background gravitational potential and the cloud's orbital motion may help to explain its morphology (Petkova et al. 2021). The combination of solenoidal gas motion, a strong magnetic field (Pillai et al. 2015), and an elevated critical density threshold for star formation (Kruijssen et al. 2014; Rathborne et al. 2014b; Ginsburg et al. 2018) may explain the overall low star formation rate of G0.253+0.016.

However, there is a complication to this simple picture, in the form of an arcuate, shell-like structure detected within the cloud's interior. It has been detected in a variety of molecular species including SO (Higuchi et al. 2014), NH$_3$ (Mills et al. 2015), HNCO, and SiO (Walker et al. 2021). Both the gas and dust temperature along the rim of the arc appear to be elevated, evidenced by its clear detection in higher-excitation lines of NH$_3$ [Mills et al. 2015 report detections in the (6,6) and (7,7) inversion transitions]. The arc is also co-spatial with the spine of warm dust identified by Marsh et al. (2016). Class I methanol masers, believed to be tracing shocked gas emission that is not directly related to star formation (unlike Class II masers), are furthermore detected in a crescent-like arrangement following the arc emission observed in NH$_3$ (Mills et al. 2015).

Following detailed investigation of the dynamics of G0.253+0.016, Henshaw et al. (2019) demonstrated that the arc is coherent in both projected space and in velocity. The bulk of the emission associated with G0.253+0.016 is spread over a velocity range of $\sim 40$ km s$^{-1}$. In position-position-velocity space, there are at least two cloud components. The "main" component is that which closely resembles G0.253+0.016 as it appears in dust continuum emission, and has a mean velocity of $\sim 37$ km s$^{-1}$. The mean velocity of the component associated with the arc is $\sim 17$ km s$^{-1}$. However, the velocity gradient associated with this latter component is such that this and the main component appear to meet (in position-position-velocity space) towards the south of the cloud. The origin of the arc is unclear.
Higuchi et al. (2014) speculate that the arc may have been generated following a collision between two molecular clouds based on the arc's morphological similarity to the structure generated in numerical simulations of cloud-cloud collisions (e.g. Habe & Ohta 1992; Takahira et al. 2014; Haworth et al. 2015). An alternative hypothesis, however, is that the arc is generated by stellar feedback. If confirmed, this could indicate that G0.253+0.016 is perhaps more active in its star formation than previously thought. In this work, we build on the analysis of Henshaw et al. (2019), and introduce new observations from the Karl Jansky Very Large Array (VLA) to help test this hypothesis, finding that the morphology, dynamics, and energetics of the arc are all consistent to within a factor of a few of those predicted for a simple analytical model of an expanding bubble driven by the wind from a high-mass star.

The paper is organised as follows. In Section 2 we describe the data used in this work, both from Henshaw et al. (2019) and our VLA observations. In Section 3 we outline our main results. Finally, in Sections 4 and 5 we discuss our findings and outline our conclusions, respectively.

ALMA data and scousepy decomposition

This paper makes use of the ALMA Early Science Cycle 0 Band 3 observations of G0.253+0.016 originally presented in Rathborne et al. (2014b, 2015). We summarise the observations here, but refer the reader to the aforementioned papers for a more extensive description. The ALMA 12 m observations cover the full 3′ × 1′ extent of the cloud using a 13-point mosaic. Here, we use emission from the $4(0,4)-3(0,3)$ transition of HNCO, which has proved fruitful to study the internal structure and dynamics of the cloud (Federrath et al. 2016; Henshaw et al. 2019). Rathborne et al. (2015) combine these data with single dish observations from the Millimetre Astronomy Legacy Team 90 GHz Survey (MALT90; Foster et al. 2011; Jackson et al. 2013) to recover the extended emission.

[Figure 1 caption fragment: masers from Mills et al. (2015). Transparent squares are those which lie outside of a ±6 km s$^{-1}$ velocity range around a 2-D velocity plane fitted to the data (see text). Centre: The corresponding centroid velocity map of the arc's parent sub-cloud. The symbols are equivalent to those in the left panel. Right: The peak flux distribution with the VLA radio continuum data overlaid as blue contours. Contours start at 3σ (σ = 0.15 mJy beam$^{-1}$), then 5, 7, 10, 15, and 20σ (Butterfield et al. in preparation).]

The decomposition and clustering of these data follow scousepy and acorns (Agglomerative Clustering for ORganising Nested Structures; Henshaw et al. 2016a, 2019), and again we summarise the procedure here, referring readers to the original paper for details. First, we use scousepy to decompose the spectral line emission into a set of discrete Gaussian components; we fit a total of $\sim 450000$ Gaussian components to $\sim 130000$ spectra (see Figure 2 of Henshaw et al. 2019). We next use acorns to cluster the Gaussian emission features identified by scousepy into hierarchical velocity-coherent regions. Out of the forest of clusters that acorns identifies, four of them dominate the emission profile of G0.253+0.016 (as it appears in HNCO emission), accounting for > 50 per cent of the detected Gaussian components. Of these four clusters, or trees as they are referred to in Henshaw et al. (2019) (owing to the dendrogram nomenclature), two account for the overall physical appearance of G0.253+0.016. The emission associated with the first, the "main" component, is qualitatively most similar in appearance to G0.253+0.016 as it appears in dust continuum emission (Henshaw et al. 2019, see their sect. 4.2).
The emission profile of the second component is clearly associated with the arc focused on here, which previously had been detected in other works in different molecular species (Higuchi et al. 2014; Mills et al. 2015). This finding therefore served as the first evidence that the arc was coherent both in (projected) space and in velocity. In this work, we make use of the data products output from scousepy and related to this latter cloud component to investigate the origins of the arc. In the remainder of the paper, we refer to the component identified by acorns as the parent sub-cloud of the arc.

VLA data

The VLA observations presented in this paper were taken in C band (4-8 GHz) with the C array configuration (5″ resolution). The observations were taken in four separate observing runs in June 2017, with a cadence of ∼2 days between observations. The observations targeted 6 separate fields, with 2 hours on source per field. The observations used J1331+3030 (3C286) as the bandpass calibrator and J1820-2528 as the phase calibrator. The phase calibrator was observed every 35 minutes during the observations. The observations were also set up to observe the full Stokes parameters and therefore we used J1407+2827 as the polarization leakage calibrator. The observations were processed using the Common Astronomy Software Application (CASA) pipeline, provided by NRAO, to calibrate the data.

The continuum data combine the 4-8 GHz frequency coverage (3.8 GHz total bandwidth) of the C band observations. The continuum data used all 4 observing runs, which were combined in the imaging stage of the data reduction. The observations were cleaned using the CASA task tclean. The image was cleaned non-interactively down to a threshold of 0.01 mJy. We used Briggs weighting of 0.5 to improve the sensitivity and resolution of the image. The data were cleaned using the 'multi-scale, multi-frequency synthesis' mode (deconvolver='mtmfs', specmode='mfs') with scales of 0, 4, and 16 pixels to account for the large scale structures present in the field. The synthesised beam size is 6.4″ × 2.9″ with a position angle of −2.5°. The rms noise (estimated from emission-free regions) is 0.15 mJy beam$^{-1}$.

The radio recombination line data presented in this paper combine the H114α, H113α, H110α, H109α, H101α, H100α, and H99α transitions. The radio continuum was subtracted in the uv-plane, using the CASA task uvcontsub, before any imaging was done. Each radio recombination transition was cleaned individually using the CASA task tclean by combining the four observing runs during the imaging process. All recombination line transitions were imaged using the same tclean parameters: 1 km s$^{-1}$ spectral resolution, 6″ × 12″ restoring beam size, and a velocity range of −40 to 99 km s$^{-1}$. The images were cleaned non-interactively using a set noise threshold level of 1 mJy and natural weighting to obtain the best sensitivity possible. The cleaned images were then averaged together using the CASA task immath to improve the signal to noise in the image.
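Schematically, the continuum imaging step corresponds to a CASA call of the following form (a sketch, not the observers' actual script: the measurement-set and image names and the niter value are placeholder assumptions; the remaining parameters are those quoted above):

    # Run inside a CASA session (or: from casatasks import tclean).
    tclean(vis='brick_cband.ms',            # placeholder dataset name
           imagename='brick_cband_cont',    # placeholder image name
           specmode='mfs',                  # multi-frequency synthesis
           deconvolver='mtmfs',             # multi-scale, multi-term
           scales=[0, 4, 16],               # pixels; recovers extended flux
           weighting='briggs', robust=0.5,  # Briggs weighting of 0.5
           threshold='0.01mJy',             # clean down to 0.01 mJy
           niter=100000,                    # assumed iteration cap
           interactive=False)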
Morphology and kinematics

We present a map of the arc in the left-hand panel of Figure 1. The colour scale in this image refers to the peak amplitude of emission features extracted using scousepy (§ 2.1) from the HNCO data. The arc can be clearly identified in this map as the ridge of emission towards the centre of the cloud (highlighted by the thick black contour). We highlight several features of interest in the map.

First, the yellow circle denotes the position of the H$_2$O maser (see also Lu et al. 2019b), which remains the only confirmed site of embedded star formation within G0.253+0.016 (see also Walker et al. 2021). The red diamonds are the locations of H II regions and H II region candidates in close projected proximity to G0.253+0.016 (Rodríguez & Zapata 2013, though note that Mills et al. 2015 argue that the sources within the cloud are spatially filtered peaks of more extended emission, as is also seen in the 5 GHz data presented here). Mills et al. (2015) found a number of Class I CH$_3$OH masers and maser candidates located throughout G0.253+0.016. Rather than tracing the locations of on-going star formation, these most likely trace regions of shocked gas emission (Mills et al. 2015).

To investigate whether any maser sources are associated with the arc, we can compare the positions and velocities of the masers with those of the arc. To do this, we first fit the velocity field of the arc parent cluster (see Figure 1) with a bivariate polynomial (cf. Federrath et al. 2016; Henshaw et al. 2019). The velocity field displayed in Figure 1 shows a clear gradient, which increases from $\sim 0$ km s$^{-1}$ in the (Galactic) north-east to $\sim 25$ km s$^{-1}$ in the south-west of the cloud, which we fit using

$$v_{\rm mod}(l, b) = v_0 + G_l\, l + G_b\, b,$$

where $v_0$ is the systemic velocity of the source, l and b are the Galactic longitude and latitude, and $G_l$ and $G_b$ are the longitudinal and latitudinal components of the velocity gradient, respectively. The best-fit parameters are $v_0 = 14.7$ km s$^{-1}$ and (converting from degrees to physical units) $G_l = 1.2$ km s$^{-1}$ pc$^{-1}$ and $G_b = -1.0$ km s$^{-1}$ pc$^{-1}$. We then cross reference the maser catalogue of Mills et al. (2015) against this function, identifying all masers that lie in the range $v_{\rm mod} \pm 6$ km s$^{-1}$. This velocity limit represents $\approx 2$ resolution elements in the ALMA HNCO data. We highlight the 24 masers that are associated with the arc as opaque magenta squares in Figure 1 (masers outside of this velocity range are shown as semi-transparent magenta squares). These masers clearly follow the curvature of the arc, highlighting the association between the arc and the shocks traced by the Class I CH$_3$OH masers. In addition to these masers, Mills et al. (2015) noted the presence of more extended, non-masing CH$_3$OH emission toward the arc. This is suggested to be quasi-thermal or 'quenched' emission (Menten 1991; Mehringer & Menten 1997), indicative of higher gas densities in this region.
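The plane fit above is a linear least-squares problem; a minimal sketch follows (array names are placeholders for the measured centroid positions and velocities; coordinates are assumed to have been converted to projected offsets in pc so that the gradients come out in km s$^{-1}$ pc$^{-1}$):

    import numpy as np

    def fit_velocity_plane(l, b, v):
        """Least-squares fit of v_mod = v0 + G_l*l + G_b*b."""
        A = np.column_stack([np.ones_like(l), l, b])
        (v0, G_l, G_b), *_ = np.linalg.lstsq(A, v, rcond=None)
        return v0, G_l, G_b

    # Masers within +/- 6 km/s of the fitted plane are deemed arc-associated:
    # keep = np.abs(v_maser - (v0 + G_l*l_maser + G_b*b_maser)) <= 6.0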
In the right-hand panel of Figure 1, we present the 5 GHz radio continuum emission observed with the VLA (blue contours). A striking feature of this emission is that it appears to fill the cavity traced by the arc. The emission within the arc cavity also connects in projection to a ridge of radio continuum emission that traces the outer (Galactic) eastern edge of the cloud. This latter ridge has been noted in earlier studies and has been attributed to the ionising influence of a known O4-6 supergiant located towards the (Galactic) south-east of the cloud (Mauerhan et al. 2010; Mills et al. 2015).

Figure 2 is a histogram of the centroid velocity information extracted in Henshaw et al. (2019). The left panel shows the distribution of centroid velocities for three distinct components. The dark blue histogram shows the arc itself, defined as the region enclosed by the thick black contour in Figure 1. For comparison, the medium blue histogram shows the arc's parent sub-cloud, and the light blue histogram shows all of G0.253+0.016. A Gaussian fit to the dark blue histogram (red dashed Gaussian in Figure 2) gives a mean velocity of $\bar{v} = 17.6$ km s$^{-1}$ with a standard deviation of 4.5 km s$^{-1}$.

A simple geometrical model

To better understand the morphology and dynamics of the arc we construct a simple model of a tilted ring projected on the plane of the sky (cf. López-Calderón et al. 2016; Callanan et al. 2021). The model is described by five free parameters: i & ii) the coordinates of the ring centre on the plane of the sky, $\{l_0, b_0\}$; iii) the radius of the ring, $R_{\rm arc}$; and iv & v) two angles, β and γ, that describe the orientation of the ring relative to the plane of the sky (inclination and position angle; see Callanan et al. 2021).

Formally, we describe the shape of the ring by constructing a local Cartesian coordinate system centred on the ring, with $\hat{x}$ along the line of sight, and $\hat{y}$ and $\hat{z}$ aligned with Galactic longitude and latitude. We begin with a ring lying in the xy plane of this coordinate system (i.e. edge-on from our point of view, and at constant Galactic latitude), whose coordinates can be expressed parametrically as $\mathbf{r} = (R_{\rm arc}\cos\theta, R_{\rm arc}\sin\theta, 0)$ with θ ∈ [0, 2π). The angles β and γ then represent rotations about the y and x axes of this coordinate system, so the coordinates of the ring become $R_y(\beta) R_x(\gamma)\, \mathbf{r}$, where $R_x$ and $R_y$ are the usual rotation matrices for rotations about the x and y axes:

$$R_x(\gamma) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\gamma & -\sin\gamma \\ 0 & \sin\gamma & \cos\gamma \end{pmatrix}, \qquad R_y(\beta) = \begin{pmatrix} \cos\beta & 0 & \sin\beta \\ 0 & 1 & 0 \\ -\sin\beta & 0 & \cos\beta \end{pmatrix}.$$

To find the parameters that best describe the arc, we minimise the distance between the image pixels that we identify as being in the arc and the projected arc model. Formally, our procedure is as follows. For any proposed vector of parameters P describing the arc, we first compute the projected position of the arc in the Cartesian coordinate system defined by the observed image; we denote this projected position $(x_P(\theta), y_P(\theta))$, where θ is a parametric variable that varies from 0 to 2π. The data to which we fit this model consist of the set of N pixels in the image that we have identified as being part of the arc; let $(x, y)_i$ for i = 1...N denote the positions of the centres of these pixels in the image coordinate system. For each pixel i we define the distance to any point on the model arc by

$$d_{i,P}(\theta) = \sqrt{\left[x_i - x_P(\theta)\right]^2 + \left[y_i - y_P(\theta)\right]^2},$$

and we further define $d_{\min,i,P}$ as the minimum of $d_{i,P}(\theta)$ on the domain θ = [0, 2π], i.e. $d_{\min,i,P}$ is the minimum distance from the centre of pixel i to any point on the arc. We define our goodness of fit statistic for a proposed set of model parameters P by

$$\chi^2(P) = \sum_{i=1}^{N} d_{\min,i,P}^2,$$

i.e. the goodness of fit of the model is simply the sum of the squared minimum distances between the arc pixels in the image and the projected arc produced by a given set of model parameters. We find the set of parameters P that minimise this objective function using a standard Levenberg-Marquardt minimisation method (Newville et al. 2014). Our best-fitting model geometry is displayed in Figure 3, where it is overlaid on maps of the peak amplitude and gradient-subtracted velocity field (see § 3.1) of the arc. The circular model forms an ellipse when projected on the plane of the sky. It is centred on $\{l_0, b_0\} = \{0.248^{\circ}, 0.018^{\circ}\}$ and has a radius of $R_{\rm arc} = 1.3$ pc.
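A minimal sketch of the ring construction and goodness-of-fit statistic follows (our own illustrative implementation; the Levenberg-Marquardt minimisation itself, via e.g. lmfit as in Newville et al. 2014, is omitted):

    import numpy as np

    def ring_model(params, n=360):
        """Project a tilted circular ring onto the plane of the sky.
        params = (l0, b0, R, beta, gamma); beta, gamma in radians."""
        l0, b0, R, beta, gamma = params
        theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
        r = np.vstack([R * np.cos(theta), R * np.sin(theta), np.zeros(n)])
        Rx = np.array([[1, 0, 0],
                       [0, np.cos(gamma), -np.sin(gamma)],
                       [0, np.sin(gamma),  np.cos(gamma)]])
        Ry = np.array([[ np.cos(beta), 0, np.sin(beta)],
                       [0, 1, 0],
                       [-np.sin(beta), 0, np.cos(beta)]])
        xyz = Ry @ Rx @ r
        return l0 + xyz[1], b0 + xyz[2]   # y, z map to longitude, latitude

    def chi2(params, l_pix, b_pix):
        """Sum of squared minimum distances from arc pixels to the ring."""
        lm, bm = ring_model(params)
        d2 = (l_pix[:, None] - lm) ** 2 + (b_pix[:, None] - bm) ** 2
        return np.sum(d2.min(axis=1))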
The gradient-subtracted velocity field (see § 3.1) presented in the right-hand panel of Figure 3 is quite complex. Broadly speaking, the velocities transition from blue- to red- and back to blue-shifted emission again in the azimuthal direction. Gradients in the radial direction further complicate this picture. However, the azimuthal trend may be produced by the expansion of the arc. We can verify this with our toy model. First, we assume that the arc is expanding radially and second, that the expansion velocity is constant in azimuth in the plane of the arc. Having fixed the geometry, we perform another least squares fit to determine the expansion velocity, $v_{\rm exp}$, that best describes the velocity field of the arc. We do this in two ways. In the first method, we include only the expansion velocity as a free parameter in the model. In the second method, we introduce a constant in addition to the expansion velocity that represents the systemic line-of-sight velocity of the arc, $v_{\rm arc,0}$. For the former we derive $v_{\rm exp} = 3.3$ km s$^{-1}$. For the latter, we derive $v_{\rm exp} = 7.9$ km s$^{-1}$ and $v_{\rm arc,0} = -3.1$ km s$^{-1}$. The introduction of the additional free parameter in the second method leads to the factor of $\sim 2$ change in the modelled expansion velocity. This latter model is displayed as the coloured dots in the right-hand panel of Figure 3 (the colour scale of the dots matches that of the background velocity field). Finally, we introduce a "control" estimate of the expansion velocity by simply fitting a Gaussian to the distribution of gradient-subtracted centroid velocities shown in Figure 1. We then estimate the expansion velocity as the half-width at half-maximum (HWHM) of this distribution, finding $v_{\rm exp} = 4.2$ km s$^{-1}$.

Each of these estimates is highlighted in Figure 4, which is a position-velocity diagram extracted along the (partial) ellipse shown in Figure 3 (the 0.0 location is taken to be the lowest Galactic longitude point on the arc). The dot-dashed line reflects our kinematic model with $v_{\rm exp} = 3.3$ km s$^{-1}$, the dotted line represents the model with $v_{\rm exp} = 7.9$ km s$^{-1}$, and the horizontal lines represent the HWHM approach with $v_{\rm exp} = 4.2$ km s$^{-1}$. The uncertainties in this modelling approach are considerable, and the velocity field of the arc is more complicated than that produced by this simplified model. Nonetheless, this simple approach demonstrates the plausibility that the morphology of the arc, as well as its dynamics, may be interpreted as an expanding shell. For the sections that follow, we propagate the uncertainties associated with this modelling into our calculations. We use the mean of the expansion velocities as our fiducial estimate, but retain the upper and lower limits for further calculations, $v_{\rm exp} = 5.2^{+2.7}_{-1.9}$ km s$^{-1}$.

Under these assumptions, we can estimate the dynamical age of the arc,

$$t_{\rm dyn} = \frac{R_{\rm arc}}{v_{\rm exp}}.$$

With our best-fitting values $R_{\rm arc} = 1.3$ pc and $v_{\rm exp} = 5.2^{+2.7}_{-1.9}$ km s$^{-1}$, the estimated dynamical age is $t_{\rm dyn} \approx 2.4^{+0.8}_{-1.4} \times 10^5$ yr (assuming a constant expansion velocity).

Mass, energy, and momentum

With an estimate of the expansion velocity we can estimate the energy and momentum associated with the arc. To do this we first estimate a mass using dust continuum emission. We derive the total mass of the arc within the black contour presented in Figure 3 from the 3 mm dust continuum emission from ALMA Cycle 0, first presented by Rathborne et al. (2014b):

$$M_{\rm arc} = \frac{d^2\, S_\nu\, R_{\rm g2d}}{\kappa_\nu\, B_\nu(T_d)},$$

where d is the distance to the source, $S_\nu$ is the integrated flux density (in Jy), $R_{\rm g2d}$ is the gas-to-dust ratio, $\kappa_\nu$ is the dust opacity per unit mass at a frequency ν, and $B_\nu(T_d)$ is the Planck function at a dust temperature, $T_d$. We adopt a dust opacity per unit mass $\kappa_\nu$ following Rathborne et al. (2014b).
[Figure 4 caption: Position-velocity diagram extracted along the (partial) ellipse shown in Figure 3. The 0.0 location is taken to be the lowest Galactic longitude point on the arc. The colour scale reflects the peak amplitude of the HNCO emission. The lines represent different models for the kinematics of the arc velocity field presented in the right panel of Figure 3. The horizontal dashed lines represent the most simplistic approach to estimating the expansion velocity, and reflect the half-width at half-maximum of the gradient-subtracted velocity distribution (see text for details), $v_{\rm exp} = 4.2$ km s$^{-1}$. The dot-dashed and dotted lines correspond to the model velocity fields described in § 3.2. The former of these models has a constant expansion velocity of $v_{\rm exp} = 3.3$ km s$^{-1}$. The latter also has a constant expansion velocity, this time $v_{\rm exp} = 7.9$ km s$^{-1}$, but the model also includes a constant line-of-sight velocity of $v_{\rm 0,arc} = -3.1$ km s$^{-1}$.]

Two considerable sources of uncertainty in our mass estimate are the dust temperature and the gas-to-dust ratio, $R_{\rm g2d}$. For the former, G0.253+0.016 overall shows low dust temperatures of the order $\sim 20$ K (Longmore et al. 2012; Tang et al. 2021). Marsh et al. (2016) find that the dust associated with the arc consists of a cool (< 20 K) and a warm component (up to $\sim 50$ K). In terms of the gas temperature, Mills et al. (2018, see also Ginsburg et al. 2016; Krieger et al. 2017) also find evidence from HC$_3$N emission in G0.253+0.016 for two distinct components, one low-excitation, low-density ($n \sim 10^3$ cm$^{-3}$; $T \sim 25-50$ K) and one high-excitation, high-density ($n \sim 10^5$ cm$^{-3}$; $T \sim 60-100$ K). The gas temperature in Galactic Centre clouds is typically higher than the dust temperature (Krieger et al. 2017), and modelling indicates that even at densities of $10^5$ cm$^{-3}$, the gas and dust are unlikely to be in thermal equilibrium (Clark et al. 2013). The uncertainty on the dust temperature is most likely a factor of 2. Moreover, given that the metallicity in the Galactic Centre is approximately twice solar (Mezger et al. 1979; Feldmeier-Krause et al. 2017; Schultheis et al. 2019, 2021), the gas-to-dust ratio is likely lower by a similar factor (Longmore et al. 2013a; Giannetti et al. 2017). Combining the above uncertainties, we estimate that the arc has a mass of $M_{\rm arc} \sim 2700^{+3000}_{-1400}\,{\rm M}_\odot$, where the fiducial value corresponds to T = 50 K and $R_{\rm g2d}$ = 100 (or T = 25 K and $R_{\rm g2d}$ = 50). We caution that this still likely represents a strict upper limit to the mass of the arc because there are multiple velocity components along the line-of-sight in this location, which are not accounted for in mass derivations from continuum observations. Importantly, the arc spatially overlaps with the dominant sub-cloud in G0.253+0.016, which likely contains most of the mass. Therefore, although the uncertainty on the mass derived from continuum observations is of the order of a factor of $\sim 2$, this additional consideration means that the uncertainty could be higher.

With estimates for the mass and expansion velocity in hand we can now estimate the kinetic energy and momentum of the arc using

$$E_{\rm arc} = \frac{1}{2} M_{\rm arc}\, v_{\rm exp}^2 \qquad {\rm and} \qquad p_{\rm arc} = M_{\rm arc}\, v_{\rm exp},$$

finding $E_{\rm arc} \sim 0.7^{+2.8}_{-0.6} \times 10^{48}$ erg and $p_{\rm arc} \sim 1.4^{+3.1}_{-1.0} \times 10^4\,{\rm M}_\odot$ km s$^{-1}$, respectively. We discuss these values in more detail in § 4.
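For reference, the dynamical age and energetics follow directly from the fiducial numbers; a minimal sketch in cgs units (inputs are the fiducial estimates quoted above):

    PC, MSUN, KMS, YR = 3.086e18, 1.989e33, 1.0e5, 3.156e7  # cgs conversions

    R_arc = 1.3 * PC
    v_exp = 5.2 * KMS
    M_arc = 2700.0 * MSUN

    t_dyn = R_arc / v_exp / YR             # ~2.4e5 yr
    E_arc = 0.5 * M_arc * v_exp ** 2       # ~0.7e48 erg
    p_arc = M_arc * v_exp / (MSUN * KMS)   # ~1.4e4 Msun km/s

    print(f"t_dyn={t_dyn:.2e} yr, E={E_arc:.2e} erg, p={p_arc:.2e} Msun km/s")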
On the nature of the radio emission and the association between the arc and the ionised gas

Radio continuum emission is detected throughout the arc cavity in projection (Figure 1). However, as discussed in § 3.1, the emission extends further to the (Galactic) south and east. While it is certainly possible that the radio continuum emission is physically related to the arc, projection effects may be important. To investigate whether the ionised gas is physically associated with the molecular arc, we extract a radio recombination line (RRL) spectrum from the region marked with a dotted circle in Figure 3. In practice, we stack the emission from a total of seven RRL transitions, namely H114α, H113α, H110α, H109α, H101α, H100α, and H99α. The resulting spectrum is displayed in Figure 5. In addition to stacking, we have smoothed the native spectral resolution of the stacked spectrum by a factor of 4 to further increase the signal-to-noise. We fit the smoothed spectrum using a multi-component Gaussian model using the standalone fitter functionality of scousepy. This procedure uses derivative spectroscopy to determine the number of emission features within each spectrum and their properties (i.e. their peak amplitude, velocity centroid, and width; Lindner et al. 2015; Riener et al. 2019). Using a Gaussian smoothing kernel of standard deviation 1.5 channels, and ensuring that all identified components are above a signal-to-noise ratio of 3, this method predicts a three-component model. The brightest component has a centroid velocity of 22.0 ± 1.4 km s$^{-1}$ and a velocity dispersion of 13.6 ± 1.5 km s$^{-1}$. This velocity is redshifted with respect to the mean of the arc centroid velocity distribution (17.6 km s$^{-1}$), but is consistent to within one standard deviation and is, importantly, inconsistent with the other sub-clouds associated with G0.253+0.016. Note that the combination of the broad lines, spectral smoothing, and the narrow bandwidth makes it difficult to determine if the two lower-brightness emission features are significant. However, they are located at higher velocity and are therefore not relevant here. The consistency in velocity between the RRL emission and the molecular gas tracing the arc, in addition to the spatial relationship between the radio continuum emission and the arc cavity, leads us to conclude that the molecular gas and ionised gas are most likely related.

To help better understand the nature of the ionised gas we estimate the electron density, recombination time, and Lyman continuum ionising flux. The morphological and kinematic match between the radio emission presented here (continuum and RRL emission, respectively) and the arc (Figure 3) gives us confidence that the two are physically related. However, we note that G0.253+0.016 lies close in projection to both thermal and non-thermal radio sources, in particular the arched radio filaments that are oriented perpendicular to the Galactic plane (Morris & Yusef-Zadeh 1989; Yusef-Zadeh 1989). G0.253+0.016 also overlaps in projection with the prominent supernova remnant G0.30+0.00 (Kassim & Frail 1996; LaRosa et al. 2000), and an additional candidate supernova remnant lies directly to the Galactic west of the arc (Ponti et al. 2015). The contribution of non-thermal emission to the radio continuum flux may therefore be non-negligible. We therefore estimate the electron density, recombination time, and Lyman continuum ionising flux in two ways: i) assuming that the radio continuum flux is produced entirely by free-free emission, which provides our upper limit; ii) using the RRL emission to self-consistently predict what the expected free-free continuum flux would be. The total integrated continuum flux within the arc cavity (see the circle in Figure 3) is $\sim 80$ mJy.
This provides our strict upper limit on the free-free emission. The measured RRL integrated intensity in Figure 5 is 5.2 mJy km s$^{-1}$ (4.6 K km s$^{-1}$). Assuming that the RRLs are optically thin and in LTE (typical departure coefficients $\beta_n$ are very close to unity for H99-114α; Storey & Hummer 1995), we can use equation 14.29 of Wilson et al. (2009, 5th ed.),

$$\frac{\int T_L\,{\rm d}v}{T_C} = 6.985 \times 10^3\ \frac{1}{a(\nu, T_e)} \left(\frac{\nu}{\rm GHz}\right)^{1.1} \left(\frac{T_e}{\rm K}\right)^{-1.15} \left(1 + y_{\rm He}\right)^{-1}\ {\rm km\,s^{-1}},$$

where $a(\nu, T_e)$ is the Gaunt factor, assumed to be unity, and $y_{\rm He} = N({\rm He^+})/N({\rm H^+})$, the ratio of helium to hydrogen ions, is assumed to be 0.1. We determine $T_L/T_C \approx 2.5$ km s$^{-1}$ for $T_e = 5000$ K (see above). From this ratio we determine that the expected continuum flux is $\approx 1$ mJy (cf. $\sim 80$ mJy derived from the continuum). This calculation indicates that the continuum likely suffers contamination from non-thermal emission, and the estimated continuum flux from the RRL emission provides a lower bound to the contribution from free-free emission.

The electron density within the shell (assuming that the ionised gas fills the volume of the shell bounded by the arc) follows from the standard free-free relation (Mezger & Henderson 1967; Rubin 1968), which scales as

$$n_e \propto \left(\frac{S_\nu}{\rm Jy}\right)^{0.5} \left(\frac{T_e}{\rm K}\right)^{0.175} \left(\frac{\nu}{\rm GHz}\right)^{0.05} \left(\frac{\theta}{\rm arcsec}\right)^{-1.5} \left(\frac{d}{\rm kpc}\right)^{-0.5},$$

where $S_\nu$ is the integrated flux density at a frequency ν (5 GHz), $T_e$ is the electron temperature (which we assume to be $T_e = 5 \times 10^3$ K, relevant for the electron temperature in Galactic Centre H II regions; Lang et al. 1997; Deharveng et al. 2000; Law et al. 2009), d is the source distance, and θ = 2R = 64″ refers to the angular size of the source. The recombination time is $t_{\rm rec} = 1/(n_e \alpha_B)$, where $\alpha_B$ is the hydrogen recombination coefficient, which we assume is $\alpha_B = 4.5 \times 10^{-13}$ cm$^3$ s$^{-1}$ (valid for an assumed temperature of $5 \times 10^3$ K; Draine 2011a). For the lower and upper bounds on the free-free emission, we derive a range in electron density of $n_e \approx 10-93$ cm$^{-3}$. The corresponding range in recombination time is $t_{\rm rec} \approx 760-7000$ yr.

The Lyman continuum photon injection rate needed to balance recombinations is (Mezger & Henderson 1967; Rubin 1968)

$$N_{\rm LyC} = \frac{4}{3}\pi R^3\, n_e^2\, \alpha_B.$$

Inserting numerical values, we derive a range for the Lyman continuum ionising flux of $N_{\rm LyC} \approx 10^{46.0}-10^{47.9}$ photons s$^{-1}$. The Lyman continuum photon rate gives us some insight into the type of source that may be driving this emission. Assuming that the emission is produced by a single zero-age main-sequence star, the bounds of our derived $N_{\rm LyC}$ values correspond to stars of spectral type B1-O8.5, with corresponding masses of 12-20 M$_\odot$ (Panagia 1973; Smith et al. 2002; Martins et al. 2005; Armentrout et al. 2017). We conclude that the driving source of the continuum may be a high-mass star. In the following sections we discuss whether such a star is the likely driving source of the arc.
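The recombination time and Lyman continuum rate follow directly from the electron-density bounds; a minimal sketch (this reproduces the quoted ranges to within rounding):

    import numpy as np

    PC, YR = 3.086e18, 3.156e7
    alpha_B = 4.5e-13                 # cm^3/s at T_e = 5e3 K (from the text)
    R = 1.3 * PC                      # radius of the shell/cavity
    V = 4.0 / 3.0 * np.pi * R ** 3    # assumed ionised volume

    for n_e in (10.0, 93.0):          # lower/upper bounds from the text
        t_rec = 1.0 / (n_e * alpha_B) / YR        # ~7000 yr and ~760 yr
        N_lyc = V * n_e ** 2 * alpha_B            # photons/s
        print(f"n_e={n_e:5.1f}: t_rec={t_rec:.0f} yr, "
              f"log N_LyC={np.log10(N_lyc):.1f}")  # ~46.1 and ~48.0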
DISCUSSION

In the case of massive stellar clusters ($M > 10^3$ M$_\odot$), the energetic processes are dominated by three main forms of feedback: ionising radiation, stellar winds, and supernovae. Stellar feedback plays an integral role in shaping the ISM and regulating star formation at the centre of the Galaxy (Kruijssen et al. 2014; Armillotta et al. 2019; Barnes et al. 2020; Tress et al. 2020; Sormani et al. 2020). Although the star formation rate is low in the CMZ (Longmore et al. 2013a), the Galactic Centre star-forming regions (e.g. Sgr B2 and Sgr A) are among the most luminous in the Milky Way. The results presented in the previous section, specifically the morphology and dynamics of the molecular arc and its apparent physical association with the ionised gas emission, suggest that the arc may be the result of stellar feedback.

This conclusion is at odds with previous works suggesting that the arc may have been generated during a cloud-cloud collision (Higuchi et al. 2014). This conclusion is also in tension with the generally accepted view that G0.253+0.016 is largely quiescent, with only a single known site of confirmed active star formation (Walker et al. 2021). In the following sections, we discuss the possible origins of the arc, assuming that it is generated by stellar feedback, before addressing the question of whether or not we would expect to detect its progenitor star towards G0.253+0.016.

Is the arc a shell swept up by the wind of an interloper star?

One hypothesis that would be consistent with the quiescent picture of G0.253+0.016 is that the arc represents a shell swept up by the wind of an interloper star. High-mass stars possess powerful winds, and the CMZ is unique in our Galaxy in that there is a rich population of 'field' high-mass stars distributed throughout (Mauerhan et al. 2010; Dong et al. 2011; Clark et al. 2021). The origin of this population is unclear. In general, the lifetimes of molecular clouds in the CMZ are short ($\sim 1$ Myr; Henshaw et al. 2016b; Jeffreson et al. 2018). Clouds are destroyed by powerful stellar feedback (Barnes et al. 2020) and their emergent stellar populations contribute to the field. Another possibility is that some of this population results from the tidal stripping of, or from stellar interactions within, the CMZ's massive clusters the Arches and Quintuplet (Habibi et al. 2014). Irrespective of their origins, the impact that these high-mass field stars have on the surrounding interstellar medium is not well understood (although see Simpson et al. 2018, 2021).

We can crudely estimate the likelihood that the star represents an interloper using simplistic assumptions based on the known properties of the CMZ. If we take the approximate present-day star formation rate of the CMZ, $\sim 0.1$ M$_\odot$ yr$^{-1}$ (which has been more or less constant over the past several Myr; Longmore et al. 2013a; Barnes et al. 2017), and make the assumption that the vast majority of this star formation is confined to a torus with major and minor radii of $\sim 100$ pc and $\sim 10$ pc, respectively (Molinari et al. 2011; Kruijssen et al. 2015; Henshaw et al. 2016a), the expected volumetric star formation rate is of the order $\sim 0.5$ M$_\odot$ Myr$^{-1}$ pc$^{-3}$. First consider a scenario where the interloper is an O star with a lifetime $\approx 4$ Myr. Assuming that a single 16−20 M$_\odot$ star is produced for every $\sim 500$ M$_\odot$ of cluster produced (assuming a standard Kroupa 2001 initial mass function, IMF), the density of 16−20 M$_\odot$ stars is $\rho_* = 1/250$ pc$^{-3}$, and the expected number within the volume of G0.253+0.016, assuming a cross-sectional area $A \sim 17$ pc$^2$ and a depth L = 4.7 pc (Federrath et al. 2016), is $N = A L \rho_* \approx 0.3$ (see the sketch below). This is high enough that we must consider the possibility that an interloper might be responsible for the arc. In the alternative scenario where the interloper is a B star, the expected number is even larger, since B stars are both more common and live longer.

Numerical simulations show that the winds from runaway O and B stars can sweep up a dense shell as they pass through molecular clouds (Mackey et al. 2015). It is tempting to speculate that such a star may have been exiled from the Arches or Quintuplet (Portegies Zwart et al. 2010). This possibility has been discussed in relation to Sgr B1 (Simpson et al. 2018).
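The order-of-magnitude interloper expectation referenced above amounts to a one-line calculation:

    # Expected number of 16-20 Msun interlopers inside the cloud volume,
    # using the numbers quoted above.
    rho_star = 1.0 / 250.0     # number density of 16-20 Msun stars, pc^-3
    A, L = 17.0, 4.7           # cloud cross-section (pc^2) and depth (pc)
    N = A * L * rho_star       # ~0.32 expected interlopers
    print(N)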
To explore this hypothesis further, we can examine the size of the arc in more detail. As the relative velocity between the runaway star and the ambient medium increases, the characteristic size of the swept-up shell driven by the star's wind decreases (Mackey et al. 2015). The scale of the bow shock produced, the stand-off distance, is defined as the point where the momentum flux of the stellar wind balances the momentum flux of the ambient medium, and is given by (Baranov et al. 1971; Green et al. 2019)

$$R_{\rm st} = \sqrt{\frac{\dot{M}\, v_\infty}{4\pi\, \rho_0 \left(v_*^2 + c_s^2\right)}}, \qquad (12)$$

where $\dot{M}$ is the stellar wind mass loss rate, $v_\infty$ is the terminal wind velocity, $\rho_0$ is the density of the ambient medium, $v_*$ is the velocity of the star with respect to the ambient medium, and $c_s$ corresponds to the sound speed, in this case in the molecular phase. This is because the bow shock is expected to trap the ionization front for the strong wind and dense interstellar medium derived above (Mac Low et al. 1991; Arthur & Hoare 2006), in which case the bow shock expands into molecular gas.

Using Equation 12, we can ask the question: what size shell could be produced by the type of high-mass star needed to stimulate the ionised emission observed within the arc cavity? To address this question we first estimate the mass loss rate and terminal wind velocity of the high-mass star. The limiting case, i.e. the star that is capable of producing a shell with the largest radius, is given by the upper end of our mass limit derived in § 3.4. For O stars which span the range of spectral types consistent with our estimated Lyman continuum photon rate of $N_{\rm LyC} = 10^{47.9}$ photons s$^{-1}$ (O9.5, O9, O8.5), we adopt the stellar parameters listed by Martins et al. (2005, see their Table 1). We can use this information to determine the mass loss rate using the metallicity-dependent relationship described in Vink et al. (2001). We derive mass loss rates for two metallicities (consistent with our mass calculations in § 3.3), namely solar and twice solar, finding $\dot{M}(Z/Z_\odot = 1) = \{0.3, 0.4, 0.7\} \times 10^{-7}$ M$_\odot$ yr$^{-1}$ and $\dot{M}(Z/Z_\odot = 2) = \{0.5, 0.8, 1.2\} \times 10^{-7}$ M$_\odot$ yr$^{-1}$, respectively. We determine the terminal wind velocity assuming $v_\infty = 2.6\, v_{\rm esc}$ (McLeod et al. 2019, see also Barnes et al. 2020), where $v_{\rm esc}$ is the escape velocity obtained from Muijres et al. (2012), $v_{\rm esc} = \{892, 908, 923\}$ km s$^{-1}$. Although our upper limit on the stellar mass represents the limiting case for this scenario, it is worth noting that both observations (Mokiem et al. 2007) and simulations (Offner & Arce 2015) show that the mass loss rates from early-type B stars predicted from models of wind launching (Vink et al. 2001) can be underestimated by orders of magnitude (see Figure 3 of Smith 2014). In some cases, the mass loss rates can be as high as the model-predicted mass loss rates of the more massive O stars considered here (albeit with moderately slower winds).

Next, we use the mass of the arc to estimate the initial density of the cloud prior to the star's passage, assuming this gas originally filled the volume defined by the radius of the arc. For $M_{\rm arc} \sim 2700^{+3000}_{-1400}$ M$_\odot$, we find $\rho_0 = 3M_{\rm arc}/(4\pi R_{\rm arc}^3) = 2.1^{+2.3}_{-1.1} \times 10^{-20}$ g cm$^{-3}$, corresponding to a number density of $\sim 0.9^{+1.0}_{-0.5} \times 10^4$ cm$^{-3}$ (which is comparable to the mean density of G0.253+0.016; Federrath et al. 2016; Mills et al. 2018). Finally, we assume $v_* = v_{\rm exp} = 5.2^{+2.7}_{-1.9}$ km s$^{-1}$ and T = 50 K (§ 3.3), such that $c_{s,{\rm mol}} = 0.42$ km s$^{-1}$, and compute stand-off distances spanning the extremes of this parameter space. The smallest (largest) stand-off distance is set by the lower (upper) limits in the stellar wind properties and the upper (lower) limits in density and $v_*$.
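Equation 12 can be evaluated directly; a minimal sketch (the wind parameters and ambient properties below are the fiducial values derived above):

    import numpy as np

    MSUN, YR, PC, KMS = 1.989e33, 3.156e7, 3.086e18, 1.0e5

    def standoff_radius(mdot_msun_yr, v_inf_kms, rho0, v_star_kms,
                        c_s_kms=0.42):
        """R_st = sqrt(Mdot*v_inf / (4*pi*rho0*(v_*^2 + c_s^2))), in pc."""
        mdot = mdot_msun_yr * MSUN / YR                 # g/s
        v_inf = v_inf_kms * KMS
        v2 = (v_star_kms * KMS) ** 2 + (c_s_kms * KMS) ** 2
        return np.sqrt(mdot * v_inf / (4.0 * np.pi * rho0 * v2)) / PC

    # Fiducial case: O8.5-like wind running into the pre-existing cloud;
    # result ~0.04 pc, within the 0.01-0.1 pc range quoted below.
    print(standoff_radius(0.7e-7, 2.6 * 923, 2.1e-20, 5.2))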
The range in parameters described above produces stand-off distances of the order 0.01−0.1 pc. The predicted size of the shell is therefore at least an order of magnitude smaller than the observed size of the arc. Looking at this another way, for the star to plausibly be an interloper, it must be able to move a distance of order L = 4.7 pc within the star's lifetime, $t_*$, otherwise it is likely that the star was born right next to the cloud. The maximum stand-off distance (for a fixed mass loss rate and wind speed) is given by the lowest possible relative velocity between the star and the cloud. Assuming a lifetime of $t_* \sim 20$ Myr (the limiting case is given by the longest lifetime, and therefore the B1 star; Hurley et al. 2000), this sets a minimum velocity of $v_{\rm min} = L/t_* \sim 0.2$ km s$^{-1}$, which in turn gives a maximum stand-off distance of $R_{\rm st} = 0.8$ pc (assuming the upper limits in the stellar wind properties and the lower limit in density), which is smaller than what we observe.

In summary, it is difficult to reconcile the fiducial mass and radius estimates of the arc with those predicted assuming that the arc is a swept-up shell driven by the stellar wind of a $\approx 12-20$ M$_\odot$ interloper star moving relative to the cloud. Reconciliation may be possible if: i) our assumed mass loss rate and wind velocity are underestimated; or ii) both $\rho_0$ and $v_*$ are overestimated. Regarding the former scenario, some of the 'field' high-mass stars located within the Galactic Centre are more evolved Wolf-Rayet (WR) stars (Mauerhan et al. 2010; Dong et al. 2011; Clark et al. 2021). WR stars have powerful stellar winds, with mass loss rates that can be 100× that of O stars. However, they are also more luminous, with Lyman continuum ionising fluxes that are at least an order of magnitude greater than our upper limit derived in § 3.4 (log $N_{\rm LyC}$ > 48.6; Crowther 2007). Therefore it is unlikely that an interloper WR star is generating the arc. Regarding the latter scenario, assuming $v_* = v_{\rm exp}$, the ambient density would have to be $\sim 3$ orders of magnitude lower than our fiducial value estimated above (since $R_{\rm st} \propto \rho_0^{-1/2}$). This would imply a swept-up mass so small that the arc would be undetectable in dust emission in the current observations. Therefore, a reduction in both $\rho_0$ and $v_*$ would be needed to reproduce the observed morphology of the arc. Better mass constraints on the arc would help to conclusively rule out this scenario. As discussed in § 3.3, it is not implausible that the mass estimate that we derive for the arc from dust continuum emission is overestimated, particularly if the bulk of that mass is attributed to a spatially overlapping, but unrelated, part of the cloud.

Is the arc the result of stellar feedback from in-situ star formation?

An alternative hypothesis to that presented in § 4.1 is that the arc may be the result of stellar feedback associated with in-situ star formation within G0.253+0.016. To test this hypothesis we compare the morphology and dynamics of the arc to analytic prescriptions describing the expansion of H II regions.

Thermal expansion of an H II region

The analytic expression for the radial expansion of an H II region driven purely by thermal pressure (i.e. with negligible contributions from radiation pressure and stellar winds; see the note on radiation pressure below) is given by (Spitzer 1978)

$$R_{\rm Sp}(t) = R_s \left(1 + \frac{7}{4}\,\frac{c_{s,i}\, t}{R_s}\right)^{4/7}, \qquad (13)$$

where $c_{s,i}$ is the sound speed in the ionised gas, t is the age of the H II region, and $R_s$ is the Strömgren radius.
The sound speed in the ionised gas is

$$c_{s,i} = \sqrt{\frac{2.2\, k_B\, T_i}{\mu\, m_H}},$$

where $k_B$ is the Boltzmann constant, $T_i$ is the temperature of the ionised gas, and μ is the mass per hydrogen nucleus in units of $m_H$. Assuming an ionised gas temperature of $T_i = 5 \times 10^3$ K (Lang et al. 1997; Deharveng et al. 2000; Law et al. 2009), $c_{s,i} \approx 8$ km s$^{-1}$.

[Note on radiation pressure: throughout this discussion we neglect radiation pressure from our analysis. Radiation pressure is only important compared to ionised gas pressure when the radius of the H II region is below a characteristic radius defined by $R_{\rm ch} = 0.06\, f_{\rm trap}^2\, S_{49}$ pc (Krumholz & Matzner 2009), where $f_{\rm trap}$ represents the factor by which the radiation-pressure force is enhanced by trapping of energy within the expanding shell, and $S_{49}$ is the ionising luminosity in units of $10^{49}$ s$^{-1}$. Taking the upper limit of our range for the ionising luminosity, $N_{\rm LyC} = 10^{47.9}$ photons s$^{-1}$ (§ 3.4), gives $R_{\rm ch} \approx 5 \times 10^{-3}\, f_{\rm trap}^2$ pc, which is much smaller than the radius of the arc unless $f_{\rm trap} > 16$. We therefore conclude that radiation pressure is not the likely driving source of the arc.]

The Strömgren radius is

$$R_s = \left(\frac{3\, N_{\rm LyC}}{4\pi\, \alpha_B\, n_e\, n_p}\right)^{1/3}, \qquad (15)$$

where we have used the formalism from Krumholz (2017, their equation 7.24). Here, if μ = 1.4 is the mean mass per hydrogen nucleus in the gas in units of $m_H$ and $\rho_0$ is the initial density before the photoionizing stars turn on, then $n_p = \rho_0/\mu m_H$ and $n_e = 1.1\,\rho_0/\mu m_H$, with the factor of 1.1 coming from assuming that He is singly ionized and from a ratio of one He nucleus per 10 H nuclei. Following § 4.1, we present here only the limiting case and assume $N_{\rm LyC} = 10^{47.9}$ photons s$^{-1}$. Combining with an initial density $\rho_0 = 2.1^{+2.3}_{-1.1} \times 10^{-20}$ g cm$^{-3}$ (§ 4.1), the estimated Strömgren radius is $R_s \approx 0.05^{+0.03}_{-0.02}$ pc.

We can use Equation 13 to estimate the time it would take for an H II region to expand to the observed radius of the arc,

$$t_{\rm Sp} = \frac{4}{7}\,\frac{R_s}{c_{s,i}}\left[\left(\frac{R_{\rm Sp}}{R_s}\right)^{7/4} - 1\right].$$

The corresponding velocity with which the H II region expands is given by

$$v_{\rm Sp} = \frac{{\rm d}R_{\rm Sp}}{{\rm d}t} = c_{s,i}\left(\frac{R_s}{R_{\rm Sp}}\right)^{3/4}.$$

Equating $R_{\rm Sp} = R_{\rm arc}$, we find that the estimated age of the H II region would be $t_{\rm Sp} = 1.0^{+0.4}_{-0.3} \times 10^6$ yr. After $\sim 1$ Myr, the corresponding expansion velocity is expected to be $v_{\rm Sp} = 0.7^{+0.3}_{-0.2}$ km s$^{-1}$.

In Figure 6, we show the time evolution of both the radial expansion (top panel) and the velocity (bottom panel) predicted by the Spitzer (1978) model (blue dotted lines). The two curves represent the upper and lower limits on the radial evolution. These limits come from the upper and lower limits on the mass and therefore density (see Equation 15). The shaded region therefore represents the range of parameter space spanned by our estimates of the physical properties. We also include in this figure the model described in Hosokawa & Inutsuka (2006), which also describes thermal expansion but with a slight modification (red dot-dashed lines):

$$R_{\rm H\&I}(t) = R_s \left(1 + \frac{7}{4}\sqrt{\frac{4}{3}}\,\frac{c_{s,i}\, t}{R_s}\right)^{4/7}.$$

Using the Hosokawa & Inutsuka (2006) model, the predicted age and velocity of the H II region are $t_{\rm H\&I} = 0.9^{+0.4}_{-0.3} \times 10^6$ yr and $v_{\rm H\&I} = 0.8^{+0.3}_{-0.2}$ km s$^{-1}$, respectively.
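A minimal sketch evaluating both thermal-expansion solutions at the arc radius (fiducial $R_s$ and $c_{s,i}$ from above; this reproduces $t_{\rm Sp} \approx 1.0$ Myr, $v_{\rm Sp} \approx 0.7$ km s$^{-1}$, and the Hosokawa & Inutsuka values):

    import numpy as np

    PC, KMS, MYR = 3.086e18, 1.0e5, 3.156e13

    c_i = 8.0 * KMS          # ionised sound speed for T_i = 5e3 K
    R_s = 0.05 * PC          # fiducial Stromgren radius
    R_arc = 1.3 * PC

    # Spitzer (1978), Equation 13, inverted for the age at radius R_arc.
    t_sp = 4.0 * R_s / (7.0 * c_i) * ((R_arc / R_s) ** (7.0 / 4.0) - 1.0)
    v_sp = c_i * (R_s / R_arc) ** 0.75      # dR/dt evaluated at R_arc

    # Hosokawa & Inutsuka (2006): same form with an extra sqrt(4/3) factor,
    # so the age shrinks and the velocity grows by exactly that factor.
    f = np.sqrt(4.0 / 3.0)
    t_hi, v_hi = t_sp / f, v_sp * f

    print(t_sp / MYR, v_sp / KMS)           # ~1.0 Myr, ~0.7 km/s
    print(t_hi / MYR, v_hi / KMS)           # ~0.9 Myr, ~0.8 km/s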
As an H II region expands, the photoionised gas in its interior exerts a pressure force and delivers outward radial momentum and kinetic energy to the swept-up shell. Krumholz (2017, their equation 7.36) shows that the momentum delivered to the ambient medium, assuming a spherical H II region and an ionised gas temperature of $10^4$ K, scales as $p \propto N_{\rm LyC}^{3/7}\, n_{\rm H}^{1/7}\, t^{9/7}$, where $n_{\rm H}$ is the number density of H nuclei in the ambient medium into which the H II region is expanding, and t is its age. The expected kinetic energy of the swept-up shell follows from the same solution. We can use the predicted age of the H II region, therefore, to evaluate the momentum and energy at $t = t_{\rm Sp}$. Using our fiducial estimates $N_{\rm LyC} = 10^{47.9}$ s$^{-1}$ (§ 3.4), $\rho_0 = 2.1^{+2.3}_{-1.1} \times 10^{-20}$ g cm$^{-3}$ ($n_{\rm H} \sim 0.9^{+1.0}_{-0.5} \times 10^4$ cm$^{-3}$), and $T_e = 5 \times 10^3$ K, we find $p = 3.4^{+2.7}_{-1.5} \times 10^4$ M$_\odot$ km s$^{-1}$ and $E = 0.6^{+1.7}_{-0.5} \times 10^{44}$ erg, respectively.

The above predictions are in considerable tension with the observations. The predicted age of the H II region, implied by the radius of the arc, is almost an order of magnitude greater than the arc's estimated dynamical age (which assumes that the expansion velocity has been constant over this time; § 3.2). Although the predicted momentum only differs from our measured value by a factor of 2−3, the predicted velocity and energy show considerably more tension with the measured quantities, differing by factors of ∼1 and 4 orders of magnitude, respectively. Given that this calculation uses our upper limit on the estimated Lyman continuum ionising flux, and therefore represents a best-case scenario for this hypothesis, we are able to rule out thermal expansion of an H II region as the possible driving source of the arc.

A wind-blown bubble

The analysis presented in the previous section indicates that there must be a significant source of energy on top of that provided by the thermal pressure of photoionised gas. One possibility is that this energy is provided by the stellar wind. In the following, we explore the possibility that the arc represents the dense, partial shell that surrounds a bubble driven by a stellar wind from a high-mass star. The time evolution of the radial expansion of a bubble driven by stellar winds can be expressed as (Weaver et al. 1977)

$$R_{\rm W}(t) = \alpha \left(\frac{L_{\rm wind}\, t^3}{\rho_0}\right)^{1/5}, \qquad (21)$$

where $\alpha = [125/(154\pi)]^{1/5}$ (Tielens 2005; Lancaster et al. 2021a), $L_{\rm wind}$ is the mechanical wind luminosity, $L_{\rm wind} = \frac{1}{2}\dot{M} v_\infty^2$, and $\rho_0$ is the ambient density (estimated in § 4.1). The Weaver et al. (1977) solution assumes that the wind gas is adiabatic and trapped, so it applies to a bubble that is completely closed and has no cooling. As soon as gas breaks out, or there is significant mixing between hot and cold gas that leads to cooling, the rate of expansion will drop below the Weaver et al. (1977) solution (McKee et al. 1984; Mac Low & McCray 1988; Lancaster et al. 2021a). Mac Low & McCray (1988) relaxed the condition that the wind gas is adiabatic and included radiative cooling from the interior of the bubble. At early times, the expansion follows the analytic Weaver et al. (1977) solution. At later times, some of the internal energy is radiated away and the expansion rate slows. The numerical solution of Mac Low & McCray (1988) grows at a rate close to $t^{1/2}$, such that we can write

$$R_{\rm Wc}(t) = R_{\rm cool}\left(\frac{t}{t_{\rm cool}}\right)^{1/2},$$

where $R_{\rm cool} = R_{\rm W}(t_{\rm cool})$, given by Equation 21, is the radius of the bubble at a time $t = t_{\rm cool}$, where $t_{\rm cool}$ is the time at which radiative cooling becomes significant (a time which depends on the wind luminosity, the ambient density, and the metallicity Z). Using this expression, we can estimate the time it would take for a wind-blown bubble to expand to its current size, assuming this time is $> t_{\rm cool}$:

$$t_{\rm Wc} = t_{\rm cool}\left(\frac{R_{\rm arc}}{R_{\rm cool}}\right)^{2}.$$

The corresponding expansion velocity, momentum in the shell, and kinetic energy of the shell are

$$v_{\rm Wc} = \frac{R_{\rm Wc}}{2t}, \qquad p_{\rm Wc} = M_{\rm sh}\, v_{\rm Wc}, \qquad {\rm and} \qquad E_{\rm Wc} \approx \frac{1}{2} M_{\rm sh}\, v_{\rm Wc}^2,$$

where $M_{\rm sh} = \frac{4}{3}\pi \rho_0 R_{\rm Wc}^3$ is the swept-up shell mass. To estimate the cooling time, we must therefore estimate the mechanical wind luminosity. As discussed in § 4.1, both observations (Mokiem et al. 2007) and simulations (Offner & Arce 2015) show that the mass loss rates from early-type B stars predicted from the models of wind launching considered here (Vink et al. 2001) can be underestimated by orders of magnitude.
2001) can be underestimated by orders of magnitude. In the following, we therefore use the mass loss rates and terminal wind velocities derived for O stars of spectral type O9.5, O9 and O8.5 in § 4.1, under the assumption that these provide the limiting case for this scenario. We therefore estimate the range in mechanical wind luminosity that spans this parameter space, finding L_wind = 0.4-2.2 × 10^35 erg s^-1 (note that in some cases empirically derived mechanical wind luminosities from early-type B stars can actually exceed this range; Mokiem et al. 2007). Inserting numerical values we derive a range of cooling times t_cool = 1500-2200 yr, where the lower limit is given by our lower limit on the mechanical wind luminosity and the upper limit on the cloud density at solar metallicity (the upper limit is given by the opposite at twice solar metallicity). Due to the considerable ambient density of G0.253+0.016, the corresponding cooling time is much shorter than that inferred under the typical conditions found in galaxy discs (Mac Low & McCray 1988; Chevance et al. 2020). Using Equation 21, the corresponding size of the wind-blown bubble at time t = t_cool is therefore R_cool = 0.05-0.12 pc. In the top panel of Figure 6, we show curves corresponding to the time evolution of wind-blown bubbles that represent the extremes of the parameter space described above (orange dashed lines). The model in which the shell swept up by the wind-blown bubble expands most quickly (slowly) is derived from our upper (lower) limits on the stellar mass and metallicity, but the lower (upper) limit on density. The corresponding evolution in the expansion velocity is shown in the bottom panel. Equating R_Wc = R_arc, for M/M_⊙ = 19.82, Ṁ(Z/Z_⊙ = 2), and n_H ∼ 0.4 × 10^4 cm^-3, we derive an age of t_Wc = 0.4 × 10^6 yr, an expansion velocity of v_Wc = 1.5 km s^-1, a momentum p_Wc = 0.2 × 10^4 M_⊙ km s^-1, and an energy E_Wc = 0.6 × 10^47 erg. The same calculation for M/M_⊙ = 16.46, Ṁ(Z/Z_⊙ = 1), and n_H ∼ 1.9 × 10^4 cm^-3 yields t_Wc = 1.6 × 10^6 yr, v_Wc = 0.4 km s^-1, p_Wc = 0.2 × 10^4 M_⊙ km s^-1, and E_Wc = 0.2 × 10^47 erg. For the M = 16.46 M_⊙ star, the expansion velocity and momentum are an order of magnitude below the values estimated from the observations, while the predicted energy is lower by >2 orders of magnitude. In the case of the M = 19.82 M_⊙ star, the predicted momentum and energy are comparable to within a factor of < 2 to the measured values, while the predicted expansion velocity is lower by a factor of ∼3.5 compared to our fiducial estimate of 5.2 km s^-1. While the agreement remains imperfect, this analysis demonstrates that the arc could plausibly represent a dense, partial shell surrounding a bubble driven by a stellar wind. The factor of ∼ a few discrepancy may be explained by the fact that each of the discussions above considers a single feedback mechanism acting in isolation, when in reality different mechanisms may act in concert (Draine 2011b; Martínez-González et al. 2014; Yeh et al. 2013; Mackey et al. 2015). A full prescription of the different feedback mechanisms is beyond the scope of the present study and will require detailed modelling tailored to the conditions found in G0.253+0.016 and, more generally, the extreme environment of the CMZ.

Is a wind-blown bubble the most likely scenario?

The analysis presented in the previous sections leads us to conclude the following: (i) the arc is plausibly the result of stellar feedback.
(ii) the estimated density and morphology of the arc are difficult to reconcile with a scenario in which the arc is a bow shock swept up by the wind of an interloper star; (iii) the thermal pressure of photoionised gas alone is unable to reproduce the estimated dynamics and energetics of the arc; (iv) the arc may represent a dense, partial shell surrounding a bubble driven by the wind from a high-mass star. The importance of winds from high-mass stars as a feedback mechanism has recently become the subject of revived debate. Numerical simulations have for some time supported a consensus that photoionisation generally dominates over winds (Dale et al. 2013; Rathjen et al. 2021; Geen et al. 2021). Despite this, there are several sources with morphology and dynamics which appear to be consistent with those expected for wind-blown bubbles. RCW 120 has recently been described as a wind-blown bubble driven by an O8V star moving relative to the ambient cloud material by < 4 km s^-1, with further evidence to suggest that star formation may have been triggered within the swept-up shell (Luisi et al. 2021). Similarly, Pabst et al. (2019, 2020) recently concluded that the bubble of the Orion Nebula is predominantly driven by the mechanical energy input of the strong stellar wind from the O7V star θ^1 Orionis C (see also Güdel et al. 2008), based on the simple analytic model of Weaver et al. (1977). This latter interpretation, however, faces many challenges. As described in § 4.2.2, the Weaver et al. (1977) solution assumes that the wind gas is adiabatic and trapped. As soon as the gas cools, the expansion speed will drop below the Weaver et al. (1977) solution. The recent work of Lancaster et al. (2021a,b) demonstrates that turbulence-driven inhomogeneity in the structure of the material surrounding the wind-driven bubbles may strongly affect the impact of the mechanical energy of the wind. The cooling induced by turbulent mixing in the absence of magnetic fields leads to order-of-magnitude differences in the expansion velocity and imparted momentum compared to those derived in the classical Weaver et al. (1977) solution, although there is evidence that magnetic fields at least partly mitigate this effect (e.g., Gentry et al. 2019).

Figure 6. The blue dotted curve indicates expansion driven by the thermal pressure of photoionised gas (Spitzer 1978). The red dot-dashed curve is the same but with a slight modification from Hosokawa & Inutsuka (2006). The orange dashed curves describe the radial expansion driven by stellar winds for stars of different spectral types consistent with our measurement of N_LyC (Weaver et al. 1977; Mac Low & McCray 1988). The horizontal black line represents the radius of the arc, R_arc = 1.3 pc. The bottom panels show the corresponding time evolution of the expansion velocity. The black shaded region indicates the range of expansion velocity derived from the different methods presented in § 3.2 (note that this has been truncated for clarity, as indicated by the black arrow). The horizontal dot-dashed line reflects the lower limit of the expansion velocity estimates shown in Figure 4.

Indeed, the recent numerical simulations of Rosen et al. (2021) also show that wind bubbles blown by individual high-mass stars do not experience efficient mixing in the presence of magnetic fields (Pillai et al. 2015, estimate a total magnetic field strength of 5.4 ± 0.5 mG in G0.253+0.016).
The magnetic field provides a confining and stabilising effect and suppresses the development of instabilities that otherwise lead to effective mixing and cooling (Lancaster et al. 2021a,b). It is also worth noting that direct measurements of the X-ray luminosities of wind-blown bubbles are inconsistent with the Weaver et al. (1977) model, and require substantial loss of energy via either turbulent mixing or bulk escape of hot material (Harper-Clark & Murray 2009; Rosen et al. 2014). It may therefore simply be the case that the high-velocity C II emission observed by Pabst et al. (2019, 2020) is tracing material from a wind that is escaping along low-density channels in the bubble, rather than driving feedback globally in the region (Haid et al. 2018). In the Galactic Centre, a number of molecular shell candidates have been identified (Martín-Pintado et al. 1999; Oka et al. 2001; Butterfield et al. 2018; Tsujimoto et al. 2018, 2021). The kinetic energy estimated for many of these shells has led to speculation that they are the result of (potentially multiple) supernova explosions (e.g. Tsujimoto et al. 2018). However, those identified in Sgr B2 by Martín-Pintado et al. (1999) share many of the properties displayed by the arc in G0.253+0.016. Martín-Pintado et al. (1999) identify a series of ∼1-2 pc shells and arcs detected in emission from the (3,3) and (4,4) lines of NH_3. (Recall that the arc in G0.253+0.016 is also prominent in these lines - Mills et al. 2015.) They conclude that the shells are expanding with velocities of 6-10 km s^-1 and have an associated kinetic energy of the order of 10^48 erg, very similar to the quantities derived for the arc in G0.253+0.016 and considerably smaller than the typical energies of ∼10^51 erg associated with supernova-driven shells. The authors speculate that the shells in Sgr B2 are produced by the wind-blown bubbles generated by high-mass stars and describe how the shocks generated by the expansion heat the surrounding gas, further arguing that the expanding shells may have even triggered further star formation within Sgr B2's envelope. The arc located in G0.253+0.016 provides an interesting new addition to this puzzle. First, the associated radio continuum emission is extended, unlike the compact H II regions driven by O-type stars in other clouds in the Galactic Centre (e.g., Sgr A A-D and H; Goss et al. 1985; Zhao et al. 1993; Mills et al. 2011; Hankins et al. 2019). One possible explanation for this may be that the source driving the arc is less embedded, having formed at the edge of the cloud and excavated a cavity. Second, the morphology, dynamics, and energetics of the arc show reasonable (to within a factor of a few) agreement with a modified form of the Weaver et al. (1977) solution that accounts for cooling within the bubble interior (Mac Low & McCray 1988), but the arc differs from that in Orion (Pabst et al. 2019, 2020) in that it is identified using a molecular (rather than ionised gas) tracer. It is certainly possible that local environmental conditions in the Galactic Centre may help winds to play an important role. In high-density environments, winds may stay contained within the shell longer, leading to more prolonged expansion (Barnes et al. 2020).
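To make the density dependence of this argument concrete, here is a minimal Python sketch of the Mac Low & McCray (1988)-style bubble evolution used in § 4.2.2. The 16 Myr normalisation of the cooling time is an assumed coefficient, chosen because it reproduces the t_cool values quoted above to within ∼10-20 per cent; the velocity expressions simply differentiate the power-law radius. Treat this as an order-of-magnitude tool, not the paper's actual calculation.

```python
import numpy as np

def t_cool_yr(l_wind, n_h, z_rel=1.0):
    """Cooling time (yr) following the Mac Low & McCray (1988) scaling,
    t_cool ~ Z^(-35/22) n^(-8/11) L^(3/11); the 16 Myr coefficient is an
    assumed normalisation tuned to the values quoted in the text."""
    return 16e6 * z_rel**(-35/22) * n_h**(-8/11) * (l_wind / 1e38)**(3/11)

def bubble_state(t_yr, l_wind, n_h, z_rel=1.0, mu=1.4):
    """Radius (pc) and velocity (km/s) at age t_yr: Weaver et al. (1977)
    before t_cool, then R ~ t^(1/2) afterwards (Mac Low & McCray 1988)."""
    pc, yr = 3.0857e18, 3.156e7
    rho0 = mu * 1.6726e-24 * n_h                   # ambient density, g/cm^3
    alpha = (125.0 / (154.0 * np.pi)) ** 0.2
    tc = t_cool_yr(l_wind, n_h, z_rel)
    r_c = alpha * (l_wind * (tc * yr)**3 / rho0) ** 0.2
    if t_yr <= tc:
        r = alpha * (l_wind * (t_yr * yr)**3 / rho0) ** 0.2
        v = 0.6 * r / (t_yr * yr)                  # dR/dt for R ~ t^(3/5)
    else:
        r = r_c * (t_yr / tc) ** 0.5
        v = 0.5 * r / (t_yr * yr)                  # dR/dt for R ~ t^(1/2)
    return r / pc, v / 1e5

# Low-L_wind, high-density corner of the parameter space in section 4.2.2.
print(f"t_cool = {t_cool_yr(0.4e35, 1.9e4):.0f} yr")
print("R, v at 1.6 Myr:", bubble_state(1.6e6, 0.4e35, 1.9e4))
```

For this corner of parameter space the sketch returns t_cool ≈ 1500 yr and, at 1.6 Myr, R ≈ 1.3 pc with v ≈ 0.4 km s^-1, close to the values quoted above.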
Hence we are left with three possibilities: i) winds are not the key feedback-driving mechanism and some other explanation is required to explain the origin of the arc; ii) winds are more important for driving feedback than otherwise expected, implying that the simulations, and the interpretation of observations of winds (e.g. in X-rays), are incorrect; iii) winds are less important under normal conditions, but may be more important under the extreme conditions (e.g., high density, high metallicity, strong magnetic fields) in the Galactic Centre (e.g. Martín-Pintado et al. 1999; Barnes et al. 2020).

Has G0.253+0.016 already formed a star cluster?

In this section we address the elephant in the room, namely: if the arc is the result of a wind-blown bubble generated by a high-mass star, then where is the star? The short ∼760-7000 yr recombination time estimated in § 3.4 implies that the source of the ionising radiation must still reside within the cavity enclosed by the arc. If the star has formed in situ, as implied by the wind-blown bubble scenario, then the immediate implication is that G0.253+0.016 is perhaps not as quiescent as is commonly accepted. High-mass stars rarely (if at all) form in isolation (de Wit et al. 2004, 2005). Though isolated high-mass stars have been identified throughout the Galactic Centre (Mauerhan et al. 2010; Dong et al. 2011; Clark et al. 2021), the cluster formation efficiency in CMZ clouds may be as high as ∼30-40% (Ginsburg & Kruijssen 2018). Assuming the high-mass star forms as part of a star cluster, we can estimate the mass of the parent cluster and address the question of whether or not we would be likely to detect such a cluster towards G0.253+0.016. Again here, we consider only the O star scenario, since this presents the best-case scenario for detectability. To estimate the mass of the parent star cluster, we simulate samples of star clusters for a range of cluster masses, generating n = 10000 clusters of each mass, assuming a standard stellar IMF (Kroupa 2001). For each cluster we determine the mass of its highest mass star, comparing the peak of the distribution to the 16-20 M_⊙ relevant for stars of spectral type consistent with our upper limit on the Lyman continuum ionising flux, N_LyC (Martins et al. 2005). We find that cluster masses of the order of 400-700 M_⊙ are typical for those in which the most massive star is ∼16-20 M_⊙. As an independent estimate of the potential cluster mass, we follow the method outlined in Barnes et al. (2017). To do this, we first estimate the bolometric luminosity from infrared luminosity maps of the CMZ using Spitzer and Herschel observations. Barnes et al. (2017) assume that all the emission from the embedded stellar population within a molecular cloud is reprocessed by the surrounding dust and re-emitted. Under this assumption the total infrared luminosity directly corresponds to the bolometric luminosity produced by the embedded population. We apply this method to the arc by estimating the total bolometric luminosity within the region defined in Figure 3, for which we find L_bol ∼ 1.2 × 10^5 L_⊙. We can convert this bolometric luminosity to a stellar mass by assuming that the highest mass star within the cluster dominates the luminosity. To do this, we use the bolometric luminosity-to-mass conversions presented by Davies et al. (2011). For L_bol ∼ 1.2 × 10^5 L_⊙ we find M_* ∼ 31 M_⊙.
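The cluster-mass experiment described above can be sketched in a few lines of Python. This is a minimal reconstruction, not the authors' code: it assumes a two-segment Kroupa (2001) IMF over 0.08-120 M_⊙, a simple stop-when-filled sampling of each cluster (which slightly overshoots the target mass), and fewer realisations than the n = 10000 in the text, for speed.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_kroupa(n, m_min=0.08, m_break=0.5, m_max=120.0):
    """Draw n stellar masses [M_sun] from a two-segment Kroupa (2001) IMF:
    dN/dm ~ m^-1.3 below 0.5 M_sun and m^-2.3 above."""
    a1, a2 = 1.3, 2.3
    k2 = m_break**(a2 - a1)   # continuity of dN/dm at the break
    n1 = (m_break**(1-a1) - m_min**(1-a1)) / (1-a1)
    n2 = k2 * (m_max**(1-a2) - m_break**(1-a2)) / (1-a2)
    p1 = n1 / (n1 + n2)       # number fraction in the low-mass segment
    u = rng.random(n)
    low = rng.random(n) < p1
    m = np.empty(n)
    m[low] = (m_min**(1-a1) + u[low]*(m_break**(1-a1) - m_min**(1-a1)))**(1/(1-a1))
    m[~low] = (m_break**(1-a2) + u[~low]*(m_max**(1-a2) - m_break**(1-a2)))**(1/(1-a2))
    return m

def max_mass_distribution(m_cluster, n_real=1000):
    """Most-massive-star mass for n_real clusters of total mass m_cluster.
    Clusters are filled star by star, so the total slightly overshoots."""
    maxima = []
    for _ in range(n_real):
        total, m_top = 0.0, 0.0
        while total < m_cluster:
            batch = sample_kroupa(128)
            total += batch.sum()
            m_top = max(m_top, batch.max())
        maxima.append(m_top)
    return np.array(maxima)

for m_cl in (400, 700, 1000):
    mx = max_mass_distribution(m_cl)
    print(m_cl, np.median(mx), np.percentile(mx, [16, 84]))
```

The printed medians and 16th-84th percentile ranges can be compared against the 16-20 M_⊙ window quoted above.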
Repeating the same experiment as before, we find that a cluster mass of the order of ∼1000 M_⊙ is typical for those in which the most massive star is ∼31 M_⊙. Given the uncertainty in equating the total infrared luminosity to the bolometric luminosity, this should be interpreted as a strict upper limit on the total mass of the embedded stellar population (see Barnes et al. 2017 for further details). Although the absolute values should be taken with caution, this analysis suggests that the independent measures of radio continuum emission and the total infrared luminosity are consistent with the presence of a (moderately) high-mass star. To address whether we would be expected to detect such a star cluster in currently available data, we use the GALACTICNUCLEUS (GNS) catalogue. The GNS is a high-angular-resolution (∼0.2″) JHK_s survey of the Galactic Centre (Nogueras-Lara et al. 2018) that partially covers G0.253+0.016. We build a synthetic young cluster with a total mass of 500 M_⊙, using PARSEC evolutionary tracks (Bressan et al. 2012; Chen et al. 2014, 2015; Tang et al. 2014; Marigo et al. 2017; Pastorelli et al. 2019, 2020) to obtain H and K_s photometry. We assume twice solar metallicity (Feldmeier-Krause et al. 2017; Schultheis et al. 2019, 2021) and a standard IMF (Kroupa 2001), and create a set of models with different ages (0.5, 0.7, 1, and 5 Myr). To redden the data, we test three different scenarios using average extinctions A_Ks = 2.0, 2.5, 3.0 mag. We redden the synthetic data randomly, choosing the extinction value for each star from a Gaussian distribution centred on the average extinctions with a typical standard deviation of ∼0.1 mag (Nogueras-Lara et al. 2020). We randomly simulate the photometric uncertainties for each star assuming a Gaussian distribution for each band, with a standard deviation of 0.05 mag corresponding to the expected uncertainty for the GNS data (Nogueras-Lara et al. 2021b). Finally, we place the stellar population at the Galactic Centre distance using a distance modulus of 14.52 (Nogueras-Lara et al. 2021a). We plot the simulated stellar populations on the colour-magnitude diagram (CMD) K_s versus H − K_s towards G0.253+0.016 (Figure 7; Nogueras-Lara et al. 2021a). Using the limitations of the real GNS data, we identify which of the cluster stars may be detected. Assuming the lowest extinction (A_Ks = 2.0 mag), at most ∼40 stars can be detected across the different ages tested, a number that decreases with increasing cluster age. The most favourable case, in terms of detection, is the youngest cluster age considered (0.5 Myr; Figure 7). Given the stellar background in the CMD, the differential reddening, and the very low number of potentially observed stars belonging to the young cluster, we conclude that a direct detection using the CMD would be unlikely. Moreover, the assumed extinction of A_Ks = 2.0 mag corresponds to the value obtained by Nogueras-Lara et al. (2021a) using red clump stars (e.g. Girardi 2016) for the region containing G0.253+0.016. This is the best-case scenario for detection and is equivalent to the cluster being situated in the foreground of the cloud. Assuming a larger extinction of A_Ks = 3.0 mag, we obtain even fewer detections of the cluster members (Figure 7).
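The reddening and noise model applied to the synthetic cluster can be illustrated with a short sketch. The PARSEC-isochrone photometry is replaced here by a crude toy main sequence, and both the extinction-law ratio A_H/A_Ks = 1.84 and the K_s < 18 detection limit are assumptions introduced purely for illustration; neither value is given in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

DM = 14.52            # Galactic Centre distance modulus (Nogueras-Lara et al. 2021a)
A_H_OVER_A_KS = 1.84  # assumed NIR extinction-law ratio; not from the text
SIGMA_PHOT = 0.05     # per-band photometric uncertainty (mag), as in the text

def observe_cluster(h_abs, ks_abs, a_ks_mean=2.0, a_ks_sigma=0.1):
    """Redden absolute H, Ks magnitudes star by star, add photometric noise,
    and place the population at the Galactic Centre distance."""
    n = len(h_abs)
    a_ks = rng.normal(a_ks_mean, a_ks_sigma, n)   # per-star extinction
    a_h = A_H_OVER_A_KS * a_ks
    h = h_abs + DM + a_h + rng.normal(0.0, SIGMA_PHOT, n)
    ks = ks_abs + DM + a_ks + rng.normal(0.0, SIGMA_PHOT, n)
    return h, ks

# Toy input: a crude main-sequence-like strip in place of isochrone photometry.
ks_abs = rng.uniform(-2.0, 6.0, 500)
h_abs = ks_abs + 0.05                  # near-zero intrinsic H-Ks colour
h, ks = observe_cluster(h_abs, ks_abs)
detected = ks < 18.0                   # assumed GNS-like detection limit
print(f"{detected.sum()} of {ks.size} stars brighter than Ks = 18")
```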
Finally, we also check whether the cluster could be detected via stellar overdensities in the NIR images. We use the K_s band, where the extinction is lowest, and compute the stellar density using the GNS data corresponding to G0.253+0.016. We divide the observed region into small sub-regions of 1 pc^2 to compute the number of stars detected in K_s. Averaging over all the sub-regions, we find a mean stellar surface density of ∼180 ± 90 pc^-2, where the uncertainty corresponds to the standard deviation of the measurement. Considering the most favourable case of a 0.5 Myr cluster stellar population, an extinction of A_Ks = 2.5 mag, and assuming that the cluster extends to a radius of ∼0.5 pc (comparable to the Arches; Hosek et al. 2015), the expected overdensity is ∼80 pc^-2, indicating that the cluster would not easily be detected by its stellar density. In summary, we conclude that the high extinction and stellar crowding towards G0.253+0.016 are more than capable of hampering the detection of a 500 M_⊙ star cluster in currently available NIR data. Moreover, we stress that the above assumes the best-case scenario for detection. Longer-integration NIR observations would be needed to detect fainter cluster members; however, even this may not help if the cluster were deeply embedded within the cloud or behind the main column. The discussions presented in § 4.3 and here clearly call for further observations to resolve any ambiguity that remains surrounding the possible origins of the arc. Future high-sensitivity observations with other facilities, such as the James Webb Space Telescope (JWST), will likely reveal the true star formation activity of G0.253+0.016. Barnes et al. (2017) provide an upper limit on the total stellar mass of newly formed stars within the Brick of ∼2000 M_⊙ from a measurement of the total infrared emission. These authors estimate a star formation rate of < 0.007 M_⊙ yr^-1 based on this total stellar mass and a star formation timescale based on inferences about the orbit of the cloud (t_SF = 0.3 Myr). Kauffmann et al. (2017), on the other hand, estimate an upper limit of ∼800 M_⊙ based on the absence of any radio or maser emission sources. These authors used a timescale derived from a statistical approach based on the number of observed H II regions and masers within the CMZ (t_SF = 1.1 Myr), and determined a star formation rate of < 0.0008 M_⊙ yr^-1. Based on the observed bounds of our derived N_LyC values, we estimate here the associated star formation rate of a 12-20 M_⊙ star (§ 3.4), under the assumption that this implies the presence of a ∼500 M_⊙ cluster (given a standard Kroupa 2001 IMF). Assuming that the cluster has an age t_SF = 0.4-1.6 Myr (see § 4.2.2), the associated star formation rate is in the range 0.0003-0.0013 M_⊙ yr^-1. The star formation rates are highly dependent on the assumed timescales over which they are inferred. Nonetheless, our estimates based on the presence of a single B1-O8.5 star are broadly consistent with the low star formation rates measured within the literature.

SUMMARY & CONCLUSIONS

In this paper, we have built on the analysis presented in Henshaw et al. (2019), combining ALMA and VLA observations to determine the origin of the arcuate structure identified within G0.253+0.016. We find evidence for an expanding bubble associated with ionised gas emission. Our main conclusions are summarised below. Using the kinematic decomposition presented in Henshaw et al. (2019), we find that the morphology of the arc can be described using a simple tilted ring model. The ring is centred on {l, b} = {0°.248, 0°.018} and has a radius of R_arc = 1.3 pc.
The azimuthal velocity pattern observed along the crest of the arc is broadly consistent with that expected for an expanding incomplete shell. Using our model geometry, we derive an expansion velocity of v_exp = 5.2 (+2.7/-1.9) km s^-1. From this information we infer that the dynamical age of the arc is t_dyn ≈ 2.4 (+0.8/-1.4) × 10^5 yr (assuming a constant expansion velocity). Using dust continuum observations we determine the mass of the arc to be M_arc ∼ 2700 (+3000/-1400) M_⊙. Combining with the derived expansion velocity, we measure the kinetic energy and momentum of the arc to be E_arc ∼ 0.7 (+2.8/-0.6) × 10^48 erg and p_arc ∼ 1.4 (+3.1/-1.0) × 10^4 M_⊙ km s^-1, respectively. Our new radio continuum and radio recombination line (RRL) data reveal that ionised gas fills the arc cavity. The RRL spectrum extracted from the arc cavity peaks at a velocity of 22.0 ± 1.4 km s^-1, consistent to within one standard deviation of the mean of the arc centroid velocity distribution (17.6 ± 4.5 km s^-1). The spatial and kinematic agreement between the ionised and molecular gas emission leads us to conclude that the two are likely physically related. To give insight into the type of source required to stimulate this emission, we calculate the Lyman-continuum photon rate, N_LyC = 10^46.0-10^47.9 photons s^-1. The implied short recombination time of t_rec = 760-7000 yr further suggests that the source of the ionised gas must still be located within the arc cavity. Assuming that the emission is produced by a single zero-age main sequence star, the estimated N_LyC is consistent with that expected for a high-mass star of spectral type B1-O8.5, corresponding to a mass of ≈12-20 M_⊙. We go on to explore the possible origins of the arc and the potential star driving its expansion. We consider two scenarios: i) the arc represents a shell swept up by the wind of an interloper high-mass star; ii) the arc represents a shell swept up by stellar feedback resulting from in-situ star formation. For the former scenario, the CMZ is unique in our Galaxy in that there is a rich population of 'field' high-mass stars, and we show that the probability that a high-mass star may be passing through G0.253+0.016 at the present time is reasonably high. Nevertheless, we deduce that there does not appear to be a way to reconcile the required ionising continuum with the current mass and radius estimates of the arc under the assumption that the arc represents a bow shock produced by a slowly moving high-mass star. This size constraint rules out the Arches and Quintuplet clusters as possible sources of any interloper. Given the information currently available to us, we therefore conclude that the arc is plausibly the result of stellar feedback from in-situ star formation. We compare the morphological and dynamical properties of the arc, as well as its estimated kinetic energy and momentum, to simple analytical models describing the expansion of H II regions, finding that the properties of the arc are consistent to within a factor of a few with those produced by a wind-blown bubble generated by a high-mass star. The immediate implication of this result is that G0.253+0.016 may not be as quiescent as is commonly accepted. Assuming that the high-mass star did not form in isolation, our results could mean that G0.253+0.016 has already produced a 10^3 M_⊙ cluster, containing at least one high-mass star.
We demonstrate that the high extinction and stellar crowding observed towards G0.253+0.016 are more than capable of obscuring such a star cluster from view. Future observations are needed to resolve any residual ambiguity left surrounding the origins of the arc. This is important to establish the true underlying star formation rate of molecular clouds in the CMZ, and to precisely establish the role of stellar feedback in shaping the ISM and regulating the star formation process in an environment which has the highest number of high-mass stars per unit volume in the Galaxy. We suggest that future observations from facilities such as ALMA (to better constrain the mass of the arc) and the JWST (to reveal the internal stellar population) will have the sensitivity necessary to confirm or reject this result.
Parallaxes of southern extremely cool objects III: 118 L & T dwarfs

We present new results from the Parallaxes of Southern Extremely Cool dwarfs program to measure parallaxes, proper motions and multi-epoch photometry of L and early T dwarfs. The observations were made on 108 nights over the course of 8 years using the Wide Field Imager on the ESO 2.2m telescope. We present 118 new parallaxes of L & T dwarfs, of which 52 have no published values, and 24 of the 66 published values are preliminary estimates from this program. The parallax precision varies from 1.0 to 15.5 mas with a median of 3.8 mas. We find evidence for two objects with long-term photometric variation and 24 new moving group candidates. We cross-match our sample to published photometric catalogues and find standard magnitudes in up to 16 pass-bands, from which we build spectral energy distributions and H-R diagrams. This allows us to confirm the theoretically anticipated minimum in radius between stars and brown dwarfs across the hydrogen burning minimum mass. We find the minimum occurs between L2 and L6 and verify the predicted steep dependence of radius in the hydrogen burning regime and the gentle rise into the degenerate brown dwarf regime. We find a relatively young age of 2 Gyr from the kinematics of our sample.

INTRODUCTION

Objects with spectral types L and T cover the mass range from the lowest mass hydrogen burning stars, through slowly cooling sub-stellar objects, down to massive Jupiter-type objects. Since the first tentative discoveries 30 years ago (Becklin & Zuckerman 1988; Latham et al. 1989), over 3000 are known today and this number will increase exponentially with the planned deep optical and infrared surveys (e.g. with the Large Synoptic Survey Telescope - LSST Science Collaboration et al. 2017; the Panoramic Survey Telescope and Rapid Response System - Chambers et al. 2016; the Wide Field Infrared Survey Telescope - Spergel et al. 2015; and Euclid - Laureijs et al. 2010). Observations and statistical studies of these objects can be used to constrain proposed stellar/sub-stellar formation processes and local galactic kinematics, to understand giant planet atmospheres, and to map the stellar to sub-stellar boundary. The lower mass sub-stellar objects are continually cooling and therefore changing with time which, combined with their ubiquity, makes them promising galactic chronometers. To realize their promise, a large sample with measured distances is needed to enable a complete calibration.

Figure 1. Distribution of parallaxes of L0 to T8 dwarfs with spectral type. The black area represents the 41 objects published before 2007, the grey area the 118 PARSEC objects and the white area represents all objects published today, a total of 356 L0 to T8 objects. There is an overlap of 66 objects between the PARSEC and published objects as discussed in Section 3.1.

The Parallaxes of Southern Extremely Cool dwarfs (hereafter PARSEC) program was instigated to generate a large sample of these objects with measured parallaxes. In 2007 only 41 L0 to T8 objects had published parallaxes, and in PARSEC we aimed to increase the sample to at least 10 objects for each L dwarf spectral sub-type, and included bright southern T dwarfs to increment the T dwarf coverage. In this contribution we report the PARSEC parallaxes of 118 L0 to T8 dwarfs which, combined with objects with literature parallaxes (described in Section 3), brings the total to 356 objects distributed in spectral type as shown in Fig.
1, which we refer to as the Full Sample. For each L dwarf sub-type the number of objects is now at least 10, except L9, where we have nine. As discussed in Section 6, the ESA Gaia mission, which is measuring parallaxes for 10^9 objects, will provide significant numbers of early L dwarfs but will have only a few late L dwarfs and fewer than 10 T dwarfs, so the cooler objects will remain the domain of small-field pointed programs. In 2010 a complementary program to PARSEC was started on the ESO New Technology Telescope targeting late T dwarfs (Smart et al. 2013) that are too faint for PARSEC or Gaia. Preliminary results for the PARSEC program have been published in Andrei et al. (2011) and Marocco et al. (2013) using observations from the first 2-3 years; here we provide results from the full program with observations covering 8 years. In Section 2 we briefly present the PARSEC program, in Section 3 we present the astrometric results, and in Sections 4 and 5 we present applications of these results, combined with literature measures, to the problems of absolute magnitude calibration, local kinematics and the location of the stellar - brown dwarf boundary.

THE PARSEC PROGRAM

The instrument, observational procedures, reduction procedures and target selection are described in detail in Andrei et al. (2011); here we briefly summarize the main points.

Telescope and detector

The PARSEC observations were made on the Wide Field Imager (WFI, Baade et al. 1999) of the ESO MPIA 2.2m telescope. This is a mosaic of 8 EEV CCD44 chips with 2k×4k 15 µm pixels; for the results presented here we only used observations from the top half of CCD#7 (Priscilla). Limiting our reductions to this region was a balance between simplicity of the required astrometric transforms - the larger the adopted area being modeled, the more complicated the required transforms - and the number of anonymous reference stars, for which we required a minimum of 12. This telescope and instrument combination has a number of positive characteristics: (i) The camera is fixed and stable, crucial for small-field relative astrometry. (ii) The 0.2″/pixel focal plane scale allows at least 2 pixels per full width at half maximum (i.e. Nyquist sampling) even in the best seeing. (iii) The total field size of 0.3 sq. deg. provides a large field to search for nearby companions. (iv) This combination had already been used for the determination of parallaxes (Ducourant et al. 2007). We observed all objects in the z band (Z+/61 ESO#846, central wavelength 964.8 nm, FWHM 61.6 nm), which provided the best ratio of exposure time and signal-to-noise for these very red targets.

Observational procedure

Each observation consisted of a short exposure to visually locate the target and then an application of the WFI move-to-pixel procedure to move the target to pixel (3400, 3500) in CCD#7. We then made two exposures of 150 s for objects with z ≤ 18.0 and 300 s for z > 18.0, offset by 24 pixels in both axes. If we found the signal-to-noise of the first exposure to be less than 100 we increased the exposure time of the second accordingly. The short location frames were saved and used in the reduction process to model the z band fringing. To minimize differential reddening corrections (Monet et al. 1992) we attempted to observe all targets within 30 min of the meridian. The total time for each target is 10-25 min, which enables an average of 3-4 targets per hour. Our observing runs were usually allocated in blocks of 3 nights spread throughout the year.
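The exposure-time logic above can be written down in a few lines. This is only a sketch of one plausible reading of "increased the exposure time of the second accordingly": the rescaling assumes background-limited imaging where the signal-to-noise ratio grows as the square root of exposure time, which the text does not state explicitly.

```python
def exposure_plan(z_mag, snr_first=None):
    """PARSEC-style exposure times (s) for a target of magnitude z_mag.
    If the first exposure's SNR is below 100, rescale the second exposure
    assuming SNR ~ sqrt(t) (an assumption, not stated in the text)."""
    t1 = 150.0 if z_mag <= 18.0 else 300.0
    t2 = t1
    if snr_first is not None and snr_first < 100.0:
        t2 = t1 * (100.0 / snr_first) ** 2
    return t1, t2

print(exposure_plan(17.2))                 # bright target: (150.0, 150.0)
print(exposure_plan(18.6, snr_first=60))   # faint, low-SNR first exposure
```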
Observations began on 9 April 2007 and, using nights obtained via Brazilian and ESO allocations, continued for four years until 21 July 2011. After this date the telescope was no longer available through ESO or Brazil, and we obtained three additional runs in March 2014, October 2015 and February 2016 from the CNTAC and OPTICON, plus a few service observations on MPIA time. In these extra runs we were able to re-observe most targets, extending the coverage to over 8 years and providing important leverage to separate the parallax and proper motion components.

Target selection

The target lists had to meet a number of practical and scientific considerations. The combination of a variable time allocation and the requirement of observing objects close to the meridian required us to have a target list that covered the whole 24 hour Right Ascension range uniformly. To give flexibility for matching targets to conditions, and to ensure that any target was only observed on 2 nights out of each 3-night run, we built redundancy into the list. With these requirements in mind we adopted the following criteria: (i) Southern (δ < 0°) confirmed L and T dwarfs discovered before April 2007; (ii) Magnitude in the z band brighter than 19; (iii) Between 6-8 objects in any RA hour; (iv) The brightest examples within each spectral bin; (v) A uniform spectral class distribution for L dwarfs; (vi) A photometric distance smaller than 50 pc. The photometric distances were estimated using the 2MASS (Two Micron All Sky Survey; Skrutskie et al. 2006) magnitudes transformed to the MKO system using Stephens & Leggett (2004) and the colour - absolute magnitude compilation given in Knapp et al. (2004). This produced an original target list of 140 targets that can be found in Andrei et al. (2011). In Table 1 we list the 118 targets published here, including a short name for each object used throughout this paper, the discovery name, the z band magnitude adopted at the beginning of the program and, when they exist, published values of optical/NIR spectral types, radial velocities and parallaxes. The last column summarizes any published indications of multiplicity, e.g. if the object is an unresolved binary (at the nominal WFI resolution), in a wide binary system or a moving group candidate. The distribution of the 118 targets is shown in Fig. 1 with respect to all L0 to T8 objects with published parallaxes.

Image reduction procedure

All images were bias corrected and flat fielded using standard IRAF CCDPROC procedures. The WFI z-band images have strong interference fringes that were removed using RMFRINGE with a fringe map made in three steps: 1) mask out all objects in all short exposures and four of the long exposures; 2) make a median image of the unmasked pixels, scaling all images by the exposure time; 3) smooth the median image using a 5 pixel boxcar average. After subtracting this fringe map, scaled by the exposure time, from the cleaned images, we again make a new fringe map and again subtract it, this time scaled by the mean sky count. We did not use all the long exposures in the construction of the fringe map, as the move-target-to-pixel and masking procedures are not perfect, so the resultant fringe map using all frames often had a halo around the target position. The telescope pointing is only good to a few arcseconds, so in the location frames the target is rarely in the same position and this halo problem does not occur. In Andrei et al. (2011) we adopted the Torino Observatory Parallax Program (Smart et al.
1999, TOPP) centroiding procedures but, as discussed in Marocco et al. (2013), we found the Cambridge Astronomy Survey Unit's imcore maximum likelihood barycentre (CASUTOOLS, v 1.0.21) more consistent, so we have adopted that package to determine the centroids of all objects in the field.

Astrometric parameter determination

The astrometric reduction was carried out using TOPP pipeline procedures and the reader is referred to Smart et al. (1999) for details; here we just outline the main steps. A base frame, observed on a night with good seeing, was selected and the measured x, y positions of all objects were transformed to a standard coordinate (ξ, η) system determined from a gnomonic projection of the Gaia DR1 objects in the frame. All subsequent frames were transformed to this standard coordinate system with a simple six-constant linear astrometric fit using all common objects except the target. We then removed any frames that had an average reference star error larger than the mean error for all frames plus three standard deviations about that mean in either coordinate, or that had fewer than 12 stars in common with the base frame. Since the target is not used in the fit, its positional change is a reflection of its parallax and proper motion. We fit a simple five-parameter model (two positions, two proper motions and the parallax) to this positional change, and to that of all the other objects in the field, to find their astrometric parameters, implicitly assuming all objects are single. We then iterate this procedure where, in addition to removing frames as described above, we also remove stars with large errors over the sequence from the objects used to astrometrically transform frames. Finally, for the target solution we removed any observations where the combined residual in the two coordinates is greater than three times the σ of the whole solution. The solutions were tested for robustness using bootstrap-like testing, where we iterate through the sequence selecting different frames as the base frame, thus making many solutions that incorporate varied sets of reference stars and start from different dates. We create the subset of all solutions with: (i) a parallax within one σ of the median solution; (ii) a number of included observations in the top 10%; and (iii) at least 12 reference stars in common to all frames. From this subset, for this publication, we have selected the one with the smallest error. More than 90% of the solutions were within one σ of the published solution. To the relative parallaxes we add a correction (COR in Table 2) to find astrophysically useful absolute parallaxes. The COR is estimated from the average magnitude of the common reference stars and the Galaxy model of Mendez & van Altena (1996) in the z band. When Gaia produces proper motions and parallaxes of the anonymous field objects we will be able to tie more precisely to the absolute system.

ASTROMETRIC RESULTS

In Table 2 we report the astrometric results for the 118 targets.

Comparison to published parallaxes

There are published estimates of the parallaxes for 66 PARSEC targets, of which 24 were preliminary values from this program. The preliminary values were found using different reduction procedures and different epoch spans, hence we consider them as independent estimates. In Fig. 2 we plot the published parallaxes from Table 1 (see also www.as.utexas.edu/~tdupuy/plx/) versus the results presented here from Table 2. Only two objects, 0439-2353 and 0835-0819, have parallaxes that differ by more than three times the combined errors and so warrant further consideration. For 0439-2353, Faherty et al. (2012) find 110.4 ± 4.0 mas while we obtain 79.75 ± 3.36 mas.
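The five-parameter model described above reduces to a linear least-squares problem once the parallax factors at each epoch are known. The following is a minimal sketch, not the TOPP pipeline: the function name, the synthetic parallax factors and the truth values in the demonstration are all invented for illustration.

```python
import numpy as np

def fit_astrometry(t, xi, eta, f_xi, f_eta):
    """Least-squares fit of the five-parameter astrometric model,
        xi(t)  = xi0  + mu_xi  * t + plx * f_xi(t)
        eta(t) = eta0 + mu_eta * t + plx * f_eta(t),
    where (f_xi, f_eta) are the parallax factors at each epoch t (years).
    Returns (xi0, eta0, mu_xi, mu_eta, plx)."""
    n = len(t)
    z, o = np.zeros(n), np.ones(n)
    # Stack both coordinates into one design matrix sharing the parallax term.
    A = np.block([[o[:, None], z[:, None], t[:, None], z[:, None], f_xi[:, None]],
                  [z[:, None], o[:, None], z[:, None], t[:, None], f_eta[:, None]]])
    b = np.concatenate([xi, eta])
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

# Synthetic demonstration with invented parallax factors and truth values.
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 8.0, 30))                    # epochs over 8 yr
f_xi, f_eta = np.sin(2*np.pi*t), 0.4*np.cos(2*np.pi*t)    # toy parallax factors
truth = np.array([0.0, 0.0, 120.0, -80.0, 50.0])          # mas, mas/yr, mas
xi = truth[0] + truth[2]*t + truth[4]*f_xi + rng.normal(0, 5, t.size)
eta = truth[1] + truth[3]*t + truth[4]*f_eta + rng.normal(0, 5, t.size)
print(fit_astrometry(t, xi, eta, f_xi, f_eta))            # recovers ~ truth
```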
The Faherty et al. observational coverage was 3 years with 14 nights, while we have 8 years and 18 nights. We estimated the photometric parallax to be ∼80 mas using the Vrba et al. (2004) absolute magnitude calibrations, which is closer to the PARSEC estimate. The difference between the Faherty et al. parallax and the photometric parallax could be explained if the object is an unresolved binary, but the residuals show no non-linear motion, and a spectroscopic investigation for binarity by Manjavacas et al. (2016) also found no evidence. We therefore consider our value more probable. For 0835-0819 we obtain 146.19 ± 2.82 mas from 35 observations over 8.9 years, while Weinberger et al. (2016) find 137.5 ± 0.4 mas using observations at 17 epochs over 6.2 years. The difference is just over three times the combined error, and the quoted error from Weinberger et al. (2016) is very low considering the observations were made on a similar system, so we believe that the larger than 3σ difference is probably because the errors are underestimated. For the 66 PARSEC dwarfs with published parallaxes we calculated the quantity

(̟_N − ̟_P) / (σ_N^2 + σ_P^2)^(1/2),

where ̟ is the parallax, σ the quoted error, and the subscripts N and P represent the new and published values respectively. If the measures are unbiased and the errors are precise, we expect this quantity to follow a Gaussian distribution with a mean of zero and a standard deviation of one. For the 66 common objects the mean is −0.1 and the standard deviation is 1.3. Applying the t-test at the 95% level we find the mean is not significantly different from zero, i.e. P(t) = 0.06, while applying the F-test we find the σ is significantly different from one, i.e. P(F) = 0.0001. Since the σ is greater than one, the implication is that the errors are underestimated. The median PARSEC error is larger than the median published error even though the programs were often very similar; however, without a standard comparison it is difficult to isolate the cause. The Gaia sample should allow a complete characterisation of the errors of different procedures, which can then be applied to objects that are fainter than the Gaia limit in those programs.

Comparisons within binaries

Binary systems are a good test for parallax determinations and in particular for the quality assurance of errors. Components in binary systems can be considered to be at the same distance and, if the system is a wide binary, it is appropriate to make independent solutions. In the PARSEC sample there are 4 companions observable (i.e. in the field of view and not so bright that they saturate) and in Table 3 we reproduce the parallaxes and proper motions of the PARSEC targets and the companions determined in this program. No parallaxes or proper motions differ by more than 2σ between the PARSEC values and the published values, but this sample is too small to test the precision of the error estimates.

Internal photometry

During every observation we obtain precise relative photometry. The highest observation frequency for our objects was monthly, so the sampling is not sufficient to find short-period photometric variations, but we can look for long-term variations. For each object we transformed the magnitudes to the instrumental magnitude system of the base frame using the anonymous reference stars with iterative 3σ clipping, and then found the linear correlation between the instrumental magnitudes and observation time of the target.
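The 3σ-clipped linear trend test just described can be sketched as follows. This is an illustrative reconstruction (function name and demonstration data are invented), not the pipeline itself.

```python
import numpy as np

def clipped_slope(t, mag, n_iter=5, clip=3.0):
    """Linear fit of magnitude versus time (yr) with iterative sigma
    clipping on the residuals. Returns (slope, slope_err) in mag/yr."""
    keep = np.ones(t.size, dtype=bool)
    for _ in range(n_iter):
        A = np.vander(t[keep], 2)                  # columns [t, 1]
        coef, *_ = np.linalg.lstsq(A, mag[keep], rcond=None)
        resid = mag - (coef[0]*t + coef[1])
        sigma = resid[keep].std(ddof=2)
        new_keep = np.abs(resid) < clip * sigma
        if new_keep.sum() == keep.sum():
            break
        keep = new_keep
    cov = sigma**2 * np.linalg.inv(A.T @ A)        # covariance of last fit
    return coef[0], np.sqrt(cov[0, 0])

rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0, 8, 40))
mag = 17.0 + 0.012*t + rng.normal(0, 0.02, t.size)  # fake slowly varying target
print(clipped_slope(t, mag))                        # slope ~ 0.012 mag/yr
```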
The slopes for most targets were within 3σ of zero, but two objects, 0614-2019 and 1122-3916, were found to have significant slopes of 0.0079 ± 0.0021 and 0.0124 ± 0.0037 mag/yr respectively. For both objects the χ² sum of the two-parameter fit was a statistically significant improvement over a one-parameter fit. In Fig. 3 we reproduce the magnitude variation of these two objects over the observational sequence. Possible explanations for a long-term photometric variation are discussed in Smart et al. (2017a) for Y dwarfs, and in the K2 Ultracool Dwarfs Survey (Paudel et al. 2018, and references therein) or the Weather on Other Worlds program (Miles-Páez et al. 2017, and references therein) for L/T dwarfs.

Literature photometry

We compiled photometry on standard systems for the Full Sample from the large optical and infrared surveys, such as 2MASS (Skrutskie et al. 2006). Using these surveys it was possible to obtain homogeneous magnitudes in bands ranging from Gunn G to WISE W3 (the Gunn U and WISE W4 bands were not included as the number of objects with reliable magnitudes was very low). The number of independent magnitude measures ranged from 3 to 16, with a mean of 10 per target. For those objects with published parallaxes we took the weighted mean of the PARSEC and published values with no outlier rejection. The complete dataset of 356 objects with photometry and parallaxes is available online.

Absolute magnitudes

In Fig. 4 we plot the absolute magnitudes of the PARSEC sample in the 2MASS J and WISE W2 bands versus near-infrared spectral types. The crosses represent the PARSEC objects with propagated errors on the magnitude axis and an assumed error of 0.5 types on the spectral-type axis. The grey area represents the one σ confidence limits of a second-order fit to objects with published parallaxes. A fit to just the PARSEC sample is within one sigma of fits to the Full Sample in all magnitude bands. As a comparison we have plotted the polynomial relations derived in the studies of Dupuy & Liu (2012), Faherty et al. (2016) and, for H only, Vrba et al. (2004).

Figure 4. Absolute magnitudes of the sample in the 2MASS J (top) and WISE W2 (bottom) bands versus near-infrared spectral types. The lengths of the symbol arms represent the errors, which for spectral types are assumed to be 0.5 types. The grey area represents the one σ confidence limits for a fit to published objects using literature parallaxes. We have included as comparisons the polynomial relations from Dupuy & Liu (2012), Faherty et al. (2016) and, for H only, Vrba et al. (2004), as indicated in the legend.

In Table 4 we report the polynomial fits of the absolute magnitudes to the published spectral types, of the form

AbsMag = P_1 + P_2 SpT + P_3 SpT^2,

where AbsMag is the absolute magnitude in the passband indicated in column 1 of Table 4, P_1...P_3 are the parameters in columns 3-5 of Table 4, and SpT is the spectral type in numerical format, e.g. L0 = 70, L1 = 71, ..., T5 = 85. The absolute magnitudes in the passbands r to y refer to optical spectral types; the passbands J to W3 refer to NIR spectral types. For each passband we removed any objects which were tagged as unresolved binaries or which had ̟/σ < 5, and fitting was carried out using iterative outlier rejection of objects with residuals larger than three times the overall fit error.

Table 4. Coefficients of polynomial fits to absolute magnitudes derived using parallaxes from the Full Sample and large all-sky photometry, as described in Section 4.3.

The labeled objects in Fig. 4 (0147-4954, 0357-4417 and 2101-2944) are more than 3σ from the mean absolute magnitude versus spectral type locus in at least two passbands. 0147-4954 was included as an L2 but was reclassified as an M9 in Marocco et al. (2013); at this spectral type it is not overluminous. 0357-4417 is a known unresolved binary (Bouy et al. 2003), hence the brighter-than-average observed absolute magnitude. 2101-2944 is underluminous at the 3σ level in the W1 and 2MASS K bands but shows no underluminosity in other bands, and its spectrum does not show any sign of peculiarity (Marocco et al. 2013), so the underluminosity in these two bands is unexplained.
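A quadratic fit with iterative outlier rejection of the kind used for Table 4 can be written compactly. This is a sketch under stated assumptions: the coefficients in the demonstration data are invented and are not the published values.

```python
import numpy as np

def robust_poly_fit(spt, absmag, deg=2, clip=3.0, max_iter=10):
    """Fit AbsMag = P1 + P2*SpT + P3*SpT**2, iteratively rejecting points
    whose residual exceeds `clip` times the overall fit error.
    SpT uses the numerical convention L0=70 ... T5=85."""
    keep = np.isfinite(spt) & np.isfinite(absmag)
    for _ in range(max_iter):
        coef = np.polyfit(spt[keep], absmag[keep], deg)
        resid = absmag - np.polyval(coef, spt)
        err = resid[keep].std()
        new_keep = keep & (np.abs(resid) < clip * err)
        if new_keep.sum() == keep.sum():
            break
        keep = new_keep
    return coef[::-1], err, keep          # coefficients as (P1, P2, P3)

# Toy demonstration around an invented M_J-like relation.
rng = np.random.default_rng(3)
spt = rng.integers(70, 86, 200).astype(float)
absmag = 40.0 - 0.9*spt + 0.0065*spt**2 + rng.normal(0, 0.3, spt.size)
absmag[:5] += 3.0                         # a few unresolved-binary-like outliers
P, err, keep = robust_poly_fit(spt, absmag)
print(P, err, (~keep).sum(), "rejected")
```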
Spectral energy distribution analysis

Using the multi-band photometry and our distances we are able to generate spectral energy distributions (SEDs) for the PARSEC objects. While the SEDs contain less information than the spectra, the instrumental differences in spectral observations render the SEDs globally more homogeneous. To compare these to models we made use of the VO SED Analyzer (VOSA, Bayo et al. 2008), which provides χ² and Bayesian fitting to an array of models and templates. For these data we adopted χ² fitting to BT-Settl models (Allard et al. 2012) with the following limits: 700 < T_eff < 4000 K, 3.5 < log g < 5.5 and −1 < [Fe/H] < 0.5. We also limited the absorption in the V band to 0.001 and turned off the excess-fitting option, as the non-blackbody distribution of the spectra of these objects was causing the excess-fitting procedures to ignore the mid-IR magnitudes. As an example, in Fig. 5 we plot the VOSA fit of 0227-1624 to the BT-Settl model spectra. 0227-1624 is a 16th z-band magnitude L1/L0.5 object at 20 pc with an 8% error on the distance, for which we have magnitudes in 11 bands. The model slightly underestimates the near- and mid-infrared bands but follows the optical bands well. Reasons for this overluminosity are beyond the scope of this paper, as it would require a more in-depth study of the models and of the fitting in VOSA. The Gaia G observation is plotted in orange because we manually removed it from the fit. We find the nominal G passband tends to overestimate the flux of the L & T dwarfs, which can be seen in Fig. 5. This systematic excess is also reflected in the model (the blue points), as this uses the G passband for the transformation; however, we felt this known systematic error made using the Gaia magnitudes for the 38 objects in Gaia DR1 to constrain the fits premature. The problem of the G passband is noted in the Gaia documentation (gaia.esac.esa.int/documentation/), and there is an empirical correction proposed (Maíz Apellániz 2017). In Fig. 6 we plot the RAD1 radius from the VOSA fits to BT-Settl models against the near-infrared spectral types. RAD1 is the "scaling radius", i.e. the radius required to fit the observations to the model based on the parallactic distance. From the PARSEC sample we removed all objects which are known or suspected unresolved binary systems and known or candidate moving group objects (see Section 5.3), leaving 60 objects. We removed the moving group objects as these are in general young and we wanted to be sure our sample was dominated by objects older than 1 Gyr. In Fig. 6 the targets plotted as diamonds are literature objects and the PARSEC sample are plotted as asterisks. The radius of objects with ages greater than 1 Gyr in this spectral range is predicted to have a minimum at the spectral type that corresponds to the hydrogen burning limit (e.g. Chabrier et al. 2009; Burrows et al. 2011).
Objects with earlier spectral types than this limit are in hydrostatic equilibrium and the radius decreases with spectral type. Objects with later spectral types than this limit are degenerate and in this case the radius increases with spectral type. Hence, we expect to find a minimum RAD1 that corresponds to the spectral type of objects at the hydrogen burning limit. We experimented with a grid of trial spectral types, fitting RAD1 to the spectral type on either side of the trial value with simple straight-line fits. In each fit we weight a common point given by the median around the trial value to guide continuity. As an example, in Fig. 6 we have plotted the two-line fit of the observations assuming a minimum at L6. We then calculate the minimum χ² of the combined fits, which formally occurs if we choose the trial value between spectral types L3 and L4; however, there is no significant difference between L2 and L6, so the minimum is not well defined. In Dieterich et al. (2014), for a smaller, more refined sample, the position of the minimum is found at L2.5, which is at the early end of our window. Dieterich et al. (2014) use tailor-made SED fitting that is calibrated with objects whose radii have been measured from interferometric observations, so we expect this to be a more robust estimate. The larger sample we have included here covers a range of ages, metallicities and masses and, as discussed in Burrows et al. (2011), the position of the minimum is dependent on age and metallicity, so as this is a mixed population we do not expect to have a unique clear minimum.

Figure 6. Scaling radius (RAD1) in solar radii from VOSA BT-Settl model fitting vs near-infrared spectral types. The asterisks represent the 60 PARSEC objects and the diamonds the 132 literature objects that are not suspect binary or moving group members. The error bar in the top of the graph is the median error from VOSA for RAD1. The two lines represent two simple straight fits of RAD1 to spectral type on either side of L6. The black bar represents the range of solutions where the χ² minimum did not vary significantly, while the black circle is the minimum radius found by Dieterich et al. (2014). The sample and fitting procedure are discussed in Section 4.4.

In addition, even for a given age the minimum may not be a single distinct value; for example, in the case of halo UCDs there is a narrow mass range in which unsteady nuclear fusion occurs (Zhang et al. 2017). If this occurs even over a smaller range for younger UCDs, it would result in a spreading out of the minimum. The general trend of a steep dependence in the hydrogen burning regime and a flatter change in the degenerate regime predicted by Burrows et al. (2011) is, however, confirmed.

Solar motion and velocity ellipsoids

To calculate the UVW velocity components in the galactic reference frame, in addition to position, proper motion and parallax we also require radial velocities. We follow the general procedure of Johnson & Soderblom (1987), except that we use the transformation matrix from equatorial to galactic coordinates taken from the introduction to the HIPPARCOS catalogue (ESA 1997). For the PARSEC sample 20 objects have published radial velocities, and for the Full Sample 38.
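The UVW calculation just described is standard and can be sketched as follows, using the widely reproduced equatorial-to-galactic rotation matrix from the HIPPARCOS introduction. The function name and the example numbers are invented for illustration; this is not the paper's code.

```python
import numpy as np

K = 4.74057  # km/s per (mas/yr)/(mas): 1 AU/yr expressed in km/s

# Equatorial (J2000) to galactic rotation matrix (ESA 1997, HIPPARCOS intro).
T = np.array([[-0.0548755604, -0.8734370902, -0.4838350155],
              [ 0.4941094279, -0.4448296300,  0.7469822445],
              [-0.8676661490, -0.1980763734,  0.4559837762]])

def uvw(ra_deg, dec_deg, plx_mas, pmra_mas, pmdec_mas, rv_kms):
    """Heliocentric (U, V, W) in km/s following Johnson & Soderblom (1987),
    with pmra = mu_alpha * cos(delta); U is positive towards the
    Galactic Centre."""
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    # Unit vectors: line of sight and the two proper-motion directions.
    r_hat = np.array([np.cos(dec)*np.cos(ra), np.cos(dec)*np.sin(ra), np.sin(dec)])
    p_hat = np.array([-np.sin(ra), np.cos(ra), 0.0])
    q_hat = np.array([-np.sin(dec)*np.cos(ra), -np.sin(dec)*np.sin(ra), np.cos(dec)])
    v_eq = (rv_kms * r_hat
            + (K * pmra_mas / plx_mas) * p_hat
            + (K * pmdec_mas / plx_mas) * q_hat)
    return T @ v_eq

# Example with invented numbers: plx = 80 mas, pm = (300, -150) mas/yr, RV = 10 km/s.
print(uvw(150.0, -25.0, 80.0, 300.0, -150.0, 10.0))
```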
For the objects without radial velocities we estimate only the two velocity components least impacted by assuming a radial velocity of zero. To isolate those dwarfs that are part of the thin-disk population we use 3σ clipping in each element and remove 34 objects from the Full Sample of 356, which includes 16 from the PARSEC subset. This is a simple, efficient method for the removal of non-thin-disk and outlier objects. In Table 5 we report the means and standard deviations of the velocity components for the PARSEC, published and Full Samples after this cleaning. The velocities are all heliocentric, so the mean velocity vector indicates the anti-motion of the Sun relative to this sample. From Table 5 we would estimate the Solar motion compared to the Full Sample to be (U, V, W) = (5.4 ± 1.4, 12.6 ± 0.9, 7.6 ± 1.1) km/s, which can be compared directly to the Sun's velocity components inferred from larger stellar groups, e.g. Schönrich et al. (2010): (U, V, W) = (11.10 +0.69/−0.75, 12.24 ± 0.47, 7.25 +0.37/−0.36) km/s, and Francis & Anderson (2009): (U, V, W) = (7.5 ± 0.1, 13.5 ± 0.3, 6.8 ± 0.1) km/s. Considering the large uncertainties, the agreement in the direction of rotation (V) indicates that our sample, as a whole, is not moving very differently from the local standard of rest.

Age estimation

Following Wielen (1977, their equation 13), the mean age τ of a kinematic sample can be estimated from its total velocity dispersion via

σ_|W|,ν^3(τ) = σ_0^3 + (3/2) γ_ν τ,   (2)

where τ is the mean age, σ_0 = 10 km/s, γ_ν = 1.4 × 10^-5 (km/s)^3 yr^-1, and σ_|W|,ν is the total of the |W|-weighted velocity dispersion components (the σ_|W|,ν column in Table 5). These are calculated using the |W|-weighted dispersion estimators of Wielen (1977). The total |W|-weighted velocity dispersions are 35.4 km/s for the PARSEC sub-sample and 34.1 km/s for all objects, corresponding respectively to ages of 2.1 and 1.8 Gyr. Using equation 16 from Wielen, valid for ages > 3 Gyr, we also get an age of less than 3 Gyr, so equation 13 is more appropriate. These estimates of the age are younger than the ∼5.1 Gyr in Seifahrt et al. (2010) and the 3.4-3.8 Gyr from Burgasser et al. (2015) with similar samples and procedures, though both studies benefitted from having radial velocities for all their targets, which we do not have. Our estimates are however in agreement with the τ = 1.7 ± 0.3 Gyr found by Wang et al. (2018) using a similar sample and procedure, and also with Dupuy & Liu (2017), where the median age is 1.3 Gyr for L dwarfs from dynamical masses and luminosities combined with evolutionary models. The reason for this younger value may be, as before, a result of our sample cleaning or because we are dominated by brighter examples. The Gaia results should be complete to some limiting magnitude, so it will be interesting to see what they reveal, especially since the full Gaia dataset will significantly constrain kinematically based ages.

Moving group membership

We used the packages LocAting Constituent mEmbers In Nearby Groups (Riedel et al. 2017, hereafter LACEwING) and Bayesian Analysis for Nearby Young AssociatioNs Σ (Gagné et al. 2018; Malo et al. 2013; Gagné et al. 2014, hereafter BANYAN Σ) to assess membership of our sample to known moving groups, starting from the assumption that they are all field objects, i.e. that we have no spectral or colour evidence of youth. This assumption is a conservative starting point, as some objects are known to have signs of youth and are often known moving group candidates, as indicated by MG in column 6 of Table 1. Since we start from a conservative position, our moving group candidate indication should be more robust and homogeneous.
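Returning briefly to the age estimation above, the dispersion-age relation is trivial to invert. The (3/2) γ_ν normalisation written here is inferred from the quoted dispersions and ages (it reproduces them to within rounding), so treat the exact form as approximate.

```python
def wielen_age_gyr(sigma_kms, sigma0_kms=10.0, gamma=1.4e-5):
    """Invert the Wielen (1977) disc-heating relation,
        sigma^3(tau) = sigma0^3 + (3/2) * gamma * tau,
    for the mean age tau (Gyr), given the total |W|-weighted velocity
    dispersion sigma (km/s); gamma is in (km/s)^3 per yr."""
    tau_yr = (sigma_kms**3 - sigma0_kms**3) / (1.5 * gamma)
    return tau_yr / 1e9

print(wielen_age_gyr(35.4))  # PARSEC sub-sample -> ~2.1 Gyr
print(wielen_age_gyr(34.1))  # Full Sample       -> ~1.8 Gyr
```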
The calculation of probability differs between LACEwING and BANYAN Σ: in the first case the probabilities are considered independent, while in the second case the probabilities are required to sum to 100%, i.e. the object is either in one of the included moving groups or it is a field object. This generally leads to LACEwING having
Spin-orbital order in LaMnO$_3$: $d-p$ model study Using the multiband $d-p$ model and unrestricted Hartree-Fock approximation we investigate the electronic structure and spin-orbital order in the three-dimensional MnO$_3$ lattice such as realized in LaMnO$_3$. The orbital order is induced and stabilized by a particular checkerboard pattern of oxygen distortions arising from the Jahn-Teller effect in the presence of strong Coulomb interactions on $e_g$ orbitals of Mn ions. We show that the spin-orbital order can be modeled using a simple \textit{Ansatz} for local crystal fields alternating between two sublattices of Mn ions, which have non-equivalent neighboring oxygen distortions in $ab$ planes. The simple and computationally very inexpensive $d-p$ model reproduces correctly the nontrivial spin-orbital order observed in undoped LaMnO$_3$. Orbital order is very robust and is reduced by $\sim 3$ \% for large self-doping in the metallic regime.

I. INTRODUCTION

The spin and orbital ordering found in the three-dimensional LaMnO3 perovskite is an old problem which is nowadays quite well understood [1,2]; see also the early and recent experimental and theoretical references. The short summary and conclusions coming out of these papers are as follows: At zero temperature the LaMnO3 lattice has orthorhombic symmetry. The lattice is distorted due to the strong Jahn-Teller (JT) effect: the MnO6 octahedra are deformed, as shown schematically, using the simplified picture for the ab ferromagnetic (FM) plane (nonzero tilting of the octahedra is neglected in this study), in Fig. 1. The magnetic moments on the Mn ions correspond to spins S = 2 and the magnetic structure is of the A-type antiferromagnetic (A-AF) kind, i.e., FM order on separate ab planes, coupled antiferromagnetically plane-to-plane along the crystallographic c axis. The electron occupations of the manganese t2g orbitals {xy, yz, zx} are very close to 1, while both eg orbitals, {x²−y², 3z²−r²}, together contain roughly one (the fourth) electron. The orbital order which settles in the eg orbitals is not seen when using the standard {x²−y², 3z²−r²} basis (which corresponds to choosing the z quantization axis direction). However, if we consider a checkerboard pattern superimposed upon the ab plane and choose the x axis of quantization on "black" fields (MnO4 rhombuses m = 2 expanded along the x axis, see Fig. 1), and the y axis of quantization on "white" fields (MnO4 rhombuses m = 1 expanded along the y axis), then the orbital order becomes transparent: the orbitals with majority electron occupations follow the 3x²−r² / 3y²−r² pattern. Oxygen distortions repeat along the c axis, and the eg orbitals follow. The common terminology to describe this type of order is C-type alternating orbital (C-AO) order.

The origin of the orbital order was debated using effective 3d models taking into account electron-electron and electron-lattice interactions [17,27-30]. It was established that large JT splitting and superexchange are together responsible for the C-AO order observed in LaMnO3 [28-30]. Surprisingly, the electron-lattice interaction is too weak to induce the orbital order alone and has to be helped by electron-electron superexchange [17]. The importance of the joint effect of strong correlations and Jahn-Teller distortions was concluded from dynamical mean field theory [18]. On the other hand, superexchange alone would stabilize C-AO order, but the order would be fragile and the orbital ordering temperature would be too low [29,31].
The large S = 2 spins at Mn3+ ions are coupled by spin-orbital superexchange, which stabilizes A-AF spin order in LaMnO3 below the Néel temperature T_N = 140 K. However, the spin order parameter could be slightly reduced if the ideal ionic-model approximation does not strictly apply and the actual number of electrons transferred from La to the MnO3 subsystem is reduced to 3 − x, i.e., one has La^{+(3−x)}(MnO3)^{−3+x}, where x is the so-called self-doping. The idea of self-doping comes directly from ab-initio computations and may be addressed when d−p hybridization is explicitly included in the d−p model. There, it is well known that the charges on cations, computed using simple Mulliken population analysis (or, better, Bader population analysis), are almost never the same as in the idealized ionic model. Therefore, it is quite reasonable to introduce into the d−p model the idea that each La^{+(3−x)} donates onto the MnO3 lattice on average not 3 but 3 − x electrons. The self-doping can be regarded as a free parameter which could be adjusted using some additional experimental data. On the other hand, one can compute x using ab-initio or density functional theory (DFT) with local Coulomb interaction U (DFT+U) cluster computations, exactly as was done in Ref. [34]. This is, however, very costly, and it contradicts one of the most important virtues of the d−p model, i.e., the time and cost efficiency of the computations.

FIG. 1. Schematic view of the JT distortions used for the Hartree-Fock (HF) computations. A single ab plane of the low-temperature phase of LaMnO3 is presented. Red/blue dots denote positions of manganese/oxygen ions. The long bars denote the energy-privileged (due to local crystal fields) 3y²−r² orbitals (at Mn1 ions) or 3x²−r² orbitals (at Mn2 ions); their cooperative arrangement corresponds to the orbital order which supports FM spin order (spins are not shown). The numbers close to the manganese positions identify different ions (see the corresponding entries in Table II) as belonging to 3y²−r² (m = 1) or 3x²−r² (m = 2) orbitals with lower energy. Note that the horizontal and vertical directions in the figure correspond to the x and y axes, respectively, which are at a 45° angle to the crystallographic {a, b} axes. The orbital order is repeated in consecutive layers along the z (crystallographic c) axis.

Returning now to x in LaMnO3, we note that if x is finite but still small enough, other magnetic phases could be more stable; in particular, ordinary FM order is found frequently. Such a situation was studied experimentally in Sr_xLa_{1−x}MnO3 [12]; note that here the subscript x in the chemical formula Sr_xLa_{1−x}MnO3, which is routinely identified as ordinary doping, is only roughly similar to our self-doping x. When we increase the ordinary doping x from very small towards intermediate values, the metallic regime sets in for Sr_xLa_{1−x}MnO3 [2].

The purpose of this short paper is to investigate whether the spin-orbital order observed in LaMnO3 would arise in the multiband d−p model with the explicit treatment of oxygen 2p electrons. Here we focus on simple modeling of the JT effect, where oxygen distortions are treated as semiempirical input for the electronic d−p model. To make the model realistic, we also include in it non-zero Coulomb repulsion on oxygen ions and spin-orbit interaction on Mn ions. Note that the model studies of LaMnO3 done before usually neglected these Hamiltonian components.
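Since the self-doping x enters the model only through the electron count, the bookkeeping is simple. A short sketch, with values taken from the cluster setup described later in the paper (N_e = 22 − x electrons per MnO3 unit and a 4 × 4 × 4 cluster of 64 units):

```python
N_UNITS = 64                       # 4 x 4 x 4 cluster of MnO3 units
for x in (0, 1 / 16, 1 / 8, 3 / 16):
    n_el = N_UNITS * (22 - x)      # total d-p electrons in the cluster
    print(f"x = {x:6.4f} -> N_el = {n_el:g}")
# x = 0 gives 1408 electrons; x = 1/16 removes exactly 4 electrons, the
# first value compatible with a closed-shell (even, integer) configuration.
```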
In addition, we would like to account for the possibility of non-trivial self-doping x = 0 (just as found before in ruthenium, iridium, titanium, and vanadium oxides [32][33][34][35]). The paper is organized as follows. We introduce the model Hamiltonian and its parameters in Sec. II. The numerical method is presented in Sec. III, where we treat the JT effect in a semi-empirical way (Sec. III A), and present the unrestricted Hartree-Fock approximation in Sec. III B as a method of choice to describe the states with broken spin-orbital symmetries. It was shown recently that this approach gives very reliable results for doped vanadium perovskites [36]. The magnetization direction is there one of the open questions and we discuss possible states in Sec. III C. The ground state of LaMnO 3 is described in Secs. IV A, IV B, and IV C for three selfdoping levels, x = 0, x = 1/16, and x = 1/8. Finally, we also remark on the consequences of neglected e g and t 2g splittings in Sec. IV D. In Sec. V we present the dependence of orbital order parameter on the self-doping level and conclude that the order is robust. A short summary and conclusions are presented in Sec. VI. II. HAMILTONIAN We introduce the multiband d − p Hamiltonian for MnO 3 three-dimensional 4 × 4 × 4 (periodic boundary conditions) cluster which includes five 3d orbitals at each manganese ion and three 2p orbitals at each oxygen ion, where H dp stands for the d − p hybridization and H pp for the interoxygen p − p hopping, H so is the spin-orbit coupling, H diag is the diagonal part of kinetic energy (bare levels energies and local crystal fields). Here H d int and H p int stand for the intraatomic Coulomb interactions at Mn and O ions, respectively. Optionally one could add JT energy as H JT . However, instead of introducing this term with a quite complicated form [37], we shall model the JT interaction by a simple Ansatz which can be inserted into H diag as local potentials acting on e g electrons. The cluster geometry and precise forms of different terms are standard; these terms were introduced in the previous realizations of the d − p model devoted to transition metal oxides [32][33][34][35]. The kinetic energy in the Hamiltonian (1) consists of: where d † mα,σ (p † jν,σ ) is the creation operator of an electron at manganese site m (oxygen site i) in an orbital α (ν) with up or down spin, σ =↑, ↓. The model includes all five 3d orbital states α ∈ {xy, yz, zx, x 2 − y 2 , 3z 2 − r 2 }, and three 2p oxygen orbital states, {µ, ν} ∈ {p x , p y , p z }. In the following we will use shorthand notation, and instead of {x 2 − y 2 , 3z 2 − r 2 } we shall write {(z), (z)} -this emphasizes the fact that z axis is chosen as the quantization axis for this e g orbital basis, while () brackets are here to distinguish these Mn(3d) orbitals from O(2p) {x, y, z} orbitals. The matrices {t mα;jν } and {t iµ;jν } are assumed to be non-zero only for nearest neighbor manganese-oxygen d − p pairs, and for nearest neighbor oxygen-oxygen p − p pairs. The next nearest neighbor hoppings are neglected. (The nonzero {t mα;jν } and {t iµ;jν } elements are listed in the Appendix of Ref. [32]). The spin-orbit part, H so = ζ i L i ·S i , is a one-particle operator (scalar product of angular momentum and spin operators at site i), and it can be represented in the form similar to the kinetic energy H kin [38][39][40][41], with t so α,σ;β,σ elements restricted to single manganese sites. 
They all depend on the spin-orbit coupling strength ζ (ζ ≈ 0.04 eV [42]), which is weak but still influences the preferred spin direction in the A-AF phase. For detailed formulas and tables listing the {t^so_{α,σ;β,σ′}} elements, see Refs. [32,39].

The diagonal part H_diag depends on electron number operators. It takes into account the effects of local crystal fields and the difference of reference orbital energies (here we employ the electron notation) between d and p orbitals; for the bare orbital energies the charge-transfer energy is

Δ = ε_d − ε_p,   (5)

where ε_d is the average energy of all 3d orbitals, i.e., the reference energy before they split in the crystal field. We fix this reference energy for the d orbitals to zero, ε_d = 0, and use only Δ = −ε_p and the crystal-field splittings {f^cr_{α,σ}} as parameters; thus we write

H_diag = ε_p Σ_{i,ν,σ} p†_{iν,σ} p_{iν,σ} + Σ_{m,α,σ} f^cr_{α,σ} d†_{mα,σ} d_{mα,σ}.   (6)

The first sum is restricted to oxygen sites {i}, while the second one runs over manganese sites {m}. The crystal-field splitting strength vector (f^cr_{α,σ}) describes the splitting within the t2g and within the eg levels, as well as the t2g to eg splitting. Furthermore, the distance 10Dq between the t2g and eg levels is ∼1.7 eV [43]. For the group of t2g levels and their splittings, in accordance with the local JT distortion of a particular MnO6 octahedron, we assume that either the yz orbital is lower than the zx orbital, which should correspond to the O4 square (in the ab plane) distorted from an ideal square into a rhombus elongated along the y direction, or the opposite: zx is lower than yz, which should correspond to O4 distorted into a rhombus elongated along the x direction (compare Fig. 2).

FIG. 2. Artist's view of the bare 3d levels (no interactions) at Mn_m (m = 1, 2) ions, split by the local crystal fields originating from JT distortions, and of the 2p oxygen levels in the d−p model. (a) Left panel: the yz orbital is lower than the zx orbital, which corresponds to the O4 square distorted from an ideal square into a rhombus elongated along the y direction (see Mn1 ions in Fig. 1); the eg levels are also split, 3y²−r² being lower than x²−z² (here the y-axis is chosen as the quantization axis). (b) Right panel: the zx orbital is lower than the yz orbital, which corresponds to the O4 square distorted into a rhombus elongated along the x direction (see Mn2 ions in Fig. 1); the eg levels are also split, 3x²−r² being lower than y²−z² (here the x-axis is chosen as the quantization axis). The average distance between the t2g and eg levels (10Dq) is ∼1.7 eV [43], and Δ is the oxygen-to-manganese charge-transfer energy.

The splitting value should be ∼0.1 eV, which is an educated guess (compare Ref. [35]), or even smaller [16]. As for the occupied eg levels, they are also split. We assume varying (ion-to-ion) local crystal fields and choose appropriately the local axes of quantization, following the Mn sublattices shown in Fig. 1.

Returning now to the rhombuses expanded along the x axis: when we work with the D†_{(x)σ} and D†_{(x̄)σ} operators (x quantization axis, with (x) ≡ 3x²−r², (x̄) ≡ y²−z², (z) ≡ 3z²−r², and (z̄) ≡ x²−y²), the corresponding canonical transformation is (up to an overall phase convention)

D†_{(x)σ} = −(1/2) d†_{(z)σ} + (√3/2) d†_{(z̄)σ},
D†_{(x̄)σ} = −(√3/2) d†_{(z)σ} − (1/2) d†_{(z̄)σ},   (8)

where the d operators are the standard ones (i.e., for the z quantization axis). We can also compute the operators of particle numbers, which are

n_{m,(x),σ} = (1/4) n_{m,(z),σ} + (3/4) n_{m,(z̄),σ} − (√3/4)(d†_{m(z)σ} d_{m(z̄)σ} + H.c.),
n_{m,(x̄),σ} = (3/4) n_{m,(z),σ} + (1/4) n_{m,(z̄),σ} + (√3/4)(d†_{m(z)σ} d_{m(z̄)σ} + H.c.).   (9)

Note that the t2g orbitals remain the same as before, i.e., they are not transformed. Looking at the formulae just above, we immediately see that while in H_diag the part with the local crystal field is formally expressed using f^cr_{μ,σ} D†_{m,μ,σ} D_{m,μ,σ}, in fact, thanks to Eqs. (8) and (9), we can still work with the old standard d operators (z quantization axis).
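The practical content of Eqs. (8) and (9) is a 2 × 2 rotation of the e_g sector. A minimal numerical sketch follows; the sign convention matches the transformation as written above, and a different phase choice would only flip the cross terms.

```python
import numpy as np

SQ3 = np.sqrt(3.0)
# Rows: new orbitals in the standard e_g basis {3z^2-r^2, x^2-y^2}.
R_X = np.array([[-0.5,      SQ3 / 2],    # 3x^2-r^2
                [-SQ3 / 2, -0.5    ]])   # y^2-z^2

def occupations(rho, R):
    """Diagonal occupations in a rotated basis; rho[a, b] = <d_a^dag d_b>."""
    return np.real(np.diag(R @ rho @ R.T))

# One electron placed entirely in 3x^2-r^2, written in the standard basis:
rho = np.outer(R_X[0], R_X[0])
print(np.diag(rho))            # [0.25, 0.75] -> order hidden in the z basis
print(occupations(rho, R_X))   # [1.0, 0.0]  -> fully visible in the x basis
```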
Returning to the rhombuses expanded along the y axis, we should work with the D̃†_{(y)σ} and D̃†_{(ȳ)σ} operators (for the y quantization axis, with (y) ≡ 3y²−r² and (ȳ) ≡ x²−z²), and the corresponding formulae are

D̃†_{(y)σ} = −(1/2) d†_{(z)σ} − (√3/2) d†_{(z̄)σ},
D̃†_{(ȳ)σ} = −(√3/2) d†_{(z)σ} + (1/2) d†_{(z̄)σ}.   (10)

The particle number operators are

n_{m,(y),σ} = (1/4) n_{m,(z),σ} + (3/4) n_{m,(z̄),σ} + (√3/4)(d†_{m(z)σ} d_{m(z̄)σ} + H.c.),
n_{m,(ȳ),σ} = (3/4) n_{m,(z),σ} + (1/4) n_{m,(z̄),σ} − (√3/4)(d†_{m(z)σ} d_{m(z̄)σ} + H.c.).   (11)

To make a short summary: the subsequent HF computations are performed in the framework of the standard basis (i.e., the old basis with the z quantization axis) and, after reaching self-consistency, we extract the occupation numbers using the formulae from Eqs. (8) and (9); to give an example, the electron occupation of the 3x²−r² orbitals is just n_{m,3x²−r²} = Σ_σ ⟨n_{m,(x),σ}⟩.

The on-site Coulomb interactions H^d_int for the d orbitals take the form of a degenerate Hubbard model [44], with the intraorbital repulsion U_d and the tensor J^d_{αβ} of on-site interorbital exchange (Hund's) elements for the d orbitals, where n_{mα} = Σ_σ n_{mα,σ} is the electron density operator in orbital α and {α, β} enumerate different d orbitals. The tensor J^d_{αβ} has different entries for the {α, β} pairs corresponding to two t2g orbitals (J^t_H) and for a pair of two eg orbitals (J^e_H), and still different ones for the case of cross-symmetry terms [31,45]; all these elements are included and we assume the Racah parameters B = 0.1 eV and C = 4B. The local Coulomb interactions H^p_int at the oxygen sites (for the 2p orbitals) are analogous; there the intraatomic Coulomb repulsion is denoted as U_p and all off-diagonal elements of the tensor J^p_{μν} are equal (as they connect orbitals of the same symmetry), i.e., J^p_{μν} ≡ J^p_H. Up to now, the interactions H^p_int at oxygen ions have been neglected in the majority of studies (i.e., for simplicity it was assumed that U_p = J^p_H = 0).

In the following we use the parameters U_d, J^d_{μν}, U_p, and J^p_H similar to those used before for titanium and vanadium oxides [33-35]; for the hopping integrals we follow the work by Mizokawa and Fujimori [6,38]. The value U_p ∼ 4.0 eV was previously used in copper oxides [46,47]. Concerning the parameter Δ, an educated guess is necessary. Old-fashioned computations, such as those reported in the classical textbook of Harrison [48] and shown in the tables therein, suggest 1.5 eV. Results of a more detailed study suggest that Δ < 2.5 eV [49]. We also use the Slater notation [50]. We performed our computations for the parameter set in Table I.

Our reference system is LaMnO3, where the total electron number in the d−p subsystem is N_e = 18 + 4 = 22 per one MnO3 unit, provided we assume an ideal ionic model with no self-doping (x = 0), i.e., all three La valence electrons are transferred to the MnO3 unit. Another possibility is to take finite but small self-doping x. How to fix x is a difficult question. The best way is to perform independent, auxiliary ab-initio or local density approximation (LDA) with Coulomb interaction U (LDA+U) computations and to extract the electronic population on the cation R (in RMnO3), analogously to what was done in Ref. [34]. This is, however, quite expensive. Without such auxiliary ab-initio computations one is left either with speculations, or one should perform computations using several different values of x.

III. NUMERICAL STUDIES

A. Computational problems concerning the Jahn-Teller Hamiltonian

Now let us return once more to an important part of the electronic Hamiltonian in perovskites, namely the influence of JT distortions on the electronic structure. These can rarely be treated in a direct (and exact) way during the computations.
Most often a semiempirical treatment of the JT terms is used: namely, one assumes an explicit form and the magnitudes of the lattice distortions, such as suggested by experiments. Thus, the distorted lattice is frozen and we take this as the experimental fact. The JT modes and the JT Hamiltonian do not enter the computations anymore; their only role is to deform the lattice and to change the Mn−O distances. Instead, one collects all Mn−O and O−O bond lengths (as suggested by experiment) and, because of the modified bond lengths, one modifies the matrix of kinetic hopping parameters. In this respect quite popular is the Harrison scaling [48]. We have used it in the present study. The second important consequence of the changed Mn−O distances is the creation of additional local crystal fields (in addition to the standard crystal field which is responsible for the t2g to eg splitting). These additional local crystal fields renormalize the bare energy levels within the eg doublets and within the t2g triplets. (Note that this picture is valid at the level of the static one-particle effective-potential approximation, i.e., it is similar to the crystal-field cubic potential approximation which gives rise to the standard 10Dq splitting.)

TABLE I. Parameter set used in the computations; the hopping parameters follow [6,38] and include oxygen-oxygen hopping elements in H_pp, given by (ppσ) = 0.6 and (ppπ) = −0.15 eV (here we use the Slater notation [50]). The charge-transfer energy Δ (5) is defined for bare levels. The magnitude of the splitting within the t2g and eg levels is arbitrarily taken as 0.1 eV and 0.2 eV.

Thus, as the second part of modeling the JT effect, what we do is: the eg doublets and t2g triplets are split as already discussed (in the previous Section) for H_diag and f^cr_{μ,σ}.

B. Unrestricted Hartree-Fock computations

We use the unrestricted HF approximation (UHF) (with a single-determinant wave function) to investigate the d−p model (1). The technical implementation is the same as that described in Refs. [6,32,33,38], featuring the averages ⟨d†_{mα,↑}d_{mα,↑}⟩ and ⟨p†_{iμ,↑}p_{iμ,↑}⟩ (in the HF Hamiltonian), which are treated as order parameters. We use the 4 × 4 × 4 clusters, which are sufficient for the present d−p model with only nearest-neighbor hopping terms. During the HF iterations the order parameters are recalculated self-consistently until convergence. The studied scenarios for the ground-state symmetry were those with spin order FM, A-AF, G-AF (Néel state), or C-AF (AF in the ab plane, repeated in the consecutive ab planes when moving along the c axis), or a non-magnetic state; the considered easy magnetization direction was either x or z.

To improve the HF convergence we used the quantum-chemistry technique called level shifting [51]. It is based on replacing the true HF Hamiltonian by a different Hamiltonian, one with eigenvectors (one-particle eigenfunctions) identical to those of the original Hamiltonian and with identical occupied eigenenergies. The original eigenenergies of the virtual states are, however, uniformly shifted upwards by a fixed constant value. When applying virtual level shifting we can obtain some additional information. Namely, when the splitting between the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO), i.e., the HOMO-LUMO splitting (after correcting for the shift), is negative or zero, then the single-determinant HF ground state we obtained is not correct, and usually the reason is that the true ground state is in fact conducting. We remind the reader that the HOMO-LUMO gap serves here as an estimate of the experimental band gap.
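As a concrete illustration of the Harrison scaling step, the snippet below rescales a hopping integral with bond length using Harrison's canonical exponents (d^(-7/2) for p−d bonds, d^(-2) for p−p bonds); the reference hopping value is illustrative, not a parameter of this paper.

```python
def scaled_hopping(t0, d0, d, kind="pd"):
    """Harrison scaling: t(d) = t0 * (d0/d)^alpha, alpha = 7/2 (pd) or 2 (pp)."""
    alpha = 3.5 if kind == "pd" else 2.0
    return t0 * (d0 / d) ** alpha

# A Jahn-Teller long bond stretched by 5% weakens a pd hopping by ~16%:
print(scaled_hopping(1.0, 1.0, 1.05, "pd"))  # ~0.843
```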
C. Searching for the magnetization direction

We performed computations for several values of the self-doping x. They give, for the eg orbitals, orbital order of the C-type alternating orbital (C-AO) kind in the regime of low self-doping x < 1/8 (see Table II). At the same time, the spin order is A-AF, with the easy axis of magnetization along the x direction. The preferred spin direction is however not generic, as the ground states with the z easy axis and with the x easy axis are almost degenerate (within an accuracy below 1 meV). The average spin values on the Mn ions are very close to S = 2, and the HOMO-LUMO gaps G were also computed, see Table II.

Finally, we remark on the magnetic state obtained in the HF calculations: The up- and down-spin occupations are equal, i.e., ⟨d†_{mα,↑}d_{mα,↑}⟩ = ⟨d†_{mα,↓}d_{mα,↓}⟩, thus the average z-component of the spin vanishes. However, this does not imply that the found ground states are nonmagnetic. We have found that the symmetry breaking with magnetization along the x or y axis is equivalent, and the averages of the type ⟨d†_{mα,↑}d_{mα,↓}⟩ are finite (not shown, but it is always the case for the data in Section IV). When the summation over α is performed, i.e., if we calculate Re Σ_α ⟨d†_{mα,↑}d_{mα,↓}⟩, we obtain the average spin component along the x direction, |⟨S_x⟩|. This provides evidence that the spins are indeed aligned along the x axis, and we give the average magnetization |⟨S_x⟩| in Sec. V. The imaginary part of the same sum (if finite) corresponds to the average spin component along the y direction.

IV. GROUND STATE OF LaMnO3

A. Zero self-doping x = 0

The ground state of LaMnO3 has C-AO orbital order, and this is reproduced by the UHF calculations, see Table II, densities given in boldface. Note that when using only the standard orbital basis (i.e., the orbitals corresponding to the z quantization axis), the orbital order is completely hidden. To get more insightful results and to describe the orbital order induced by the lattice distortions, we considered all the types of O4 rhombuses. It is important to realize that the orbital order may be easily detected only for properly selected orbitals, depending on the sublattice. First, if the standard basis {3z²−r², x²−y²} is used, no trace of any orbital order is seen, see Table II. For the other two possible eg bases, {3x²−r², y²−z²} and {3y²−r², x²−z²}, one finds that the directional orbital has a large electron density only on one sublattice, but on the other sublattice this is not the case. In other words, if one selects the x quantization axis, the orbital order is easily visible through a distinct asymmetry between the Mn ions at this and at the other sublattice, i.e., in positions m = 1 and m = 2, see Fig. 1. Thus, the found asymmetry in the density distribution indicates that the order is of the checkerboard type. Indeed, the checkerboard pattern of oxygen distortions requires choosing different local bases at the two sublattices: on one with the x quantization axis, and with the y quantization axis on the other. Then the orbital order is clearly visible and the symmetry is correctly recovered, see the electron densities listed in boldface in Table II.

The spin order coexisting with the C-AO order is A-AF, with the easy axis of magnetization along the x (y) direction, see Table II. Note that the state with the same characteristics but with the easy magnetization axis along the z direction (not shown in Table II) is only 0.3 meV higher (thus these two states are almost degenerate).

TABLE II. Spin-orbital order and electron densities n_{mα} obtained on the non-equivalent Mn ions for the HF ground state (at zero temperature), as obtained for orthorhombic LaMnO3 at self-doping x = 0 and x = 0.0625. The index m = 1, 2 denotes a Mn site of a given sublattice, as shown in Fig. 1. Numbers in bold indicate the most appropriate quantization direction, i.e., the best local orbital basis for the description of the orbital order at a given sublattice. The HF calculations are summarized by the HF energy per MnO3 unit cell, E_HF, the HOMO-LUMO gap G, and the average magnetization value.

The other HF states with C-AO order and with ordinary FM spin order are 2 meV higher, while the states with G-AF or C-AF spin order are 3.5 meV higher than the ground state. A nonmagnetic state is never realized. With these results for the electron distribution, one could say that the experimental facts are faithfully reproduced. However, it is not so, as the HOMO-LUMO gap G we obtained is 4.76 eV, much larger than the experimentally measured one; the experimental data concerning the band gap are in the range 1.1−1.7 eV (direct gap) [3,4,9] and 0.24 eV [5] (indirect gap). In fact, the HOMO-LUMO gap should correspond either to the direct or to the indirect gap, whichever is smaller. For both possibilities this discrepancy is by far too large, and it invites one to reject thinking in terms of the ideal ionic model and to consider instead non-zero self-dopings x. Note that our cluster is rather small, thus we cannot (in the following) consider and study arbitrary values of the self-doping x, as this would in some cases result in a non-integer electron number (in the cluster), and in some other cases would result in an open-shell system (and such systems cannot represent the ground state of an infinite crystal).

The density distribution found without self-doping (at x = 0) suggests that there is some, but rather small, electron transfer from the O(2p) to the Mn(3d) orbitals, see Table II. Indeed, we have verified that the electron density in all oxygen p orbitals is very close to 6.0, thus we deal with an almost "perfect" ionic system with O2− ions.

B. Weak self-doping x = 1/16

The first possible closed-shell configuration is obtained by subtracting only 4 electrons from the total electron number in the 4 × 4 × 4 cluster, which is N_el = 1408 electrons at x = 0. In this case we obtain orbital order and spin order virtually the same as in the x = 0 case (compare the density distributions for x = 0 and x = 0.0625 in Table II). However, the total electron density in the eg orbitals is reduced by the self-doping, see Table II. Most importantly, the HOMO-LUMO gap G becomes much reduced, to 0.23 eV, in satisfactory (though probably incidental) agreement with the experimental results [5]. Once again, one finds the ground state with the x quantization axis for the magnetization, and the complementary A-AF phase with the z easy spin direction is 1.5 meV higher (per one unit cell). Other magnetic states are less favorable. The state with FM order is only 2.0 meV higher, and the states with G-AF and C-AF spin order are both ∼26 meV higher than the ground state. This result is important and reflects the proximity of the FM order in doped systems, which can be stabilized at still higher doping x ∼ 0.17, as known from the phase diagram [56].
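The magnetization extraction described in Sec. III C reduces to summing the spin-off-diagonal averages over orbitals. A sketch with hypothetical converged values on one Mn site:

```python
import numpy as np

def spin_xy(offdiag):
    """<S_x>, <S_y> from <d^dag_{alpha,up} d_{alpha,down}> over orbitals."""
    s = np.sum(np.asarray(offdiag, dtype=complex))
    return s.real, s.imag

# Illustrative HF averages for the five 3d orbitals on a Mn site:
print(spin_xy([0.49, 0.49, 0.49, 0.48, 0.05]))  # ~(2.0, 0.0): S ~ 2 along x
```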
In fact, in the A-AF ground state the exchange interactions in the ab planes are FM, and the described change of spin order involves just the change of sign of the exchange interaction along the c axis, from AF to FM.

C. Moderate self-doping: x = 1/8

Computations for x ≥ 1/8 invariably produce states with zero (or negative) HOMO-LUMO gaps. This can serve as an indication that the FM metallic regime sets in already at this self-doping level in the cluster under consideration. Note that the experimental results for La_{1−x}Sr_xMnO3 systems indicate that such systems are conducting and FM for doping x > 0.2 [56]. If we roughly identify our theoretical value of the self-doping x = 1/8 ≈ 0.12 with the doping by Sr, we approach the metallic regime, even though this value of x does not yet coincide with the experimental doping in metallic FM manganites (x ∼ 0.17) [11,12]. We remark that such a discrepancy can be expected for a small (not infinite) cluster and for a simple non-ab-initio d−p model.

D. Neglecting the splittings within the eg and t2g states

As already discussed, the modeling of the JT Hamiltonian goes in two separate steps: (i) changing the bare energies of the individual orbitals, and (ii) performing the Harrison [48] scaling of the hopping integrals due to the modified Mn−O bond lengths [26], i.e., changing simultaneously the static crystal-field potential and the kinetic part. To the best of our knowledge, only the second step (ii) is discussed in the literature. Therefore (to conform to the mainstream) we performed auxiliary computations putting the bare level splittings within the eg and t2g multiplets to zero, but performing the Harrison scaling to adjust the values of the d−p hopping elements to the actual bond lengths. It could appear surprising, but the results concerning the ground states did not change much. The magnetic order and the orbital order persist, albeit the orbital order is somewhat weaker. Thus it seems that the change which the JT effect brings upon the kinetic part of the Hamiltonian (the hopping elements) is the dominant one, or at least that it suffices for a satisfactory modeling of the JT effect.

V. ROBUSTNESS OF ORBITAL ORDER

Finally, we address the question of the stability of the orbital order under self-doping. In doped manganites the orbital order persists at low doping, up to x ≲ 0.1 [2]; at higher doping an orbital liquid [52] takes over, which supports the FM metallic phase. A remarkable feature of the perovskite vanadates is that the orbital order is quite robust [53,54] and is destroyed only at a high concentration of charged defects, x ≳ 0.20, by orbital polarization interactions which frustrate the orbital order [55]. To investigate the orbital order and its dependence on the self-doping x, we take N_e = 64 × (22 − x) electrons for the (4 × 4 × 4) cluster. Having in mind the charge-distribution anisotropy in the distorted rhombuses in the ab planes, we define the orbital order parameter η_{y/x} for the observed C-AO order as follows,

2η_{y/x} = (n_{1,3y²−r²} − n_{1,x²−z²})/(n_{1,3y²−r²} + n_{1,x²−z²}) + (n_{2,3x²−r²} − n_{2,y²−z²})/(n_{2,3x²−r²} + n_{2,y²−z²}),   (16)

where the electron densities appropriate to the y/x quantization directions [as applied to the (m = 1)/(m = 2) rhombuses; see Fig. 1] are used. Once again we stress that when only one z quantization direction (on all rhombuses) is used, the C-AO orbital order is not visible at all.
To make this picture complete we can define an ordinary order parameter η_z as well,

2η_z = (n_{1,3z²−r²} − n_{1,x²−y²})/(n_{1,3z²−r²} + n_{1,x²−y²}) + (n_{2,3z²−r²} − n_{2,x²−y²})/(n_{2,3z²−r²} + n_{2,x²−y²}),   (17)

which only shows the difference between the electron densities n_{m,x²−y²} and n_{m,3z²−r²} (influenced by the z-direction tetragonal distortion); this difference is independent of m, i.e., it is the same for the (m = 1)-type and the (m = 2)-type rhombuses. We remark that sometimes one finds a tiny site-to-site charge modulation, which is possibly an artifact of imperfect convergence; we neglect it, and to remove it from Eq. (17) we average η_z over the m = 1 and m = 2 rhombuses.

The C-AO order parameter (16) versus hypothetical values of x (self-dopings) is shown in Table III. It is very robust and almost independent of the self-doping x up to the rather high value x = 3/16. It is remarkable that the C-AO order survives in the metallic regime at x = 2/16 and x = 3/16. On the contrary, the parameter η_z (17) is inconclusive concerning the C-AO order. Instead, it shows that the electron density in the x²−y² orbitals is higher than that in 3z²−r², in agreement with the model including tetragonal distortions [26].

VI. SUMMARY AND CONCLUSIONS

We have shown that the d−p model with strong electron interactions correctly reproduces the spin-orbital order in LaMnO3, provided the electronic configuration of the Mn ions is very close to Mn3+ and the oxygen distortions due to the Jahn-Teller effect are included. This also implies selecting the orbital basis which is the most appropriate to describe the orbital-ordered state stabilized by the oxygen distortions. Thereby the occupied eg orbitals follow the oxygen distortions in the ab planes, and one finds A-AF / C-AO order, as observed [1,56]. We have shown that the self-doping in LaMnO3 is small but finite and is in fact necessary to reproduce the observed insulating behavior with a small gap. This result emphasizes the importance of electronic charge delocalization over the O(2p) orbitals in the d−p model for the charge-transfer insulator LaMnO3.

This study completes the series of papers [32-35], where we have shown that the multiband d−p model is capable of reproducing coexisting spin-orbital order in various situations and in various perovskites. In contrast to time-consuming cluster ab-initio or LDA+U calculations [17-21], the computations using the d−p model are very efficient and should be regarded as an easy and simple tool for any preliminary study aimed at establishing the electronic structure and ground-state properties of an investigated perovskite. In such calculations the only difficult part is the proper choice of the Hamiltonian parameters. We suggest that this approach could be a promising technique to investigate heterostructures [57,58] or superlattices [59,60], where the Jahn-Teller effect plays an important role. On the other hand, when the information about the correct values of the Hamiltonian parameters is uncertain, one can perform computations with several sets of Hamiltonian parameters. The results of such computations, when confronted with the experimental results, could then be used for screening out wrongly chosen Hamiltonian parameter sets.
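Equations (16) and (17) are straightforward to evaluate from the converged densities; a direct transcription, with illustrative placeholder keys for the densities:

```python
def eta_yx(n):
    """C-AO order parameter of Eq. (16), sublattice-adapted bases."""
    t1 = (n["1,3y2-r2"] - n["1,x2-z2"]) / (n["1,3y2-r2"] + n["1,x2-z2"])
    t2 = (n["2,3x2-r2"] - n["2,y2-z2"]) / (n["2,3x2-r2"] + n["2,y2-z2"])
    return 0.5 * (t1 + t2)

def eta_z(n):
    """Tetragonal order parameter of Eq. (17), averaged over m = 1, 2."""
    t1 = (n["1,3z2-r2"] - n["1,x2-y2"]) / (n["1,3z2-r2"] + n["1,x2-y2"])
    t2 = (n["2,3z2-r2"] - n["2,x2-y2"]) / (n["2,3z2-r2"] + n["2,x2-y2"])
    return 0.5 * (t1 + t2)
```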
Periodic Model Predictive Control for Tracking Halo Orbits in the Elliptic Restricted Three-Body Problem

Renato Quartullo, Andrea Garulli, Fellow, IEEE, and Ilya Kolmanovsky, Fellow, IEEE

Abstract-A periodic model predictive control (MPC) scheme is proposed for tracking halo orbits. The problem is formulated and solved in the elliptic restricted three-body problem (ER3BP) setting. The reference trajectory to be tracked is designed by using eccentricity continuation techniques. The MPC design exploits the periodicity of the tracking model and guarantees exponential stability of the linearized closed-loop system, through a suitable choice of the terminal set and weight matrices. A sum-of-norms cost function is adopted to promote fuel saving. The proposed control scheme is validated on two simulated missions in the Earth-Moon system, which, respectively, involve station keeping on a halo orbit near the L1 Lagrange point and rendezvous to a halo orbit near the L2 Lagrange point. Results illustrate the advantage of designing the reference trajectory and the periodic control directly in the ER3BP setting versus approximate solutions based on the circular restricted three-body problem (CR3BP).

Index Terms-Aerospace control, control design, halo orbits, model predictive control (MPC), Space vehicles.

I. INTRODUCTION

Low-thrust station-keeping and orbital rendezvous in cis-lunar space play a key role for long-term solar system exploration missions as well as lunar landing [1]. In particular, parking orbits in the lunar vicinity are receiving increasing attention from several space agencies [2]. Near rectilinear halo orbits (NRHOs) are limit cycles typically found close to Lagrange points in the three-body problem of orbital mechanics. Thanks to their properties, halo orbits near the L1 and L2 Lagrange points in the Earth-Moon system are deemed promising candidates for parking orbits in cis-lunar space missions. In particular, they benefit from the existence of …

Long-term station-keeping and trajectory design in the circular restricted three-body problem (CR3BP) have been extensively treated in the literature (see [6] and references therein). The CR3BP describes the motion of a satellite attracted by the gravitation of two massive bodies orbiting their center of mass on circular orbits and maintaining a constant distance between them during the motion.
Choosing a rotating reference frame that keeps the position of the primaries fixed, the dynamics is represented by a set of autonomous ordinary differential equations, i.e., a time-invariant model; such a model has been extensively employed for station keeping [7], [8], [9] low-thrust trajectory design [10], [11], formation flight [12], and other applications. However, the lunar orbit around the Earth is elliptic with a nonnegligible eccentricity (≃0.055). For this reason, the CR3BP represents only an approximation of the three-body problem for the Earth-Moon system. The elliptic restricted three-body problem (ER3BP) takes into account the eccentricity of the orbit of the primaries, and it is, therefore, a more accurate model than the CR3BP. However, the time-varying distance between the primaries renders the equations of motion nonautonomous; thus, the model is periodically time-variant. Furthermore, the generation of a reference halo orbit to be tracked becomes more involved. A number of methods have been developed for periodic orbit design in the ER3BP, see, e.g., [13], [14], [15], [16], [17], [18]. Control problems in the ER3BP setting have also been addressed [19], [20], although typically the reference trajectory to be tracked is designed in the CR3BP setting [21], [22]. In the last decade, model predictive control (MPC) has emerged as a promising technology for enhancing autonomy of the flight control systems in space applications [23]. The ability of MPC to handle state and input constraints and to optimize suitable performance indexes has made this technique attractive, especially for low-thrust operations and proximity maneuvers, see, e.g., [24], [25], [26], [27], [28]. Most popular MPC schemes are based on the minimization of a cost function which is quadratic in both state and input vectors. However, it has been observed that the use of alternative performance indexes may be convenient to achieve specific control requirements. In particular, sum-of-norms cost functions have been recognized to provide desirable properties in terms of control sparsity and fuel saving [27], [29], [30]. MPC can also be adapted to deal with inherently periodic systems or to track periodic references (see, e.g., [31], [32], [33] and references therein). In [34], an MPC strategy has been derived for periodic systems involving a sum-of-norms objective function. In this article, periodic MPC solutions based on the sum-ofnorms objective function are derived for tracking halo orbits. In recent years, several MPC schemes have been proposed for problems involving halo orbits. The use of linear MPC for station keeping while tracking a halo orbit was considered in [35]. Nonlinear MPC is adopted in [36] for halo orbits in the Sun-Earth CR3BP. In [37], a quadratic MPC approach is proposed to stabilize a multirevolution halo orbit in the elliptic Sun-Mercury model. The problem is formulated directly in the ER3BP setting, but the periodicity of the model is not directly exploited and stability of the control scheme is not discussed. A chance constrained MPC for spacecraft rendezvous in NRHO has been proposed in [38], to ensure robustness with respect to probabilistic disturbances. In [39], a nonlinear continuous-time control law is coupled with a sampled-data MPC to perform station keeping of quasi-halo orbits near L2. The reference model is that of the CR3BP and the effect of eccentricity is treated as a disturbance. 
In this article, a constrained stabilizing control law for halo orbit tracking in the ER3BP is presented. By exploiting the periodicity of the ER3BP model, a periodic MPC controller based on the application of the methodology proposed in [34] is developed to control a spacecraft involved in cis-lunar space missions. The novelty of the contribution with respect to the literature is twofold. Firstly, the control design problem is formulated as a periodic MPC with a sum-of-norms objective function, instead of the usually employed quadratic performance index. The control input is computed via convex optimization. The second key feature of the proposed approach is that the reference halo orbit is generated directly in the ER3BP setting, via eccentricity continuation techniques [15], [17]. The resulting control scheme is validated on two simulated space missions, by employing a high-fidelity model based on the nonlinear ER3BP spacecraft dynamics, affected by several disturbance sources, such as localization errors, thrust imperfections, and fourth-body perturbation. Simulation results demonstrate the potential for successful application of the periodic MPC control scheme to station keeping and rendezvous on NRHOs. In particular, it is shown that formulating and solving the orbit-tracking problem directly in the ER3BP setting leads to a remarkable reduction in control effort, and thus fuel consumption, with respect to tracking a halo orbit designed under the CR3BP assumption. Moreover, a comparison with a classical quadratic MPC controller is presented, showing the advantages of adopting a sum-of-norms cost function in terms of tracking performance and fuel consumption.

The article is organized as follows. In Section II, a description of the ER3BP equations is provided and a model suitable for the considered orbit-tracking problem is described. The control problem and the proposed MPC are presented in Section III. Section IV details the generation of the periodic orbit in the ER3BP used as the reference trajectory. The validation of the proposed method through numerical simulations is reported in Section V. Section VI contains some concluding remarks.

II. DYNAMIC MODEL

The general restricted three-body problem describes the motion of a body with negligible mass m3 under the gravitational attraction of two massive bodies m1 and m2 (with m1 > m2 ≫ m3), namely the primaries, whose mass ratio is defined as ρ = m2/(m1 + m2). In the ER3BP, the primaries move in elliptic orbits, with eccentricity e and semi-major axis a, around their center of mass, according to Kepler's law. In this article, the particle represents a controlled satellite and the primaries are the Earth and the Moon. The motion of the satellite is described in a rotating frame centered at the Earth-Moon center of gravity, where the position of the primaries is fixed on the x-axis (also known as the syzygy axis), the z-axis is normal to the Earth-Moon orbital plane, i.e., in the direction of their angular momentum, and the y-axis completes a right-handed triad. In this frame, the distances are instantaneously normalized, i.e., divided, by the separating distance of the primaries, which changes with the true anomaly θ of the smaller primary as

d(θ) = p / (1 + e cos θ),   (1)

where p = a(1 − e²) is the semiparameter of the primaries' orbit. In this way, the position of the bodies is expressed in nondimensional units, the distance between the Earth and the Moon is equal to 1, and their coordinates are (−ρ, 0, 0) and (1 − ρ, 0, 0), respectively. Since the normalization factor is not constant, the frame is also called pulsating [40].
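Equation (1) already fixes the whole pulsating-frame bookkeeping. A short numerical check with standard Earth-Moon values (a = 384400 km and e = 0.0549 are assumed here rather than taken from the paper's Table I):

```python
import numpy as np

def primaries_distance(theta, a=384400.0, e=0.0549):
    """Instantaneous Earth-Moon separation d(theta), Eq. (1), in km."""
    p = a * (1.0 - e**2)                 # semiparameter
    return p / (1.0 + e * np.cos(theta))

print(primaries_distance(0.0))      # ~363,300 km at perigee (theta = 0)
print(primaries_distance(np.pi))    # ~405,500 km at apogee (theta = pi)
# Dividing all positions by d(theta) pins the primaries at (-rho, 0, 0)
# and (1 - rho, 0, 0) in the pulsating frame.
```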
Let r = [x, y, z]^T be the nondimensional satellite position in the described rotating and pulsating frame. Its dynamics are then described by the following system of ordinary differential equations, where the independent variable is the true anomaly θ of the primaries:

x″ − 2y′ = ∂ω/∂x + ū_x,
y″ + 2x′ = ∂ω/∂y + ū_y,
z″ = ∂ω/∂z + ū_z.   (2)

In (2), (·)′ indicates differentiation with respect to θ, ω is the pseudo-potential, and ū_x, ū_y, and ū_z are the nondimensional forcing accelerations. For completeness, and in order to elucidate the definition and meaning of the scaled inputs, we include the derivation of (2) in Appendix A. Fig. 1 depicts the rotating and pulsating frame with respect to an Earth-centered inertial (ECI) frame.

Let ξ = [r^T, (r′)^T]^T be the state vector collecting the nondimensional position and velocity components of the satellite, and let ū = [ū_x, ū_y, ū_z]^T be the control input vector. In this notation, system (2) can be written as

ξ′ = f(ξ, θ) + B ū.   (4)

As shown in Appendix A, the nondimensional acceleration ū is related to the actual (dimensional) acceleration u_d exerted by the propulsion system by the equation

ū = (p³/h²)(1 + e cos θ)^{−3} u_d,   (5)

where h = (G(m1 + m2) p)^{1/2} is the magnitude of the angular momentum of the primaries and G is the universal gravitational constant. Note that the control law needs to ultimately govern the physical acceleration of the satellite; hence, we define

u = (p³/h²) u_d   (6)

and B(θ) = (1/(1 + e cos θ)³) B, so that (4) becomes

ξ′ = f(ξ, θ) + B(θ) u.   (7)

Since the right-hand side of system (7) is periodic in θ with period T = 2π, (7) is a nonlinear periodic system.

III. PERIODIC MPC

In this article, the control objective is to track a reference trajectory, representing a close periodic orbit within the family of halo orbits [4]. In the following sections, a solution relying on the receding-horizon periodic MPC of [34] is developed. Before presenting the results, the following definition is given.

Definition 1: A matrix sequence M_k is N-periodic if M_{k+N} = M_k for all k.

Let ξ_r be an uncontrolled reference trajectory, obtained as an unforced solution of (7), so that

ξ′_r = f(ξ_r, θ).   (8)

By linearizing system (7) around ξ_r, one obtains the linear time-varying system

x′ = A(θ) x + B(θ) u,   (9)

where x = ξ − ξ_r represents the deviation from the reference trajectory and A(θ) is the Jacobian of f evaluated along ξ_r. In order to design a digital control scheme, system (9) is ZOH-discretized with sampling interval θ_s. Thus, the resulting periodic discrete-time linear system has the form

x(k + 1) = A_k x(k) + B_k u(k),   (10)

where k ∈ N and the matrix sequences A_k ∈ R^{n×n} and B_k ∈ R^{n×m} are N-periodic with period N = T/θ_s.

The use of a linearized model for control design has a twofold motivation. On one hand, typical halo orbit missions involve transfer to a neighborhood of the desired reference orbit, at which point the controller is engaged to stay on the halo orbit itself. In such a scenario, a linearized model provides a sufficiently accurate representation of the dynamics for such a controller. Moreover, the use of (10) as a prediction model allows one to formulate a computationally feasible MPC problem, which can be solved by standard convex optimization tools.

In this work, in the controller design phase, we assume that a reliable state estimate is available at each time instant k from a localization system. This assumption is in line with the current state of the art, see, e.g., [41], [42], while measurement noise will be considered during the closed-loop simulations. Moreover, in order to meet recent mission technology requirements, the satellite is assumed to be equipped with a single electric propulsion system. In particular, maneuvering is achieved by firing a single thruster and steering the thrust vector via attitude control.
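The scaling in (6) can be checked numerically: with standard Earth-Moon constants (assumed values, consistent with the orders of magnitude in Table I of the paper), a 1 N thruster on the 10,000 kg satellite considered in Section V gives the quoted nondimensional bound on the input used below.

```python
GM = 3.986004e14 + 4.9028e12   # G*(m1+m2) for Earth + Moon, m^3/s^2 (assumed)
a, e = 3.844e8, 0.0549         # semi-major axis (m) and eccentricity (assumed)
p = a * (1.0 - e**2)           # semiparameter, m
h2 = GM * p                    # h^2 = G*(m1+m2)*p

u_d_max = 1.0e-4               # m/s^2: 1 N on a 10,000 kg spacecraft
u_max = (p**3 / h2) * u_d_max  # Eq. (6), i.e. u = (p^2 / GM) * u_d
print(u_max)                   # ~0.0364, the value quoted in Section V
```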
Therefore, constraints on the maximum deliverable thrust can be modeled as ||u(k)|| ≤ 1 (a more general input constraint ∥u(k)∥ ≤ u max can be recast as ∥u(k)∥ ≤ 1 by scaling (10) by u max ). In this article, the attitude control problem is not addressed, thus the orientation of the thruster is assumed to be accurately realized during the design, while thrust errors will be considered during closed-loop simulations. This is a reasonable assumption in practice, because the attitude control authority has typically a much higher bandwidth than the translational one [43]. Let us consider the following optimization problem: where H is a given time horizon length,x k ( j) denotes the predicted state j steps ahead of k, and the decision variables are the elements of the control sequencê The objective function J (Û k ) is chosen as in which Q is a full-rank matrix, while W k+H and S k+H are full-rank matrices belonging to N -periodic sequences W k and S k = S T k , respectively. Matrix Q can be adjusted to trade-off tracking performance and fuel consumption. The proposed MPC design is based on the solution, at each time instant k ∈ N, of problem (11). Then, as is common in the receding horizon control, the first element of the optimal solution is applied to the system, i.e., Note that problem (11) is a second-order cone program (SOCP), thus its solution is computationally affordable with convex optimization tools [44]. It is worth stressing that the proposed MPC scheme is characterized by the cost (13), which is different from standard quadratic performance indexes, being instead a sum-of-norms of states and inputs. It has been observed that this choice is useful to promote control sparsity and fuel saving (see, e.g., [27], [29], [30]). Sum-of-norms MPC schemes have been studied for both time-invariant and periodic time-varying systems [30], [34]. Hereafter, their main theoretical properties are briefly recalled. In order to ensure closed-loop stability of system (10) with the control law (11)- (15), it is crucial to suitably design the terminal set and the terminal cost, defined respectively by the N -periodic matrix sequences S k and W k . Due to the structure of matrices A k and B k , system (10) is stabilizable via N -periodic linear feedback. Hence, consider an auxiliary asymptotically stabilizing control law where the feedback gain K k ∈ R m×n is N -periodic and can be computed, for instance, by solving a periodic Riccati equation [45]. The resulting closed-loop system is given by which is clearly N -periodic. The following results establish the desired theoretical properties of the proposed MPC scheme. Proposition 1: Let S k = S T k ∈ R n×n be a N -periodic matrix sequence such that it is the solution of the following set of periodic linear matrix inequalities (LMIs): for k = 0, 1, . . . , N . Then, if problem (11) is feasible at time k 0 , then it is also feasible for all k > k 0 . Proof: See [34]. where λ m (D i ) denotes the minimum eigenvalue of the matrix D i . Then the proposed MPC scheme (11)-(15) with S k computed as in Proposition 1 and W k chosen as in (20), renders the origin of system (10) exponentially stable. Proof: See [34]. Proposition 1 provides a periodic matrix sequence S k ensuring that problem (11) is recursively feasible. Note that in (18), the constraint S 0 = S N is implicit from Definition 1. However, Proposition 2 selects the periodic matrix sequence W k in such a way that the cost J k decreases over time, thus guaranteeing a closed-loop stability. 
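A minimal sketch of one receding-horizon step of problem (11) with the sum-of-norms cost (13), written with cvxpy. The terminal set from the LMIs (18) is simplified here to a pure norm penalty, and all matrices are placeholders supplied by the caller, so this illustrates the structure rather than the full design:

```python
import cvxpy as cp

def son_mpc_step(A_seq, B_seq, Q, W_T, x0, H):
    """Sum-of-norms periodic MPC step; returns the first optimal input."""
    n, m = B_seq[0].shape
    x = [cp.Variable(n) for _ in range(H + 1)]
    u = [cp.Variable(m) for _ in range(H)]
    cost = cp.norm(W_T @ x[H], 2)            # terminal cost ||W_{k+H} x(H)||
    cons = [x[0] == x0]
    for j in range(H):
        cost += cp.norm(Q @ x[j], 2) + cp.norm(u[j], 2)
        cons += [x[j + 1] == A_seq[j] @ x[j] + B_seq[j] @ u[j],
                 cp.norm(u[j], 2) <= 1.0]    # scaled thrust bound ||u(k)|| <= 1
    cp.Problem(cp.Minimize(cost), cons).solve()
    return u[0].value                        # receding horizon: apply u*(0)
```

Since every term is a second-order cone constraint or a norm, the resulting problem is the SOCP mentioned in the text.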
Matrix D k in (19) is a further degree of freedom, which can be used as a tuning parameter of the control design procedure. IV. REFERENCE TRAJECTORY GENERATION In order to generate the reference trajectory ξ r , one has to find a solution of the ER3BP (8). Due to the time-variance of the problem, generating a periodic orbit in the ER3BP involves more complications than in the CR3BP. The distance between the primaries changes accordingly to their true anomaly, thus Lagrange points have no fixed positions. This yields an asymmetry of the problem. For this reason, halo orbit generation in the ER3BP is more involved than in CR3BP. Furthermore, in CR3BP an orbit can achieve a period which is uncorrelated to the primaries periodicity. In ER3BP this is, in general, not possible, since the right-hand side of (2) is periodic with period 2π. However, designing the reference trajectory in the ER3BP leads to significant improvements in the control performance, with respect to trajectories designed in the CR3BP, as shown in Section V. In this work, reference periodic halo orbits in the ER3BP are generated starting from halo orbits in the CR3BP, through differential corrections and eccentricity continuation techniques, similar to [15] and [17]. Let M S be the number of the satellite revolutions around a Lagrange point and M P the number of primaries' revolutions around their barycenter. The objective is to generate an orbit with a resonance ratio M S :M P . By exploiting the mirror theorem [46], a periodic orbit with period T C = 2(M P /M S )π is generated in the CR3BP through a single-shooting algorithm, see, e.g., [47]. In particular, the latter provides the initial condition ξ r c (0) = [x 0 , 0, z 0 , 0, y ′ 0 , 0] T such that, integrating system (2) with e = 0, a perpendicular and symmetric crossing of the normal plane occurs after half of the period. At this point it is sufficient to propagate the half orbit forward for the remaining half period to have the full periodic solution ξ r c . To generate a periodic solution in the ER3BP, the rationale is similar, i.e., the objective is to find a vector of initial conditions such that, integrating the unforced version of system (2), the resulting trajectory is a closed periodic orbit. A sufficient condition for this is stated in [15]: for an orbit to be periodic in the ER3BP, it is sufficient that it has two perpendicular crossings with either the normal plane or the syzygy-axis, or both of them, when the primaries are at the apse-line. Let χ 0 = [x 0 , z 0 , y ′ 0 ] T be the vector of the free variables at θ = θ 0 , i.e., when the Moon is either at the periapsis (θ 0 = 0) or at apoapsis (θ 0 = π). The remaining three variables are equal to zero, in order to ensure the first perpendicular crossing of the normal plane. According to the periodicity criterion, the second crossing has been imposed at θ = θ 0 + π. This means that, after the integration of (2) for θ ∈ [θ 0 , θ 0 + π], the final state must satisfy the condition T e (χ 0 ) = [y π (χ 0 ), x ′ π (χ 0 ), z ′ π (χ 0 )] T = 0. At this point it is sufficient to propagate the half trajectory forward to θ = θ 0 +2M P π to obtain the full periodic solution. In this way all obtained periodic trajectories are M S :M P resonant orbits in the ER3BP with period T = 2M P π = M S T C , i.e., an integer multiple of the primaries' periodicity. The condition T e (χ 0 ) = 0 can be enforced by using numerical root finding techniques. 
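The periodicity condition T_e(χ0) = 0 is just a half-revolution shooting residual. A sketch follows, with the ER3BP right-hand side rhs left as a stub; the state ordering [x, y, z, x′, y′, z′] matches the initial condition given above.

```python
import numpy as np
from scipy.integrate import solve_ivp

def periodicity_residual(chi, e, rhs, theta0=0.0):
    """T_e(chi) = [y, x', z'] at theta0 + pi, for chi = [x0, z0, y0']."""
    x0, z0, yp0 = chi
    xi0 = np.array([x0, 0.0, z0, 0.0, yp0, 0.0])   # perpendicular crossing
    sol = solve_ivp(rhs, (theta0, theta0 + np.pi), xi0, args=(e,),
                    rtol=1e-12, atol=1e-12)
    x, y, z, xp, yp, zp = sol.y[:, -1]
    return np.array([y, xp, zp])                   # must vanish for periodicity
```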
However, the reliability and quality of the solution are often dependent on the initialization of the solver. In other words, using directly ξ r c as an initial guess in the ER3BP with the actual Earth-Moon eccentricity usually does not produce acceptable results. For this reason, this problem is tackled iteratively with the eccentricity used as the continuation parameter. The eccentricity is gradually increased with a fixed small step δe until the actual primaries' eccentricity e is achieved. At each iteration j, a periodic orbit is generated in the ER3BP through the described procedure with the current eccentricity e j ∈ {0, δe, 2δe . . . , e} using the orbit computed at iteration j −1 as an initial guess for the solver. The procedure is outlined in Algorithm 1. V. NUMERICAL RESULTS In this section, the performance of the proposed MPC scheme is evaluated through numerical simulations. A. Simulation Model and Control Design The proposed Sum-of-Norms MPC scheme, hereafter referred as SoN-MPC, has been tested on a high-fidelity nonlinear model, including several sources of uncertainty. The nonlinear ER3BP dynamics (2) and (3) has been corrupted by measurement noise, thruster imperfections, and fourth body perturbation. In particular, position and velocity measurements provided by the localization module are affected by additive noise with standard deviations σ r = 10 km and σ v = 0.1 m/s, respectively. These values are in the same order as those considered in [41]. In order to account for thruster imperfections, an input disturbance with standard deviation σ u = 10 −7 m/s 2 , corresponding to 1 mN, is considered. Moreover, the missions are simulated under solar gravity perturbation. Despite its long distance to the Earth-Moon system, the Sun represents the most perturbing fourth body for the three-body problem. Its huge mass yields an additional acceleration term in the ER3BP model, whose derivation is detailed in Appendix B. As far as the reference mission is concerned, the primaries orbit features are summarized in Table I. A servicing satellite with mass m 3 = 10 000 kg is assumed to be equipped with a fixed electric thruster capable of exerting a maximum force of 1 N, resulting in a maximum acceleration of 10 −4 m/s 2 . According to the values in Table I, the maximum nondimensional acceleration, scaled as in (6), is u max = 0.0364. The primaries' orbit is sampled assuming the sampling interval θ s = 0.0491 rad, corresponding to N = 128 samples. The SoN-MPC scheme (11)-(15) is applied with a horizon H = N , corresponding to one orbit of the primaries. This choice of the prediction horizon is motivated by the fact that low-thrust propulsion systems involve low control authority, thus problem (11) may be infeasible for shorter horizons. Moreover, halo orbits are typically characterized by highly unstable regions [4]. Therefore, it is appropriate to include the whole orbit period in the predicted dynamics, so as to prevent too aggressive control actions. Nevertheless, despite the long prediction horizon, sufficient time to solve problem (11) is available, thanks to the large sampling time (in the order of hours). The design of the control input is performed according to Propositions 1 and 2. The gain matrix K k in (16) is chosen as the solution to the standard periodic LQR problem [45], with state weighting matrix Q T Q, with Q used in the cost (13), and input weighting equal to the identity matrix. 
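The continuation loop of Algorithm 1 then wraps this residual in a warm-started root solve; the paper uses MATLAB's fsolve, for which scipy's fsolve is the natural stand-in:

```python
import numpy as np
from scipy.optimize import fsolve

def eccentricity_continuation(chi_cr3bp, residual, e_target=0.0549, de=0.001):
    """Algorithm 1 sketch: raise e in steps de, warm-starting each solve.

    residual(chi, e) should return T_e(chi), e.g. a functools.partial of the
    periodicity_residual sketched above with rhs bound to the ER3BP dynamics.
    """
    chi = np.asarray(chi_cr3bp, dtype=float)
    for e in np.arange(0.0, e_target + de / 2, de):
        chi = fsolve(lambda c: residual(c, e), chi, xtol=1e-12)
    return chi   # initial conditions of the ER3BP periodic orbit
```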
Matrices $S_{k+H}$ and $W_{k+H}$ are chosen as in (18), (19), and (20), respectively, with the matrix $D_k$ equal to the identity matrix for all $k$. The LMI problem (18) is solved by using CVX [48] and the commercial solver Mosek, which is capable of tackling semidefinite conic programming. Its solution required about 3 s on a standard laptop. However, note that the solution of (18) can be computed offline; therefore, its computational burden is not a key issue. The solution of the MPC problem (11) is carried out by using CVX and the commercial solver Gurobi. A single MPC problem instance is composed of $m(H-1) + nH$ optimization variables (corresponding to 1149 variables in our case study), and the computing time for its solution is in the order of 0.5 s. To solve the reference orbit generation problem, i.e., to implement Algorithm 1, we iteratively invoked the built-in MATLAB function fsolve, which numerically finds the solution relying on Newton's method, for each eccentricity step $\delta e = 0.001$. Examples of the generated reference trajectories are reported in Figs. 2 and 7. Note that the resulting revolutions of the third body do not overlap, in contrast to what is observed in the CR3BP. This is in line with the fact that in the ER3BP it is not possible to achieve periodicities that are not integer multiples of $2\pi$. Under the above conditions, two space mission scenarios are simulated.

1) Station-Keeping: The operating satellite is required to actively track a halo orbit around the collinear Lagrange point L1 with a 3:1 resonance ratio.

2) Orbital Rendezvous: The servicing satellite has to approach an unperturbed reference point, assumed to lie on a halo orbit around the L2 Lagrange point with a 2:1 resonance ratio.

B. Station-Keeping

In this first mission, the objective is to show the ability of the SoN-MPC to keep the satellite on the reference trajectory, which is the 3:1 halo orbit around the Lagrange point L1 depicted in Fig. 2. Thus, the initial state is set as $\xi(0) = \xi_r(0)$. The state weighting matrix in cost function (13) is chosen as $Q = I_{6\times 6}$. In Fig. 3, the dimensional components of the position and velocity errors between the satellite and the reference are reported, while Fig. 4 shows the 2-norm of the dimensional input profile. Note that the satellite is able to accurately track the halo orbit, keeping the tracking error within a small range, in spite of all the considered disturbances. The oscillatory behavior of the error is mainly due to the fourth-body perturbation (which rapidly leads to divergence in the absence of control, due to the strong instability of the halo reference trajectory [5]). Fig. 5 shows the trajectory of the satellite in the inertial Earth-centered frame. It can be observed that the tracked halo orbit accomplishes three revolutions around the Earth. It is worth stressing the advantages of addressing the station-keeping problem in the ER3BP setting, with respect to adopting the CR3BP one, as usually done in the literature. To this aim, the SoN-MPC approach has been applied to maintain the reference trajectory generated from the CR3BP model (i.e., the red curve in Fig. 2). The tests have been performed on the same high-fidelity simulation model, with the CR3BP-specific linearized model used for prediction in (11). Interesting results are obtained in regard to the fuel consumption for the entire mission. First, it has been observed that problem (11) turns out to be infeasible after a few time steps, with the maximum thrust set to 1 N.
Indeed, the small control authority is not sufficient to compensate for the difference between the eccentricity assumed for the trajectory generation and the one considered in the simulation. In order to assess the control effort needed to compensate for such a discrepancy, the maximum deliverable thrust has been increased to 2 N for both scenarios. In Fig. 6, the comparison between the two resulting command profiles is shown. Significant fuel saving can be observed by tracking the trajectory generated for the ER3BP. In fact, considering $\sum_k \|u(k)\|$ as a fuel consumption indicator, tracking the trajectory designed in the ER3BP leads to a 64% fuel consumption reduction. This corroborates the benefits of designing the periodic MPC scheme using the reference orbit generated with the ER3BP model in low-thrust missions.

C. Rendezvous

In this scenario, the aim of the servicing satellite is to reach another satellite already orbiting on a 2:1 halo orbit near the L2 Lagrange point (shown in Fig. 7). The initial state $\xi(0)$ is picked from a Gaussian distribution with mean $\xi_r(0) + (1/d(0))[2\cdot 10^6, -1\cdot 10^6, 0.5\cdot 10^6, 0, 0, 0]^T$, which corresponds to an initial separation distance of 2291.3 km. The covariance matrix of $\xi(0)$ is chosen to cover a variation of 150 km on each initial position component and 1 m/s on each initial velocity component. The state weighting matrix is set to $Q = \mathrm{blockdiag}\{5 \cdot I_{3\times 3}, I_{3\times 3}\}$. As a first experiment, the SoN-MPC scheme has been tested on the ER3BP dynamics with no disturbances. The result of a typical run is shown in Fig. 8. It can be observed that the tracking error goes to zero in finite time (approximately 8 days). This is a typical feature of MPC schemes adopting sum-of-norms cost functions, as opposed to classical quadratic ones. A Monte Carlo set of 100 simulations, in which the servicing satellite starts at different random initial conditions, has been performed on the high-fidelity simulator, including all the disturbance sources. Fig. 9 shows the distance and velocity errors during all the simulated maneuvers. It can be seen that the rendezvous between the two satellites is always achieved after about 12 days, corresponding to less than half a lunar orbit. Moreover, the dispersion of the simulated trajectories after the initial transient is negligible, confirming the inherent robustness of the adopted MPC scheme with respect to the considered disturbances (fourth-body perturbation, thrust error, measurement noise). In order to assess the benefits of the sum-of-norms formulation of the MPC problem, the performance of SoN-MPC is compared to that of a periodic quadratic MPC (hereafter referred to as Q-MPC). This amounts to solving problem (11) with a quadratic cost function in which the weighting matrix sequence is the solution of the periodic LQR problem [45]. The distance and velocity errors of the two approaches for one simulated maneuver are reported in Fig. 10, while the time history of the norm of the corresponding input acceleration vector is shown in Fig. 11. It can be observed that SoN-MPC is superior in terms of tracking errors, while the periodic Q-MPC is prone to oscillations, which are mainly due to the fourth-body perturbation. The same behavior has been observed for the 3:1 halo orbit considered in the station-keeping maneuver of Section V-B. To further evaluate the performance of the proposed approach, both SoN-MPC and Q-MPC have been tested with a state weighting matrix $Q = \mathrm{diag}\{q_1 \cdot I_{3\times 3}, q_2 \cdot I_{3\times 3}\}$, for different values of $q_1$ and $q_2$.
For each pair $(q_1, q_2)$, a set of 50 simulations has been performed, with the same settings as those in Fig. 9. The root mean square error (RMSE) of the steady-state distance error (from the 15th day on) is reported in Table II. It can be seen that SoN-MPC yields much more precise tracking for all the considered values of $(q_1, q_2)$.

VI. CONCLUSION

A periodic MPC scheme has been developed for halo orbit stabilization and tracking. Its ability to accurately track a periodic orbit in the ER3BP has been demonstrated through numerical simulations which included several perturbation effects. Furthermore, the same control scheme has been shown to successfully complete a simulated rendezvous mission, thereby highlighting its applicability to different low-thrust cislunar scenarios. Possible developments of the proposed approach concern the inclusion of state constraints dictated by mission requirements and the adoption of an offset-free scheme to reject persistent disturbances such as the fourth-body perturbation. Alternative techniques for generating the reference trajectory may also be considered, an example being the cooperative dual-task space framework [49]. Adaptation to system variations over time or to actuator faults are other subjects for continuing research.

APPENDIX

A. ER3BP Model With Forcing Input

Let $R = [X, Y, Z]^T$ be the dimensional position of the third body in the rotating frame and $\dot R = [\dot X, \dot Y, \dot Z]^T$ its time derivative. Recall that the dimensional position is related to the nondimensional one as $R = d(\theta) r$. The coordinate system rotates at rate $\dot\theta$ about the z-axis, so that the angular velocity vector is $\omega = [0, 0, \dot\theta]^T$ and the velocity vector can be written as $v = \dot R + \omega \times R$. The position of the spacecraft with respect to the primaries can be expressed by $R_1 = [X + \rho d, Y, Z]^T$ and $R_2 = [X + (\rho - 1) d, Y, Z]^T$. Denoting the kinetic energy by $K$, the potential by $U$, and the Lagrangian by $L$, one obtains $K = \tfrac{1}{2} m_3 \|v\|^2$ and $U = -G m_3 \left( m_1/\|R_1\| + m_2/\|R_2\| \right)$, with $L = K - U$. The forced Euler-Lagrange equations, with the components of the position vector $R$ as the generalized coordinates, read

$$\frac{d}{dt} \frac{\partial L}{\partial \dot X_i} - \frac{\partial L}{\partial X_i} = \bar u_i, \qquad X_i \in \{X, Y, Z\}. \tag{26}$$

For the sake of brevity, let us derive only (26) for the component $X$, which yields (27). In order to simplify (27), a transformation to the rotating and pulsating frame defined in Section II is required. Such a transformation exploits the scaling in (1), a normalization of the time by the characteristic time $t^\star = \left( d(\theta)^3 / G(m_1 + m_2) \right)^{1/2}$, and then a transformation of time derivatives into derivatives with respect to the true anomaly, taking into account that $\dot\theta = h/d^2(\theta)$, with $h$ the angular momentum of the primaries' orbit. According to these considerations, the relationships between the dimensional, time-dependent and the nondimensional, true-anomaly-dependent velocities and accelerations are

$$\dot X = \frac{h}{p} \left[ (1 + e\cos\theta)\, x' + e\sin\theta\, x \right] \tag{28}$$

$$\ddot X = \frac{h^2}{p^3} (1 + e\cos\theta)^2 \left[ (1 + e\cos\theta)\, x'' + e\cos\theta\, x \right]. \tag{29}$$

Then, substituting (28) and (29) into (27), one obtains

$$x'' - 2y' - \frac{x}{1 + e\cos\theta} = -\frac{1}{1 + e\cos\theta} \left[ \frac{(1 - \rho)(\rho + x)}{r_1^3} + \frac{\rho(x + \rho - 1)}{r_2^3} \right] + \bar u_x \tag{30}$$

where $r_1 = ((x + \rho)^2 + y^2 + z^2)^{1/2}$ and $r_2 = ((x + \rho - 1)^2 + y^2 + z^2)^{1/2}$. Defining the pseudo-potential as in (3) and scaling $\bar u_x$ as in (5), (30) can be rewritten as the first equation of (2). The other equations are derived in the same way.

B. Fourth Body Perturbation

In [50], a characterization of the solar gravity acceleration in the CR3BP has been given, in which $\omega_4$ is the magnitude of the nondimensional angular velocity of the Sun as viewed in the Earth-Moon rotating frame and $\theta_{4,0}$ is the initial Sun angular position.
The angular velocity is computed as the difference between the nondimensional mean motion of the Sun in the inertial frame centered at the Earth-Moon barycenter, i.e., $n_4 = \left( (1 + \rho_4)/\|r_4\|^3 \right)^{1/2}$, and the nondimensional mean motion of the Earth-Moon system with respect to the same observer, which is 1. The simulation model adopted for the numerical results is the same as (2), with the modified pseudo-potential $\Omega^* = \Omega + \Omega_4$.
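As a quick symbolic cross-check of the kinematic relations (28) and (29), a sketch assuming the conic relation $d(\theta) = p/(1 + e\cos\theta)$ and the angular-momentum relation $\dot\theta = h/d^2$ used above:

```python
import sympy as sp

theta = sp.symbols('theta')
e, h, p = sp.symbols('e h p', positive=True)
x = sp.Function('x')(theta)

d = p / (1 + e*sp.cos(theta))        # distance between the primaries
thetadot = h / d**2                  # conservation of angular momentum

X = d * x                            # dimensional coordinate, X = d(theta) * x
Xdot = sp.diff(X, theta) * thetadot  # chain rule: d/dt = thetadot * d/dtheta
Xddot = sp.diff(Xdot, theta) * thetadot

rhs28 = (h/p) * ((1 + e*sp.cos(theta))*x.diff(theta) + e*sp.sin(theta)*x)
rhs29 = (h**2/p**3) * (1 + e*sp.cos(theta))**2 * (
        (1 + e*sp.cos(theta))*x.diff(theta, 2) + e*sp.cos(theta)*x)

print(sp.simplify(Xdot - rhs28))     # -> 0, i.e., (28) holds
print(sp.simplify(Xddot - rhs29))    # -> 0, i.e., (29) holds
```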
Antimycobacterial Activities of Endolysins Derived From a Mycobacteriophage, BTCU-1 The high incidence of Mycobacterium infection, notably multidrug-resistant M. tuberculosis infection, has become a significant public health concern worldwide. In this study, we isolate and analyze a mycobacteriophage, BTCU-1, and a foundational study was performed to evaluate the antimycobacterial activity of BTCU-1 and its cloned lytic endolysins. Using Mycobacterium smegmatis as host, a mycobacteriophage, BTCU-1, was isolated from soil in eastern Taiwan. The electron microscopy images revealed that BTCU-1 displayed morphology resembling the Siphoviridae family. In the genome of BTCU-1, two putative lytic genes, BTCU-1_ORF7 and BTCU-1_ORF8 (termed lysA and lysB, respectively), were identified, and further subcloned and expressed in Escherichia coli. When applied exogenously, both LysA and LysB were active against M. smegmatis tested. Scanning electron microscopy revealed that LysA and LysB caused a remarkable modification of the cell shape of M. smegmatis. Intracellular bactericidal activity assay showed that treatment of M. smegmatis—infected RAW 264.7 macrophages with LysA or LysB resulted in a significant reduction in the number of viable intracellular bacilli. These results indicate that the endolysins derived from BTCU-1 have antimycobacterial activity, and suggest that they are good candidates for therapeutic/disinfectant agents to control mycobacterial infections. Introduction Mycobacterium tuberculosis (MTB) is the leading cause of tuberculosis which is a serious public health problem that results in millions of deaths around the world each year [1]. Approximately one-third of the world's population is latently infected with MTB and at risk of reactivation [2]. The slow-growing MTB can persist in the latent state of asymptomatic infection for a long time, and the treatment of active diseases involves long multidrug regimens. Unfortunately, the world is at present struggling with multi-drug resistant (MDR) as well as extensively drug resistant (XDR) forms of MTB which threaten to make both the first and second line drugs ineffective [3,4]. The prevention of the spread of MTB is a continuing challenge, and finding novel agents to treat MDR and XDR MTB infections has become a priority. Bacteriophages (phages) are viruses that infect and replicate within their bacterial hosts. The lytic phages have developed various ways to interfere with and kill their host cells. Therefore, phage therapy is being considered as a possible therapeutic alternative for the treatment of infections caused by MDR strains [5]. Particular attention has been paid to recombinant phage endolysins because of their potential to digest bacterial cell walls when applied exogenously, enabling their use as alternative antibacterials [6,7]. Presently, scientists have successfully tested the use of endolysins to control antibiotic-resistant bacterial pathogens in animal models [8]. Like other bacteriophages, mycobacteriophages were considered as an alternative therapy for Mycobacterium infection control [9]. Additionally, mycobacteriophage-encoded lytic endolysins have considerable potential to be effective antimicrobial agents-or enzybiotics-against a number of MDR and XDR MTB strains [10,11]. To date, several phages infecting Mycobacterium spp. have been reported [12][13][14]. Nonetheless, the isolation and genome description of mycobacteriophages in Taiwan still remain to be achieved. 
In this study, we used Mycobacterium smegmatis as a rapidly growing host, and a mycobacteriophage was isolated from the soil near Buddhist Tzu Chi University and named BTCU-1. Electron microscopy images revealed morphology resembling the Siphoviridae family. The BTCU-1 genome is a linear double-stranded DNA of about 46 kb in length. To utilize mycobacteriophage lytic endolysins as therapeutic alternatives to antibiotics, we surveyed the genomic sequence of BTCU-1 and successfully identified two lysis-associated genes. Following cloning and expression/purification, various antimycobacterial activities of these two lytic proteins were determined in vitro.

BTCU-1 Genome and Predicted Endolysins

The BTCU-1 genome is a linear double-stranded DNA of about 46 kb in length. The genomic sequence of BTCU-1 was deposited in GenBank (accession number KC172839). The complete genomic sequence of BTCU-1 showed more than 90% nucleotide identity to mycobacteriophages Rockstar and HelDan (Supplementary Figure S1). According to the functions predicted so far, in general, the structural and lysis proteins are encoded by one strand, and the proteins required for the maintenance of lysogeny are encoded by the opposite strand. One of the major differences between BTCU-1 and the other two genomes (Rockstar and HelDan) is located at ORF4 and ORF5, with the limited knowledge that the product of ORF4 is a structural protein. In addition, extra genomic fragments of about 2 kb and 3.3 kb were found after the repressor genes in the genomes of Rockstar and HelDan, respectively. Though the genome of BTCU-1 is suggested to contain genes related to lysogeny, the deletion of the genomic fragment in BTCU-1 may inactivate the repressor gene, causing a defect in establishing a lysogenic state. This presumption is supported by the clear plaques produced by BTCU-1 on M. smegmatis.

[Figure 2. (A) Multiple sequence alignment of BTCU-1 LysA and its homologues, showing an N-terminal peptidase domain, a central catalytic GH19 (glycoside hydrolase family 19) domain, and a C-terminal domain possibly responsible for cell wall binding; the predicted catalytic residues in the GH19 domains are marked by hash signs under the alignment. (B) Multiple sequence alignment of BTCU-1 LysB and its homologues, including D29 LysB (gp12, NP_046827), whose structure has been resolved (PDB code 3HC7); the predicted residues involved in catalysis (Ser82-Asp166-His240 in 3HC7) are marked by hash signs under the alignment. 3HC7 SSE: resolved secondary structural elements for D29 LysB; ribbons indicate helices and arrows indicate strands.]

To utilize lytic endolysins as therapeutic alternatives to antibiotics, we surveyed the genomic sequence of BTCU-1 and successfully identified two lysis-associated genes, BTCU-1_ORF7 and BTCU-1_ORF8 (termed lysA and lysB, respectively). LysA is a putative endolysin responsible for the cleavage of the peptidoglycan in the cell wall of mycobacteria. Sequence analysis suggests that LysA contains three domains: an N-terminal peptidase domain, a central catalytic GH19 (glycoside hydrolase family 19) domain, and a C-terminal cell wall binding domain (Figure 2A). LysB shares about 63% identity with the previously characterized LysB (gp12) of mycobacteriophage D29 and similarly lacks a possible peptidoglycan-binding domain at the N-terminus, which is present in many D29 LysB homologues (Figure 2B) [15]. D29 LysB has been shown to be a mycolylarabinogalactan esterase required for the completion of lysis of the host mycobacterial cells [15]. It cleaves the ester linkage joining the mycolic acid-rich outer membrane to arabinogalactan, releasing free mycolic acids.
Cloning, Expression and Purification of LysA and LysB To confirm that LysA and LysB are mycobacteriophage lytic associated enzymes, their genes (BTCU-1_ORF7 and BTCU-1_ORF8) from the BTCU-1 genome were cloned for further characterization. After the PCR reaction, two respective amplified sequences from the BTCU-1 genome were inserted into the E. coli expression vector pET-30b, yielding pET30b-LysA and pET30b-LysB, respectively. In E. coli BL21 (DE3), pET30b-LysA directed the synthesis of a single individual protein with an apparent molecular mass of about 63 kDa on a sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) ( Figure 3A). Similarly, pET30b-LysB directed the synthesis of a single individual protein with an apparent molecular mass of about 34 kDa on an SDS gel ( Figure 3A). Antimycobacterial Activity of BTCU-1 and Purified LysA and LysB We first tested the lytic activity of BTCU-1 against two M. smegmatis strains and seven M. tuberculosis isolates listed in Table 1. BTCU-1 showed broad lytic host range, affecting almost all the tested Mycobacterium strains except TCGH59490 (Supplementary Figure S2). To study the antibacterial activity of LysA and LysB, M. smegmatis ATCC 14468 was examined for its susceptibility to these proteins. As shown in Figure 3B, the MBC of LysA was over 80 μg·mL −1 , and the MBC of LysB was between 20 and 40 μg·mL −1 . To further analyze the antibacterial spectra of LysA and LysB, another three Gram-negative (E. coli ATCC 25922; Acinetobacter baumannii ATCC 17978 and Salmonella enteric BCRC 10746) and two Gram-positive bacteria (Bacillus subtilis BCRC 10447 and Staphylococcus aureus ATCC 25923) were used for the MBC assay. The MBC of LysA and LysB towards these five bacteria were higher than 200 μg·mL −1 . We noticed that LysA and LysB had narrow spectra of antimicrobial activity against only Mycobacteria spp., with relatively lower bactericidal activity against other non-Mycobacteria species. Electron Microscopy Experiments The effects of LysA and LysB on the morphology of M. smegmatis were visualized using scanning electron microscopy (SEM). The exposure of bacteria to 100 μg·mL −1 LysA or LysB for 24 hour caused a remarkable modification of their cell shape, as shown by SEM. Untreated bacteria displayed a rough bright surface with no obvious cellular debris ( Figure 4A). The crinkled and irregular spheroidal cell morphology was observed in the cells of M. smegmatis after exposed to LysA as shown in Figure 4B. The treatment with LysB resulted in notable bacterial rupture and breakdown in M. smegmatis ( Figure 4C), and treatment with LysA combined LysB exhibited a wide range of significant abnormalities, including the collapse of the cell structure and notable cracked cells in M. smegmatis ( Figure 4D). LysA and LysB Kill Intracellular M. smegmatis To determine the effect of BTCU-1 derived endolysins on intracellular M. smegmatis, RAW 264.7 macrophage cell lines were infected with M. smegmatis ATCC 14468, and 2 h after infection, treated with LysA or LysB for 12 h. As shown in Figure 5, the viabilities of intracellular M. smegmatis dropped to <40% following incubation with LysA compared to the negative control (incubation with DMEM). However, the viabilities of M. smegmatis dropped to <20% following incubation with LysB for the same period of time. These results suggest that BTCU-1 derived endolysins can effectively decrease the number of intracellular M. smegmatis. 
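As a simple illustration of how the reported viability percentages follow from plate counts, a short sketch; the CFU numbers below are invented placeholders, not the study's data:

```python
import numpy as np

def percent_viability(cfu_treated, cfu_control):
    """Viable intracellular bacilli after treatment, relative to the DMEM control."""
    return 100.0 * np.mean(cfu_treated) / np.mean(cfu_control)

# Hypothetical triplicate CFU counts per well (illustrative only)
control = [4.2e5, 3.9e5, 4.5e5]   # DMEM negative control
lysA    = [1.5e5, 1.6e5, 1.4e5]   # -> ~36%, i.e., <40% viability
lysB    = [7.0e4, 8.1e4, 7.5e4]   # -> ~18%, i.e., <20% viability

for name, counts in [("LysA", lysA), ("LysB", lysB)]:
    print(name, f"{percent_viability(counts, control):.0f}% viable")
```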
Discussion The prevention of the spread of MTB is a continuing challenge, and finding novel agents to treat MDR and XDR MTB infections has become a priority. Mycobacteriophages own the abilities to attack and kill mycobacteria and can potentially serve as an alternative option to combat mycobacteria infection. In this work, a mycobacteriophage was isolated from the soil near Buddhist Tzu Chi University, Taiwan and was named BTCU-1. To our knowledge, this is the first detailed study of a mycobacteriophage isolated from Taiwan. Similar to other mycobacteriophages reported, BTCU-1 showed a broad host range, being able to lyse almost all the tested Mycobacterium strains. This suggests its use as an alternative sanitation or disinfectant agent will be highly feasible for application in controlling MTB infections [16]. In order to have a better understanding about mycobacteriophage distribution in Taiwan, more work on the isolation and characterization of mycobacteriophages is still needed. Mycobacterium spp. possess unusual cell wall core, which consists of a peptidoglycan layer covalently attached to a mycolic acid layer via the polysaccharide arabinogalactan [17]. Upon infection, mycobacteriophages produce lysins to digest the peptidoglycan and mycolic acid layers of the host cell wall, causing the rupture of the bacterial cells and the release of the phage particles. The ability to lyse mycobacterial cells makes lysins extremely important. There have been hundreds of mycobacteriophage genomes sequenced [18], and the sequence features of the major types of lysins have been analyzed [11,15]. Nevertheless, the evaluation of their antimycobacterial effect, particularly applied exogenously, is still very limited. In the example of mycobacteriophage Ms6, its LysA was characterized as a peptidoglycan amidase; however, in culture, it showed no antibacterial effect on either M. tuberculosis or M. smegmatis [19]. In this study, we surveyed the genomic sequence of BTCU-1 and identified two lytic associated genes, BTCU-1_ORF7 and BTCU-1_ORF8 (termed lysA and lysB, respectively). Sequence analysis suggests that LysA is a putative endolysin responsible for the cleavage of the peptidoglycan in the cell wall of mycobacteria, whereas LysB probably acts at the mycolylarabinogalactan bond to liberate free mycolic acid. To our surprise, either LysA or LysB alone, while applied exogenously, could cause a remarkable modification of the cell shape of M. smegmatis and also cause cell death. Scanning electron microscopy (SEM) result showed that LysA-treated M. smegmatis displayed crinkled and irregular spheroidal morphology. A simple explanation for this is that the peptidoglycan in the cell wall of mycobacteria was partially removed by the action of LysA, and the membrane tension caused the cell to form a characteristic spherical shape. However, how the exogenous LysA permeates through the mycolic acid-rich outer membrane to reach the substrates remains to be discovered. Furthermore, in vitro and in vivo antimicrobial assays showed that LysB possessed better bactericidal capacity than LysA while applied exogenously. Based on the results of SEM and sequence analysis, we can only presume that LysB proteins reach their host mycolylarabinogalactan targets from the outside, and decrease the integrity of the mycolic acid linkage to the arabinogalactan-peptidoglycan layer. This may directly affect the cell wall permeability barrier in mycobacteria. 
However, the mechanism of LysB that ultimately results in the bacteriolysis seen in these experiments is a question worthy of investigation. Bacteria, Vector and Growth Conditions The bacterial strains and plasmids used in this study are listed in Table 1. M. smegmatis strains were grown at 37 °C in Middlebrook 7H9 medium (Difco, Detroit, MI, USA) with shaking, or on Middlebrook 7H11 agar (Difco) supplemented with 0.5% glycerol. E. coli strains were grown at 37 °C in Luria-Bertani (LB) broth or agar supplemented with 100 μg·mL −1 ampicillin or 50 μg·mL −1 kanamycin, when appropriate. Isolation and Characterization of the Mycobacteriophage In this study, M. smegmatis ATCC 14468 is a non-virulent mycobacterial strain and was used as a host for phage isolation. The soil sample was collected from sites near Buddhist Tzu Chi University, Hualien, Taiwan. The land is owned by Tzu-Chi Foundation, the founder of Tzu Chi University, and no specific permission was required to access the land or collect the samples. No endangered or protected species were found or are registered in this area. Each time, about 300-500 g of soil was collected from a surface of 100 cm 2 , at a depth of 3-5 cm. We believe the environmental impact is minimal. The mycobacteriophage isolation and purification process was performed according to the procedures described by Pope et al. [12]. Phage morphology was examined by TEM of negatively stained preparations [20]. A drop of approximately 10 10 PFU/mL was applied to the surface of a formvar-coated grid (200 mesh copper grids), negatively stained with 2% uranyl-acetate and then examined in a Hitachi H-7500 transmission electron microscope (Hitachi Company, Tokyo, Japan) operated at 80 kV. Phage DNA Preparation and Genome Sequencing Phage DNA isolation was performed as previously described in detail [21]. The genome of mycobacteriophage was sequenced using the Ion Torrent PGM 314 chip (approximately 500 × coverage rate, Thermo Fisher Scientific, Waltham, MA, USA). The order of assembled contigs was predicted from the genomic sequences of Mycobacteriophage Rockstar (GenBank accession number JF704111), and Mycobacteriophage HelDan (accession no. JF957058) and confirmed by PCR. Primer walking was used to fill the gaps. Gene prediction was performed using GenMark.hmm [22], GenMarkS and Glimmer [23], followed by a manual correction where needed. All genes were annotated by BLAST searches of the GenBank databases. tRNA genes were predicted using the tRNAscan-SE tools [24]. Cloning/Purification of Endolysins Genomic DNA of the mycobacteriophage BTCU-1 was extracted as previously described in detail [21]. Plasmid DNA was purified from E. coli using a QIAprep Spin MiniPrep Kit (Qiagen, Hilden, Germany). Polymerase chain reaction (PCR) amplification, with the DNA of BTCU-1 as a template, was carried out to produce two putative endolysin genes lysA and lysB. To construct the LysA expression vector, a 1566-base pair DNA fragment of lysA was amplified by PCR with BTCU-1_lysA-FP and BTCU-1_lysA-RP primers and cloned into a TA cloning site of pGEM-T-easy (Promega); the resulting recombinant DNA (BTCU-1_lysA) was digested with EcoRI and cloned into the EcoRI sites of pET-30b (Novagen). The resulting plasmid, pET30b-LysA, was then used to transform E. coli strain BL21 (DE3) for expression. The pET30b-LysB expression vector was constructed in a similar approach to pET30b-LysA, using BTCU-1_lysB-FP and BTCU-1_lysB-RP as primers. 
All of the constructed plasmids for expression were confirmed by DNA sequencing. The subsequent expression and purification procedures of the recombinant proteins were carried out according to the processes as described by Lai et al. [21]. The E. coli BL21 (DE3) strains, harboring the plasmids pET30b-LysA or pET30b-LysB were grown overnight in LB medium containing kanamycin (50 μg/mL). The overnight culture of the transformed E. coli was diluted 100 times with the same medium and incubated at 37 °C with shaking (150 rpm) until the optical density of the medium at OD600nm reached 0.5. The expression of the target gene was induced by the addition of isopropyl-L-D-thiogalacto pyranoside at a final concentration of 0.1 mM. After further incubation for 3 h, the cells were harvested by centrifugation. The cell pellet was suspended in 10 mL of lysisequilibration-wash (LEW) buffer containing 50 mM NaH2PO4/300 mM NaCl (pH 8.0), disrupted by sonication and centrifuged at 10,000 g for 15 min to remove debris. Crude supernatant was loaded onto Protino Ni-TED packed columns (MACHEREY-NAGEL, Düren, Germany) equilibrated with LEW buffer. The fractions were eluted with the elution buffer containing 50 mM NaH2PO4/300 mM NaCl/ 250 mM imidazole (pH 8.0). Active fractions were pooled and dialyzed against the elution buffer and concentrated by Amicon Ultra-0.5 centrifugal filter (MILLPORE, Bedford, MA, USA). The concentration of each purified protein was determined by the Bradford assay using bovine serum albumin as a standard. Antimycobacterial Assays The host-range of BTCU-1 was determined by the plaque-forming method, adapting a spot-test technique described by Rybniker et al. [25], with some modifications as detailed below. BTCU-1 was prepared in a phage buffer (10 mM Tris, pH 7.5, 1 mM MgSO4, and 70 mM NaCl) and diluted to 10 6 PFU/mL. The mycobacterial strains used in this test were grown overnight in Middlebrook 7H9 broth (Difco) (for M. smegmatis) or grown on 7H11 agar (for 7 M. tuberculosis clinical strains) at 37 °C. Until the bacteria grew to an appropriate concentration, the mycobacterial strains were diluted to an OD600 of 0.1. Two hundred microliters (µL) of mycobacteria were mixed with 200 µL of prepared BTCU-1 and added to 3 mL top agar containing 1 mM CaCl2 and poured onto 7H11 agar plates. Plates were incubated at 37 °C for four days for the fast-growing Mycobacteria, and for up to 4-6 weeks for the slow-growing strains. In vitro antimicrobial assays were performed for the purified LysA and LysB. Antimicrobial activity was determined as described by Chen et al. [26], with some modifications as detailed below. Bacteria were grown overnight in Middlebrook 7H9 medium or Mueller-Hinton broth (Difco) at 37 °C, and during the mid-logarithmic phase, bacteria were diluted to 10 6 colony-forming units (CFU)/mL in phosphate buffer. LysA and LysB were serially diluted in the same buffer to the concentration range from 10 to 400 μg·mL −1 . Fifty microliter (µL) of bacteria was mixed with fifty µL of LysA and LysB at varying concentrations followed by incubation at 37 °C for 24 h without shaking. At the end of incubation, bacteria were inoculated on Middlebrook 7H11 agar or Mueller-Hinton agar, and allowed growth at 37 °C for four days. The lowest concentration of LysA or LysB on the agar plate which displayed no bacterial growth is defined as the minimal bactericidal concentration (MBC). All experiments were performed in triplicate. 
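A tiny sketch of the MBC readout logic just described, with a hypothetical plating outcome for LysB; the growth flags are illustrative values chosen to be consistent with the reported 20-40 μg·mL−1 range, not measured data:

```python
def minimal_bactericidal_concentration(results):
    """Given {concentration (ug/mL): grew_on_agar (bool)} for a dilution
    series, return the lowest concentration with no growth, or None."""
    no_growth = [conc for conc, grew in results.items() if not grew]
    return min(no_growth) if no_growth else None

# Hypothetical outcome: growth at 10 and 20 ug/mL, none from 40 ug/mL upward
lysB_plates = {10: True, 20: True, 40: False, 80: False, 160: False, 400: False}
print(minimal_bactericidal_concentration(lysB_plates))  # -> 40
```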
Scanning Electron Microscopy The scanning electron microscopy images were prepared as previously described in detail [16]. M. smegmatis ATCC 14468 was grown in Middlebrook 7H9 medium to a log phase, harvested by centrifugation, washed twice with deionized water and resuspended in the same water. Approximately 10 8 cells were incubated at 37 °C for 24 h with 100 μg·mL −1 LysA or LysB. Negative controls were run in the presence of phosphate buffer. The volume was adjusted to 100 µL. The treated cells were fixed with 2.5% (w/v) glutaraldehyde in 0.1 M cacodylate buffer and 1% tannic acid, thoroughly washed with the phosphate buffer and dehydrated with a graded ethanol series. After critical-point drying and gold coating, the samples were observed with a HITACHI S-4700 instrument (Hitachi Company, Tokyo, Japan) operated at 15 kV. Intracellular Bactericidal Assay Infection of macrophages and quantitation of intracellular bacteria were performed as already described with some modifications [27]. Mouse peritoneal macrophage cell line, RAW 264.7, was obtained from the American Tissue Culture Collection. Cells were cultured in Dulbecco's Modified Eagle Medium (DMEM; Difco Laboratories) supplemented with 10% fetal bovine serum and 2 mM L-glutamine. For the assays described in this article, RAW 264.7 macrophages (10 6 ) were treated with trypsin, washed, and seeded on a six-well tissue culture plate (Corning Costar, Cambridge, MA, USA) and allowed to grow overnight at 37 °C with an atmosphere of 5% CO2. M. smegmatis ATCC 14468 were used to infect RAW 264.7 macrophages. Bacteria were grown overnight in Middlebrook 7H9 medium at 37 °C, and during the mid-logarithmic phase, bacteria were diluted to 10 8 CFU/mL in DMEM. Monolayers (∼10 6 cells) were incubated with M. smegmatis at a ratio of 100 bacteria to 1 cell. Infection was allowed to occur for 2 h, and then the monolayers were washed with Hank's Balanced Salt Solution (HBSS) twice and reincubated for two hours with medium containing kanamycin 50 µg·mL −1 to kill extracellular bacteria. After washed with HBSS three times, the M. smegmatis-infected monolayers were then incubated for 12 h with either 5 µg·mL −1 LysA and 5 µg·mL −1 LysB, and the control experiment was performed with DMEM or 5 µg·mL −1 Rifampicin. After the treatment of the phage-related endolysins, cell monolayers were washed three times and the medium was replaced with 1 mL of sterile distilled water to lyse the macrophages. After vigorous pipetting to ensure complete cell lysis, viable intracellular M. smegmatis were determined by quantitative plating of serial dilutions of the lysates on Middlebrook 7H11 agar. Each test was done three times in independent experiments, and the number of CFU recovered per well (mean number ± S.D.) was determined. Conclusions Our results indicate that BTCU-1 derived endolysins have antimycobacterial activity. These results also suggest that a variety of the genes encoding mycobacteriophage-related lytic endolysins can be readily isolated from the mycobacteriophage genomes. Use of these lytic endolysins as alternative sanitation or disinfectant agents will be highly feasible for application in controlling mycobacterium infections.
A genetic association study between growth differentiation factor 5 (GDF 5) polymorphism and knee osteoarthritis in Thai population Objective Osteoarthritis (OA) is a multi-factorial disease and genetic factor is one of the important etiologic risk factors. Various genetic polymorphisms have been elucidated that they might be associated with OA. Recently, several studies have shown an association between Growth Differentiation Factor 5(GDF5) polymorphism and knee OA. However, the role of genetic predisposing factor in each ethnic group cannot be replicated to all, with conflicting data in the literatures. Therefore, the aim of this study was to investigate the association between GDF5 polymorphism and knee OA in Thai population. Materials and Methods One hundred and ninety three patients aged 54-88 years who attended Ramathibodi Hospital were enrolled. Ninety cases with knee OA according to American College of Rheumatology criteria and one hundred and three cases in control group gave informed consent. Blood sample (5 ml) were collected for identification of GDF5 (rs143383) single nucleotide polymorphism by PCR/RFLP according to a standard protocol. This study protocol was approved by the Ethics Committee on human experimentation of Ramathibodi Hospital Faculty of Medicine, Mahidol University. Odds ratios (OR) and 95% confidence intervals were calculated for the risk of knee OA by genotype (TT, TC and CC) and allele (T/C) analyses. Results The baseline characteristics between two groups including job, smoking and activity were not different, except age and BMI. The entire cases and controls were in Hardy-Weinberg equilibrium (p > 0.05). The OA knee group (n = 90) had genotypic figure which has shown by TT 42.2% (n = 38), TC 45.6% (n = 41) and CC 12% (n = 11), whereas the control group (n = 103) revealed TT 32% (n = 33), TC 45.6% (n = 47), and CC 22.3% (n = 23), respectively. Genotypic TT increased risk of knee OA as compared to CC [OR = 2.41 (P = 0.04, 95%CI = 1.02-5.67)]. In the allele analysis, the T allele was found to be significantly associated with knee OA [OR = 1.53 (P = 0.043, 95%CI = 1.01-2.30)]. Conclusion These data suggested that GDF5 polymorphism has an association with knee OA in Thai ethnic. This finding also supports the hypothesis that OA has an important genetic component in its etiology, and GDF5 protein might play important role in the pathophysiology of the disease. Introduction It is widely believed that osteoarthritis develops from an imbalance between anabolic and catabolic processes or homeostasis of cartilage metabolism [1,2]. The etiology of this disease is related to genetic association [3]. Recently, several studies have demonstrated the polymorphism in many genes which might be related to the pathogenesis of osteoarthritis [4][5][6][7][8][9][10][11][12][13][14]. Growth Differentiation Factor 5 or GDF5 gene regulates the expression of the GDF5 protein which is closely related to BMP and is a member of TGF-beta superfamily [15]. It has a role in regulation of the chondrogenesis and the defect in this gene might be correlated to the abnormal joint development [15]. It has been reported in animal study that GDF5 knockout mice develops knee joint anomaly [16]. Moreover, it has been shown that he polymorphism in GDF5 gene is related with low expression of the GDF5 protein in knee joint [17]. The large scale analysis has shown that the association of this polymorphism (rs143383) in promoter area might be a risk factor of the osteoarthritis [18]. 
In addition, studies of this genetic variant in China and Japan have shown the association of T allele and knee OA [19]. However, there is a report from Greece which had an inconsistent result and found no an association [20]. The genetic susceptibility of the disease in different ethnic cannot be applied to others because each ethnic has a different genetic background. There is a gap of information about this polymorphism in Thai population, therefore the objective of this study is to determine the association of the SNP rs143383 in GDF5gene and knee OA in Thai population. Subjects Our study was approved by the Ethics Committee of the Faculty of Medicine, Ramathibodi Hospital, Mahidol University, Bangkok, Thailand. All patients recruited in the study were Thai by nationality and had ancestors settled in Thailand for at least three generations. In total, 90 patients with knee OA who underwent total knee arthroplasty (TKA) and 103 patients without knee OA were enrolled from Department of Orthopedics, Ramathibodi Hospital. Informed consent was performed after the purpose of the research project and it had been clearly explained to the patients. The diagnosis of knee OA was based on the American College of Rheumatology criteria [21]. Both OA and control groups were interviewed to obtain demographic data and all of established risk factors. Thereafter, standard weightbearing antero-posterior and lateral view of knee radiographs were taken to confirm the diagnosis of OA by Kellgren and Lawrence scores (KL scores) [22]. Laboratory technique PCR-RFLP for BsiEI restriction site was used for SNP in GDF-5 identification Firstly, 5 ml peripheral blood sample was collected from patient using ethylenediamine tetraacetic acid as an anticoagulant and processed for SNP analysis. Genomic DNA was extracted from buffy coat leukocytes using the standard phenol-chloroform method. PCR primers to amplify the promoter area of GDF5 gene were designed by the Primer-3 web-based tool [23]: GATTTTTTCT-GAGCACCTGCAGG (forward) and GTGTGTGTT TGTATCCAG (reverse). 50 μl PCR mixture contained 100 ng of genomic DNA, 20 pmol of each primer, 0.2 μ M of each dNTP, 1 unit of Taq DNA polymerase (AmpliTaq ® , Applied Biosystem, Foster City, CA), 3.0 mM MgCl2 in 10 × PCR buffer containing 10 mmol of Tri-HCl pH 9.0, 10 mmol KCl and 0.1% Triton X-100 (Invitrogen, Carlsbad, CA). PCR reaction was started with an initial denaturation at 95°C for 5 min, followed by 35 cycles of amplification in a thermocycler (PCR Sprint, Thermofisher, Waltham, MA) with denaturation at 94°C for 1 min, annealing at 58°C for 1 min, extension at 72°C for 1 min, and final extension at 72°C for 10 min. Then, 10 μL of PCR product was incubated at 37°C with 3 units of BsiEI for 4 hours under the manufacturer's recommended conditions (New England Biolabs, Ipswich, MA). The digested product was electrophoresed on 2% agarose gel with ethidium bromide staining before being visualized on a UV transilluminator ( Figure 1). The expected fragment length was 104 and 230 bp in CC, 104, 230, and 344 bp in TC, and 344 bp in TT genotypes, respectively. Statistical analyses Analysis of demographic data was performed by Excel 2007 (Microsoft ® Excel ® ). The unpaired t-test was used for continuous data and Chi-square was used for categorical data. Allele frequencies, Odds ratio and the probability for Hardy-Weinberg Equilibrium were estimated and analyzed as explained by following website; http:// ihg.gsf.de/cgi-bin/hw/hwa1.pl. 
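As a transparent stand-in for that web tool, a short Python sketch computing the allele-level odds ratio with a Woolf 95% confidence interval, plus a chi-square Hardy-Weinberg check, using the genotype counts reported in the Results below:

```python
import numpy as np
from scipy.stats import chi2

def odds_ratio_ci(a, b, c, d):
    """OR for exposure counts (a, b) in cases and (c, d) in controls,
    with a Woolf (log-scale) 95% confidence interval."""
    or_ = (a * d) / (b * c)
    se = np.sqrt(1/a + 1/b + 1/c + 1/d)
    lo, hi = np.exp(np.log(or_) + np.array([-1.96, 1.96]) * se)
    return or_, lo, hi

def hwe_chi2_p(tt, tc, cc):
    """Chi-square goodness-of-fit test against Hardy-Weinberg proportions."""
    n = tt + tc + cc
    p = (2*tt + tc) / (2*n)                       # T allele frequency
    expected = n * np.array([p**2, 2*p*(1-p), (1-p)**2])
    stat = (((np.array([tt, tc, cc]) - expected)**2) / expected).sum()
    return chi2.sf(stat, df=1)

# Genotype counts from the Results: OA TT/TC/CC = 38/41/11, controls = 33/47/23
t_oa, c_oa = 2*38 + 41, 2*11 + 41     # T and C allele counts in cases
t_ct, c_ct = 2*33 + 47, 2*23 + 47     # T and C allele counts in controls
print(odds_ratio_ci(t_oa, c_oa, t_ct, c_ct))  # ~ (1.53, 1.01, 2.31)
print(hwe_chi2_p(33, 47, 23))                 # > 0.05 for the control group
```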
Results

In our study, the average age of patients in the knee OA group was significantly older than in the control group. BMI, one of the risk factors for knee OA, was also slightly higher in the OA patient group. However, other possible risk factors which might be related to OA, as well as patient lifestyle, were comparable between these two groups. The baseline characteristics of the patients are shown in Table 1.

[Figure 1 shows the bands of each genotype in the electrophoresis gel: A represents TT, B represents CC and C represents TC.]

The prevalence of the T and C alleles in our sample was distributed in accordance with Hardy-Weinberg equilibrium (p-value > 0.05). The comparison of the counts and odds ratios of genotypes TT, TC, and TT+TC versus CC of the GDF5 polymorphism (rs143383) in the promoter area between cases and controls is presented in Table 2. When analyzed by allele (shown in Table 3), the odds ratio of the T allele was 1.53 (95% C.I. = 1.01-2.31), indicating an increased risk of developing knee OA. Our results show that this polymorphism (T/C) acts in an autosomal recessive manner, as the TT genotype increases the risk of disease significantly (OR = 2.41, 95% C.I. = 1.02-5.67), whereas TC shows no significant difference.
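To make the PCR-RFLP readout in Figure 1 concrete, a small sketch mapping the BsiEI fragment patterns given in the Methods to genotype calls; band detection itself is assumed to happen upstream:

```python
# Expected BsiEI fragment patterns (bp) for rs143383, as described in the Methods:
# the C allele is cut into 104 + 230 bp, while the T allele remains uncut (344 bp).
PATTERNS = {
    frozenset({104, 230}):      "CC",
    frozenset({104, 230, 344}): "TC",
    frozenset({344}):           "TT",
}

def call_genotype(bands):
    """Return the rs143383 genotype for a set of observed fragment sizes (bp)."""
    return PATTERNS.get(frozenset(bands), "undetermined")

print(call_genotype({344}))            # -> TT
print(call_genotype({104, 230, 344}))  # -> TC
```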
Moreover, it has been reported that the differential binding of deformed epidermal autoregulatory factor 1 (DEAF-1) can modulate the expression of GDF5 via this polymorphism [29]. It is believed that GDF5 plays a role in the regulation of chondrogenic cell growth and differentiation. This evidence supports a function of the GDF5 gene which might indicate the importance of this polymorphism in osteoarthritis etiology. As a result, many scientists have been working to study the association of this polymorphism in the GDF5 gene. Recently, a large-scale association study between the GDF5 gene and osteoarthritis revealed the genetic susceptibility conferred by this polymorphism [18]. This study included data from diverse ethnic groups which have been reported in the recent literature. However, the magnitude of this association is smaller than in Asian ethnic groups; it has been reported from Japan by Miyamoto et al. that the per-risk-allele (T) odds ratio for the GDF5 polymorphism is 1.79 (P = 1.18 × 10-13) [17]. The magnitude of the association of the GDF5 polymorphism thus differs between Caucasian and Asian ethnic groups. Furthermore, studies from Greece and Korea could not demonstrate an association of the GDF5 polymorphism [20,30]. We have also studied the ESR1 polymorphism, which has been associated with knee OA in the Korean population; however, our result did not show any statistically significant difference [31]. Thus, this evidence implies that the association between ethnicity and genetic susceptibility in osteoarthritis might not be consistent among different populations and cannot be extrapolated from one population to another. In addition, there is a report that the GDF5 polymorphism (rs143383) also predisposes to lumbar disc degeneration in women. Lumbar disc degeneration, defined by disc space narrowing and the presence of osteophytes, was significantly associated with the GDF5 polymorphism in women in cohorts from Northern Europe, with an odds ratio (OR) of 1.72 (95% CI 1.15-2.57) [32]. A genome-wide association study from Finland and Sardinia showed that common variants in GDF5 contribute to height differences [33]. The GDF5 gene might be involved in skeletal growth and development, and GDF5 variants might play a role in the pathogenesis of bone and cartilage diseases. Although the sample size in our study was small compared to previous studies, these limited samples were sufficient to demonstrate a statistically significant association of rs143383 in the GDF5 core promoter area. Furthermore, the population in our study was homogeneous, and the distribution between the case and control groups was in Hardy-Weinberg equilibrium. Therefore, our study can represent the significant susceptibility conferred by GDF5 in knee osteoarthritis in the Thai ethnic group. Our findings emphasize the susceptibility conferred by the GDF5 polymorphism among Asian populations, because the magnitude of the association is close to that in the Japanese and Chinese populations; the Thai population is believed to be close to Chinese ancestry. Finally, we would like to leave some suggestions for genetic susceptibility studies. Firstly, it is important to study genetic susceptibility in common diseases, as this might be valuable for further investigation in order to understand the disease at the molecular level, or even be applied to disease screening.
More importantly, the association between a polymorphism and a disease may not be the same in different ethnic groups; therefore, genetic susceptibility should be assessed in each ethnic group, and meta-analyses should be conducted within adjacent areas or populations with a similar genetic background. Lastly, other modern technologies such as microarray techniques should be employed for the investigation of genetic susceptibility in osteoarthritis.
FPGA implementation of new LM-SPIHT colored image compression with reduced complexity and low memory requirement compatible for 5G ABSTRACT INTRODUCTION With the fast development of mobile systems, techniques need to be changed to cope with these developments; among the modifications that need to take place such as image compression. Image compression can be achieved by removing one or more of the three basic data redundancies as briefly outlined below: [1] a. Coding redundancy: if the number of bits per pixel that is required to represent the image is higher than is necessary. b. Inter pixel redundancy: The correlations among image pixels, which result from the structural or geometrical relationships between the objects in the image lead to inter pixel redundancy. c. Psycho visual redundancy: The less important information is considered to be redundant since it is ignored by the human vision system and hence omitted. Many researchers implemented JPEG2000, but the problem was always the high decoding complexity and the amount of memory required to store data. JPEG 2000 image compression is based on Discrete Wavelet Transform (DWT) which is considered a Lossy compression method. The objective of DWT coding is to divide the spectrum of one image into the Low-pass and the High-pass components. JPEG 2000 is a 2-dimension DWT based Image Compression standard [2]. A coding algorithm developed for DWT transformed images is the Set Partitioning in Hierarchical Trees Algorithm (SPIHT). The SPIHT algorithm can be applied to grey-scale and colored images. It ensures that important information is restored first to make it an effective use in networks [3]. SPIHT algorithm is applied on the wavelet transformed image to reduce the correlation between neighboring pixels especially when original image is concentrated in the lowest frequency band of the transformed image [3]. In this paper a modified version of SPIHT is based on reduction of three processing lists in sorting pass into single processing list including refinement passes. Also to improve compression performance and further reduction in memory requirement, Listless SPIHT has been proposed. In L-SPIHT, coefficients are extracted from the whole wavelet-transformed image and encoded separately to reduce the memory requirement without the need to store lists. Another modification to SPIHT was introduced is the use of three trees instead of continuous scanning. This will help reduce the processing time by performing parallel processing to the three trees. As a final compression step, Runlength Encoding is a very simple form of lossless data compression by storing a single data value and its repetition count. This is most useful on data containing many such runs such as icons, line drawings, and animations [4]. From the parameters that need to be calculated to test the quality of the reconstructed image are Mean Square Error (MSE) and peak signal to noise ratio (PSNR) ratio. Let's assume that the original image is 'A' and the reconstructed image is 'B'. 10 ( 2 ) The original and the reconstructed images are considered indistinguishable by human eyes if the PSNR value is 40 dB or greater [5]. Another performance factor is the compression ratio of the image is given by CR ( Yashaswini P R1, and Mr. Ravi Kiran [5] used a modified SPIHT that utilizes only a single list instead of three lists. Thomas W. Fry, and Scott Hauck [7] used a modification to the original SPIHT algorithm to reduce the computation. THE PROPOSED METHOD 2.1. 
THE PROPOSED METHOD

2.1. Discrete Wavelet Transform

The DWT has become popular because it provides an efficient method for sub-band decomposition of signals. The process starts by passing the signal through filters with different cut-off frequencies [9]. These filters are the low-pass and high-pass decomposition filters, which generate four lower-resolution components: one low-low (LL1) sub-image, which is the approximation of the original image, and three detail sub-images, which represent the horizontal (LH1), vertical (HL1) and diagonal (HH1) directions of the original image [10,11]. Firstly, the rows of the array are passed to the filters to divide the array into two vertical halves: the first half represents the average coefficients, while the second vertical half represents the detail coefficients. Secondly, the process is repeated on the columns, resulting in four sub-bands [9]. Figure 1 shows the output of the two-dimensional wavelet transform for two decomposition steps in a conceptual way. L denotes a low-pass and H a high-pass band. The first letter stands for the transform of the rows, the second one relates to the columns, and the number to the decomposition step [12].

2.2. Set Partitioning in Hierarchical Trees (SPIHT) Algorithm

Embedded coding is based on a threshold: values greater than or equal to the threshold are called significant, while values less than the threshold are known as insignificant. The significance test can be written as

$$S_n(\mathcal{T}) = \begin{cases} 1, & \max_{(i,j)\in\mathcal{T}} |c_{i,j}| \geq 2^n \\ 0, & \text{otherwise} \end{cases} \tag{4}$$

This indicates that if the coefficient with maximum magnitude in a set is significant, then the significance test result is 1 [12]. The value of $n$ can be obtained using

$$n = \left\lfloor \log_2 \left( \max_{(i,j)} |c_{i,j}| \right) \right\rfloor \tag{5}$$

The encoder and the decoder perform the same steps, and when the decoder receives the results of the threshold comparisons from the encoder, it can recover the ordering information from the execution path [13]. For a given bit rate, using fewer bits to represent significant values gives a lower output bit rate [14]. The wavelet coefficients can be classified into three lists, where each set entry represents a whole subtree. These lists are the List of Insignificant Sets (LIS), the List of Significant Pixels (LSP), and the List of Insignificant Pixels (LIP) [13]:

a. LIS includes the locations of the coefficient sets that are considered insignificant with respect to the selected threshold.
b. LSP includes the pixels that are significant with respect to the selected threshold.
c. LIP includes the pixels that are considered insignificant with respect to the selected threshold.

Every pass is divided into two parts [6]: the sorting pass, in which every value that was insignificant in the previous pass and is significant in this pass is encoded, and the refinement pass, in which the nth MSB of all the significant values is output. Passes continue until the target data rate is reached or n = 0.

2.3. Runlength Encoding

Runlength coding is one form of lossless compression. It represents a run of identical numbers by two values: the number followed by its run count [4]. Runlength encoding provides large compression of data when the data contain a large number of runs. One drawback is that when the data contain only a small number of runs, it provides no compression; in the worst case, if there are no runs at all, it increases the sequence size instead of reducing it [15].

THE PROPOSED METHOD

The proposed model is depicted in Figure 2. At first, the input RGB image of size QCIF (176x144) is converted to a binary file using a MATLAB program, to be used as the input to the VHDL program.
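The paper performs this conversion with a MATLAB program; as a rough illustration, an equivalent sketch in Python, where the file names and the raw byte layout are assumptions, not the paper's exact format:

```python
import numpy as np
from PIL import Image

# Resize the input to QCIF (176x144) and dump the raw 8-bit R, G, B planes,
# ready to be read by a VHDL testbench (a stand-in for the MATLAB step).
img = Image.open("input.png").convert("RGB").resize((176, 144))
rgb = np.asarray(img, dtype=np.uint8)          # shape (144, 176, 3)
for k, name in enumerate("rgb"):
    rgb[:, :, k].tofile(f"layer_{name}.bin")   # one raw file per color layer
```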
The wavelet filter is carried out for the red, green, and blue image layers in parallel. Each 8×8 block is processed individually to reduce processing time; hence only 64 pixels (8×64 bits) need to be stored in a buffer before being passed to the next stage. A further reduction is made by passing only the approximation sub-band (LL) of the wavelet filter output to the SPIHT encoder, which is approximately a quarter of the original image (88×72 pixels).

An LM-SPIHT is proposed where three trees are used instead of one. Each 8×8 block of the LL sub-band is divided into three trees, without the need to store lists of significant and insignificant entries. The 8×8 block with the three zigzag-scanned trees is shown in Figure 3. The zigzag scanning order is performed on each tree, and then each value is compared to the threshold in parallel. The parallelization of the computation for each tree reduces the computation time. The number of levels at which the trees are compared to the threshold depends on the required output data rate: the fewer the levels, the lower the data rate. Parallel processing has been adopted in implementing the SPIHT coding, where each tree is processed individually. Figure 4 shows a flowchart of LM-SPIHT parallel processing.

The output of SPIHT in the previous step is encoded using run-length encoding. Run-length encoding is useful when applied to the SPIHT output bitstream: since the SPIHT output is represented by only three symbols (1: significant set, 10: significant pixel, and 0: insignificant pixel), the output contains long successive runs of 10 or 0. A flowchart of the run-length encoding is shown in Figure 5. This step reduces the number of bits that must be transmitted to the receiver, hence reducing the data rate of the overall system and making it more compatible with 5G mobile networks. Table 1 shows the LM-SPIHT algorithm step by step:

If the 'LL' quarter of the wavelet output, then
step 6: Compute the threshold
step 7: Divide the block into three trees
step 8: If at least one value in subset S1 ≥ threshold, then
step 9: If a value in S1 ≥ threshold, then out = 10
step 10: If a value in S1 < threshold, then out = 0
step 11: Else
step 12: Skip subset S1
step 13: If at least one value in subset S2 ≥ threshold, then
step 14: If a value in S2 ≥ threshold, then out = 10
step 15: If …
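Before the hardware details, the run-length stage applied to the LM-SPIHT output above can be sketched as follows; the symbol tokenization and the sample stream are assumptions for demonstration, not the paper's bit-level format.

```python
from itertools import groupby

def rle_encode(symbols):
    """Run-length encode a symbol sequence: each run of identical symbols
    becomes a (symbol, count) pair."""
    return [(sym, len(list(run))) for sym, run in groupby(symbols)]

def rle_decode(pairs):
    """Invert rle_encode."""
    return [sym for sym, count in pairs for _ in range(count)]

# SPIHT emits only three symbols: '1' (significant set),
# '10' (significant pixel), '0' (insignificant pixel), so long runs are common.
stream = ['1', '10', '10', '10', '0', '0', '0', '0', '10', '10']
packed = rle_encode(stream)
print(packed)                        # [('1', 1), ('10', 3), ('0', 4), ('10', 2)]
assert rle_decode(packed) == stream  # lossless round trip
```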
HARDWARE IMPLEMENTATION

The system was implemented using a netFPGA-CML-IG Kintex-7 board in the standalone form of connection, as shown in Figure 6. Xilinx ISE Design Suite 14.7 was used to write the VHDL code, which was then compiled with zero errors, and a test bench was written in VHDL. To implement the one-level 2D-DWT, the input bits are passed to 2-bit shift registers to perform a FIFO operation. The output bits are then passed to the low-pass and high-pass filters in parallel. The operation continues until all rows of the image have been processed. The output of each filter is then passed again to a 2-bit shift register to perform the FIFO operation, and the same operation is applied to the bits in each column. The Register Transfer Logic (RTL) view of the wavelet circuit is shown in Figure 8.

The next step is implementing the low-pass and high-pass filters. The low-pass filter can be implemented as the average of the current signal value and the previous one, as in (6):

s(i) = (x(2i) + x(2i+1)) / 2   (6)

Figure 8. RTL view of the wavelet filters

Intuitively, the high-pass filter is calculated by determining the distance between the average and one of the signal values (in other words, the difference between the current and the previous signal value, divided by 2), as in (7):

d(i) = (x(2i) − x(2i+1)) / 2   (7)

The data cannot be stored directly to the output RAM, because the compression still needs to be performed. For this purpose, a memory controller device is created which takes the decomposition values and controls the address and data buses of the output RAM to store the correct values. This operation is performed for the red, green, and blue layers of the image in parallel. Figure 9 depicts the simulation results of the wavelet transform. A sample of the SPIHT encoder code written in Xilinx ISE is shown in Figure 10. The output of the wavelet transform is next SPIHT-encoded (note that only LL is encoded); the RTL configuration is shown in Figure 11. The same operation is done for red, green, and blue in parallel. The simulation results of the SPIHT encoding are shown in Figure 12. A sample of the SPIHT decoder code in ISE is shown in Figure 13. The VHDL code is downloaded to the netFPGA device using the JTAG connector; using the SVF (Serial Vector Format) mode, multiple code programs are downloaded simultaneously in the sequence shown in Figure 14.

RESULTS AND DISCUSSION

The input image for DWT and LM-SPIHT and the image restored from DWT and LM-SPIHT compression are presented in MATLAB. Figure 15 shows the encoding and decoding output for the wavelet transform with LM-SPIHT. As can be seen, the difference between the original and reconstructed images is undetectable by human eyes; it is only noticeable through calculation or graphical representation. Since the PSNR for the Lena image using the wavelet transform and the LM-SPIHT technique is 51.4 dB (with an MSE of 46.56), which is greater than the 40 dB threshold, the two images are virtually indistinguishable. The compression ratio for the Lena image is 40%, which shows that the image has been compressed, as shown in Table 2. The calculations show that the hardware implementation sped up the processing of the Lena image 14604 times compared with the results obtained from software (MATLAB) simulations, making it highly promising for real-time and memory-limited mobile communication. LM-SPIHT was compared with several models in the literature in terms of speedup factor, FPGA processing time, and PSNR, as shown in Table 3.

CONCLUSION

This paper focuses mainly on compressing images using the wavelet transform together with the proposed LM-SPIHT, adding another level of compression using lossless run-length encoding. The wavelet transform together with the LM-SPIHT technique enhances the quality of the reconstructed image in terms of PSNR and MSE. The Haar wavelet transform is a standard technique because of its high efficiency, and SPIHT provides a variety of important characteristics such as good image quality and high PSNR; together with the Haar wavelet transform, this makes the proposed work a very efficient design. The purpose of this paper is to provide an improved image compression algorithm that is fast, efficient, and low-memory, suitable for real-time implementations. The design was successfully implemented, tested, and validated on a Xilinx netFPGA-CML-IG Kintex-7, utilizing 1,797 logic cells and achieving a maximum frequency of 260.3 MHz. The hardware implementation of the proposed algorithm sped up the processing of the Lena image 14604 times compared with the results obtained from MATLAB simulations.
Thus, the FPGA realization of the design is proven to be highly promising for real-time and memory-limited mobile communication.
2019-05-12T13:11:02.117Z
2019-03-01T00:00:00.000
{ "year": 2019, "sha1": "73ff6154d31ab3b0482371c41094056bf787b897", "oa_license": "CCBYSA", "oa_url": "http://ijres.iaescore.com/index.php/IJRES/article/download/17889/pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "ad21ac4d3c97ebbc00bd5f32f9cbe0912a8d9821", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
258806912
pes2o/s2orc
v3-fos-license
The Influence of Knowledge Worker Salary Satisfaction on Employee Job Performance

Knowledge workers are crucial to every enterprise, so exploring the relationship between their salary satisfaction and job performance is significant. Hence, this work observes their salary satisfaction by identifying employees' emotions at the time of salary announcement, and the relationship between salary satisfaction and job performance is studied through the obtained satisfaction. First, the convolutional neural network (CNN) model is introduced. Then, it is optimized by adding an attention mechanism to improve the accuracy of the emotion recognition model. Finally, through comparative experiments, the effectiveness of the proposed model and the impact of employees' salary satisfaction on job performance are verified. The experimental results show that the recognition accuracy of the model is much higher than that of the traditional model; in particular, the recognition accuracy for neutral emotions is as high as 95%, which verifies the effectiveness of the model. The work also covers the selection of different activation and loss functions according to different situations. Finally, the attention mechanism is introduced into the model to optimize it, and comparative experiments are conducted to verify the rationality of the model and the relationship between employee compensation satisfaction and job performance. The research innovation is the use of CNN models to identify employees' emotions in real time, which is more accurate than information obtained from questionnaire surveys, because employees' subjective feelings during a questionnaire survey may cause errors in the analysis results. Meanwhile, this work also provides direction for optimizing CNN models and contributes to the design of deep learning models.

Drosos et al. (2021) conducted interviews within enterprises to study employee salary satisfaction and job performance. Based on the interview results and the company's current employee-satisfaction situation, they proposed research hypotheses and built a research model. Then, questionnaire distribution, data collection, data analysis, correlation analysis, regression analysis, and other methods were adopted to verify the previously proposed model and identify the relevant influencing factors. Finally, the factors significantly impacting employee satisfaction during the performance process were identified and reported to the company's management for analysis (Drosos et al., 2021). Liang et al. (2021) also conducted research through questionnaires, obtaining 410 valid responses. The fuzzy-set qualitative comparative analysis method was applied to explore in depth the impact mechanism of working conditions, sense of responsibility, leaders, and external rewards on employee job satisfaction. The research results show that working conditions and leaders are vital in improving employee job satisfaction, while responsibility and external rewards mainly play a "health care" role. There are four practical ways to improve employee job satisfaction: working conditions and leaders are the core conditions, and a sense of responsibility and external rewards are auxiliary conditions (Liang et al., 2021). Regarding emotion recognition, Dzedzickis et al. (2020) proposed a subjective and objective feature fusion neural network model for emotion recognition.
This model could effectively learn the spatiotemporal information of electroencephalogram (EEG) signals, dynamically integrate EEG signals with eye-movement signals, and output emotion classification results through a classifier. Finally, comparative experiments were conducted using public datasets. The experimental results showed that the new model's accuracy was 86.27% with a standard deviation of 10.16%, which was superior to traditional models. The new model could better utilize the complementary relationship between subjective and objective features to achieve better emotion recognition (Dzedzickis et al., 2020).

Literature Review

In previous research, data were mainly obtained through questionnaire surveys to analyze the relationship between employee salary satisfaction and job performance. This work studies the real-time emotions of employees through neural network models. When optimizing emotion recognition models, the previous focus was on the information input, such as data preprocessing. However, this work modifies the convolutional layer to improve the model's performance and reduce the need for data processing.

Convolutional Neural Network (CNN)

Deep learning is widely applied in many fields, especially image classification, from which the CNN was born. Its spatial invariance and channel specificity have significantly improved image processing speed and quality (Ghosh et al., 2020). A CNN is a kind of artificial neural network that divides the image into several small areas during feature extraction and recognition; these small areas are called "receptive fields". The convolution kernel and the receptive-field size in a CNN should be consistent. The pixel values in the image's receptive field are multiplied element-wise by the convolution kernel and summed, and then an offset is added. Figure 1 displays the process. The convolution operation processes the input data by sliding the filter by a fixed distance: the filter values at each position are multiplied by the corresponding input data and summed, and the result is stored at a specific output location. The filter mentioned here is the convolution kernel, and the distance by which the filter moves is called the stride (Ciancetta et al., 2020). Before the convolution operation, fixed data are sometimes added around the input data to adjust the output size; this is called padding. It is a method often used in convolution so that the operation can deliberately adjust the output size and transmit the data to the next layer (Lou & Shi, 2020). Finally, the output size can be calculated from the input size, padding, and stride. With input size (H, W), filter size (h, w), output size (H′, W′), padding P, and stride s, the calculation equations are:

H′ = (H + 2P − h) / s + 1
W′ = (W + 2P − w) / s + 1

Most pooling operations refer to a down-sampling operation in the pooling layer, generally conducted independently on each channel. Its stride relates to the shape and size of the receptive field used, and it normally works in cooperation with the convolutional layer. It lets the network detect the presence of certain features while reducing the spatial size and the amount of computation; it can also prevent over-fitting and expand the receptive field (Chen et al., 2020). The most common operations in the network are maximum pooling and average pooling (Xu & Qiu, 2021).
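As a quick check of the output-size formula above, the following sketch computes H′ for a few hypothetical layer settings; the example shapes are illustrative assumptions, not values from the paper.

```python
def conv_output_size(in_size: int, kernel: int, padding: int = 0, stride: int = 1) -> int:
    """Output length along one axis: (in + 2*padding - kernel) // stride + 1."""
    out, rem = divmod(in_size + 2 * padding - kernel, stride)
    assert rem == 0, "these settings do not tile the input evenly"
    return out + 1

# A 48x48 input (the face-crop size used later), 3x3 kernel,
# padding 1, stride 1 -> the spatial size is preserved.
print(conv_output_size(48, kernel=3, padding=1, stride=1))  # 48
# 2x2 max pooling with stride 2 halves each axis.
print(conv_output_size(48, kernel=2, padding=0, stride=2))  # 24
```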
Maximum pooling takes the maximum value in the selected range as the pooling result, and average pooling takes the average of all values in the selected field as the pooling result. Figure 2 shows the pooling operation. Last is the fully connected layer. Its primary purpose in a deep network is to classify correctly. It is generally placed at the end of the model structure in practical use to weigh the features produced by the preceding convolutional layers; that is, it integrates the components and performs dimensionality-reduction operations on the combined feature map (Khan et al., 2020). It does not constrain the size of the input image, as long as the feature map reaching the last layer of the network has the expected size (Yang et al., 2021).

The confusion matrix is an error matrix for image classification results and a commonly used auxiliary tool in deep learning. As its name implies, it highlights whether the model confuses the fundamental categories of the test set (Markoulidakis et al., 2021). The matrix values involved in the confusion matrix are generally the precision rate, recall rate, and accuracy rate. The precision rate is the probability that the true category accounts for the predicted column category: the expected number divided by the total of the column (Qiu et al., 2020). The recall rate is the probability that the real values predicted by the model account for the real category: the expected number divided by the sum of the row (Shen et al., 2020).

The selection of the loss and activation functions in a CNN is also significant. The smaller the value of the loss function, the better the robustness of the network model (Akbari et al., 2021). The loss function is mainly applied during training: when each batch of training data enters the network, the output value is calculated by forward propagation; the loss function then computes the difference between the output value and the real value, which is the loss value (Clough et al., 2020). After obtaining the loss value, the model searches for the optimal weight parameters through back-propagation, reducing the loss between the real and predicted values so that the model's output approaches the real value (Qu et al., 2022). Functions commonly used in classification tasks include the mean square error (MSE) and the cross-entropy error (Chen et al., 2021). The MSE loss function is quite intuitive: it uses the Euclidean distance between the predicted and real values to find the loss, so the closer the prediction is to the real value, the smaller the MSE (Gupta et al., 2020). With ŷ(k) the output of the neural network, y(k) the supervised data, N the sample quantity, E the loss function, and k the sample index, the equation is:

E = (1/N) Σₖ (ŷ(k) − y(k))²

However, MSE has a disadvantage: when the output probability is close to 0 or 1, its partial derivative may vanish at the very start of training. With x(i) the input of the layer along data dimension i and a sigmoid output ŷ(i) = σ(x(i)), the partial derivative is:

∂E/∂x(i) = (2/N) (ŷ(i) − y(i)) · ŷ(i) (1 − ŷ(i))

which tends to 0 as ŷ(i) approaches 0 or 1.
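A small numeric sketch of the vanishing-gradient behavior just described: for an MSE loss on a sigmoid output, the gradient with respect to the pre-activation carries a factor ŷ(1 − ŷ) that collapses near saturated outputs. The sample inputs below are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mse_grad_wrt_preactivation(x, y):
    """d/dx of (sigmoid(x) - y)^2: the chain rule adds a y_hat*(1-y_hat) factor."""
    y_hat = sigmoid(x)
    return 2.0 * (y_hat - y) * y_hat * (1.0 - y_hat)

# A confidently wrong prediction (x = -8 -> y_hat ~ 0.0003) versus target 1:
for x in [0.0, -2.0, -8.0]:
    print(f"x={x:5.1f}  y_hat={sigmoid(x):.4f}  grad={mse_grad_wrt_preactivation(x, 1.0):+.6f}")
# The gradient shrinks toward 0 as the sigmoid saturates, even though the
# prediction is badly wrong -- the vanishing-gradient issue noted in the text.
```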
The cross-entropy function is essentially derived from applying cross-entropy, a concept from information theory used in the field of communication, to classification tasks. It is also a function often used in classification tasks. The smaller the value of the cross-entropy, the closer the two probability distributions are. With y₁ the probability that the sample is predicted to be the true class, y the sample's label, and log the logarithmic function, its equation is:

L = −[y · log(y₁) + (1 − y) · log(1 − y₁)]

In network model construction, the choice of activation function is also significant. An activation function runs on the network neuron; its role is to map the input to the output in a nonlinear form, which increases the nonlinearity of the network model so that the network can be applied to various nonlinear models. The Sigmoid function is a composite function in exponential form and is the most frequently used function in neural networks. Physically, it is close to human brain neurons and is the most familiar S-type function in biology. With Sigmoid the activation function, e the natural constant, x the data input, and Sigmoid′ its derivative, its equations are:

Sigmoid(x) = 1 / (1 + e^(−x))
Sigmoid′(x) = Sigmoid(x) · (1 − Sigmoid(x))

Figure 3 displays its function image and partial derivative image. As the input becomes increasingly large, the output gradually approaches 1 while remaining smaller than 1; as the input gets smaller and smaller, the output gradually approaches 0 while always remaining greater than or equal to 0. In back-propagation, the gradient takes the y-axis as its symmetry axis, and when the value of the Sigmoid function approaches 1 or 0, its gradient tends to 0; that is, the gradient vanishes (Szandała, 2021). Since the Sigmoid activation function is a composite exponential function, its differentiation involves the derivative of an exponential and of a quotient, so its computation is relatively heavy compared with other functions. The ReLU activation function, by contrast, is clearly a piecewise function. Figure 4 displays its function image and partial derivative image. When the input is negative, the output is 0; when the input is positive, the output remains unchanged. Whereas the Sigmoid gradient quite easily approaches 0 in back-propagation, the ReLU gradient for positive inputs is always 1, so no vanishing-gradient problem occurs there. With Relu the activation function, x the input data, and Relu′ its derivative, the calculation equations are:

Relu(x) = max(0, x)
Relu′(x) = 1 if x > 0, else 0

After the basic situation of the CNN has been discussed, the next step is to optimize the emotion recognition model to improve the recognition accuracy.
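To contrast the two losses, a short sketch: for a sigmoid output, the cross-entropy gradient with respect to the pre-activation simplifies to (ŷ − y), without the extra ŷ(1 − ŷ) factor that makes MSE vanish. The numbers below are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bce(y_hat, y):
    """Binary cross-entropy for a single prediction."""
    eps = 1e-12  # guard against log(0)
    y_hat = np.clip(y_hat, eps, 1 - eps)
    return -(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

def bce_grad_wrt_preactivation(x, y):
    """For y_hat = sigmoid(x), d(BCE)/dx simplifies to y_hat - y."""
    return sigmoid(x) - y

for x in [0.0, -2.0, -8.0]:
    y_hat = sigmoid(x)
    print(f"x={x:5.1f}  loss={bce(y_hat, 1.0):7.4f}  grad={bce_grad_wrt_preactivation(x, 1.0):+.4f}")
# Unlike the MSE case, the gradient magnitude stays close to 1 for a badly
# wrong, saturated prediction -- one reason cross-entropy is often preferred.
```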
Facial Emotion Recognition Algorithm Based on an Attention Mechanism Network

Attention is generally the tendency to focus on certain aspects or areas. When attention is added to a neural network, the network becomes more intelligent: when extracting features, it focuses on the key parts. An ordinary CNN, however, aims only at global features and cannot selectively attend to the salient features of key positions in input images (Cai & Wei, 2020). Compared with a facial expression recognition network based on general convolution, which captures the overall global features of the face, the proposed facial expression algorithm based on the attention mechanism more easily captures the key information of facial parts, namely the local information of the facial expression; it combines attention to focus the network on the key parts of the face (Qi et al., 2020). Face emotion recognition based on the attention mechanism mainly includes three modules: preprocessing, feature extraction, and expression classification (Jang et al., 2020). Figure 5 displays the specific process.

This work improves the attention neural network, which mainly consists of three parts: a spatial transformer, a backbone network, and an attention network. Figure 6 presents the optimization results. The spatial transformer network gives the model spatial invariance: by letting the neural network learn the spatial changes of the image, such as cropping and zooming a side face, it corrects the orientation of the image, turns it into the ideal face orientation, performs a spatial transformation on the input image, and outputs a new image (Wen et al., 2022). The spatial transformer network is mainly composed of two parts: a localization network and a grid generator (Xu et al., 2022). This work designs a localization network of two fully connected layers, as shown in Figure 7.

Figure 5. Facial expression recognition

The dataset involved here is small, and the requirements for network complexity are low. Moreover, to highlight its effect, the backbone network designed in this algorithm is improved based on the AlexNet model. The AlexNet model generally comprises five convolutional layers and three fully connected layers in an eight-layer network structure. Overlapping pooling is selected as its pooling method: the pooling window is larger than the stride, so successive pooling regions overlap, which can avoid partial over-fitting (Hao et al., 2022). This work optimizes the network by adding an asymmetric convolutional layer; Figure 8 displays the optimized network. Replacing layers with the asymmetric convolutional layer strengthens the backbone and requires no additional parameters.

Channel attention is used to model the importance of each channel and then highlight or constrain channels for different tasks (Kong et al., 2021). The specific process is as follows: the input feature first passes through a global max pooling layer and an average pooling layer; it then passes through two 1×1 convolutional layers that increase and reduce the dimension; finally, two vectors with the same dimensions are generated and added, and the channel attention is generated through the activation function (Shi et al., 2022). Figure 9 shows the channel attention model. Spatial attention is mainly used to improve the feature representation of key parts. The specific process is as follows: two feature maps are obtained through channel-wise average pooling and global max pooling of the input features and are then concatenated along the channel axis. The concatenated feature map undergoes a 7×7 convolution to reduce its dimension, spatial attention is generated through the activation function, and the scaled new features are finally obtained (Wen et al., 2020). The attention module here adopts global average pooling and global max pooling, which are biased towards the overall situation.
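A minimal PyTorch sketch of the channel-attention path just described (global max and average pooling, two 1×1 convolutions to reduce and restore the dimension, addition, and a sigmoid); the framework choice, the channel count, and the reduction ratio are assumptions for illustration, not the authors' code.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """CBAM-style channel attention: max-pool and avg-pool branches share a
    two-layer 1x1-conv bottleneck; their outputs are summed and squashed."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.max_pool = nn.AdaptiveMaxPool2d(1)
        self.mlp = nn.Sequential(  # 1x1 convolutions: reduce then restore dimension
            nn.Conv2d(channels, channels // reduction, kernel_size=1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1, bias=False),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn = torch.sigmoid(self.mlp(self.avg_pool(x)) + self.mlp(self.max_pool(x)))
        return x * attn  # rescale each channel by its learned importance

# Illustrative shapes: a batch of 48x48 feature maps with 64 channels.
x = torch.randn(2, 64, 48, 48)
print(ChannelAttention(64)(x).shape)  # torch.Size([2, 64, 48, 48])
```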
However, the attention mechanism network used here builds on the above attention mechanism: it uses an importance pooling method to form local channel attention and cascades it with the above spatial attention to finally form the improved attention mechanism module. Importance pooling is a pooling layer based on local importance, highlighting the importance of different features; compared with global pooling, it can highlight classification features more. This work applies it to channel attention to form channel attention with local importance. Adding importance pooling to channel attention is quite simple: first, the input features generate an importance map through convolutional learning; then the importance is used to perform a normalization operation; after that, the obtained feature values are multiplied by the input feature values after an exponential calculation; finally, a division by the exponentiated feature values is performed so that the importance is better learned and the channel attention is obtained. Figure 10 displays the improved attention module. When training the model, the data first pass through the spatial transformer network, that is, an affine transformation, after which the input feature vector is still 1×48×48. Finally, the attention module is added to the first and last convolutions of the model.

Design of the Questionnaire for Knowledgeable Employees

Knowledgeable workers are professionals who focus on knowledge and are driven by innovation. They possess a high degree of autonomy and creativity and can solve problems and promote organizational development through the use of knowledge. Such employees usually have a high degree of education and professional skills and are mainly engaged in knowledge-intensive or highly information-based fields, such as research and development, design, consulting, and marketing. Unlike workers in the traditional sense, knowledgeable employees pay more attention to exercising their creativity and value and focus more on self-realization and growth. Therefore, enterprises need to provide these employees with a good work environment and promotion mechanisms and stimulate their enthusiasm and creativity to ensure their continuous contribution to the organization and its competitive advantage. Hence, understanding their satisfaction with salary becomes quite crucial. Based on this, this work designs a questionnaire, and the answers to the fill-in-the-blank questions are classified manually. Table 1 shows part of the questionnaire.

Comparative Experimental Results of Emotion Recognition Accuracy Between the Optimized Model and the Traditional Model

Seven emotions are divided into three categories: positive, negative, and neutral. The selected dataset is the public FER2013 dataset. Table 2 shows the equipment environment used in this experiment. Figure 11 shows the experimental comparison of recognition accuracy between the traditional and optimized models: Figure 11a shows the data from the conventional model, and Figure 11b presents the data from the optimized model. Figure 11 suggests that the traditional emotion recognition model has its highest recognition rate for neutral emotions, with an accuracy of 81%; its accuracy for positive and negative emotions is only 76% and 77%, respectively. The optimized model's recognition rate for neutral emotion is 95%, the highest, and its recognition accuracy for positive and negative emotions is 92% and 91%.
This reveals that the accuracy of neutral emotion recognition between the two models can differ by up to 14%. This comparative test verifies the effectiveness and feasibility of the optimized model.

An Experimental Analysis of the Effect of Salary Satisfaction of Knowledgeable Employees on Job Performance

Employees' emotion when they learn their salary can reflect their satisfaction with it. The salary data and job performance data of 200 actual employees are selected; in addition, the company has conducted a salary satisfaction questionnaire. When employees learn their salary, the proportions of employees with positive, neutral, and negative emotions are 73%, 21%, and 6%, respectively. Job performance is the January performance, divided into two levels: a performance ranking in the top 50% is excellent, and a ranking in the bottom 50% is good. Figure 12 presents the experimental results. Figure 12 shows that 93 employees who maintained positive emotions have excellent performance, accounting for 63.7% of the total number of employees with positive emotions. Among the employees whose performance ranks in the bottom 50%, 9 are in a negative mood, accounting for 75% of the total number of people in a negative mood. This suggests that, regarding salary satisfaction, the performance of employees with positive emotions is generally relatively high, while that of employees with neutral emotions is mixed; the performance of employees in a negative mood is mostly in the bottom 50%. This reveals that the salary satisfaction of knowledgeable employees impacts their job performance.

DISCUSSION

The research results show that the optimized model is significantly better than the traditional model in recognition accuracy. Its recognition accuracy rates for the three emotions are 92%, 95%, and 91%, all above 90%. However, the recognition accuracy of the traditional model is 76%, 81%, and 77%, respectively, which is much lower than that of the optimized model. The main reason is that the optimized model introduces an attention mechanism, making it more accurate in capturing key facial information. Meanwhile, a new fully connected layer is added to the optimized model, which processes the collected information multiple times, so that the optimized model can collect new feature information.

Figure 12. The relationship between employee satisfaction and job performance

The analysis of the relationship between salary satisfaction and job performance of knowledgeable employees reveals that the higher employees' salary satisfaction is, the higher their job performance is; there is a positive correlation between the two. This is because most job performance for knowledgeable employees comes from completing the prescribed work tasks on time: the higher their salary satisfaction, the more motivation they have. The emotion recognition research conducted by Li et al. (2022) improved recognition accuracy through data preprocessing, with the highest recognition accuracy being only 88.3%, which is lower than the average recognition accuracy here. Moreover, Islam et al. (2021) optimized recognition accuracy by adding an attention mechanism; compared with that model, the model proposed here adds fully connected layer optimization. Although the recognition speed of the proposed model is slightly slower, its recognition accuracy is higher than that of the referenced model.
CONCLUSION

As times progress, enterprises' demand for knowledge-based talent has become increasingly high. How to retain employees and enable them to play their best role and apply their best abilities in their jobs has become an issue that enterprises need to consider. Therefore, exploring the relationship between employee salary satisfaction and job performance is crucial. This work first introduces the details of the CNN model, loss functions, and activation functions. Then, the CNN model is optimized by introducing an attention mechanism to improve the recognition rate of the emotion recognition model. Finally, through comparative experiments, the rationality of the model and the impact of employee salary satisfaction on employee job performance are verified. The experimental results show that the recognition accuracy of the proposed model is higher than that of the traditional model, especially the recognition rate of neutral emotions, which reaches 95%, thus verifying the effectiveness and feasibility of the proposed model. Additionally, the optimized model is used to carry out an experimental study on the salary satisfaction and job performance of a company's employees. It is found that 63.7% of the employees who are in a positive mood and have high salary satisfaction have excellent performance. Employees in a negative mood have low salary satisfaction, and 75% of them have merely good performance, indicating that their satisfaction with their salary directly affects their job performance. However, this work also has some deficiencies. First, in addition to salary satisfaction, multiple other factors affect employee performance; other variables will be added in follow-up studies. Second, although the optimized model improves recognition accuracy, it makes the model operation more complex and increases the amount of computation; later work will improve the calculation speed of the model by reducing the number of modules.
2023-05-20T15:12:27.475Z
2023-05-18T00:00:00.000
{ "year": 2023, "sha1": "f7ef5281454577cf9984b8094d8e9428707564bf", "oa_license": null, "oa_url": "https://www.igi-global.com/ViewTitle.aspx?TitleId=323426&isxn=9781668478912", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "e32af4d362b5dffe4ad38a705e1ecb172318785d", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [] }
216588976
pes2o/s2orc
v3-fos-license
Skimmin Improves Insulin Resistance via Regulating the Metabolism of Glucose: In Vitro and In Vivo Models

Skimmin is the major pharmacologically active component of Hydrangea paniculata, used in traditional Chinese medicine as an anti-inflammatory agent, and its anti-inflammatory and anti-diabetic effects have been examined in previous studies. The metabolism of glucose plays an important role in the pathophysiology of diabetes and was therefore identified as an important target for improving diabetes. Herein, we found that skimmin relieved palmitic acid- and high-fat, high-sugar-induced insulin resistance. Furthermore, skimmin enhanced glucose uptake by inhibiting reactive oxygen species (ROS) and reducing the levels of inflammation-related factors. Meanwhile, skimmin reduced glucose output by promoting the PI3K/Akt signaling pathway and down-regulating the expression of glycogen synthase kinase-3β (GSK3β) and glucose-6-phosphatase (G6Pase). In conclusion, skimmin can improve insulin resistance by increasing glucose uptake and decreasing glucose output in vitro and in vivo.

INTRODUCTION

Type 2 diabetes is a group of metabolic diseases characterized by chronic hyperglycemia of multiple etiologies. The main cause of type 2 diabetes is insulin resistance, which means that physiological doses of insulin cannot achieve their normal biological effects, decreasing the uptake and utilization of glucose (Sathya Bhama et al., 2012). Previous studies indicated that the number of people with diabetes in China had reached 92.4 million by 2010 (Yang et al., 2010), and the number worldwide is projected to reach 336 million by 2030 (Wild et al., 2004). The establishment of insulin resistance models opens a window for studying diabetes. Palmitic acid is usually used to induce insulin resistance in HepG2 cells (Chen et al., 2019), while a high-fat, high-sugar diet is used to induce insulin resistance in SD rats (Fan et al., 2019a). With these models, it becomes easier to study the effectiveness of drugs. Because of the low toxicity of natural drugs, they are increasingly explored for preventing and treating type 2 diabetes. A previous study showed that skimmin can improve membranous glomerulonephritis by suppressing inflammation and immune complex deposition (Zhang et al., 2013). Skimmin can also inhibit streptozotocin-induced diabetic nephropathy in Wistar rats (Zhang et al., 2012). However, the molecular mechanism by which skimmin suppresses insulin resistance has not been reported before. To our knowledge, this is the first study to show that skimmin can increase glucose uptake, promote glycogen synthesis, and improve insulin resistance in vitro and in vivo.

Cell and Animal Model Building

The HepG2 cells were purchased from ATCC, Virginia, USA, and cultured in Dulbecco's modified Eagle's medium (DMEM) with 10% fetal bovine serum (FBS) in a 37°C incubator containing 5% CO2. HepG2 cells were exposed to different doses of palmitic acid (0.1, 0.2, 0.3, 0.4 mmol/L) for different times (24, 36, 48, 60 h). They were then administered different concentrations of skimmin (10, 20, 40 mM) for 24 h. The glucose content of the medium was tested with a glucose oxidase kit, and the optimal dosage and exposure time of palmitate were determined according to the glucose concentration of the medium. Adult Sprague Dawley rats (6-8 weeks old; n = 50) were purchased from the Laboratory Animal Center of Zhengzhou University.
The animal experiments were approved by the Institutional Animal Care and Use Committee of Anyang Institute of Technology (IACUC approval no. 2018-001). The animals were anesthetized with 10% chloral hydrate and sacrificed by cervical dislocation, and all efforts were made to minimize suffering. First, the animals were randomly divided into two groups: a normal diet group (n = 10) and an HFHS diet group (n = 40). The normal group received the basic diet; the HFHS diet group was fed 60% basic diet, 20% lard, 15% refined sugar, 1.5% cholesterol, 0.1% sodium cholate, and 3.4% peanuts. After 12 weeks of dietary manipulation, the 40 rats were again randomly divided into 4 groups: an HFHS group (n = 10), a low-dose (10 mg/kg/d) skimmin group (n = 10), a middle-dose (25 mg/kg/d) skimmin group (n = 10), and a high-dose (50 mg/kg/d) skimmin group (n = 10). Each group was treated for 4 weeks.

Glucose, Insulin, and Hepatic Function Assay

The glucose concentrations of the medium and blood were detected with a glucose oxidase-peroxidase kit (Baoping Bioengineering Institute, Zhengzhou). Glycogen levels in medium and liver tissue were measured with a Glycogen Assay kit (Baoping Bioengineering Institute, Zhengzhou) according to the manufacturer's instructions. The levels of TNF-α, IL-6, insulin, and IL-1β were detected with ELISA kits (Baoping Bioengineering Institute, Zhengzhou) following the manufacturer's instructions. Alanine transaminase (ALT) and aspartate transaminase (AST) were determined with an automated biochemistry analyzer.

DCFH-DA Staining Combined With Flow Cytometry Assay

The cells were seeded (1×10^5) in 6-well plates and cultured overnight, then fed serum-free medium containing DCFH-DA (1:8000). The cells were cultured for a further 30 min in the incubator and washed twice with PBS; the cells were then collected and filtered through a 200-mesh screen. The intracellular ROS levels were determined by flow cytometry according to our previous research methods (Fan et al., 2019b).

Immunohistochemistry Staining

The tissue sections (5 μm) underwent antigen retrieval by microwave for 10 min in sodium citrate buffer after deparaffinization and rehydration. Sections were cooled to room temperature, treated with 3% H2O2 for 10 min, and blocked with 5% goat serum for 40 min at room temperature. The sections were incubated at 4°C with primary antibody, then with secondary antibody (anti-rabbit, diluted 1:200) for 30 min. The sections were counterstained with hematoxylin after diaminobenzidine staining according to our previous research methods (Fan et al., 2019a).

Statistical Analysis

Data were expressed as the mean ± standard deviation. Differences among groups were analyzed by analysis of variance, and P < 0.05 was considered to indicate a statistically significant difference.

The Model of Insulin Resistance Is Built In Vitro and In Vivo

The cell model of insulin resistance was established by inducing HepG2 cells with palmitic acid. The cells were treated with different doses of palmitic acid (0.1, 0.2, 0.3, 0.4, 0.5 mmol/L) for different times (24, 36, 48, 72 h). The glucose content of the medium was then detected with a glucose oxidase kit. The results indicated that glucose consumption was highest at 0.2 mmol/L palmitic acid.
In addition, 0.2 mmol/L palmitic acid caused the highest glucose consumption at 36 h (Figures 1A, B). Therefore, the insulin resistance cell model was established under the following conditions: 0.2 mmol/L palmitic acid for 36 h. Besides, the animal model of insulin resistance was established with a high-fat, high-sugar diet. The results showed that the levels of glucose, insulin, and the insulin resistance index (HOMA-IR) were increased in the model group compared with the control group (Figures 1C-E), indicating that the animal model of insulin resistance was built successfully.

Skimmin Reduces Blood Glucose and Improves Insulin Resistance In Vitro and In Vivo

The chemical structure of skimmin is shown in Figure 2A. MTS assay results showed that skimmin had no cytotoxicity to HepG2 cells (Figure 2B). We then investigated whether skimmin had an effect on the glucose consumption of palmitic acid-induced HepG2 cells. The results showed that skimmin promoted the absorption of glucose in a dose-dependent manner in palmitic acid-induced HepG2 cells; metformin was used as the positive control (Figure 2C). Furthermore, the in vivo studies showed that skimmin decreased the levels of serum glucose and insulin and improved HOMA-IR (Figures 2D-F). We also found that skimmin decreased the liver weight, body weight, and their ratio, which were increased by the high-fat, high-sugar diet (Figures 2G-I). Besides, HE staining showed that skimmin inhibited the pathological changes of the liver induced by the high-fat, high-sugar diet (Figure 2J). Meanwhile, skimmin suppressed the secretion of lipid factors (Figure 2K) and improved liver function in a dose-dependent manner (Figure 2L).

Skimmin Increases Glucose Uptake by Reducing the Activation of Inflammatory Signaling and Inhibiting Oxidative Stress In Vitro and In Vivo

We now know that skimmin can promote glucose uptake to improve insulin resistance in vitro and in vivo; however, the molecular mechanism by which skimmin reduces blood glucose remained unclear. Oxidative stress is the pathological basis of insulin resistance (Cremonini and Oteiza, 2018; Dos Santos et al., 2018). The activity of NADPH oxidase (NOX) is critical to ROS production in the organism. Previous studies showed that ROS production increases through the up-regulation of NADPH oxidase 3 (NOX3), activating the p38MAPK and JNK signaling pathways and inducing insulin resistance in palmitate-induced HepG2 cells (Gao et al., 2010; Malik et al., 2019). Our studies found by flow cytometry that skimmin can inhibit palmitic acid-induced ROS production, with the effect most obvious at 40 mmol/L skimmin (P < 0.05), which was better than metformin, a drug used to treat diabetes (Figure 3A). In addition, skimmin also inhibited the increase of NOX3 protein compared with the insulin resistance group induced by palmitic acid; the effect of 40 µM skimmin was better than that of metformin (Figure 3B) (P < 0.05). What is more, we found that skimmin reduced the phosphorylation of p38MAPK and JNKs compared with the insulin resistance group in a dose-dependent manner in vitro and in vivo (Figures 3C, D). AP-1 transcription factors, including c-Fos, c-Jun, and ATF, which are also downstream of p38 and JNKs, have a well-known role in promoting IL-6 and TNF-α transcription (Oh et al., 2013; MacNeil et al., 2014). To further examine the anti-inflammatory mechanism of skimmin, we tested the expression of NF-κB and the secretion of inflammatory factors.
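For reference, the HOMA-IR index mentioned above is conventionally computed from fasting glucose and insulin; the small sketch below reproduces that arithmetic, with sample values invented for illustration (the paper does not give its raw measurements).

```python
def homa_ir(fasting_glucose_mmol_l: float, fasting_insulin_uU_ml: float) -> float:
    """Homeostatic Model Assessment of Insulin Resistance:
    HOMA-IR = fasting glucose (mmol/L) * fasting insulin (uU/mL) / 22.5."""
    return fasting_glucose_mmol_l * fasting_insulin_uU_ml / 22.5

# Hypothetical control vs. HFHS-fed rat values, for illustration only.
print(f"control: {homa_ir(5.2, 10.0):.2f}")  # ~2.31
print(f"model:   {homa_ir(8.5, 22.0):.2f}")  # ~8.31, higher = more insulin resistant
```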
Our studies indicated that skimmin inhibited the phosphorylation of NF-κB and the secretion of IL-6, IL-1β, and TNF-α in a dose-dependent manner in vitro and in vivo (Figures 3E, F). The above results show that skimmin can promote glucose uptake and reduce the inflammation caused by insulin resistance by reducing ROS production and suppressing the expression of NOX3, p-p38MAPK, p-JNKs, and NF-κB.

Skimmin Improves the Synthesis of Liver Glycogen by Increasing the Phosphorylation of Akt and Inhibiting the Expression of GSK3β In Vitro and In Vivo

GSK3β is a critical enzyme that can reduce the synthesis of liver glycogen and increase the concentration of blood sugar in the body (Zhang et al., 2018). Some studies indicated that Grifola frondosa stimulates glycogen synthesis by regulating the PI3K/Akt/GSK3 signaling pathway, improving insulin resistance (Ma et al., 2014). To further explore the relationship between skimmin and glycogen synthesis, the PI3K/Akt/GSK3 signaling pathway was examined by Western blot after skimmin treatment in vitro and in vivo. The results demonstrated that skimmin can markedly up-regulate the levels of p-PI3K and p-Akt and down-regulate the level of GSK3β compared with the insulin resistance model group in a dose-dependent manner in vitro and in vivo (Figures 4A, B). Previous studies showed that inhibiting GSK3β can promote the synthesis of glycogen (Maqbool and Hoda, 2017). Furthermore, the amount of glycogen was detected with a glycogen assay kit: glycogen was decreased in the model group, while skimmin increased the amount of glycogen in a dose-dependent manner in vitro (Figure 4C). G6Pase increases the blood glucose concentration in the body by promoting the decomposition of glycogen (Chou et al., 2015). Immunohistochemical and Western blot results demonstrated that skimmin inhibits the expression of G6Pase in a dose-dependent manner in vivo (Figures 4D, E). The above results show that skimmin can decrease glucose output by up-regulating the phosphorylation of PI3K and Akt, suppressing the expression of GSK3β, and promoting the synthesis of glycogen; meanwhile, skimmin can also decrease the expression of G6Pase and inhibit the breakdown of glycogen.

DISCUSSION

The decomposition of glycogen increases blood glucose, which is then broken down and used by cells; excess blood glucose is converted into glycogen, maintaining the balance of blood glucose in the body under the regulation of insulin and glucagon. Under insulin resistance, insulin at normal concentrations cannot play its due role, thus causing blood sugar to rise (Yang et al., 2011; Tangvarasittichai, 2015). We explored the molecular mechanism of insulin resistance from two aspects: the utilization of blood glucose and the decomposition of glycogen. Oxidative stress leads to the production of large amounts of ROS, which further leads to the apoptosis of islet cells and then causes insulin resistance (Hu et al., 2016; Kuzmenko et al., 2016; Alcalá et al., 2017). Previous studies showed that TNF-α induces the expression of NOX3, promotes ROS production, activates the JNKs signaling pathway, and produces insulin resistance in HepG2 cells (Walton, 2017).

Figure legend: #Significant compared with the control group alone (P < 0.05); *significant compared with the insulin resistance group alone (P < 0.05). TG, triglyceride; TC, total cholesterol; LDL, low-density lipoprotein; HDL, high-density lipoprotein; ALT, alanine transaminase; AST, aspartate transaminase.
What is more, high sugar and ROS can also activate p38MAPK and induce insulin resistance in vascular smooth muscle (Liu et al., 2018). Therefore, it is necessary to find agents that decrease ROS, suppress the secretion of inflammatory factors, enhance islet cell secretory function, increase the utilization of blood glucose, and reduce blood glucose. In addition, glycogen synthesis also plays a critical role in the glucose output of the liver. Previous studies indicated that enhanced glycogenolysis and decreased glycogen synthesis are also responsible for insulin resistance (Ren et al., 2018). The PI3K/Akt/GSK3 signaling pathway is involved in the metabolism of glycogen: the activation of PI3K is required for insulin-stimulated glucose uptake (Yang et al., 2015), phosphorylated Akt can inhibit the activity of GSK3, and GSK3 can suppress the synthesis of glycogen by inhibiting glycogen synthase (Yang et al., 2017). Furthermore, G6Pase is a critical enzyme in gluconeogenesis and glycogenolysis and plays an important role in glucose homeostasis (Chou et al., 2015); inhibiting the expression of G6Pase can also reduce blood glucose. It is therefore also important to find drugs that inhibit glucose production by up-regulating the expression of p-Akt, down-regulating GSK3 expression, and inhibiting G6Pase expression.

Skimmin is the main active substance in Hydrangea paniculata and has anti-inflammatory, anti-plasmodial, and anti-cancer properties (Moon et al., 2011). We built the palmitic acid-induced HepG2 insulin resistance cell model and the high-fat, high-sugar-induced insulin resistance SD rat model, and then treated the HepG2 cells and SD rats with skimmin, using metformin as a control agent. The results showed that skimmin increased glucose intake and suppressed the inflammatory response by decreasing ROS production and suppressing the protein expression of NOX3, p-p38MAPK, and p-JNKs. Meanwhile, skimmin decreased glucose output by increasing phosphorylated PI3K and Akt, suppressing the expression of GSK3β, relieving the inhibition of glycogen synthase, increasing liver glycogen synthesis, and inhibiting G6Pase expression, thereby reducing the decomposition of glycogen (Figure 5). These findings may provide better drug options for the treatment of type 2 diabetes. To validate the utility of skimmin for improving type 2 diabetes in patients, a more detailed investigation of the pharmacokinetics of skimmin, including at high doses, should be addressed. In addition, the effect of skimmin on relevant markers of insulin resistance needs to be further explored. Taken together, skimmin ultimately improves insulin resistance by promoting glucose intake and inhibiting glucose output.

DATA AVAILABILITY STATEMENT

All datasets generated for this study are included in the article/supplementary material.

ETHICS STATEMENT

The animal study was reviewed and approved by the Anyang Institute of Technology.

AUTHOR CONTRIBUTIONS

XF and HL designed the research; GZ and XC performed the research and wrote the paper; LH and DQ analyzed the data.
2020-04-29T13:09:57.822Z
2020-04-29T00:00:00.000
{ "year": 2020, "sha1": "8579e7a072f906e4c5bfb479a8762a6906b38727", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fphar.2020.00540/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a504a83a61c1f2c865c5b83c2807902a045c8927", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
69413619
pes2o/s2orc
v3-fos-license
The Effectiveness of a Grammar Tutoring Program Based on Students' Feedback, Batch 2016

In some ELESP speaking and writing classes, many students failed to meet the standard of good grammar and pronunciation. Therefore, two kinds of tutoring programs were held, namely a grammar tutorial and a pronunciation tutorial, to improve students' skills. Those programs had run for about six months, but some people questioned whether they were effective or not. As an effort to deal with that phenomenon, this paper discusses the effectiveness of the grammar tutorial as one of the programs. The data were mainly taken from observation, interviews, and a questionnaire, and are presented qualitatively. In addition, this paper also shows some good practices that can be applied in future tutoring programs. Based on the results of the analysis, the grammar tutorial was effective, given that 84% of the students agreed that this program helped them improve their skills and understand more about the grammar materials.

Introduction

Many ways have been tried to make students understand the materials given in class, such as using interactive multimedia for teaching (Astuti et al., 2018), using a literary work as a learning material (Mulatsih, 2018), developing problem-based learning (Isrokijah, 2016), implementing Moodle-based learning (Wulandari, 2016), conducting game sessions (Kapp, 2012), implementing reflective learning (Brockbank & McGill, 2007), finding students' motivation (Skinner & Belmont, 1993), providing additional time for service learning (Sax, 1997), conducting tutoring programs (Hock et al., 2001), and joining peer review or peer learning programs (Chism, 1999). As one of the efforts to reach this goal, peer teaching or peer learning has been practiced for about four centuries. Osguthorpe and Scruggs (1986) proposed an effective method to improve handicapped students' learning ability by having students act as tutors in class. For large classes, peer instruction was proposed so that every student took part in the learning process (Crouch et al., 2007).

A tutoring program can bring many benefits as well as disadvantages. Harper (2016) conducted research on a tutoring program that involved 91 children from grade one to grade eight and was conducted in small groups. Statistical data showed a significant improvement in students' skills in reading, spelling, and counting; however, there was no progress in understanding sentences. Differently from Harper, Wu (2016) analyzed the labeling system in a tutorial program with respect to learning results and students' motivation. Although the tutoring program increased students' self-efficacy and confidence, it turned out that the labeling system did not improve students' understanding. There were some benefits of conducting tutoring programs, but some practitioners argued that they could not reach the best level of students' understanding. Although tutoring programs have been conducted for a long time, some people still underestimate the effectiveness of these programs.
Not only did some researchers claim that the tutoring program was not effective; some lecturers of ELESP Sanata Dharma University also thought the same after the implementation of the first tutoring programs. The ELESP tutoring programs (grammar and pronunciation) started in the odd semester of 2016. These programs were initiated because many students made mistakes in writing and speaking English; in some cases, they did not even meet the minimum requirements of a good sentence, and some words were also mispronounced. This problem also persisted into the drafting and defense of undergraduate theses.

As stated before, after the first period of the grammar tutoring program some lecturers said that this program still did not help students much and was not effective, so it was crucial to learn more about the effectiveness of the program from the students' perspective, given that they were the participants who experienced this program. Considering that matter, this paper will answer two main questions: to what extent does the grammar tutoring program help students? And what positive and negative tutee feedback can be considered for future tutoring programs?

Method

The concept of this tutoring program was adapted from King's peer teaching, written in 2002, and O'Donnel's peer learning, written in 2014. King proposed that peer teaching consists of a group of students with a tutor who helps with their difficulties; the importance lies in several aspects such as cognition, interaction, knowledge development, context, and its integration. Besides King, who researched peer teaching, some previous studies also dealt with tutoring programs (Angelova, 2006; Briggs, 2013; Narayan, 2016; Ander et al., 2016; Colvin, 2007). While Colvin (2007) argued that there was a lack of social awareness in peer tutoring that could lead to misunderstanding and power struggles between tutor and tutee, other researchers (Angelova, 2006; Briggs, 2013; Narayan, 2016; Ander et al., 2016) still tended to conduct peer teaching or tutoring programs because of their benefits. Ander et al. (2016) ran a randomized controlled trial of the Match/SAGA tutorial in Chicago; their tutorial program increased students' math grades and decreased the chance of failing the math course, as stated below.

The tutorials improved math grades by 0.58 points on a 1-4 point scale, a sizable gain compared to the average math GPA among the control group of 1.77 (or essentially a C minus average). We also found that the tutorials cut in half the chance that students failed their math course (Ander et al., 2016, p. 10).
Briggs (2013) also showed the improvement of students' competence and described some ways of conducting peer teaching. Moreover, two studies, by Angelova and by Narayan, proposed strategies and factors that could lead to an effective tutoring program. Angelova (2006) described some learning strategies for dual-language learners in an English-Spanish peer teaching class: repetition, scaffolding with cues, codeswitching, invented spelling, use of formulaic speech, and non-verbal communication. Narayan (2016) underlined some factors that affected the effectiveness of peer mentoring: the mentoring session, maintaining mentees, the mentor timetable, room allocation, the mentor workstation, and the mentor's attitude, attributes, role, previous mentoring experience, and communication with support staff (p. 9). However, none of these previous studies examined the effectiveness of a tutoring program from the students' perspective. Thus, this paper addresses that topic based on students' feedback.

Findings and Discussion

This qualitative research began with a pre-test to measure students' basic competence in grammar. During the program, observations of the tutoring method were carried out. The questionnaire was distributed in the last meeting of the program; it mainly asked whether the tutoring program had helped students, using a Likert scale from one to four. Because many students did not attend the program continuously, the questionnaire was distributed to those students who attended most of the tutoring sessions. There were 45 students who took part in the program continuously. Space for written feedback for improvement was also provided on the questionnaire sheet. After the results of the questionnaire had been analyzed, an interview session was held with some students who came regularly to the grammar tutoring program.

The Implementation of the Tutorial Program

Generally, the concept of tutoring involves at least two learners (one with a good ability to understand the given knowledge and one with less ability) who spend time studying together. The one with better competence helps the other so that the tutee can understand the materials well. Technically, the grammar tutoring program was run with 24 tutors selected from students of batches 2013 and 2014. Six lecturers took part in the process. There were three steps of selection: an administrative selection, a written test, and an interview. In the administrative selection, a candidate needed a GPA of at least 3.5 and an A grade in all grammar subjects. The written test was a TOEFL test, and the interview dealt with the candidate's motivation, tutoring or working experience, and teaching method.

The students who joined this program were those from batch 2016 who received B, C, D, E, or F grades and those from batch 2015 who received C, D, E, or F grades in the grammar subject. The program was considered an additional class for the grammar subject and was therefore compulsory for those students. There were six classes, each consisting of 12 students with two tutors, so one tutor helped six students. The program was regularly held on Saturdays at 09.00-10.00 for 13 meetings. Before the program started, there was a briefing for the tutors, and during the program there was a guidance process from the coordinator of the program.
Students' Feedback on the Implementation of the Tutorial Program

This section compiles six parts: the result of the pre-test as background on students' competence level, the results of the observations, the results of the questionnaire, the interviews, and the positive feedback and weaknesses of the grammar tutoring program according to the tutees. Because the goal of this research is to assess the effectiveness of the grammar tutoring program based on tutees' feedback, this paper does not provide a comparison between pre-test and post-test results. The reason is that there were many interventions from other subjects that also increased students' ability: structure, speaking, listening, writing, and pronunciation classes all contributed to students' competence in understanding English. A post-test measurement would therefore not have been objective, because the tutoring program was not the only instruction taking place at that time. The specific results of the grammar pre-test are shown in the chart below.

[Figure: distribution of grammar pre-test scores; chart not reproduced in this extraction.]

The chart shows that most students' scores were under 51%. The data confirmed that most students needed more effort to increase their competence and gain a better result. The mean pre-test result was 43.56% (an average of 17.4265 correct answers out of 40 items). This was also the main reason for conducting the grammar tutoring program.

Based on the observations in the tutoring classes, most tutors applied open discussion; only two tutors used a lecturing method. The discussions led to a dynamic and lively atmosphere, while the lecturing method, in which the tutors asked most of the questions, made for an intense class. For communication, tutors spoke Bahasa Indonesia to explain the grammar materials. Most students asked the tutors questions, and tutors also asked whether students had difficulties with particular grammar topics. Students gave feedback that there should be some fun activities during the tutoring program, such as games and a tips-and-tricks session for ELESP students, and that the tutoring method should be varied at least every three meetings. Most of the tutoring classes did exercises from a specific grammar book that was also used in the lecturers' classes: students first did the exercises individually and could then ask about the difficulties they faced if their answers were incorrect. Some students did not come on time, and there were some technical problems, such as the availability of some rooms, the staff member in charge of opening the room doors arriving late, and some rooms being on the third floor. These issues were why some students commented on the consistency of the starting time.

From the questionnaire sheet, which used a Likert scale of one to four (1 for strongly disagree, 2 for disagree, 3 for agree, and 4 for strongly agree), most students agreed that the grammar tutoring program had helped them to improve their competence, to study intensively, and to understand grammar better. The mean agreement that the program had improved their competence was 3.373611 (84.34%), that the program had given them a chance to study intensively was 3.413889 (85.34%), and that the program had made them understand grammar better was 3.397222 (84.93%). The distribution of their agreement is shown in the chart below.

[Figure: distribution of students' agreement on the four-point Likert scale; chart not reproduced in this extraction.]
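For transparency, the percentage figures quoted above are simply the Likert means rescaled against the maximum score of four, and the pre-test percentage is the mean number of correct answers divided by the 40 items. A minimal sketch of this arithmetic, using the reported means rather than the raw response data (which are not reproduced here):

```python
def likert_percentage(mean_score: float, scale_max: int = 4) -> float:
    """Convert a mean Likert score to a percentage of the scale maximum."""
    return mean_score / scale_max * 100.0

# Reported questionnaire means on the 1-4 Likert scale
means = {
    "improved competence": 3.373611,
    "studied intensively": 3.413889,
    "understood grammar better": 3.397222,
}
for item, m in means.items():
    print(f"{item}: {likert_percentage(m):.2f}%")
# -> 84.34%, 85.35%, 84.93% (the paper truncates the second value to 85.34%)

# Pre-test: mean correct answers out of 40 items
print(f"pre-test mean: {17.4265 / 40 * 100:.2f}%")  # -> 43.57% (reported as 43.56%)
```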
As the data above show, none of the students strongly disagreed that the grammar tutoring program helped their understanding and competence. Only three students disagreed that the program helped them to increase their competence, and two participants disagreed that the program made them study intensively and increased their understanding. This means that the questionnaire results tend to reflect positive feedback from the students. Students still wanted the grammar tutoring program to be continued.

In the interview session, all interviewees said that the program was effective; one interviewee even said confidently that the program was very effective. The effectiveness of the grammar tutoring program was seen in different respects. First, it increased students' understanding of grammar. Second, tutors helped students face the personal difficulties they had when studying at home, and helped them through discussion sessions when they did not understand grammar materials in class. Third, tutors gave exercises similar to those in class and guided students intensively by showing them how to work through them, and sometimes the tutors' way was easier to understand. Below is one of the transcriptions.

"When I had a difficulty about grammar material, I could ask the tutor and tutor helped me to face and solve it. I could understand more quickly. Discussion was the good practice of this program. But, it would be better if the discussion forum had less student no more than six students. Too many students made some could not focus. So I suggest that there should be additional number of tutors."

Besides positive feedback, students also offered some suggestions during the interviews. These concerned additional time, the number of tutors, and the need for strict regulation, because some students did not come on time.

"Although this program is compulsory one, some students came late and sometimes they only signed three times out of thirteen."

The interview results were consistent with the written feedback on the questionnaire sheets, where students could write their opinions freely. Five students noted that the grammar tutoring program should be continued in the following semester, and three students wrote that the program helped them to restudy the materials that had been given in the grammar class. According to six students' written feedback, the implementation was good.

Furthermore, students also noted weaknesses of the program that needed to be improved and offered some suggestions. Seven students wrote that the time allotment could be extended to one and a half hours; an hour was not enough for them to discuss the materials deeply. Two students explicitly wrote that each material should be discussed in more depth. Students also proposed that the grammar tutoring program should not be held on Saturdays. Weekdays were efficient enough, since some of them lived far from campus and needed to come to campus on Saturdays only for the tutoring program; at the weekend, some wanted to go to their hometowns, and some argued that they needed to spend time with their friends. Nine students stated that the day of the tutoring program needed to be changed. Unsurprisingly, one student noted the decreasing number of students attending the tutoring program. One student also suggested that there should be additional tutors so that he could study in a smaller group, and only one student thought that the program started too early in the morning.
Conclusion

Overall, students showed their appreciation for the grammar tutoring program. This result is drawn from the analysis of the questionnaire, students' written feedback, and the interviews. The good points of conducting a grammar tutoring program were that students could raise and discuss their difficulties in the grammar subject with their tutor, that students agreed the program helped them to increase their competence and understanding, and that students agreed they studied intensively during the program. However, students also made some suggestions to make the next tutoring program run better, such as increasing the number of tutors, extending the time allotment, enforcing stricter regulations, avoiding Saturday as the tutoring day, and having smaller discussion groups. In a nutshell, the ELESP grammar tutoring program was effective in the students' opinion, and they wanted it in the following semester too.
Environmental quality standards for diclofenac derived under the European water framework directive: 2. Avian secondary poisoning

Diclofenac is a nonsteroidal anti-inflammatory human and veterinary medicine widely detected in European surface waters, especially downstream from wastewater treatment plants. With some notable exceptions, veterinary uses of diclofenac in Europe are greatly restricted, so wastewater is the key Europe-wide exposure route for wildlife that may be exposed via the aquatic environment. Proposed Environmental Quality Standards (EQS) which include an assessment of avian exposure from secondary poisoning are under consideration by the European Commission (EC) to support the aims of the Water Framework Directive (WFD). In this paper we summarise information on avian toxicity plus laboratory and field evidence on diclofenac bioaccumulation and bioconcentration in avian food items. A safe diclofenac threshold value for birds of 3 µg kg⁻¹ wet weight in food was previously derived by the European Medicines Agency and should be adopted as an EQS under the WFD to maintain consistency across European regulations. This value is also consistent with values of 1.16-3.99 µg kg⁻¹ diet proposed by the EC under the WFD. Water-based EQS of 5.4 or 230 ng L⁻¹ in freshwater are derived from these dietary standards, respectively, by the EC and by us, with the large difference caused primarily by the use of different values for bioaccumulation. A simple assessment of potential water-based EQS compliance is performed for both of these values against reported diclofenac concentrations in samples collected from European freshwaters. This shows that exceedances of the EC-derived EQS would be very widespread across Europe, while exceedances of the EQS derived by us are confined to a relatively small number of sites in only some Member States. Since there is no evidence for any declines in European waterbird populations associated with diclofenac exposure, we recommend the use of conservative EQS of 3 µg kg⁻¹ diet or 230 ng L⁻¹ in water to protect birds from diclofenac secondary poisoning through the food chain.

Under the WFD, quality standards are derived for each relevant receptor group, and the lowest numerical value (i.e. the most sensitive value for any receptor group) is then selected as the primary EQS for that substance. Diclofenac is a nonsteroidal anti-inflammatory human and veterinary medicine widely detected in European surface waters, with wastewater treatment discharges identified as the major source [4]. Veterinary use of diclofenac in Europe is now highly restricted following an assessment that identified potential risks to European vulture populations through consumption of diclofenac-dosed domestic animal carcasses [15]. There may be locally important exposures of vulture populations where veterinary diclofenac use is still authorised, but these exposures will generally be via medicated carcasses and soils and are, therefore, less relevant under the Water Framework Directive. The predominant environmental exposure route to diclofenac in Europe is, therefore, now via surface waters downstream from wastewater treatment plants (WWTPs). European technical guidance is available to derive an EQS for the protection of wildlife from secondary poisoning through food chain transfer of a chemical [13].
The transfer of a substance from the water to organisms and through the food chain occurs as a result of bioconcentration or bioaccumulation mechanisms, quantified by the BioConcentration Factor (BCF) or BioAccumulation Factor (BAF), and between prey and predators by biomagnification, quantified by the BioMagnification Factor (BMF). The technical guidance defines the freshwater ecosystem food chain as: water → (BCF or BAF) → aquatic organisms (e.g., invertebrates) → (BMF) → fish → (BMF) → fish-eating predators. An EQS to protect birds against exposure to diclofenac via secondary poisoning of 5.4 ng L⁻¹ in surface waters has been proposed by the European Commission [14]. However, an EQS set at this level would be failed by a very large number of European surface waters despite there being no evidence for population decreases related to diclofenac exposure of waterbirds that feed on fish or aquatic invertebrates (https://www.bto.org/our-science/publications/birdtrends). It is, therefore, important to investigate potential sources of over-conservatism in the derivation of the proposed EQS.

The present study reviews the available evidence for the derivation of an EQS for the protection of predators from secondary poisoning due to the consumption of food contaminated by diclofenac in surface waters and compares this to information on exposure to diclofenac in European surface waters to evaluate the potential scale of the risks posed by diclofenac use in Europe via this route. Where the potential for risks from secondary poisoning due to diclofenac exposures exists, we suggest an approach to evaluate the true scale of any problem. In a companion paper, Leverett et al. [29] provide an assessment of the EC [14] EQS for aquatic organisms exposed to diclofenac and derive an alternative value. In this paper we provide a similar assessment of the proposed EC [14] secondary poisoning EQS and also derive an alternative EQS.

Avian toxicity of diclofenac

Acute lethal toxicity

The critical data set for diclofenac toxicity to sensitive bird species is Oaks et al. [45]. In this study Oriental white-backed vultures (Gyps bengalensis) were exposed to diclofenac either by oral dosing or by feeding on meat from buffalo or goats injected with diclofenac. In the oral dosing study two juvenile vultures were given a single dose of 2.5 mg kg⁻¹ and two more a single oral dose of 0.25 mg kg⁻¹. Both of the high-dose vultures and one of the low-dose vultures died through renal failure and visceral gout 36-58 h after administration, and all three birds displayed the same microscopic renal lesions found in field cases of vulture carcasses examined by the same authors. These lesions were characterised by a panel of three veterinary pathologists as: "Severe renal tubular necrosis with marked urate precipitation usually without giant cell or granulomatous response. Regions of proximal convoluted tubules that were not necrotic had swollen epithelial cells with very large nuclei, prominent nucleoli, and granular cytoplasm. Inflammation was not associated with this necrotizing process. There was no evidence of tubular epithelial regeneration. Cellular casts were present in the lumen of some tubules including collecting tubules. Although the cells of Bowman's capsule were prominent, the glomeruli appear to be spared. There was also sparing of the distal convoluted tubules and collecting tubules in most cases. Oxalate crystals in the kidneys, indicating ethylene glycol toxicity, were not observed."

Plasma samples taken from one high-dose and one low-dose vulture in this study showed hyperuricaemia after 24 h (775 and 654 mg L⁻¹ uric acid, respectively, compared to normal levels of ~100 mg L⁻¹ [43]). However, the surviving low-dose vulture remained clinically normal 4 weeks after administration and had no microscopic renal lesions or detectable diclofenac residues at necropsy.

Oaks et al. [45] then fed 20 more G. bengalensis juveniles either buffalo or goat meat containing diclofenac injected into these mammals before slaughter. They calculated on the basis of food consumption that eight vultures received doses of 0.005-0.3 mg kg⁻¹ body weight (lower dose), two vultures received doses of 0.5-0.6 mg kg⁻¹ body weight (middle dose), and 10 vultures received doses of 0.8-1.0 mg kg⁻¹ body weight (higher dose). These data are summarised in Table 1 and below.

• Lower dose (0.005-0.3 mg kg⁻¹ body weight). Two of the vultures in this group died from renal failure at 4 and 6 days after exposure. A necropsy carried out on one surviving and clinically normal vulture in this group 8 days after exposure showed that it did not have any renal lesions or detectable diclofenac residues. The other five surviving vultures in this group remained clinically normal at approximately 6 months after exposure.

• Middle dose (0.5-0.6 mg kg⁻¹ body weight). One of these vultures died from renal failure 1 day after exposure. The other, surviving and clinically normal, vulture had a necropsy performed at 8 days after exposure and did not have any renal lesions.

• Higher dose (0.8-1.0 mg kg⁻¹ body weight). All ten of these vultures died from renal failure and, like the other three vultures that died from renal failure in the other two groups, had the same histopathological renal lesions as the four vultures exposed orally to 0.25 or 2.5 mg kg⁻¹ body weight diclofenac, and as the field cases with visceral gout also examined by the authors.

Swan et al. [55] used the Oaks et al. [45] data to estimate the median lethal dose (LD50) of diclofenac to G. bengalensis using a maximum-likelihood probit method. They identified Case 11 (see Table 1) as a potential outlier, so analysed the data both with and without this bird. The estimated LD50 was 0.098 mg kg⁻¹ vulture body weight (95% CI 0.027-0.351 mg kg⁻¹) when the Case 11 outlier was included in the analysis and 0.225 mg kg⁻¹ (95% CI 0.117-0.432 mg kg⁻¹) when it was excluded. Green et al. [19] also used the Oaks et al. [45] data set and maximum-likelihood probit analysis to estimate the LD50 of diclofenac to G. bengalensis with and without the Case 11 outlier and calculated confidence limits using a Monte Carlo procedure. The estimated LD50 was almost indistinguishable from that calculated by Swan et al. [55], at 0.098 mg kg⁻¹ vulture body weight (95% CI 0.028-0.337 mg kg⁻¹) when the outlier was included in the analysis and 0.225 mg kg⁻¹ (95% CI 0.119-0.423 mg kg⁻¹) when it was excluded. These similarities between the reanalyses by Swan et al. [55] and Green et al. [19] are unsurprising, because both groups used essentially the same statistical technique on the same data. EMA [15] report an LD10 of 0.074 mg kg⁻¹, an LD5 of 0.054 mg kg⁻¹, and an LD1 of 0.03 mg kg⁻¹ vulture body weight based on the data reported by Green et al. [19].

Swan et al. [55] and Naidoo et al. [43] provide some additional toxicity data for three further Gyps vulture species.
Two African white-backed vultures (G. africanus) and three European griffon vultures (G. fulvus) were dosed once by oral gavage with 0.8 mg diclofenac kg⁻¹ vulture body weight [55]. Injured, non-releasable birds were selected for the trials and fasted for 2-3 days prior to treatment to ensure that their crops were empty and that they would not regurgitate when dosed. These vultures were then fed uncontaminated food 4 h after treatment and daily thereafter until death. All five diclofenac-treated vultures died within 2 days of treatment. At 24 h post-treatment four diclofenac-treated birds were lethargic, with death occurring 39 and 42 h post-treatment in G. africanus and after 28 and 35 h in G. fulvus (the third G. fulvus individual showed no signs of toxicity until it was found dead 48 h after treatment). Post mortem examination revealed extensive visceral gout in all diclofenac-treated birds, with significant lesions in the kidneys, liver, and spleen and extensive uric acid crystal deposition. Hyperuricaemia after 24 h was found in all individuals (G. africanus: ~700-1650 mg uric acid L⁻¹ plasma; G. fulvus: ~275-500 mg L⁻¹; read from Fig. 1a in Swan et al. [55]). In another experiment, Naidoo et al. [43] intravenously dosed two adult Cape griffon vultures (G. coprotheres) with diclofenac at 0.8 mg kg⁻¹ vulture body weight. Both vultures died within 48 h and the authors reported findings that were almost identical to those reported by Swan et al. [55].

All Gyps vulture data from the four species and 31 individuals dosed with diclofenac by Oaks et al. [45], Swan et al. [55], and Naidoo et al. [43] were combined and analysed by us using logistic regression (RegressItLogistic 2020, Robert Nau, Duke University). The results are plotted in Fig. 1 both with and without the Case 11 outlier identified by Swan et al. [55] and Green et al. [19]. When all data are included the LD50 is approximately 0.27 mg kg⁻¹, and approximately 0.33 mg kg⁻¹ when the outlier is excluded.

In contrast to the sensitivity of Gyps vultures, Rattner et al. [50] showed that turkey vultures (Cathartes aura) were much less acutely sensitive to diclofenac, with no adverse effects up to and including a single dose of 25 mg kg⁻¹ vulture body weight. Lower levels of acute toxicity were also found in the chicken (Gallus gallus; [23,42,49]), rock pigeon (Columba livia; [23]), Japanese quail (Coturnix japonica; [23]), and common mynah (Acridotheres tristis; [23]). However, Sharma et al. [54] found gout in two steppe eagle (Aquila nipalensis) carcasses, with diclofenac residues measured in kidney tissue from the one sampled eagle comparable with those found in the kidney and liver tissues of wild Gyps vultures found dead with extensive visceral gout in the Oaks et al. [45] study.

In a survey of 31 veterinarians and institutions, Cuthbert et al. [8] received information on over 870 cases of non-steroidal anti-inflammatory drug treatment for 79 species of birds including Gyps vultures, other raptors, storks, cranes, owls, and crows. There were no further data for diclofenac poisoning beyond those in Oaks et al. [45] and Swan et al. [55], so this survey provides no additional information on the toxicity of diclofenac to birds other than Gyps vultures.
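As an illustration of the dose-response analysis described above, the sketch below fits a maximum-likelihood logistic regression on log dose and reads off the LD50 as the dose at which the fitted mortality probability is 0.5. The dose-mortality records are hypothetical stand-ins for the combined 31-bird data set (which is tabulated in the cited papers), so the fitted value is illustrative only:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical (dose in mg/kg body weight, died: 1/0) records standing in
# for the combined Gyps vulture data; the real data are in the cited papers.
doses = np.array([0.005, 0.05, 0.1, 0.25, 0.25, 0.5, 0.6, 0.8, 0.9, 1.0, 2.5])
died  = np.array([0,     0,    0,   1,    0,    1,   0,   1,   1,   1,   1  ])

x = np.log10(doses)

def neg_log_likelihood(params):
    a, b = params
    p = 1.0 / (1.0 + np.exp(-(a + b * x)))   # logistic mortality probability
    p = np.clip(p, 1e-9, 1 - 1e-9)           # guard against log(0)
    return -np.sum(died * np.log(p) + (1 - died) * np.log(1 - p))

fit = minimize(neg_log_likelihood, x0=[0.0, 1.0], method="Nelder-Mead")
a, b = fit.x

# LD50 is where a + b*log10(dose) = 0, i.e. dose = 10^(-a/b)
ld50 = 10 ** (-a / b)
print(f"estimated LD50 ~ {ld50:.3f} mg/kg body weight")
```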
Recent studies suggest that differences in toxicity between bird species may be partly due to differences in their ability to metabolise diclofenac (e.g., [24]). Hassan et al. [20] investigated the toxicity of a single dose of diclofenac to the Japanese quail (Coturnix japonica), Muscovy duck (Cairina moschata), and domestic pigeon (Columba livia domestica), with observations made over the following 15 days. Clinical pathology was only assessed in Muscovy ducks. LD50 values were estimated as 405 mg kg⁻¹ for quail and 190 mg kg⁻¹ for Muscovy duck; mortality was not observed for pigeons. An important aspect of the study was the monitoring of diclofenac concentrations in the plasma of treated birds, which revealed that quails which survived had much shorter diclofenac half-lives than quails which died during the study, although Muscovy ducks all had very similar half-lives regardless of whether or not they survived. The authors concluded that much more rapid removal of diclofenac in the tested species, due to more rapid restoration of renal function, resulted in much lower sensitivity compared to Gyps vultures, and that the high sensitivity of vultures is likely to be due to species-specific effects related to metabolism. Gyps vultures are, therefore, likely to represent extreme sensitivity to diclofenac and are a reasonable worst case for the sensitivity of all bird species.

In summary, after exposure to diclofenac via food or oral dosing, Gyps vultures are by far the most sensitive avian species recorded, with an LD50 of between approximately 0.1 and 0.3 mg kg⁻¹ and an LD10 of approximately 0.07 mg kg⁻¹ vulture body weight. The effects of diclofenac on Gyps vultures are acute and lethal and occur above a threshold dose that appears to be in the region of approximately 0.03 mg kg⁻¹ vulture body weight. The only exception to this is the Case 11 outlier in Oaks et al. [45], reported to be dosed at 0.007 mg kg⁻¹, although the diclofenac residues found in the kidney of this vulture (0.38 µg g⁻¹) were similar to those found in vultures receiving doses two orders of magnitude higher than reported for this bird. The uric acid concentration for vulture Case 11 is also reported as below 100 mg L⁻¹ (i.e. within the normal range), while the uric acid concentration in vulture Case B, which reportedly survived exposure to 0.6 mg kg⁻¹ diclofenac, was much higher (approximately 800 mg L⁻¹) and within the range at which other diclofenac-exposed vultures died of renal failure. However, when necropsied at day 8, vulture Case B did not have any renal lesions or detectable diclofenac residues. It is, therefore, possible that samples from Cases 11 and B were misallocated in the test laboratory. A logistic regression run on the assumption that vulture Case 11 (exposed to 0.007 mg kg⁻¹) had in fact survived and vulture Case B (exposed to 0.6 mg kg⁻¹) had died produces an LD50 estimate of approximately 0.26 mg kg⁻¹, which does not differ substantially from the estimates without this assumption. Any confusion between these two cases, therefore, has only a minor effect on the summary toxicity value used to derive an avian secondary poisoning EQS.

[Fig. 1: Logistic regression of the combined Gyps vulture dose-mortality data from Oaks et al. [45], Swan et al. [55], and Naidoo et al. [43]. A: with outlier. B: without outlier. Solid line is the regression estimate and dashed lines are the 95% confidence interval.]

Chronic sublethal toxicity

No chronic avian dietary studies with any species have been reported for diclofenac. This may be because standard avian test species are relatively insensitive to this substance in acute tests, and because protocols for longer term tests with (often endangered) vulture species are neither available nor desirable.
However, enough is now known about the toxic effects of diclofenac in Gyps vultures to estimate a safe level for long-term exposure of these and similarly sensitive birds. Diclofenac is known to be nephrotoxic at higher doses in humans [21] and birds [48], so it is unsurprising to find that the kidney and its supporting vascular system is the site of toxicity in Gyps vultures [37,45,55], with acute renal failure diagnosed as the cause of vulture deaths [45]. Naidoo and Swan [41] concluded that diclofenac interferes with uric acid transport in the kidney, thereby depriving kidney cells of an important antioxidant. The greater sensitivity of vultures is explained by the longer half-life of diclofenac in these birds compared with other bird species (t½ of 2 and 14 h in the chicken and vulture, respectively), and how this relates to the production of reactive oxygen species (ROS). In their study, Naidoo and Swan [41] found that increased ROS production only occurred after internal exposure to diclofenac for more than 12 h. Therefore, in vultures the loss of an intracellular antioxidant combined with additional ROS production leads to greater oxidative stress in renal tubular epithelial cells than occurs in other bird species exposed to diclofenac, and it is this that leads to renal failure and death. For birds with visceral gout, ROS production leads to renal tubular damage, which reduces uric acid excretion and then leads to the deposition of urate crystals that further damage the kidney; obstructive uropathy therefore likely contributes to mortality.

Secondary poisoning EQS derivation

The EC [14] derived a secondary poisoning EQS from the avian toxicity data discussed above. The critical toxicity data selected as the point of departure were for oriental white-backed vultures weighing 4.75 kg and with a daily meat consumption of 341 g per day [19]. An allometric relationship for the calculation of daily energy expenditure (DEE) for the closely related Cape vulture, Gyps coprotheres (Komen 1992; DEE [kJ d⁻¹] = 826.7 × BW [kg]^0.61), produces a value of 2139 kJ d⁻¹, so a daily meat consumption of 341 g d⁻¹ corresponds to a food energy content of 6272 kJ kg⁻¹. The LD10 and LD50 reported by Green et al. [19] were 74 and 225 µg kg⁻¹ body weight, respectively. EC [14] inferred from the data that vultures were fed with contaminated meat for 2 days, so they divided the total dose by a factor of 2 to produce an LD10 and LD50 of 37 and 112 µg kg⁻¹ body weight per day, respectively. The energy-normalised effect concentration was calculated according to a formula in EC [13]: LCx = LDx × BW/DEE. This produced an LC10 estimate of 0.082 µg kJ⁻¹ diet and an LC50 of 0.249 µg kJ⁻¹ diet. These values were then adjusted to account for the half-life of diclofenac in Gyps vultures (16 h) to estimate the effect concentration after 5 days of exposure instead of only 2 days. The resulting LC10 and LC50 were 0.0722 and 0.219 µg kJ⁻¹ diet, which were then divided by an assessment factor of 100 to account for any remaining uncertainties. The resulting EQS values for relevant food items in aquatic food chains were calculated by multiplying the LC10/100 by the energy content of these food items, as listed in EC [13]. This resulted in EQS values of 3.99 µg kg⁻¹ diet for fish, 1.16 µg kg⁻¹ diet for bivalves, 3.58 µg kg⁻¹ diet for freshwater arthropods, and 2.01 µg kg⁻¹ diet for aquatic vegetation.
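The arithmetic of this derivation can be reproduced step by step. In the sketch below, the five-day correction is assumed to follow simple first-order accumulation kinetics (a factor of (1 - e^(-k*48 h))/(1 - e^(-k*120 h)) with k = ln 2 / 16 h, which reproduces the reported 0.0722 and 0.219 µg kJ⁻¹), and the food energy contents are back-calculated from the reported EQS values rather than quoted directly from the tables in EC [13]:

```python
import math

BW = 4.75                        # vulture body weight, kg
FOOD = 0.341                     # daily meat consumption, kg/d
DEE = 826.7 * BW**0.61           # daily energy expenditure, kJ/d (Komen 1992) -> ~2139
energy_meat = DEE / FOOD         # implied energy content of meat, kJ/kg -> ~6272

ld10, ld50 = 0.074, 0.225        # mg/kg body weight (Green et al. [19])
ld10_d, ld50_d = ld10 / 2, ld50 / 2   # total dose spread over 2 days of feeding

# Energy-normalised effect concentrations: LCx = LDx * BW / DEE (EC [13])
lc10 = ld10_d * BW / DEE * 1000  # µg/kJ diet -> ~0.082
lc50 = ld50_d * BW / DEE * 1000  # µg/kJ diet -> ~0.249

# Adjust from a 2-day to a 5-day exposure, assuming first-order kinetics
# with a 16 h half-life (an assumption that reproduces the reported values)
k = math.log(2) / 16.0           # per hour
corr = (1 - math.exp(-k * 48)) / (1 - math.exp(-k * 120))
lc10_5d, lc50_5d = lc10 * corr, lc50 * corr   # -> ~0.0722 and ~0.219

AF = 100                         # assessment factor
# Energy contents (kJ/kg wet weight) back-calculated from the reported EQS;
# EC [13] tabulates the authoritative figures.
food_energy = {"fish": 5520, "bivalves": 1600, "arthropods": 4950, "plants": 2780}
for item, e in food_energy.items():
    print(f"EQS {item}: {lc10_5d / AF * e:.2f} µg/kg wet weight")
# -> fish 3.99, bivalves 1.16, arthropods 3.58, plants 2.01
```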
These values are consistent with the EMA [15] recommended maximum concentration of diclofenac residues in tissue to ensure the safety of vultures of 3 µg kg⁻¹ diet. The lowest diet-based EQS (which was for bivalve molluscs) was then divided by a bioaccumulation factor (BAF) of 216 derived from a field study by Du et al. [11] to produce an EQS in water of 5.4 ng L⁻¹ to protect against avian secondary poisoning.

Du et al. [11] studied the levels of several pharmaceuticals in fish and invertebrates in a surface water course in Texas that received treated wastewater effluent, although at the time of the study there was no stream flow upstream of the wastewater discharge, indicating that both the water and biota samples collected were effectively from the undiluted effluent of a wastewater treatment plant. Although mean diclofenac concentrations were relatively consistent on the three consecutive days of the study, there was high variability in diclofenac concentrations in replicate samples (79 ± 43, 71 ± 29, and 86 ± 55 ng L⁻¹), and this level of variability was considerably higher than was observed for any of the other pharmaceuticals detected in the study. BAF values of between 140 and 419 L kg⁻¹, with a geometric mean of 216 L kg⁻¹, were reported for several species of molluscs. This included a mean concentration of 13 ± 5.6 µg kg⁻¹ in planorbid snails, which equates to a BAF of ~173 L kg⁻¹. Diclofenac was also detected in periphyton at a concentration of 3.6 ± 2.3 µg kg⁻¹, which equates to a BAF of ~48 L kg⁻¹. However, a study by the same authors at the same sampling site 2 years earlier reported different results [12], even though it was also conducted at a time when there was no upstream flow in the receiving water. This earlier study found lower concentrations of diclofenac in the water (7.6 ng L⁻¹ in a single unreplicated sample) and did not detect any diclofenac in either periphyton or planorbid snails, which were the only biota sampled on that occasion. It is unclear what mechanisms might lead to such large differences in exposure and uptake in snails and periphyton between these two monitoring studies, with one study showing no evidence of uptake and the other showing uptake far greater than reported from the laboratory studies discussed in the next section.

Bioaccumulation and biomagnification of diclofenac

Estimated bioaccumulation

Diclofenac is a monocarboxylic acid with a pKa value of approximately 4, meaning that at pH 4 it will be present in approximately equal amounts in its ionised and unionised forms. At pH 6 approximately 99% of diclofenac will be present in the ionised form, and at pH 7 approximately 99.9% will be ionised. Consequently, in both surface waters and physiological fluids diclofenac will be almost entirely present in its ionised form. Avdeef et al. [1] determined the partition coefficient for the ionised form of diclofenac to be 0.68, which is much lower than the criterion of a log Kow value of three or greater that is considered to indicate a potential for bioaccumulation [13]. There would also be a requirement to derive an EQS to protect against secondary poisoning if the substance had a reliable bioconcentration factor (BCF) or bioaccumulation factor (BAF) greater than 100, or a reliable biomagnification factor (BMF) greater than one. Diclofenac binds extensively to blood plasma proteins and lipoproteins [5,10] with a high affinity and a large capacity for binding.
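The ionisation percentages quoted above follow from the Henderson-Hasselbalch relation for a monoprotic acid; a minimal check, assuming a pKa of exactly 4:

```python
def ionised_fraction(pH: float, pKa: float = 4.0) -> float:
    """Fraction of a monoprotic acid present in its ionised (anionic) form."""
    return 1.0 / (1.0 + 10 ** (pKa - pH))

for pH in (4.0, 6.0, 7.0, 7.4):
    print(f"pH {pH}: {ionised_fraction(pH) * 100:.2f}% ionised")
# pH 4: 50.00%, pH 6: 99.01%, pH 7: 99.90%, pH 7.4: 99.96%
```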
This behaviour dominates the distribution of diclofenac in animals, effectively making blood the most relevant compartment for any accumulation. This association with plasma explains the much more mobile and transient nature of accumulated diclofenac compared to typical apolar organic chemicals, and also the fact that analyses of diclofenac in the blood plasma of fish and birds have identified significant levels in some cases. Measurements of diclofenac in blood plasma may, therefore, be considered somewhat analogous to measurements of apolar compounds in lipids, because in both cases the substances are concentrated in the sampled tissues.

The physicochemical properties of diclofenac indicate that the substance is water soluble, ionised in aqueous environmental media, and unlikely to undergo significant environmental partitioning due to its presence in an anionic form in the environment [18]. Diclofenac may undergo some partitioning to cationic adsorbent phases in the environment, including some clay minerals, such as kaolinite, under some pH conditions. Empirical partitioning data are consistent with indications from physicochemical data that adsorption of diclofenac to both soils and sewage sludges is limited, suggesting a relatively high level of mobility in the environment [4]. Diclofenac's environmental behaviour is, therefore, typical of many other human pharmaceuticals. It readily ionises, is water soluble, and shows a limited tendency to partition to environmental matrices, which is unsurprising, as these are often clinical design requirements for medicines [6]. Importantly, diclofenac differs from the general non-polar organic chemicals for which log Kow can be used as a surrogate indicator of bioaccumulation and secondary poisoning potential [13]. From the physicochemical properties of diclofenac, as judged by these traditional triggers, we would not expect secondary poisoning risks from diclofenac. However, food chain transfer has been postulated [51], and due to the potential for widespread and relatively consistent emissions there is the potential for concentrations in biota to be close to steady state despite rapid depuration of diclofenac. Any empirical evidence relating to the potential for food chain transfer should, therefore, be considered.

Experimental bioaccumulation data

Several bioaccumulation studies are available for diclofenac (e.g., [3,7,11,12,17,26,28,32-35,44,52,53,56,61,62]). Most of these studies do not comply with European or international guidance [13,47] for one or more of the following reasons: they did not measure whole body concentrations (only specific organs or tissues were analysed); they were not similar in experimental setup to the internationally accepted OECD guideline 305; or the concentration of the test compound was not measured at several timepoints in exposed organisms.

Schwaiger et al. [52] exposed rainbow trout (Oncorhynchus mykiss) for 28 days under flow-through conditions to diclofenac (99.9% pure) dissolved in 0.12‰ DMSO at 0, 1, 5, 20, 100, and 500 µg L⁻¹. The exposed fish were 1.8 years old (average weight 167.6 ± 20.28 g; average length 25.9 ± 1.04 cm). Concentrations of diclofenac in the test water were measured weekly. At the end of the exposure period, samples of liver, kidney, spleen, gills, and muscle from five individuals at each test concentration were analysed to determine diclofenac concentrations.
A concentration-related accumulation of diclofenac was found in these tissues, with the highest concentrations usually found in the liver, followed by the kidney and gills, and only low concentrations in muscle. BCF values declined with increasing exposure concentration and were 12-2732 in the liver, 5-971 in the kidney, 3-763 in the gills, and 0.3-69 in the muscle. The authors suggest that the reason BCF values were inversely proportional to exposure concentration may have been almost complete saturation of tissues by diclofenac in the highest concentration group. This study did not measure whole body BCF, was not similar in experimental setup to the updated OECD guideline 305 [47], used a carrier solvent at a higher concentration than permitted by the OECD guideline, and did not measure the concentration of the test compound at several timepoints in fish. The inverse relationship found between exposure concentration and the accumulation of diclofenac, in contrast to most other studies, may have been due to the use of a carrier solvent. Another study, on accumulation in mussels [16], also used a carrier solvent and found a similar relationship between exposure concentrations and accumulation of diclofenac. It is not possible to extrapolate reliably from measured concentrations of diclofenac in specific fish tissues to a whole-body concentration, so the exposure of a piscivorous bird that consumes whole fish cannot be calculated. The study was, therefore, not compliant with European requirements [13] and should only be used to estimate a fish BCF in the absence of a compliant study.

In contrast, a subsequent study by Memmert et al. [35] wholly complies with OECD guideline 305 and European guidance [13]. In this study rainbow trout were exposed under flow-through conditions to radiolabelled diclofenac sodium salt (100% pure), without a solvent, at concentrations of 0, 2.1, and 18.7 µg L⁻¹. Test water samples for diclofenac analysis were taken daily. Four fish were sampled for diclofenac analysis on each of five dates during the accumulation phase and on four dates during the depuration phase. Three different methods were used to calculate the BCF: BCF_SS (steady-state BCF, calculated from fish concentrations at the steady-state plateau), BCF_K (kinetic BCF, calculated from fitted uptake and depuration rate constants), and BCF_L (lipid-normalised BCF, which is the BCF_SS normalised to 5% fish lipid content). The BCF values for the low (2.1 µg L⁻¹) and high (18.7 µg L⁻¹) exposures were, respectively: BCF_SS, 5 and 3; BCF_K, 2 and 2; and BCF_L, 9 and 5. Therefore, in contrast to Schwaiger et al. [52], there was no evidence of a large concentration dependency of the BCF in fish in this study.

With the possible exception of the studies reported by Du et al. [11,12], which were discussed earlier, there are no relevant and reliable BAF field studies that comply with the EC [13] requirement that biota and water samples originate from the same area sampled at the same time. Memmert et al. [35] calculated the time for diclofenac to reach a stable plateau concentration in fish as approximately 14 days and the depuration half-life as approximately 1 day. Diclofenac depuration is also rapid in mussels [56]. This means that linking diclofenac concentrations in water to concentrations in fish or invertebrates in the field is difficult, especially if diclofenac release rates and concentrations vary over short time periods.
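The three BCF variants reported by Memmert et al. [35] have standard definitions that can be stated compactly. The sketch below uses illustrative inputs only: the rate constants, tissue concentration, and lipid fraction are hypothetical placeholders chosen to be consistent with the reported depuration half-life of about one day and the reported BCF values, not data taken from the study itself:

```python
import math

def bcf_steady_state(c_fish: float, c_water: float) -> float:
    """BCF_SS: fish concentration (µg/kg) over water concentration (µg/L)."""
    return c_fish / c_water

def bcf_kinetic(k1: float, k2: float) -> float:
    """BCF_K: uptake rate constant (L/kg/d) over depuration rate constant (1/d)."""
    return k1 / k2

def bcf_lipid_normalised(bcf_ss: float, lipid_fraction: float) -> float:
    """BCF_L: BCF_SS normalised to a standard 5% fish lipid content."""
    return bcf_ss * 0.05 / lipid_fraction

# Illustrative values only (assumptions, not data from Memmert et al. [35])
k2 = math.log(2) / 1.0                 # depuration half-life of ~1 day -> k2 ~0.69/d
k1 = 1.4                               # hypothetical uptake rate constant, L/kg/d
print(bcf_kinetic(k1, k2))             # ~2, the order of the reported BCF_K
print(bcf_steady_state(10.5, 2.1))     # = 5, matching the reported low-dose BCF_SS
print(bcf_lipid_normalised(5, 0.028))  # ~9, if the fish lipid content were 2.8%
```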
Empirical bioaccumulation values determined from the field are, therefore, likely to be of very little use in assessing the bioaccumulation of diclofenac unless concentrations in water are well characterised over time. For example, studies by Brown et al. [3] and Fick et al. [17] are not useful for EQS derivation purposes, because concentrations of diclofenac in surface waters fluctuated with changes in effluent composition, and only fish plasma was analysed for diclofenac in these studies. The same deficiencies are evident in most other studies. Liu et al. [30] report highly variable tissue-specific BAFs for two fish species (Hemiculter leucisculus and Carassius auratus) sampled below WWTPs, while no diclofenac was found in the tissues of carp (Cyprinus carpio) sampled below a WWTP in a second study by the same research group [31]. Huerta et al. [22] report fish homogenate concentrations of diclofenac in Barbus graellsii and Micropterus salmoides from Mediterranean rivers, but no concurrent water samples were taken to allow the calculation of bioaccumulation factors (BAFs).

An experimental mesocosm study with zebra mussels (Dreissena polymorpha) reported by Daniele et al. [9] is a more reliable basis for deriving a diclofenac BAF. Continuous flows of diclofenac, at concentrations of 0, 0.05, 0.5, and 5 µg L⁻¹, were introduced into triplicate 20 m × 1 m outdoor stream mesocosms. Mussels were sampled after 3 and 6 months of exposure, and BAF values across the different concentrations ranged between 4 and 13. This study complies with European guidance [13] for bioaccumulation assessment, because water and mussels were sampled in the same place and at the same time. The study also does not suffer from the same deficiencies as other field studies with diclofenac, because the exposure concentration was measured at several timepoints. A potential limitation of the study is that mixture effects, which are emerging as a concern in waterways for certain compounds, are not considered; however, the available evidence does not support a cause for concern. In another study with bivalve molluscs, Swiacka et al. [56] studied the bioconcentration of diclofenac by the marine bivalve Mytilus trossulus and calculated a BCF after 5 days of 9.57 L kg⁻¹, with relatively rapid metabolism and excretion of diclofenac.

Biomagnification and trophic magnification factors

Four studies provide information on the biomagnification of diclofenac into predators from aquatic food items: one experimental study [27] and three reports based on field studies [2,58,59]. The studies by Lagesson et al. [27] and Xie et al. [58,59] considered entirely aquatic food chains in which uptake of diclofenac could occur through bioconcentration for all the organisms assessed. None of these studies showed that biomagnification had occurred. A more recent study by Bean et al. [2] reports concentrations of diclofenac in water and in the plasma of various fish species and osprey nestlings (Pandion haliaetus) sampled from three regions of the Delaware River and Bay. Concentrations of diclofenac in water were either non-detectable or below the method detection limit (MDL) of 4.74 ng L⁻¹. Diclofenac was not detected in the plasma of most fish sampled, although some samples did contain relatively high levels of diclofenac.
However, diclofenac was detected in the plasma of all osprey nestlings that were sampled in the study, albeit mostly at concentrations below the limit of quantification, which contrasted with the much less frequent detection of diclofenac in fish plasma samples. The authors suggest that during the sampling period ospreys may have had a foraging range that was more extensive than the area sampled for fish. This study demonstrates that diclofenac can be transferred along food chains but does not provide evidence of biomagnification.

An important complicating factor in defining the most appropriate BAF value to use for diclofenac is that exposure concentrations can be highly variable, both temporally and spatially. Similarly, because both the uptake and depuration rates of diclofenac are very rapid for both invertebrates and vertebrates, the concentrations observed in organisms can also be highly variable. This means that obtaining appropriately matched samples of both the exposure medium and the biota from field studies, in which the biota concentrations properly reflect a steady-state condition with the surface water concentrations, is extremely difficult. There is, therefore, a high degree of uncertainty associated with the BAF values calculated from field studies. This issue could be addressed by extensive monitoring of both surface water and biota concentrations within a restricted area over an extended period of time, to provide an average BAF value that could adequately represent longer term exposures. However, none of the existing field studies have been conducted over a sufficient period of time, and with sufficient sampling intensity, to achieve this.

Selection of bioaccumulation factor for use in risk assessment

Field-based bioconcentration factors for diclofenac in fish cannot be estimated accurately from any of the existing studies because of variable release rates and a short half-life in fish. Therefore, although field-based studies might provide the most realistic information on bioaccumulation under environmental conditions, none of the available studies include sufficiently detailed information on both the exposure concentrations and body burdens to allow the calculation of reliable BCF, BAF, or BMF values. The use of experimentally derived values for fish exposed in the laboratory is, therefore, preferable for this substance, based on the currently available information. The only reliable fish bioconcentration study which complies with European guidance [13] is that by Memmert et al. [35]. This study shows that the highest bioconcentration factor in rainbow trout is the lipid-normalised BCF value of 9. A reliable study on diclofenac accumulation in mussels [9] provided a very similar, although slightly higher, BAF value of 13. A value of 230 ng L⁻¹ (rounded down from 230.8 ng L⁻¹) results if this BAF is used to derive a water-based EQS, known as the QS_water, secpois, from the EMA [15] food threshold of 3 µg kg⁻¹ diet. Mesocosm and field studies provide evidence that diclofenac does not biomagnify through aquatic food chains and trophic levels [2,27,58,59], so this process does not need to be taken into account when setting a secondary poisoning EQS to protect top predators (e.g., [25]).

Diclofenac avian secondary poisoning risk characterisation

The measured environmental concentrations of a substance provide context for proposed EQS values by showing whether these values are currently met or exceeded.
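Both of the water-column standards compared in this paper follow from the same one-line calculation, dividing a dietary threshold by a bioaccumulation factor; a minimal sketch reproducing the two values:

```python
def qs_water_sec_pois(qs_biota_ug_per_kg: float, baf_l_per_kg: float) -> float:
    """Water-column QS (ng/L) from a dietary QS (µg/kg ww) and a BAF (L/kg)."""
    return qs_biota_ug_per_kg / baf_l_per_kg * 1000.0   # µg/L -> ng/L

# EC [14]: lowest dietary EQS (bivalves, 1.16 µg/kg ww) and field BAF of 216 L/kg
print(qs_water_sec_pois(1.16, 216))   # ~5.4 ng/L

# Present study: EMA [15] threshold of 3 µg/kg ww and mesocosm BAF of 13 L/kg
print(qs_water_sec_pois(3.0, 13))     # ~230.8 ng/L, rounded down to 230 ng/L
```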
Exposure data for diclofenac in European surface waters have been reported in detail by Merrington et al. [36] and summarised by Leverett et al. [29], and these data can also be used to perform an indicative compliance assessment for the QS_water, secpois. A generic exposure concentration for Europe has been determined from these data as 0.090 µg L⁻¹, defined as the unweighted mean of the 90th percentile values from all individual countries. The number of samples per country is extremely variable, with over 20,000 samples from France but fewer than 10 samples from countries such as Ireland, Denmark, and Estonia. Sufficient data are, therefore, available for a risk characterisation to be undertaken for piscivorous bird species assumed to feed only on fish captured from freshwaters close to a source of diclofenac exposure, such as a WWTP. The overall risk characterisation ratio for Europe, based on an exposure concentration of 0.090 µg L⁻¹, is 16.7 if the QS_water, secpois is set at 5.4 ng L⁻¹ and 0.39 if it is set at 230 ng L⁻¹. However, this overall estimate of the risk does not consider the differences in exposure levels between different regions. The proportion of samples from each regulatory organisation with reported concentrations of 0.0054 or 0.23 µg L⁻¹ or greater is summarised in Table 2 to provide an indication of the potential levels of compliance with an EQS for diclofenac across different regions. Table 2 also includes information on population and sampling density to indicate the representativeness of the data for each country. The overall level of compliance across the entire data set is 35.7% of samples based on a QS_water, secpois of 0.0054 µg L⁻¹ and 97.1% of samples based on a QS_water, secpois of 0.23 µg L⁻¹. However, given the very large differences in the extent of monitoring data available for different countries, it is not clear to what extent this may reflect the true overall compliance with either of the proposed QS_water, secpois values. Results in this data set that were reported as less than the limit of detection were substituted with a value of half of the limit of detection for the purpose of assessing compliance. Consequently, any limits of detection that were 0.011 µg L⁻¹ or higher were substituted with a value that exceeds the QS_water, secpois of 5.4 ng L⁻¹ recommended by EC [14]. The results of the indicative compliance assessment shown in Table 2 show that several countries (Austria, England, Northern Ireland, Portugal, Slovakia, and Slovenia) have no sites which pass the QS_water, secpois of 5.4 ng L⁻¹, and in these cases this is due to a limit of detection that is inadequate for assessing compliance with this QS_water, secpois. Switzerland also has no sites that comply with the QS_water, secpois of 5.4 ng L⁻¹; however, in this case the limit of detection is lower than the QS_water, secpois and all the samples exceed this concentration. It is not possible to assess potential compliance against the EQS derived for the food of waterbirds due to a lack of suitable biota monitoring data. EC [14] cite six papers in which measurements of diclofenac in biota are reported, focussing particularly on molluscs.
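The risk characterisation ratios and the half-LOD substitution rule described above can be made concrete in a few lines. The sample records below are invented for illustration; the two QS values and the substitution rule are as described in the text:

```python
QS_EC, QS_PRESENT = 0.0054, 0.23   # µg/L

# Overall risk characterisation ratio: generic exposure / QS
pec = 0.090                        # µg/L, mean of country 90th percentiles
print(round(pec / QS_EC, 1), round(pec / QS_PRESENT, 2))   # 16.7, 0.39

# Compliance assessment with half-LOD substitution for non-detects.
# Each record is (measured value in µg/L, or None if < LOD; LOD in µg/L).
samples = [(0.002, 0.001), (None, 0.011), (0.15, 0.001), (None, 0.002)]
values = [v if v is not None else lod / 2 for v, lod in samples]

for qs in (QS_EC, QS_PRESENT):
    passing = sum(v < qs for v in values)
    print(f"QS {qs} µg/L: {passing}/{len(values)} samples comply")
# Note: the non-detect with an LOD of 0.011 substitutes to 0.0055 µg/L and so
# fails the EC QS of 0.0054 µg/L purely because of its detection limit.
```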
Mussels (Mytilus galloprovincialis) collected from Portonovo Bay in the Central Adriatic Sea contained diclofenac concentrations of <1, 16.11, and <1 µg/kg dw in July, August, and September 2014, respectively [38,39], which equates to concentrations of <1, 1.29, and <1 µg/kg ww according to the conversion factor for bivalves in EC [13]. In a subsequent study, Mezzelani et al. [40] sampled M. galloprovincialis from six sites in the Tyrrhenian Sea and eight sites in the Adriatic Sea over up to four consecutive years (2014-2017) and, at some sites, during different seasons. Across these sites there were a total of 61 sampling dates, with five replicate samples taken on each date. Diclofenac was reported as below the limit of detection of 1.4 µg/kg dw in 29 of these sets of samples. Concentrations in the remaining 32 sets of samples varied, with a mean and standard deviation across all samples of 59.96 ± 68.13 µg/kg dw, which equates to 4.8 ± 5.45 µg/kg ww according to the conversion factor for bivalves in EC [13]. The mean coefficient of variation for diclofenac concentration across all sites was 107%, so there was very high variability within samples from the same site taken on the same date. This is hard to explain, given that each mean and standard deviation for a site and date was calculated from five replicate samples, each containing five homogenised mussels.

In China, Xie et al. [58] sampled phytoplankton, zooplankton, zoobenthos, and fish from 16 sites in Lake Taihu and reported concentrations of diclofenac in molluscs from below the limit of detection (0.06 µg/kg dw) to 11.7 µg/kg dw (equivalent to 0.94 µg/kg ww). In a subsequent study on the same lake [59] they reported a range of diclofenac in molluscs from below the limit of detection to 47 µg/kg dw (equivalent to 3.76 µg/kg ww). Yang et al. [60] sampled biota from nine sites in the New Qinhuai River, the Qinhuai River, and a section of the Yangtze River and reported mean concentrations in molluscs of 3.4 ± 0.2 µg/kg ww in the New Qinhuai River, below the limit of detection of 0.18 µg/kg ww in the Qinhuai River, and 2.3 ± 1.2 µg/kg ww in the Yangtze River. Finally, in the USA, Du et al. [11] reported mean concentrations of diclofenac in different species of molluscs sampled from a wastewater stream in Texas of between 11 and 33 ± 6.7 µg/kg dw/fw, although molluscs sampled from the same site 2 years previously [12] did not contain diclofenac concentrations above the limit of detection (0.45 µg/kg).

These studies reviewed by EC [14] suggest that a diclofenac biota standard of 3 µg/kg ww in food would be exceeded in molluscs at some sites on some occasions, particularly if the site is heavily polluted. However, most mollusc samples from the small number of studies reviewed by EC [14] contained diclofenac concentrations below the 3 µg/kg ww threshold proposed by EMA [15]. Data from unbiased biota monitoring programmes are required to assess the likely exceedance rate of biota-based standards for diclofenac across Europe.

The criteria on which the requirement to derive an EQS for secondary poisoning is based depend on the possibility of accumulation, as indicated by either empirical evidence of bioaccumulation or biomagnification, or a high log Kow value, or evidence that the substance is highly toxic to either birds or mammals.
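The dry-to-wet weight conversions applied above are simple multiplications; the bivalve factor implied by the quoted pairs (e.g. 16.11 µg/kg dw → 1.29 µg/kg ww) is approximately 0.08. A minimal sketch, with the factor back-calculated from those pairs rather than quoted from the tables in EC [13]:

```python
BIVALVE_DW_TO_WW = 0.08   # back-calculated; EC [13] tabulates the official factor

def dw_to_ww(conc_dw_ug_per_kg: float, factor: float = BIVALVE_DW_TO_WW) -> float:
    """Convert a dry-weight tissue concentration (µg/kg dw) to wet weight."""
    return conc_dw_ug_per_kg * factor

for dw in (16.11, 59.96, 11.7, 47.0):
    print(f"{dw} µg/kg dw -> {dw_to_ww(dw):.2f} µg/kg ww")
# 16.11 -> 1.29, 59.96 -> 4.80, 11.7 -> 0.94, 47.0 -> 3.76 (matching the text)
```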
In the case of diclofenac, toxicity data for vultures indicate a high level of toxicity to at least some bird species, although the substance does not appear to meet the criteria based on bioaccumulation, except in one field bioaccumulation study by Du et al. [11]. Exposure of wildlife via the aquatic food chain is the only directly relevant environmental exposure resulting from human pharmaceutical uses of diclofenac across Europe. This is because the drug is excreted in either urine or bile, and the parent substance and its metabolites are, therefore, released into the environment via wastewater effluents. There could still potentially be some limited veterinary sources of diclofenac into the environment, but they are unlikely to travel along direct exposure pathways into the aquatic environment and, therefore, could not be controlled through the implementation of an EQS for surface waters. Other potential sources from human pharmaceutical uses, such as improper disposal or wash-off of topical applications, are likely to share the same route of exposure to the aquatic environment via wastewater treatment plants. The generic exposure concentration used in this paper is a reasonable worst-case regional concentration for Europe, because it has been calculated as the mean of the 90th percentile concentrations from several different European countries. However, higher concentrations could be encountered locally where there are specific emission sources, such as major hospitals, large numbers of care homes, limited dilution of wastewater effluents into receiving waters, or diclofenac production facilities. Country-specific assessments of potential compliance with the EQS derived in the present study suggest that levels of non-compliance could be relatively high in some regions, such as Germany and Flanders, whereas potential non-compliance in France, the country with the most extensive monitoring data set for diclofenac, is less than 5%. The region-specific risk characterisation is based on a face-value comparison of the concentrations of diclofenac reported in individual spot samples against the proposed EQS for diclofenac in the water column expressed as an annual average. Furthermore, the extent to which region-specific monitoring has been targeted at those sites most likely to be receiving diclofenac exposures, or has been aimed at providing an overall indication of country-wide exposures, is unknown and likely to vary between different regions. This means that making robust comparisons of the potential compliance situation between different regions is difficult. The relatively transient nature of diclofenac concentrations in biota, which reflects both variability in water column concentrations and rates of metabolism and excretion, means that high concentrations in fish and molluscs are unlikely to be maintained over longer exposure periods unless releases to water are continuous and contain consistently high diclofenac concentrations. A more reliable indication of the potential for secondary poisoning would be via monitoring of concentrations in fish and molluscs in aquatic ecosystems. The cumulative frequency distributions of the reported monitoring data, based on individual sample results, are shown in Fig. 2 for Flanders, France, and Germany. The EC [14] and our proposed thresholds for diclofenac derived for the protection of birds of, respectively, 5.4 and 230 ng L⁻¹ are also indicated for reference.

[Fig. 2: Cumulative frequency distributions of reported diclofenac monitoring data for Flanders, France, and Germany (dark blue) (data from [29]). The EQS for secondary poisoning derived by the EC [14] is indicated by the vertical black dotted line, and the EQS derived in the present study by the vertical black dashed line.]
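Cumulative frequency curves of the kind shown in Fig. 2 are straightforward to construct from a vector of individual sample results; a minimal sketch using invented concentrations (the real national data sets are given in [29]):

```python
import numpy as np
import matplotlib.pyplot as plt

# Invented sample concentrations (µg/L) standing in for a national data set
rng = np.random.default_rng(1)
conc = rng.lognormal(mean=np.log(0.03), sigma=1.2, size=500)

x = np.sort(conc)
cum_freq = np.arange(1, len(x) + 1) / len(x) * 100   # % of samples <= x

plt.semilogx(x, cum_freq, label="samples")
plt.axvline(0.0054, linestyle=":", color="k", label="EC [14] EQS (5.4 ng/L)")
plt.axvline(0.23, linestyle="--", color="k", label="present study EQS (230 ng/L)")
plt.xlabel("diclofenac concentration (µg/L)")
plt.ylabel("cumulative frequency (%)")
plt.legend()
plt.show()
```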
[Fig. 2 caption (recovered fragment): ..., and Germany (dark blue); data from [29]. The EQS for secondary poisoning derived by the EC [14] is indicated by the vertical black dotted line, and the EQS derived in the present study is indicated by the vertical black dashed line.]
This provisional compliance assessment is at best indicative, as it is based on a face value assessment against individual samples rather than annual average concentrations calculated from regular samples collected over the course of a year or more. The data set for Flanders includes 1025 samples covering 84 sites collected over a 5-year period. The data set for France includes 21,472 samples covering 1827 sites collected over a 3-year period. The data set for Germany includes 233 samples covering 24 sites collected over a 2-year period. All samples from all three of these countries were reported as being from routine monitoring and were collected from receiving freshwaters. Regulatory monitoring programmes are routinely targeted towards the most potentially problematic sites, and this may be the situation with the data sets for Flanders and Germany, both of which have a much lower number of sampling sites with higher diclofenac concentrations than in the data set from France. The relatively rapid uptake and excretion of diclofenac by fish [35] means that fish of very different sizes, and potentially also trophic levels, are likely to contain similar concentrations of diclofenac. Xie et al. [58] found no significant relationship between trophic level and diclofenac concentrations in biota, and Xie et al. [59] found a significantly negative relationship between trophic level and diclofenac concentrations in biota. This contrasts with many bioaccumulating chemicals, for which concentrations in fish tend to increase with increasing fish size. This is because larger piscivorous fish feed on smaller fish, and so non-polar chemicals biomagnify along food chains because of lipophilicity and relatively low excretion rates. A consequence of this is that exposure concentrations of diclofenac via food at potentially different avian trophic levels are likely to be very similar, although ingestion rates may differ due to differing feeding rates. The ionisable nature of many pharmaceuticals means that approaches for assessing bioaccumulation based on K_OW are inappropriate. These approaches were developed for non-polar organic chemicals and rely on the hydrophobicity of a substance and its tendency to partition into lipid phases from aqueous environments. Although the predicted partition coefficients for many pharmaceuticals in their unionised forms are relatively high (e.g., a log K_OW of 4.5 for diclofenac), the unionised form of the chemical is not relevant to its fate in aqueous environmental systems. Furthermore, the tendency of diclofenac to bind to plasma proteins indicates a very different behaviour when compared to non-polar chemicals once it has been bioaccumulated by organisms. To avoid these problems associated with K_OW-based approaches to bioaccumulation for pharmaceuticals such as diclofenac, it is important that risk assessment focuses on empirical data rather than estimation methods. Data on concentrations of diclofenac in European surface waters suggest that there are potential risks to waterbirds. It would, therefore, be prudent to monitor diclofenac concentrations in water and biota in those surface waters known to receive high concentrations of diclofenac from WWTPs, as well as at appropriate reference sites which are not directly impacted by major local wastewater discharges. Monitoring of locations where waterbirds gather would also be appropriate, and any such locations that are close to significant sources of exposure (e.g., urban areas) may be a particular priority for monitoring. The concentrations of diclofenac in waterbirds can then be related to the population dynamics of piscivorous bird populations in the vicinity of these discharges to determine whether there is any evidence for adverse effects. This should include monitoring of diclofenac concentrations in whole fish and other prey items, so that a "food basket" approach can be used to assess potential secondary poisoning risks to relevant predatory vertebrates. Monitoring of predators would necessarily be non-destructive, and limited to the collection of blood plasma samples from relevant avian predators. This would enable a qualitative assessment to be made of the potential for exposure of these kinds of species via their food, although, as the EQS is set as a concentration in the food of predators, this would not be a suitable approach for compliance assessment.
2022-03-20T13:20:08.556Z
2022-03-19T00:00:00.000
{ "year": 2022, "sha1": "2f46964cfc89f7b7b5da5403e28e084aef50a21f", "oa_license": "CCBY", "oa_url": "https://enveurope.springeropen.com/track/pdf/10.1186/s12302-022-00601-7", "oa_status": "GOLD", "pdf_src": "Springer", "pdf_hash": "2f46964cfc89f7b7b5da5403e28e084aef50a21f", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [] }
241959147
pes2o/s2orc
v3-fos-license
Further Investigation of Taḍmīn (Implication of Meaning) in the Qur'an with Reference to Four Muslim-Arabic Authored English Translations This research is an extension of Nouraldeen (2020). The principal objective of this project is to investigate the English translation of complete taḍmīn in the Qur'an and shed light on other types of taḍmīn in the Qur'an. Although complete taḍmīn is probably not as numerous as other types of taḍmīn in the Qur'an, it deserves much attention for the interesting additional meaning it provides. Two sources are used to collect the āyāt (verses) that involve complete taḍmīn. Four English translations of the second sūrah (chapter) produced by Arab translators are analysed and assessed, with a suggested translation that is believed to improve on the current translations. None of the four translators shows awareness of taḍmīn when translating the Qur'an, and this suggests a further empirical study to elicit the views of some Qur'an translators with regard to why taḍmīn is not given attention and how it can be rendered. One unexpected finding is that some Qur'an translations are not consistent when translating the same words in different āyāt (verses). This study is intended to be part of an ongoing work which studies and assesses the English translation of all āyāt (verses) that include complete taḍmīn as they appear in the same arrangement of the suwar (chapters) in the Qur'an. Qur'an Introduction 1 Taḍmīn is a linguistic and rhetorical phenomenon of which a preposition and a noun/verb are the core. It occurs when a preposition follows a noun/verb with which it is not standardly collocated. The purpose of this linguistic-rhetorical phenomenon is to produce two meanings using one preposition and one noun/verb. In Arabic, taḍmīn could be interpreted as taqāruḍ 'mutual borrowing', in which one preposition acts in the place of another. Verb/noun-preposition taḍmīn is another phenomenon which involves the presence of a noun/verb and a preposition (see: Nouraldeen, 2020, pp. 239-240). Types of taḍmīn in the Qur'an Taḍmīn is one of the unique characteristics of the Qur'an. However, scholarly articles have given little attention to its translation into English (Hummadi et al., 2020, p. 2). I believe there are different types of taḍmīn in the Qur'an. This is inferred from two sources. The first is the one I have used in Nouraldeen (2020, p. 241), authored by Fadel (2005). The second one is At-taḥrīr wa at-tanwīr by Ibn ʕāšūr (1984). Although Fadel is fairly comprehensive (Nouraldeen, 2020, p. 241), it is not dedicated to one type of taḍmīn, i.e. complete taḍmīn 2, with which my studies on taḍmīn in the Qur'an are concerned. Fadel includes other types of taḍmīn (see Table 1). Studying more sources might reveal further types of taḍmīn in the Qur'an. (a) Incomplete implicit preposition. This occurs when one verb implies another one which standardly takes a complement preposition. An example of this type is found in Q 27:18 ‫النمل‬ ‫واد‬ ‫على‬ ‫أتوا‬ ‫إذا‬ ‫حتى‬ "when they came across a valley of ants" (Khattab, 2016, p. 314). The Arabic text has an explicit verb ‫أتوا‬ 'came', an explicit preposition ‫على‬ (lit. 'on') and an implicit verb ‫أشرف/أشفى‬ 'approach'. The explicit Arabic verb ‫أتوا‬ is transitive, i.e. it takes an object and cannot take a preposition, unlike the English verb 'came', which is intransitive and needs a following preposition 'to'. Therefore, in the case of the Arabic text, implicitly, there is no preposition.
Moreover, the explicit Arabic verb ‫أتوا‬ is followed by an explicit preposition ‫على‬ with which it does not standardly collocate 3. However, the explicit Arabic preposition ‫على‬ (lit. 'on') is standardly collocated with the implicit Arabic verb ‫أشرف/أشفى‬ 'approach', unlike 'approach' in English, which is transitive. This difference between Arabic and English in terms of standard collocation is a challenge when translating from Arabic into English. (b) Incomplete explicit preposition: This involves a verb that implies another one but without an explicit preposition. An example of this type is found in Q 2:26 ‫بعوضة‬ ‫ما‬ ‫مثلا‬ ‫يضرب‬ ‫أن‬ ‫يستحيي‬ ‫لا‬ ‫هللا‬ ‫إن‬ "Allah does not shy away from using the parable of a mosquito" (Khattab, 2016, p. 4). The explicit Arabic verb ‫يستحيي‬ 'to be shy' could be transitive with َ ‫ب‬ ْ ‫ر‬ َ ‫ض‬ ‫مثل‬ as an object, or could be intransitive followed by ‫ن‬ ‫مِ‬ (lit. 'from') and then ‫مثل‬ ِ ‫ب‬ ْ ‫ر‬ َ ‫.ض‬ On the other hand, the explicit verb ‫يستحيي‬ implies an implicit intransitive verb ‫أمسك‬ 'refrain', which is standardly collocated with an implicit preposition ‫عن‬ 'from'. (c) Verbal: This involves a verb that implies another verb without an implicit or an explicit preposition. An example of this type is found in Q 24:27 ‫تستأنسوا‬ ‫حتى‬ ‫بيوتكم‬ ‫غير‬ ‫بيوتا‬ ‫تدخلوا‬ ‫لا‬ "Do not enter any house other than your own until you have asked for permission" (Khattab, 2016, pp. 290-291). The translation emphasizes the implicit verb, which is ‫,تستأذنوا‬ not the explicit one ‫.تستأنسوا‬ The explicit verb ‫تستأنسوا‬ means that when asking for permission to enter any house rather than his/her own, (a) he/she should know if anyone is there, (b) the host should 'feel at ease with receiving the guest' and (c) the guest should 'feel if he/she is welcomed by the host'. This could be translated as '… until you know that the host feels at ease with receiving you while asking for permission'. (d) Incomplete verb: This involves a verb that is standardly collocated with an implicit and an explicit preposition, where the latter is a more common collocation in Arabic. However, the less common preposition is used explicitly for rhetorical purposes, i.e. enriching the text/speech with additional meaning and bringing the importance of the meaning resulting from the less common preposition with the verb to the attention of readers/listeners. An example of this type is found in Q 2:4 ‫إليك‬ ‫ل‬ ِ ‫ز‬ ْ ‫ن‬ ُ ‫أ‬ "has been revealed to you" (Khattab, 2016, p. 2). In Arabic, the verb ‫ل‬ َ ‫ز‬ ْ ‫ن‬ َ ‫أ‬ 'reveal' is more commonly collocated with ‫على‬ 'on' (Ibn ʕāšūr, 1984, vol 1, p. 239). Its counterpart in English is 'to'. However, it is less commonly collocated with 'to' (there is no counterpart in English for 'to' collocated with ‫ل‬ َ ‫ز‬ ْ ‫ن‬ َ ‫أ‬ 'reveal' in Arabic). In the example given above, the emphasis of the meaning of ‫على‬ ‫نزل‬ ُ ‫أ‬ 'reveal to' (lit. 'reveal on') is on the book (the Qur'an). Nevertheless, when using the explicit Arabic preposition ‫إلى‬ 'to' with 'reveal', the emphasis falls on the Prophet to whom the book, i.e. the Qur'an, is revealed, and on the fact that the book is well established in him (Ibn ʕāšūr, 1984, vol 1, p. 239). Methodology I thought that Qur'an translators who are Arabic native speakers of Islamic background and have an excellent command of English would be an appropriate sample for studying the English translation of complete taḍmīn in the Qur'an.
This is because, given their deep knowledge of Arabic, they could be expected to take taḍmīn into consideration when translating the Qur'an. Hummadi et al. (2020, p. 3) believe that prepositions in the Qur'an are sometimes not translated properly due to the absence of "the required knowledge of the use of prepositions in the Holy Qur'an". I believe Qur'an translators who are Arabic native speakers will probably have this essential knowledge. Out of the four types of taḍmīn in the Qur'an mentioned earlier, this study will analyse and discuss four English Qur'an translations of the āyāt (verses) of ch. 2 (al-Baqarah) with complete taḍmīn. The reason for studying this type of taḍmīn is that, unlike other types, it helps to understand this phenomenon and therefore understand the other types. As it is complete taḍmīn, all elements are used. Therefore, when other types miss one or more of these elements, this will make it easier to understand these types, given that complete taḍmīn is understood. Moreover, complete taḍmīn has implicit elements which, it is believed, may not be taken into consideration when translating the Qur'an into English, although translating them will assist in fully understanding the āyāt (verses) appropriately. Bridges (2020). The āyāt (verses) involving complete taḍmīn will be analysed, discussed and assessed. As there is no reference in Arabic, to the best of my knowledge, that encompasses all āyāt (verses) with complete taḍmīn in the Qur'an, I had to consult different sources to pinpoint some of the places where it occurs in the Qur'an. The different references used in this study are Ibn ʕāšūr (1984) and Fadel (2005). Therefore, this study, coupled with my previous one (Nouraldeen, 2020), will hopefully plant the seeds for future pieces of research that embrace all āyāt (verses) with complete taḍmīn in the Qur'an. The āyāt (verses) chosen for this research will be studied as they are arranged in the Qur'an, starting from the first surah (chapter) to the final one, unlike in Fadel (2005), where they are unfortunately not arranged in the same order as they appear in the Qur'an. I believe following the arrangement of the Qur'an when studying taḍmīn will facilitate analysis and discussion and make it easier for the reader to follow. The sūrah (chapter) that will be analysed and discussed in this study is ch. 2 (al-Baqarah). The first sūrah (chapter) of the Qur'an has no complete taḍmīn examples. Al-Baqarah is the longest surah (chapter) in the Qur'an and, as might accordingly be expected, contains some āyāt (verses) with taḍmīn. As the focus of this paper is to study complete taḍmīn in the Qur'an, the four English translations of the Qur'an will be analysed and discussed using the four-element model that was suggested by Nouraldeen (2020, p. 240). These elements are explicit noun/verb, implicit preposition, implicit noun/verb and explicit preposition. In this study, I modify this model by adding nouns, by which I mean 'verbal nouns'. Analysis and Discussion In each āyah (verse), the four English translations will be presented, followed by a table which arranges the four elements of taḍmīn and finds which element is present or absent in these translations. After discussing and analysing the translations, the purpose of taḍmīn and the meanings it provides will be presented, and an improvement to the translations wherever needed will be suggested. In the conclusion section, the three research questions will be answered based on the outcomes of the discussion and analysis.
All four translators translate the explicit verb ‫َوا‬ ‫ل‬ َ ‫خ‬ as 'are alone', except Khattab (see below), using the grammatical structure 'auxiliary verb' (verb 'to be') + 'adjective', while the Arabic text uses a lexical (main) verb in the past tense. Khattab uses 'alone' without the verb 'to be', with an arguably elliptical 'they are'. This explicit verb ‫َوا‬ ‫ل‬ َ ‫خ‬ may be rendered as 'meet in private', because this incident in the āyah (verse) is in contrast to the beginning of it, 'when they meet the believers' [in public]. So, I believe 'meet in private' would be an acceptable translation, as it is consistent with the contrasting incident. The explicit preposition ‫إلى‬ (lit. 'to') is not rendered by the four translators. This is because the English translation 'are alone' is standardly collocated with 'with'. However, the translators may not have realised that the explicit verb ‫َوا‬ ‫ل‬ َ ‫خ‬ is not standardly collocated with the explicit preposition ‫.إلى‬ Ibn ʕāšūr (1984, vol 1, p. 291) expresses the view that ‫َوا‬ ‫ل‬ َ ‫خ‬ 'are alone' is standardly collocated with the implicit preposition ِ ‫بـــ‬ 'with', while the explicit preposition ‫إلى‬ 'to' is standardly collocated with the implicit verb ‫رجع‬ ‫أو‬ ‫عاد‬ ‫أو‬ َ ‫آب‬ 'return' or 'come back'. At the beginning of the āyah (verse), Allah describes the situation in which the unbelievers go to see or talk with the believers using the verb 'meet', which indicates that this is probably a brief and uninteresting meeting. However, when they go to see their Satans 4, they return to them with an interest in meeting them, and they meet them in private. These meanings would not be expressed without using ellipsis in the form of taḍmīn. Therefore, an appropriate idiomatic translation might be 'when they return to their Satans meeting up with them in private'. While the Arabic verb is intransitive, the English verb 'acknowledge' is transitive. Therefore, the explicit preposition ‫َــ‬ ‫ل‬ cannot be rendered; however, its existence in the āyah (verse) reveals the implicit verb. A suggested translation to improve the four translations could be 'we will never 6 believe in you, nor will we ever acknowledge your God'. Another suggested translation is 'we will neither believe in you nor acknowledge your God'. Taḍmīn in this āyah (verse) is a sign of the fervent disbelief the non-believers adhere to. They not only disbelieve; they do not acknowledge belief in Allah (God) at all. These meanings would not be revealed without the use of the rhetorical feature of taḍmīn. This part of the āyah (verse) may seem similar in meaning to āyah (verse) 1 above, and therefore it might be claimed that there is no need to analyse and discuss it. As a matter of fact, the explicit verb ‫خلا‬ is identical to the one above, ‫خلوا‬ in āyah (verse) 1. However, the subject ‫بعضهم‬ and the prepositional phrase ‫إلى‬ ‫بعض‬ are different, and thus the implicit verb ‫سكن‬ ‫أو‬ ‫ارتاح‬ 'feel at ease' is also non-identical 7. Furthermore, even though the explicit verbs ‫خلوا/خلا‬ in both āyāt (verses) are the same, what is surprising is that some of the four translators render them inconsistently. With regard to the translation of taḍmīn in this āyah (verse), none of the four translators translates the explicit preposition ‫إلى‬ (lit. 'to'). Surprisingly, Khattab does not even translate the explicit verb ‫.خلا‬ Instead, he translates it as a phrase, 'in private'. On the other hand, Bridges translates neither the explicit preposition nor the implicit one. Fadel (2005, p.
326) believes that the explicit preposition ‫إلى‬ suggests an implicit verb, which is ‫ارتاح/سكن‬ 'feel at ease'. Verb/noun-preposition collocations may differ from one language to another, and this is true for 'feel at ease', which is standardly collocated with 'with' in English and with ‫إ‬ ‫لى‬ in Arabic. At the beginning of the āyah (verse), Allah describes how the unbelievers meet the believers using the verb ‫وا‬ ‫َقُ‬ ‫ل‬ 'meet', because there is likely no intimacy between them. However, when they meet the disbelievers, they come together with each other, which may be evidence of an intimate closeness and feeling of ease. These meanings are pointed to in the āyah (verse) in an elliptical form using taḍmīn. Unlike the other translators, Hammad translates the explicit preposition ‫إلى‬ (lit. 'to'). His choice may be interpreted as meaning that he is aware of the explicit preposition and he translates it, though he may not be aware that this explicit preposition is not standardly collocated with the explicit noun. Another interpretation is that he has to use 'to' because he either (a) wants to be close to the ST, no matter whether it is standardly collocated in English with 'approach', or (b) believes that it is standardly collocated with 'approach' in English. In fact, 'to' in English does not standardly collocate with 'approach'. In Arabic, the explicit noun ‫الرفث‬ used in the āyah (verse) might mean two things. It may mean 'the act of having sex' or 'foreplay'. So, to avoid ambiguity, the āyah (verse) uses taḍmīn by using the explicit preposition ‫إلى‬ which is not standardly collocated with the explicit noun ‫الرفث‬ to indicate another noun ‫(الجماع)‬ ‫الإفضاء‬ with a different meaning. This implied noun ‫(الجماع)‬ ‫الإفضاء‬ has one meaning only, which is 'the act of having sex', while ‫الرفث‬ means 'foreplay' in this context. Another possible reason why taḍmīn is used is to attract the attention of the readers/listeners to the importance of engaging in foreplay before having sex. This is also emphasised in the same sūrah (chapter), Q 2:223. So, by using taḍmīn, these two meanings are included in an elliptical rhetorical form. As not all elements of taḍmīn in the āyah (verse) are translated, the following translation is probably an improvement and transfers all elements: 'engaging in foreplay with your wife and then having intercourse with her'. Conclusion As is recommended in Nouraldeen (2020, pp. 239 and 244), interviews with Qur'an translators should be conducted in the future to work out why taḍmīn in the Qur'an is not being paid attention to and how it can be translated and appropriately communicated in the target language. In the cases considered in this article, none of the four translations typically pays attention to the explicit prepositions which do not standardly collocate with the explicit nouns/verbs. As discussed, this non-standard form of collocation is used intentionally in the Qur'an to present two meanings using ellipsis in its taḍmīn form. What is surprising is that some Qur'an translators show inconsistency when translating the same words used in the same sense, such as Bridges in translating the explicit verb ‫خلوا‬ in āyah (verse) 1 as 'are alone' and in āyah (verse) 3 as 'come privately'. Another example is where Bridges translates the same word ‫نساء‬ as 'women' in āyah (verse) 5 but as 'wives' in āyah (verse) 6. These differences between Arabic and English in terms of standard collocations are a challenge when translating.
Such differences might involve the presence of the preposition in Arabic and the lack of it in English, as in the case of 'believe' (in one sense only out of the two mentioned in the discussion and analysis of āyah (verse) 2) and 'swear', or the use of different prepositions, as in the case of ‫تثبيتا‬ 'affirmation' in āyah (verse) 7, which is collocated with the preposition ‫ِـ‬ ‫ل‬ 'for', while in English 'affirmation' is collocated with 'of'. To answer the research questions, it is not clear whether Qur'an translators of Islamic and Arabic origin are aware of taḍmīn or not. It can be noted that, out of the seven āyāt (verses), only Khattab translates taḍmīn, and only once, in āyah (verse) 6. However, it is clear that the four translators do not pay attention to translating taḍmīn, although it is obvious in the previously analysed āyāt (verses) that the explicit prepositions are not standardly collocated with the explicit nouns/verbs. They mostly translate a part of taḍmīn, the implicit preposition. However, they do not translate the explicit preposition. In my opinion, taḍmīn should be translated, as it adds additional meanings. This study has not been able to carry out interviews with some Qur'an translators to find out how they view taḍmīn and why it is not translated. Therefore, there is a need to conduct further empirical research and interview some of them. As this study is part of an ongoing work on the translation of taḍmīn in the Qur'an, my future pieces of research might consider the possibility of interviewing some Qur'an translators.
2021-08-23T18:28:00.324Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "ae96d9eb36b4676e19f88a0f920ab1498ce2bf02", "oa_license": "CCBY", "oa_url": "https://al-kindipublisher.com/index.php/ijllt/article/download/1535/1273", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "27d31502437c0e93e5c4f718cd6ad7acf61e7437", "s2fieldsofstudy": [ "Linguistics" ], "extfieldsofstudy": [] }
18253232
pes2o/s2orc
v3-fos-license
Dark Matter in Globular Clusters We first review reasons why dark matter is an interesting issue in connection with star clusters. Next we consider to what extent the presence of dark matter is consistent with their dynamics and structure. We review various model-dependent and model-independent methods which have been applied to two well studied clusters, NGC 6397 and 47 Tuc. We suggest that about half of the mass in each object is still unobserved, possibly in the form of a mixture of low-mass stars and white dwarfs. Introduction It is remarkable how many of the major problems in astronomy are influenced by what we know about globular clusters. Nevertheless, the dark matter problem is an exception. The question whether globular clusters contain significant amounts of dark matter has not often emerged in the literature, however much it may have been discussed informally, and to the best of our knowledge it has never previously been reviewed. In this paper we first attempt to set out the background to the problem, explaining why the search for dark matter in the context of globular clusters is an interesting problem. Next we consider ways in which such a search may be carried out, with particular emphasis on techniques based on dynamical modelling. We shall see that this topic has reached a new and exciting point of development, thanks to the depth to which the mass function of globular clusters can now be examined observationally. Nevertheless, important uncertainties still remain, and we direct attention to both theoretical and observational problems for the coming years. The Importance of Dark Matter in Globular Clusters There are several a priori reasons why the search for dark matter may be taken seriously. 1. In terms of mass, globular clusters are the next step down from the smallest stellar systems which definitely contain dark matter, the dwarf spheroidals (see Ashman 1992, Pryor 1992). 2. The presence or absence of dark matter in globulars would be a significant piece of evidence in the study of galaxy formation. It would clarify the relation between globulars and their host galaxy, and between globulars and other small stellar systems. 3. Renewed interest in the topic is timely because of the flood of new results on deep mass functions, and the observation of white dwarfs, in several globular clusters (cf. the papers by Piotto and by Fahlman, these proceedings). 4. Hypothetical black hole stellar remnants were invoked by Larson (1984) to account for anomalously high-velocity stars in some clusters. 5. Upper limits on the dark matter content of globulars are needed in order to assess the feasibility of searches by such techniques as microlensing (Griest, pers. comm.). 6. Though the issue seems controversial, there is a possibility that certain types of dark matter might affect the evolution of stars (Renzini 1987, and references in Heggie et al. 1993). 7. The typical masses of globular clusters have a special significance in reasonably standard cosmological models (Peebles 1984, Rosenblatt et al. 1988, West 1993), and Peebles suggested that globulars might contain a reasonably uniform dark matter component. Other theories (e.g. Silk & Stebbins 1993, Moore & Silk 1995) are also relevant to the structure and content of globular clusters, but give a less clear picture of the expected distribution of the dark matter. Searching for Dark Matter in Globular Clusters 3.1.
SEARCH STRATEGIES How one searches for dark matter depends on what it is, and in principle one may consider all the usual suspects: black holes, neutron stars, white dwarfs, low-mass stars, wimps, etc. (see Carr 1994 for a review, especially on baryonic varieties). Certain types of dark matter could be detected from the resulting gamma-ray emission (cf. references in Heggie et al. 1993). On the more conventional side, low-mass stars and brown dwarfs may be detected by a variety of means. Detection and spectroscopy from the ground in the IR may be feasible with 8m class telescopes (Fusi Pecci et al. 1994). For estimates of the usefulness of space-based observations, and of microlensing, see the paper by Longaretti et al. in this volume, and references therein. In the remainder of this paper we consider the traditional model-building strategy. It is the counterpart in globular clusters of the classical Oort technique which revealed the possibility of missing mass in the solar neighbourhood (Oort 1932). In principle one should construct a model of a cluster which is consistent with all relevant dynamical data, including (i) the surface brightness profile; (ii) star counts; (iii) radial velocities; (iv) proper motions; (v) pulsar "spin-up" (see the paper by Kulkarni in these proceedings); and (vi) dynamical evolution. Finally the mass of the model may be compared with that of visible matter in order to determine the amount of dark matter. In practice dynamical models are often constructed from a subset of the above list of data. For example, the commonest models are constructed from the surface brightness profile only. Without kinematic data, however, these do not constrain the mass sufficiently. (The most obvious illustration of this assertion is a single-component model, to which any amount of dark stars of the same mass could be added without altering the surface brightness profile; see also Longaretti et al., this volume.) Though radial velocity data is rather commonly used, it is unusual for data on proper motions to be taken into account, Leonard et al. (1992) being a notable exception. An example of a cluster in which this technique has revealed the presence of dark matter is M71. Richer & Fahlman (1989) obtained a mass-to-light ratio (M/L)_V ≈ 0.57 from star counts out to radius 3.4′, compared with a global value in the range 1-1.4 obtained by dynamical modelling of kinematic data. This result indicates that at least about 50% of the mass of this cluster resides in unobserved components, and Richer & Fahlman concluded, after further modelling, that it was most likely to consist of stars with masses below that of the faintest stars observable in their study, i.e. less than about 0.33 M_⊙. What makes a reexamination of such investigations timely is that it now seems possible to push star counts down to masses closer to 0.1 M_⊙ in some clusters, as the following examples illustrate. Another dark matter candidate which can now be counted convincingly is the population of white dwarfs, or at least the brightest ones. It has already been shown (see the paper by Fahlman, these proceedings) that their numbers correspond nearly to what would be expected from the evolution of stars originally slightly more massive than the present turnoff. Therefore they may be expected to contribute substantially to any missing mass. The following two examples are meant to illustrate the advances in modelling to which these new observational data should lead.
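The M71 comparison implies the dark fraction directly: if the counted stars supply (M/L)_V ≈ 0.57 while the dynamical (total) value is 1-1.4, the unobserved fraction is 1 − 0.57/(M/L)_dyn. A minimal check of this arithmetic:

    # Dark fraction implied by comparing the photometric M/L of counted
    # stars (Richer & Fahlman 1989) with the dynamical (total) M/L.
    ml_counted = 0.57           # (M/L)_V from star counts
    for ml_dyn in (1.0, 1.4):   # range of dynamical estimates
        dark_fraction = 1.0 - ml_counted / ml_dyn
        print(f"(M/L)_dyn = {ml_dyn}: dark fraction = {dark_fraction:.0%}")
    # -> about 43% and 59%, bracketing the "at least about 50%" quoted above.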
In addition, however, it is our aim to illustrate the variety of modelling techniques which are now available. We shall try to show that each has its strengths and weaknesses, and that use of several techniques for the same cluster helps to guard against the risk of drawing model-dependent conclusions. 3.2. EXAMPLE: NGC 6397 As the frequency of its appearance in the papers in this volume shows, this cluster has almost replaced M15 as the classic example of a post-collapse cluster, one reason being its relative proximity. Like several post-collapse clusters, it exhibits population gradients (Djorgovski et al. 1991). According to Aguilar et al. (1988) it is one of the most fragile galactic globular clusters. Anisotropic multi-mass King models for NGC 6397 can be found in Meylan & Mayor (1991), who fitted to radial velocities and the surface brightness profile. An isotropic model was constructed by King et al. (1995), who fitted to star counts and the surface brightness profile, but no kinematic data. This last study furnishes a beautiful example of mass segregation in observational data, and the fit to the surface brightness profile is excellent. There is also a single-component King model given by Da Costa (1979). As already mentioned, deep star counts and kinematic data are important for establishing the existence of dark matter, and so we have constructed a multi-mass isotropic King model which fits the projected radial velocity dispersion profile of Meylan & Mayor (their Table 2), as well as their V-band surface brightness profile and the deep star counts of Paresce et al. (1995a). (Note, incidentally, that there exists some disagreement between different groups [King, pers. comm., and Piotto, this volume] in the counts of the stars of lowest mass.) We used the same mass bins as Meylan & Mayor, and our best fitting model has the global mass function given in Table 1. Other parameters of the model, in standard notation, are W_0 = 12.7, σ² = 1/(2j²) = 12.0 km² s⁻², r_c = 0.19 pc, and r_t = 26 pc. We have checked that the resulting model is consistent with the star counts of King et al. at two radii. The fit to the surface brightness profile is of comparable quality (judged by χ²) to that of the models of Meylan & Mayor. At this point it is worth pausing to consider the limitations and advantages of multi-mass King models. Their principal advantage (and it is a formidable one, which explains their great popularity) is speed. Among their limitations, however, are: (i) Neglect of anisotropy, rotation, flattening. All three pose considerable modelling problems, and even though anisotropy is often included, it is far from clear on dynamical grounds that the usual recipe (i.e. Michie-King models) is at all appropriate. (ii) Approximate dynamical evolution. The lowered Maxwellian distribution was introduced as an approximate solution of the one-component Fokker-Planck equation. For a long time, however, it has been possible to solve this equation by direct numerical methods (see below). (iii) Lack of primordial binaries. These effectively give rise to a small population of bright objects more massive than the turnoff mass. They are usually ignored, though the work of King et al. is a notable exception. (iv) Problems of population gradients. The implication is that the surface brightness profile is sampling different populations at different radii, and in principle this may affect modelling based on the surface brightness profile.
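For reference, the lowered Maxwellian underlying these models can be written explicitly (the standard King 1966 form; here σ is the velocity-scale parameter quoted above, E the relative energy, and W_0 the dimensionless central potential):

    % King (1966) lowered-Maxwellian distribution function; f vanishes
    % at the tidal boundary, where the relative energy E drops to zero.
    f(E) =
    \begin{cases}
      \rho_1\,(2\pi\sigma^2)^{-3/2}\left(e^{E/\sigma^2} - 1\right), & E > 0,\\
      0, & E \le 0,
    \end{cases}
    \qquad W_0 \equiv \Psi(0)/\sigma^2,

where E = Ψ(r) − v²/2 is the relative energy per unit mass and Ψ the relative potential.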
(v) Poor statistical methodology. It has been claimed (Merritt & Tremblay 1994) that astronomy is one of the last disciplines to hold out against the trend towards non-parametric statistics, and the use of parametrised models such as multi-mass King models introduces unquantified biases and other undesirable deficiencies. Many of these drawbacks can be put right, but usually at the cost of computational speed. Dynamical evolution can be handled better by means of Fokker-Planck calculations, and in fact Drukier (1994) has carried out an excellent study of this cluster using this technique. To some extent his preferred models were guided by ground-based faint star counts which have now been superseded, but there is little doubt that modest adjustment of the parameters of his models would be sufficient to restore good agreement. The limitations of the Fokker-Planck method include: (i) time-consuming computations; (ii) neglect of anisotropy, rotation and flattening, though it is now becoming possible to include anisotropy efficiently (see the papers by Takahashi and Giersz in these proceedings), and there has also been a modest recent revival of interest in the Fokker-Planck modelling of rotating clusters; (iii) omission of disk shocking, though this could easily be included at a satisfactory level of approximation; and (iv) omission of primordial binaries, despite their known importance in core bounce and post-collapse evolution (see Hut, these proceedings). They could be included, but the necessary cross sections are not well known, especially for unequal masses, and the level of approximation would be rough. Monte Carlo methods (see Giersz, these proceedings) offer the best prospect here. Comparison between the data derived from different models or different selections of observational data helps to assess the magnitude of the systematic errors in these estimations. The high mass of the model of Meylan & Mayor, for example, may stem from their use of anisotropy, which extends the halo. In any case r_t is difficult to determine for this cluster because of the high background density (Drukier et al. 1993). The interesting number in this table is the proportion of white dwarfs, which we think may be higher than was previously believed. Our model also contains a lower proportion of low-mass stars (in the last six bins in Table 1) than in the models of Meylan & Mayor. At the time when the latter models were constructed the main sequence mass function was poorly constrained. 3.3. EXAMPLE: 47 TUC It can be claimed that conclusions such as this are still too model-dependent, and could be relaxed if some of the specific choices of multi-mass King models (e.g. the choice of distribution function) were altered. One of the aims of non-parametric methods is to avoid this pitfall. Of the four clusters studied non-parametrically by Gebhardt & Fischer (1995), we select 47 Tuc, being one of those for which recent deep star counts have become available. Though it would be desirable to construct new King models taking account of these data, we have not yet done so, and it is interesting to see what conclusions can be drawn from the above non-parametric study alone. The dynamical status of 47 Tuc is a little controversial, though it is commonly assumed to be a high-concentration cluster approaching core collapse. Without doubt it is amongst the most massive clusters. The advantages of non-parametric methods have already been touched upon, and it is worth listing their possible defects.
These include: (i) neglect of anisotropy, rotation and flattening; (ii) neglect of dynamical and physical constraints. The method makes no assumption about the form of the distribution function of the population used to trace the potential, even where it may be well constrained by theory. The method also makes no assumption that the mass density is positive, and in fact the results inferred for 47 Tuc imply that a negative mass-to-light ratio is acceptable at the 90% confidence level in one range of radius. This defect may be related to the previous item (concerning isotropy), as Richstone has pointed out (pers. comm.) that the problem with M/L is avoided if some anisotropy is introduced. (iii) The effect of population gradients. In fact Guhathakurta et al. (1992) have reported the existence of a population gradient in 47 Tuc. As with multi-mass King models fitted to the surface brightness, however, it is not known how important the effect may be. From the results of Gebhardt & Fischer we compute that the mass within a sphere of 7′ projected radius is about 6.7 × 10⁵ M_⊙. This is close to the value of approximately 5.5 × 10⁵ M_⊙ inferred from multi-mass anisotropic King-Michie models fitted by Meylan (1989). The result just mentioned suggests that the differences between these methods may be rather philosophical than substantive. Indeed, one can think of the model-fitting method of Meylan as simply a different way of constructing a rather arbitrary potential. Only the mass bin containing the giants is directly connected to the observations; the others simply give rise to a potential field sufficient to agree with the kinematic data. The heaviest bins govern the potential at small radii, and successive bins build up the potential well at successively greater radii. Now let us consider the surface density at 4.6′, where deep star counts were obtained by De Marchi & Paresce (1995). By summing their mass function and taking into account the field area, we obtained a surface density in counted main sequence stars of about 305 M_⊙/pc². This may be compared with the projected density of all matter which we computed from the non-parametric model of Gebhardt & Fischer; this value is approximately 1100 M_⊙/pc², with a lower limit of about 770 M_⊙/pc² at 90% confidence. At face value these data imply a substantial fraction of "missing mass", and we immediately mention some possible explanations. Giants: these were excluded from the counts of De Marchi & Paresce. The mean mass of the stars in their most massive bin was about 0.75 M_⊙, and we estimate that inclusion of stars between this bin and turnoff (≈ 0.9 M_⊙, cf. Hesser et al. 1987) would increase the surface density to about 385 M_⊙/pc². This implies that the proportion of the projected mass unaccounted for is still at least 50%, and it could be as high as about 70% (Table 3). Low-mass stars: all the remaining mass could be accounted for if, below the least massive stars counted, the mass function varies as dN(m) ∝ m^(−1−x) dm with x ≈ 0.6. Though De Marchi & Paresce find that the mass function flattens at low masses, this conclusion is controversial (see the paper by Piotto, this volume). M/L relation: this is controversial for stars of low mass, and errors here can lead to large differences in the inferred mass function. Note, however, that the surface density is obtained from the magnitude distribution N(m) by the integral ∫ M(m) dN(m), where M is the mass of a star of magnitude m.
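To see how sensitive this budget is to the assumed low-mass behaviour, one can integrate the power-law extension m·dN(m) below the counting limit. A rough sketch, in which the mass limits and unit normalisation are illustrative assumptions rather than values from the studies cited:

    from scipy.integrate import quad

    def mass_in_range(x, m_lo, m_hi, norm=1.0):
        """Mass in dN/dm = norm * m**(-1 - x) between m_lo and m_hi (Msun)."""
        total, _ = quad(lambda m: norm * m ** (-1.0 - x) * m, m_lo, m_hi)
        return total

    # Illustrative: mass hidden below an assumed counting limit of 0.3 Msun
    # (down to 0.1 Msun) relative to an assumed counted range 0.3-0.8 Msun.
    x = 0.6
    hidden = mass_in_range(x, 0.10, 0.30)
    counted = mass_in_range(x, 0.30, 0.80)
    print(f"hidden/counted mass ratio = {hidden / counted:.2f}")  # ~0.74 for x = 0.6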
Several M/L relations are plotted by De Marchi & Paresce, and indicate that the resulting uncertainty in the projected mass density is at most 0.1 dex. Completeness: De Marchi & Paresce claim that, even at the faintest bin, their counts are at least 67% complete. This has been corrected for in the data which we used to compute the projected density. White dwarfs: previous estimates of the mass fraction in all white dwarfs are given in Table 3. Though the first five are global estimates, it seems unlikely that at the radius of these observations (about 0.7 half-mass radii) mass segregation could produce a much larger proportion of white dwarfs. Now that white dwarfs can be counted in 47 Tuc (Paresce et al. 1995b) it is worth reexamining the sorts of models listed in Table 3 to check whether the population of white dwarfs could not perhaps be rather more significant than was previously thought. The result could also illuminate the still rather controversial evidence on the presence of cataclysmic variables in clusters such as 47 Tuc. Conclusions We wish to emphasise that all kinds of dynamical models are useful in this kind of study. All have some advantages, but the list of their limitations is depressingly long. Studying the same cluster by different techniques is an important way of guarding against some of these. Based on such methods, the studies of the two clusters on which we have concentrated suggest that a large fraction of their mass, around 50%, is still unobserved. It is not implausible, however, that all of this can be accounted for by white dwarfs or low-mass stars, and so we consider that this represents a generous upper limit on the mass fraction of other, more exotic forms of dark matter in these two clusters. In other words, though there are several good reasons for studying the dark matter problem in globular clusters, it is first necessary to improve our knowledge of the abundance of white dwarfs and low-mass stars. At present it is not even clear which of these might dominate. But the time is ripe for renewed study of these low-luminosity components, thanks to the wealth of new observational data.
2014-10-01T00:00:00.000Z
1995-11-24T00:00:00.000
{ "year": 1995, "sha1": "255392bcdba81480bb27845c8c2e7294de4b737f", "oa_license": null, "oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/E1CDD1B75A0692DB9957525EB51C7D95/S0074180900001649a.pdf/div-class-title-dark-matter-in-globular-clusters-div.pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "be079f11a04cf90002bd641f1fa75f5e67f488ce", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
220721416
pes2o/s2orc
v3-fos-license
Long-term sickness absence among young and middle-aged workers in Norway: the impact of a population-level intervention Background The study objective was to evaluate the impact of a population-level intervention (the IA Agreement) on the one-year risk of long-term sickness absence spells (LSAS) among young and middle-aged workers in Norway. Methods Using an observational design, we conducted a quasi-experimental study to analyse registry data on individual LSAS for all employed individuals in 2000 (n = 298,690) and 2005 (n = 352,618), born in Norway between 1967 and 1976. The intervention of interest was the tripartite agreement for a more inclusive working life (the IA Agreement). We estimated the difference in pre-post differences (DID) in LSAS between individuals working in companies with the intervention (IA companies) and companies without, in 2000 and 2005. We used logistic regression models and present odds ratios (DID OR) with accompanying 95% CI. We stratified analyses by sex, industry and company size. Results We found no significant change in the overall risk of long-term sickness absence spells after implementing the intervention among young and middle-aged workers. Stratified by sex, the intervention resulted in a slight decrease in LSAS risk among female workers (DID OR 0.93 (0.91-0.96)), while the intervention showed no impact among male workers (DID OR 1.01 (0.97-1.06)). We found that companies signing the IA Agreement were large (≥50 employees) and often within the manufacturing and health and social sectors. In large manufacturing companies, we found a reduction in LSAS among workers both in companies with and without the intervention, resulting in no statistically significant impact of the IA intervention. In large health and social companies, we found an increase in LSAS among workers both in companies with and without the intervention. The increase was smaller among the workers in companies offering the IA intervention compared with workers in companies without, resulting in a positive impact of the IA intervention in the health and social industry. This impact was statistically significant only among female workers. Conclusions The results indicate that the impact of the IA Agreement on the risk of long-term sickness absence spells varies considerably depending on sex and industry. These findings suggest that reducing LSAS may warrant industry-specific interventions. Background Sickness absence (SA) remains a significant problem globally, with a financial and health burden for societies and individuals. In Norway, the level of SA is considered relatively high compared to neighbouring countries [1]. Consequently, reducing SA is an important political objective, and initiatives with this aim have received significant attention. One such initiative was introduced in 2001, when the employer organizations, employee organizations, and the Government signed the Tripartite Agreement for a More Inclusive Working Life (the IA Agreement). The IA Agreement's three main goals were to reduce SA, secure recruitment of people with disabilities and vulnerable groups into the labour market, and prolong working life [2]. The IA Agreement can be considered a population-level "intervention", where companies voluntarily signed an IA Agreement with the Norwegian Labour and Welfare Administration (NAV). Signing an IA Agreement granted access to consulting services and subsidies to assist the companies' work on reducing sickness absence and increasing work participation.
This "intervention" is a large societal investment constituting a major cost, as the budget for one year of these services has been estimated to be 38 million euros [3]. The Agreement has been renewed five times, most recently in December 2018 for the period 2019-2022. Its latest renewal entails an expansion, meaning that the IA Agreement comprises all of Norway's companies and employees. It also entails a clearer focus on reducing SA and decreasing the withdrawal from work life. The agreement also states that the efforts should target long-term and recurring sickness absence spells. Despite its latest expansion and its high cost, there is still a lack of effect-focused studies on its impact [4]. There have been previous non-scientific evaluations of the intervention, but these have seldom taken into account the heterogeneity between companies with and without the IA Agreement, such as company size and industry [3,5,6]. Moreover, the few scientific studies assessing the IA Agreement's effectiveness have given contradictory results. Two studies showed no impact on SA [7,8], while more recent studies, using causal inference methods, found positive impacts with a higher probability of returning to work [9] and a lower likelihood of receiving a full disability pension [10]. These studies are, however, done on selected samples, such as participants in work rehabilitation [9] and older employees aged 50-61 [10], highlighting the need for studies on other samples/cohorts. In the present observational study, we have access to data on all individuals born in Norway between 1967 and 1976, their individual-level SA, and the IA status of the company they work in during the pre- and post-intervention periods (2000 and 2005). These data provide an opportunity to examine the IA Agreement's impact on achieving the goal of reducing SA in a young/middle-aged population of workers (age span 24-38 years during follow-up). This is an especially important group to examine, as SA early in working life is associated with later SA and thus lower work participation [11]. We have chosen to examine long-term sickness absence spells as these constitute the largest part of sick leave in Norway [12], increase the risk of withdrawal from work life and have a large impact on social expenditure. By linking individual-level data from several registers with company-level data on the IA status and using a quasi-experimental design, our aim was to evaluate the impact of this population-level intervention on long-term sickness absence spells (LSAS). The assignment of companies and workers into the IA Agreement or not was outside the control of the study and was registered retrospectively. We performed analyses stratified by sex, company size and industry, as previous evaluations showed that the distribution of company size and industry varies by IA status [3,13], and that SA varies substantially between sexes and industries [14]. Distinguishing between possible different impacts of the intervention by industry and company size may assist future prioritizations of certain groups. Methods The IA Agreement in Norway, first introduced in 2001, is the population-level intervention of interest in this study. Using an observational design, we conducted a quasi-experimental study following recent guidelines on evaluating population health interventions [15].
The intervention group (individuals working in companies with an IA Agreement) was compared with the control group (individuals working in companies without an IA Agreement) regarding their pre-post differences in LSAS in the study periods 2000 and 2005. Data source The data material comes from a cohort consisting of all individuals born in Norway between 1967 and 1976 (n = 626,928). They have been identified through the national identification number in the Medical Birth Registry of Norway, allowing for data linkage across several national registries. Background characteristics and data on SA and industry were available from Statistics Norway's events database on employment and welfare, FD-Trygd [16]. National registries are updated either on an annual basis, or are event-history databases, in which case we have the precise dates of the events. In addition, we have obtained annual data from the registries of NAV on the companies' size and the companies' annual IA status. It was possible to link this information to the individuals' employment in a company with or without the IA Agreement. Design and population Figure 1 illustrates the design and a flowchart of the source population (n = 626,928). The study population consisted of those employed and not on SA at the beginning of the year (2000: n = 298,690, 2005: n = 352,618). We compared the annual risk of LSAS in the pre period (year 2000) and the post period (year 2005) between those who worked in a company with an IA Agreement (intervention group) and those who worked in companies without an IA Agreement (control group). The intervention period ranged from January 2001 to December 2004. Only those employed and not on SA on 1st January 2000 were included in the pre period (N = 466,538). Individuals were excluded if they: (i) worked in a company with missing information on IA status, (ii) were working in a company that had signed the Agreement after 2003, or (iii) had left the Agreement prior to 2005. Of the initial population, 36% (N = 167,848) were excluded according to these criteria. Our independent variable measured whether the participants worked in a company that had signed the IA Agreement in the period 2001-2003. We considered signing the Agreement in 2004 or later as too late in terms of evaluating the impact on LSAS in 2005. Signing an IA Agreement gives the companies access to funding of both individual interventions, to retain employees at work during illness, and group-based preventative interventions. The interventions include opportunities for tighter and earlier follow-up of workers with LSAS, increased use of graded sickness certification, and access to subsidies for work adjustments. Furthermore, IA companies have access to "NAV working life centres" that assist companies in their strategic work related to the goals of the IA Agreement [1]. Thus, signing the IA Agreement can be seen as a proxy for initiating specific interventions to prevent LSAS among the company's employees. The NAV data enabled classification of the treatment and control groups by identifying employees working in companies with and without an IA Agreement on an annual basis. The IA Agreement status (intervention assignment) was therefore not at the discretion of the researcher. Outcome (long-term sickness absence spells) LSAS data were obtained from the event database FD-Trygd, which records all physician-diagnosed SA spells lasting > 16 calendar days (LSAS).
Employees in Norwegian companies receive full salary from the employer during certified sickness absence. NAV reimburses the employer for absences lasting > 16 calendar days, and these absence spells are registered in the database; therefore, registration is considered to be complete for employees. We obtained individual records on long-term sickness absence spells (> 16 days) for the periods January 1st to December 31st of 2000 and 2005. We estimated the risk of having one or more long-term sickness absence spells during the one-year period among those who were at risk at the beginning of the year, for 2000 and 2005, respectively (cumulative incidence). There are many ways to measure SA, the most common being workdays on sick leave as a proportion of expected workdays [6]. This measure is used when presenting the national statistics on SA. However, in an epidemiological framework, the proper quantification depends strongly on the aim and study sample. Five valuable measures have, however, been proposed in a review on how to measure SA, and cumulative incidence is one of these [17]. We chose to use cumulative incidence because the same measure of SA has been used in earlier scientific papers evaluating the IA Agreement's impact on sickness absence [7,8], thus enabling easier comparisons of results. Our choice was also based on a belief that LSAS could capture changes resulting from the IA interventions, which aim to reduce long-lasting and recurring sickness absence spells, and may lead to shorter (< 16 days) and fewer SA spells. Covariates Data on sex and year of birth were obtained from the Medical Birth Registry of Norway. Information on industry was obtained from Statistics Norway's FD-Trygd database on an annual basis and was coded according to the Standard Industrial Classification 2002 [16]. This classification has a hierarchical, top-down structure that begins with general characteristics and narrows down to more specific job areas. The first two digits of the code represent the major industries to which a company belongs [18]. In this study we have classified companies according to these major industry groups. Data on company size were obtained from the registers of NAV on an annual basis and divided into three different groups: (i) small companies (0-10 employees), (ii) medium-sized companies (11-49 employees) and (iii) large companies (> 50 employees). Statistics We estimated the one-year risk of one or more long-term sickness absence spells during the pre-intervention period (year 2000) and post-intervention period (year 2005). No information was available on the specific interventions used in the companies; thus, our analysis evaluates the total impact of working in a company that has signed the Agreement and possibly started offering workplace interventions to reduce employees' LSAS. We used the difference-in-difference (DID) method, which can account for fixed unobserved individual differences. The counterfactual method inherent in DID allows us to evaluate the LSAS for the control group if they had hypothetically received the treatment/intervention, and is thus a reflection of the change in the intervention group due to the intervention. We calculated pre-post differences between the intervention and the control group. The impact of the IA Agreement on LSAS was estimated as the difference in pre-post differences (differences between 2005 and 2000) between the IA and non-IA groups, using logistic regression models [19].
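A minimal sketch of the cumulative incidence calculation described above, with hypothetical column names (these are illustrative, not the actual FD-Trygd/NAV variable names):

    import pandas as pd

    # One row per person-year for individuals at risk on 1 January.
    df = pd.DataFrame({
        "year": [2000, 2000, 2000, 2005, 2005, 2005],
        "ia":   [1, 0, 0, 1, 0, 1],   # employer signed the IA Agreement
        "lsas": [1, 0, 1, 1, 0, 0],   # >= 1 spell > 16 days during the year
    })

    # One-year cumulative incidence of LSAS by intervention status and period.
    risk = df.groupby(["ia", "year"])["lsas"].mean()
    print(risk)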
The odds ratios (DID OR) with accompanying 95% confidence intervals (CI) are presented, and the statistical significance level was set at a p-value < 0.05. An OR value < 1.0 indicates that the additive difference in LSAS risk between 2005 and 2000 (risk_2005 − risk_2000) in the intervention group is lower than the corresponding figure in the control group. This is referred to as a positive impact of the intervention on LSAS. The DID method has a strong common trend assumption, namely that the intervention and control groups would have followed the same trajectory had the intervention not taken place. This was checked by comparing LSAS with common trend graphs in a period prior to the intervention (1998-2000), and the assumption was considered satisfied (see Supplementary Figure 1). Based on earlier studies evaluating the IA Agreement, we know that it was unlikely that the intervention and control group would be similar in all manners. We therefore stratified the analyses by sex, industry and company size. To ensure statistical power in the adjusted analyses, only industries with > 2000 employees in each group (control and intervention) for both years (2000 and 2005) were included. This led to subgroup analyses in manufacturing, construction, wholesale/retail, transport/storage, financial/real estate, public administration, education and health/social work. STATA/SE 14.0 software was used for the analyses (STATA Corporation, College Station, Texas, U.S.A.). The study was approved by the Regional Committees for Medical and Health Research Ethics (REK).
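The industry-inclusion rule for these stratified analyses can be sketched as follows; this is an illustrative reconstruction with assumed column names, not the study's own code.

```python
# Sketch of the eligibility rule described above: keep only industries
# with more than 2000 employees in BOTH the intervention and control
# groups in BOTH years. Column names are illustrative assumptions.
import pandas as pd

df = pd.read_csv("employees.csv")  # hypothetical input file

counts = df.groupby(["industry", "ia", "year"]).size().unstack(["ia", "year"])
eligible = counts.index[(counts > 2000).all(axis=1)]
df_strata = df[df["industry"].isin(eligible)]
```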
Results

Table 1 shows that the mean age of employees in companies with and without an IA Agreement was similar; however, the two groups differed on important variables such as sex, company size and industry. A higher proportion of employees in companies with an IA Agreement were female (58%) and working in large companies (70%). They were most commonly employed in health/social work (35%), manufacturing (15%) and education (14%). Employees in the control group, on the other hand, were most commonly employed in wholesale/retail (26%), financial and real estate (18%) and manufacturing (14%). They were more evenly distributed between small (28%), medium-sized (37%), and large companies (34%). The two groups also differed when it came to LSAS, as the employees in companies signing the IA Agreement had a 3 percentage point (PP) higher one-year risk of LSAS prior to the intervention compared with employees in control companies (16.9% and 13.9%, respectively). See Table 1 for more study population characteristics. Figure 2a and b show the risk of LSAS in the control and intervention groups for women and men, respectively, for the periods pre and post the IA Agreement. From 2000 to 2005, we found an overall increase in LSAS among women in both the intervention (21.4 to 22.8%) and control group (18.0 to 20.4%). Men showed no changes in LSAS in either the intervention (10.8 to 10.5%) or the control group (10.6 to 10.1%). Among women, accounting for the differences in LSAS at baseline, and despite the increase in LSAS for both intervention and control group, a significant positive impact of having an IA Agreement was found (OR 0.93, CI 0.91-0.96). Among men, however, when taking into account the differences in the LSAS at baseline, no significant impact of the IA Agreement was found (OR 1.01, CI 0.97-1.06). Figure 2c and d show the risk of LSAS in the control and intervention group for women and men, respectively, by company size. As SA increases with age, we conducted supplementary analyses where age was adjusted for in the overall results to account for possible effects of being four years older. In these supplementary analyses, the risk of LSAS is more similar in the pre and post period, and the impact of the IA Agreement remained the same, although slightly reduced. Figure 2c suggests that there is a positive impact of the IA Agreement for women in large and medium-sized companies; even though the risk of LSAS increases in both the control and intervention groups, it increases more in the control group than the intervention group. For men, a negative impact of the IA Agreement was found in large companies; even though the risk of LSAS decreases in both the control and intervention groups, it decreases more in the control group than the intervention group. Our evaluation of the impact of the Norwegian Agreement for a More Inclusive Working Life shows no significant overall impact on the risk of LSAS after implementing the intervention. In this period, from 2000 to 2005, we found an overall increase in the risk of LSAS in both the intervention (16.9 to 17.6%) and control group (13.9 to 14.4%). The results, however, show that the impact of the implemented intervention varied by industry and company size. In Fig. 3.1 and 3.2, we present the estimated impact of the IA Agreement for large companies in eight selected industries, for men and women separately. This choice is based on the available sample size; see Supplementary Tables 1 and 2 for estimates for all company sizes. For women in large companies within the health and social work, financial/real estate and public administration sectors, we found a positive impact of the IA Agreement. For men in large companies, we found a positive impact of the IA Agreement within the health and social work, wholesale/retail and transport sectors. No impact was found within large companies in manufacturing, construction or education. In Table 2, we present the impact of the IA Agreement adjusted for company size and industry. For women, the Agreement had an overall significant positive impact, decreasing the one-year risk of LSAS by 1.3 PP. For the different industries, our results indicate a positive impact of the IA Agreement among female workers in large companies in the public administration sector (11.1 PP decrease in LSAS risk) and female employees working in the health and social work sector (ranging from 2.3 PP for employees in medium companies to 3.2 PP for employees in large companies). For male workers, our overall results show no impact on the one-year risk of LSAS. Among men, the only significant positive impact of the IA Agreement was found among workers in large wholesale and retail companies (2.6 PP decrease in LSAS risk). There was a significant negative impact in large companies in the construction and public administration sectors (3.1 PP and 2.1 PP increase in LSAS risk, respectively).

Discussion

We found no significant overall impact of implementing the IA Agreement on the risk of long-term sickness absence spells. When stratifying by sex, there was an overall positive impact of the IA Agreement among female workers, whilst no effect was found among male workers. Companies signing the Agreement were more likely to be large (≥ 50 employees) and were more often within the manufacturing and health and social work sectors.
In large manufacturing companies, there was a statistically significant reduction in the risk of LSAS among both male and female workers after the implementation of the IA Agreement. As this was found in both intervention and control companies, it indicates no impact of the IA intervention. In large health and social companies there was, in contrast, an increase in the risk of LSAS after the introduction of the IA Agreement. The increase was lower in the intervention group than in the control group, resulting in a positive impact of the actual IA intervention. This pattern was mainly evident among female workers in large health and social companies. In sum, the results indicate that the impact of the IA Agreement on the risk of LSAS varied considerably depending on sex, industry, and company size.

Methodological considerations

One of the strengths of this study is the use of statistical analyses that take into account the difference in LSAS pre and post intervention. Difference-in-difference analysis is a causal inference method that can be applied to counter selection bias and confounding. The large study population also made it possible to stratify by sex, industry and company size. This stratification, in combination with the DID analyses, could therefore reduce the bias and confounding that may result from the significant differences in the distribution of company size, industry and sex between the intervention and control groups. This difference in the distribution of the IA Agreement by industry and company size was, however, a challenge in some strata where groups were small, leading to less robust estimates. In addition, self-selection bias may be an issue, as we observed that the employees in companies signing the IA Agreement had a higher risk of LSAS prior to the intervention compared with employees in control companies. This might challenge the key assumption of exogeneity in DID, which posits that selection into the intervention (the IA Agreement) should not be predicted by the outcome (LSAS) prior to the intervention. However, there is no evidence that individuals choose where they work based on the company's IA status, and companies do not base their choice of signing the IA Agreement solely on prior sickness absence levels [20]. It is also worth mentioning that the NAV Working Life Centres had a recruitment campaign which mainly targeted companies with high SA, and we cannot rule out that this may have influenced the risk of bias. Difference-in-difference analysis is often used to counter selection bias between the intervention and control group, including for repeated cross-sectional data [21]. The DID method can therefore account for changes within the groups over time, as long as the change is the same in both the intervention and control groups. Age is an example of this in our study, as both groups age by 4 years (see Table 1). This change should therefore not influence our results, as the effect of age is assumed to be equal for both groups. This is supported by the supplementary analyses we conducted, in which the risk of LSAS was higher but the direction of the impact of the IA Agreement remained the same, although slightly reduced (see Supplementary Figure 2). One of the many challenges in evaluating the IA Agreement's impact on LSAS was that we did not have data on exactly when the IA Agreement was signed by a specific company (only yearly data). Nor did we have data on when they started introducing the different preventive measures.
Other studies have shown, however, that most companies signing the Agreement increased their effort to lower SA following the start of the IA Agreement in 2001, regardless of the date they formally signed the Agreement [5]. Even so, the specific preventive actions or activities they may offer are not available in our data, preventing us from evaluating the possible differential impact of activities on SA. To address this, we have used company size as a proxy measure, as it can indicate differences in the means or resources available for implementing the IA activities, applying for grants and so on. Larger companies may have more resources available to make use of these opportunities. Sickness absence is multi-factorial and influenced not only by the individual's health status and work environment, but also by the regulations in the welfare system. A challenge often encountered when evaluating population-level interventions is other "interventions" outside the scope of the evaluation that affect the outcome. During the intervention period in our study, a sick leave reform was introduced. The reform was implemented in 2004, and a key element was the activity requirement. It required an individual to engage in work-related activity as early as possible (at the latest within eight weeks) in order to be entitled to sick pay; the only exception was when medical reasons clearly prevented such activity. Another limitation of this study is the focus on LSAS alone as the outcome, as the IA Agreement incorporates three goals that might influence each other. Reducing SA is only one of the goals; the two others are to secure the recruitment of people with disabilities and vulnerable groups into the labour market and to prolong working life. These three goals may affect each other, as a company that increases the recruitment of people with disabilities and prolongs the working life of older workers might also experience increased SA as a result. It is therefore possible that we underestimate the impact of the intervention on LSAS for companies with high goal attainment on the inclusion of disabled workers, but this is unknown. It is therefore important to bear in mind that our results evaluate only one of the goals of the IA Agreement.

Our results in light of other findings

Our finding that the impact of the IA Agreement on LSAS varied considerably depending on sex and industry contributes to the limited literature on the impacts of this population-based intervention. Evaluations of the IA Agreement have been published in some reports without peer review [5,6,22,23], and have indicated a positive impact of the IA Agreement in manufacturing, as this sector shows a decrease in LSAS after implementation of the intervention. Earlier reports have also indicated a negative impact of the intervention in the health and social work sector, as LSAS increased in this sector in the same period [5]. Our results were therefore somewhat surprising, as we found the opposite: there was no impact of the IA Agreement in manufacturing, and a positive impact in health and social work. These contradicting results can be explained by the fact that we use an analytical method that takes into account the intervention and control group differences, both before the IA Agreement was implemented and 4 years after.
This explanation is strengthened by the fact that we obtain similar findings (a decrease in LSAS for employees in manufacturing companies and an increase in LSAS for employees in health and social work companies) when we use the same statistical methods as in the reports, without the DID method. It is also important to bear in mind that how sickness absence is estimated can lead to contrasting results when interpreting the impact of the IA Agreement. We evaluated the impact on the change in the one-year risk of long-term sickness absence spells. We cannot rule out the possibility that a different result might have been obtained with a different measure, such as the mean duration of sickness absence spells, the annual number of spells or the use of graded sickness absence spells. However, other scientific papers on the IA Agreement use the same measure of SA as this study, enabling easier comparison of results. A study from 2011 on the impact of the IA Agreement on sickness absence [7] shows results similar to ours, namely no impact in the overall sample. In contrast, however, we find that the impact varies by industry. This may partly be explained by the considerably smaller sample in the 2011 study, which impedes stratification by industry. In their analysis, they also used office workers as a reference, and did not include information on company size or take into account the baseline difference in sickness absence. Another study, by Midtsundstad et al. from 2014, showed, on the other hand, a positive impact of the intervention on overall sickness absence and a varying effect by industry [8]. They used the same DID method as in our study and found a positive impact on sickness absence among IA companies in the public administration sector. This is partially in line with our findings; however, in our study, the positive impact in the public administration sector was only found among women and varied according to company size. They also did not find positive impacts of the intervention for manufacturing, construction and transport; although the overall sickness absence decreased in manufacturing, it was not due to the intervention, which is also similar to our findings. This may indicate that the decrease in sickness absence is related to factors other than the IA Agreement, for example company characteristics or the focus on sickness absence and work environment in the manufacturing sector as a whole, resulting in an impact for all companies and not only those signing the IA Agreement. We also found a positive impact among female health and social workers in medium and large companies, which was not found in the other study [8]. A potential reason for this discrepancy may be that they focused on older workers (aged 50+) whilst our study included younger and middle-aged workers (aged 25-34). Beyond these two studies, there is little scientific knowledge on the impact of the IA Agreement on sickness absence that also takes into account the possible differential impacts by industry. Other scientifically based evaluations of the IA Agreement have been carried out, largely reporting a positive impact; however, they have evaluated other outcomes, such as disability benefits [10] and return to work after rehabilitation [9], and are therefore not comparable to our study.
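To make the distinction between these sickness-absence measures concrete, here is a minimal sketch, with invented toy numbers, of how the measure used in this study and two of the alternatives mentioned above could be computed from spell-level records.

```python
# Toy illustration of three sickness-absence measures discussed above,
# computed from hypothetical spell-level records; all numbers invented.
import pandas as pd

spells = pd.DataFrame({
    "person_id": [1, 1, 2, 3],      # person 1 had two spells this year
    "days":      [20, 35, 17, 60],  # spell durations (all > 16 days)
})
n_at_risk = 100  # hypothetical number at risk on 1 January

cum_incidence = spells["person_id"].nunique() / n_at_risk  # measure used here
mean_duration = spells["days"].mean()                      # alternative 1
spells_per_person = len(spells) / n_at_risk                # alternative 2
print(cum_incidence, mean_duration, spells_per_person)
```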
Implications

The present study strengthens the evidence that the impact of the IA Agreement on the risk of long-term sickness absence spells varies greatly between industries and company sizes. Very few clear implications can be given based on this study, as we do not have data on the preventive measures used in the different sectors. The varying impact by industry may, however, imply that the IA Agreement's measures (such as close follow-up of those on sick leave and adjusting work tasks to enable the employee to work even when sick) suit the needs of some industries and not others. For example, in some industries the possibilities to adapt work tasks are more limited than in others, such as for manual workers in manufacturing or construction compared with office workers in public administration. These measures may therefore not be suitable for all industries. It is also evident that the impact varies greatly depending on the size of the company, which may imply that larger companies have more resources available to make use of and benefit from the IA Agreement's activities compared to smaller companies. Based on this, there is an evident need for a greater focus on industry-related exposures and on preventive measures addressing the specific challenges in each industry. Previous studies have found that 23-28% of long-term sickness absence is attributable to work-related exposures [24,25]. This indicates that interventions targeting the work environment can be considered an important method for decreasing sickness absence. According to the surveillance of the work environment for the working population in Norway [14], health and social workers face challenges in terms of high emotional demands, role conflicts, job strain, unwanted sexual attention, violence and threats, night work, neck bending, and awkward lifting. In contrast, manufacturing workers have more exposure to noise, vibrations, awkward lifting, squatting/kneeling, downsizing, and job insecurity. This may indicate that reducing sickness absence in these two sectors would warrant different preventive strategies and actions.

Conclusion

This study provides knowledge on the differential impacts of the IA Agreement on the risk of long-term sickness absence spells by sex, industry and company size. The results indicate that reducing LSAS may warrant industry-specific interventions. The available data, however, give us no way to determine which of the IA activities are effective. Future research should therefore obtain data that more precisely reflect IA activity at the company level and focus on evaluating specific preventive measures, as some measures may have more impact on LSAS than others.
Micro/nano biomedical devices for point-of-care diagnosis of infectious respiratory diseases

Corona Virus Disease 2019 (COVID-19) has developed into a global pandemic in the last two years, causing significant impacts on daily life in many countries. Rapid and accurate detection of COVID-19 is of great importance to both treatment and pandemic management. To date, a variety of point-of-care testing (POCT) approaches and devices, including nucleic acid-based tests and immunological detection, have been developed, and some of them have been rapidly rolled out for clinical diagnosis of COVID-19 due to the requirement of mass testing. In this review, we provide a summary of and commentary on the methods and biomedical devices innovated or renovated for the quick and early diagnosis of COVID-19. In particular, micro and nano devices with miniaturized structures, showing outstanding analytical performance such as ultra-sensitivity, rapidity, accuracy and low cost, are discussed in this paper. We also provide our insights on the further implementation of biomedical devices using advanced micro and nano technologies to meet the demands of point-of-care diagnosis and home testing and thereby facilitate pandemic management. In general, this paper provides a comprehensive overview of the latest advances in POCT devices for the diagnosis of COVID-19, which may provide insightful knowledge for researchers to further develop novel diagnostic technologies for rapid and on-site detection of pathogens, including SARS-CoV-2.

Introduction

The ongoing outbreak of COVID-19 has caused a global pandemic with considerable morbidity and mortality [1-4]. Without timely diagnosis and treatment, some COVID-19 patients may develop worse symptoms, including pneumonia and acute respiratory distress syndrome, or even die, especially seniors above 50 years old [5,6]. The fight against COVID-19, among all infectious diseases caused by viruses, remains challenging, despite tremendous efforts and technical advances in public healthcare [7]. Although various medicines and vaccines have been proved effective against the disease [8-10], advanced techniques for rapid and accurate detection of the virus still contribute greatly to controlling viral spread and enabling early treatment [11-13]. SARS-CoV-2 belongs to the betacoronavirus genus; it comprises a nucleocapsid (N) protein associated with a single-stranded positive-sense RNA (29,881 nucleotides, the genetic material) and three structural surface proteins: the membrane (M), the spike (S) and the envelope (E) proteins (Fig. 1a) [14-18]. It has been demonstrated that SARS-CoV-2 is infectious in humans, animals, and herds [19]. Compared with other emerging viruses that have caused wide epidemics in recent years, such as Middle East respiratory syndrome coronavirus (MERS), severe acute respiratory syndrome coronavirus (SARS-CoV), Ebola virus and Zika virus, SARS-CoV-2 is spreading at a significantly higher rate and over a wider range [20-24], making it even more difficult to control. The major symptoms of infected patients include neurological and respiratory conditions, such as fever, muscle pain, tiredness, cough and shortness of breath [25-27]. Unfortunately, these symptoms are not specific enough for the diagnosis of the infection [28]. In the past few months, numerous scientific teams and companies have reported methods for COVID-19 detection [29-33].
In terms of their working principle, the major diagnostic methods include immunoassay-based methods for the detection of antibodies in blood serum [34-37] and nucleic acid testing (NAT) for direct determination of the virus in nasal/throat swabs (Fig. 1b) [38,39]. The characteristics of some frequently used NAT and immunoassay-based methods are summarized in Table 1. Thanks to the advantages of robust and sensitive assays, NAT has undergone tremendous development alongside technical innovation in molecular biology and biomedical engineering, and it is currently the gold standard for COVID-19 diagnosis [40,41]. NAT methods can identify and detect trace amounts of the specific viral genomic sequence with various forms of amplification, e.g. reverse transcription polymerase chain reaction (RT-PCR). In terms of the mechanisms underlying the identification and amplification of the targeted nucleic acids, three classes of methods are currently adopted: thermo-cycling-based amplification, isothermal amplification and CRISPR-based methods (Fig. 1c) [42-44]. NAT is generally very sensitive, but it needs a central laboratory and well-trained technicians to operate the experiments and interpret the data. Alternatively, immunoassays can provide information concerning active viral infections as well as past exposures [45]. The basic mechanism of an immunoassay is to detect the specific antibodies against SARS-CoV-2 produced by the immune response in blood serum [46], particularly immunoglobulin M (IgM) and immunoglobulin G (IgG) [47]. As a result, detecting IgG and IgM antibodies against SARS-CoV-2 is a feasible way to indicate infection [48]. Furthermore, because a large portion of the population has been vaccinated, it is important to distinguish between actual infection and vaccination. According to the suggestions of the China Food and Drug Administration (CFDA), a combination test of IgM antibodies against the S protein and the N protein can help. Specifically, a person with a positive IgM antibody test against the S protein needs to take an extra test against the N protein. If both tests are positive, the antibodies were induced by an actual infection; otherwise, the antibodies were induced by vaccination. Usually, SARS-CoV-2-triggered antibodies can be detected as early as the 3rd day and decrease gradually as immune responses proceed in the patient's body. The antibodies generally become undetectable in about two weeks (Fig. 1d) [49]. Typically, IgM antibodies appear in serum samples at an earlier stage than IgG does.

Fig. 1 (c) An overview of RNA detection methods, including reverse transcription polymerase chain reaction (RT-PCR), isothermal amplification methods and clustered regularly interspaced short palindromic repeats (CRISPR)-based methods [7]; Reproduced with permission. Copyright 2020, BioRxiv. (d) Schematic illustration of concentrations of the IgG and IgM antibodies in individuals once infected with SARS-CoV-2: IgM antibodies are the first to appear in serum samples and are detectable as early as 3 days after infection. The concentration of IgM peaks between 2 and 3 weeks. IgG antibodies come after IgM but last longer; they peak after 2 weeks [49]. (e) Schematic detection mechanisms of IgG and IgM based on lateral flow immunoassays and enzyme-linked immunosorbent assays.
To date, a wide variety of immunoassays have been developed to detect SARS-CoV-2 antibodies, including the enzyme-linked immunosorbent assay, the chemiluminescent immunoassay [50] and even lateral flow immunoassays (Fig. 1e) [51]. Compared with NAT, these immunoassays provide more convenient and rapid detection of SARS-CoV-2 antibodies in human serum or blood without the need for biosafety laboratories, which also makes them suitable for epidemiological surveillance of COVID-19. Clinically, chest computed tomography (CT) [52,53] and transmission electron microscopy (TEM) [54] also assist in the evaluation and diagnosis of symptomatic patients. Nevertheless, these instruments are unaffordable in most undeveloped countries. So far, notable reviews about SARS-CoV-2 have been published, with special focus on its origin [55,56], transmission [57,58], clinical features [59-61], and even treatment methods [62-64]. However, a timely review comprehensively summarizing the advancements of rapid diagnostic platforms for point-of-care testing (POCT) of SARS-CoV-2 remains lacking, especially for those with performance improved by micro and nanotechnologies. In this review, we summarize the emerging micro-/nano-scale biomedical devices applied for detecting SARS-CoV-2 in the last two years, which can provide fast turn-around, sample-to-answer assays. In particular, such micro-nano devices may enable point-of-care testing and home diagnosis to relieve the burden on central hospitals [65]. We hope this review offers insight into the methodologies used in developing advanced devices for point-of-care detection of COVID-19 and inspires novel diagnostic technologies in clinical trials in the future.

Nucleic acid testing

Polymerase chain reaction (PCR) offers ultra-high sensitivity and sequence specificity in medical and biological applications, including DNA sequencing [66], functional gene analysis [67] and infectious disease identification [68]. Especially during the current period, a great number of efforts have been made to improve the performance of PCR systems, which have boosted the whole market. With an additional reverse transcription (RT) process that transcribes RNA into complementary DNA (cDNA) strands, RT-PCR can be used for direct detection of RNA viruses [69]. In the past few years, RT-PCR has achieved considerable progress that has sped up its practical application. Basically, a standard RT-PCR test takes about 3 h from RNA extraction to the final amplification (Fig. 1c) [70]. It uses specific primer sets to hybridize and amplify the target genomic sequence [71]. With the designed primers, it is possible to test for the presence of SARS-CoV-2 with a real-time PCR instrument in a Biosafety Level II laboratory [38]. However, as the gold-standard method, RT-qPCR requires a central laboratory and skilled personnel to operate and interpret the data. The long turnaround time and high cost of thermal-cycling instruments limit their application in most on-site, point-of-care circumstances [72]. Moreover, the dependence on specialized reagents further limits their universal availability in resource-limited regions [73]. Presently, the existing reagents and enzymes for PCR reactions typically require refrigeration for storage and transport [74]. As a result, the reagents and medical professionals may be easily constrained in low- and middle-income countries during the pandemic.
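The sensitivity of PCR to trace targets follows directly from its exponential amplification; the following back-of-the-envelope sketch, idealized by assuming perfect doubling each cycle, makes the quoted detection capability concrete.

```python
# Back-of-the-envelope illustration of why PCR detects trace targets:
# with perfect efficiency each cycle doubles the copy number,
# N_n = N_0 * 2**n, so even 10 starting copies exceed 10 billion
# after 30 cycles. Illustrative numbers only.
n0, cycles = 10, 30
print(f"{n0} copies -> {n0 * 2**cycles:.2e} copies after {cycles} cycles")
# 10 copies -> 1.07e+10 copies after 30 cycles
```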
To improve testing capabilities in remote locations or at the point of care, more attention should be given to the innovation of low-cost and stable reagents for PCR, with consideration of room-temperature-stable reagents [26,75]. In most scenarios where expensive thermal cycling instruments are not affordable, isothermal amplification methods, including loop-mediated isothermal amplification (LAMP) [40], recombinase polymerase amplification (RPA) [76], nicking enzyme amplification reaction (NEAR) [77] and rolling circle amplification (RCA) [78], have been developed for point-of-care testing of SARS-CoV-2. These isothermal amplification methods, conducted at a fixed temperature [79], are independent of the expensive thermal cycling units that PCR amplification needs. Each of these methods has its unique strategy or mechanism to initiate and recycle a new round of dsDNA separation, extension and synthesis (Fig. 2). Among them, LAMP and RPA are initiated by a DNA target while NEAR enables direct RNA identification and amplification [40]. In addition to the above advantages, there are also known issues for isothermal amplification, such as nonspecific or non-template amplification. Reverse transcription loop-mediated isothermal amplification (RT-LAMP) is one of the most common isothermal methods for RNA-based pathogen diagnosis [80,81]; it consists of a reverse transcription process and a one-step amplification reaction in which RNA serves as the template (Fig. 2a). It can recognize and amplify a specific nucleic acid fragment at a constant temperature (60-65 °C) in less than 1 h with high sensitivity by utilizing a set of four to six primers and a strand-displacement polymerase [82]. The stem-loop DNAs produced as the final products include multiple inverted repeats of the target and exhibit a cauliflower-like appearance. In comparison with the real-time RT-PCR assay, single-protocol LAMP is a more straightforward method that reduces the dependence on thermocyclers as well as energy consumption [83]. The RT-LAMP results from viral RNA amplification are usually read out colorimetrically or fluorescently, by adding colorimetric pH indicators or fluorescent dyes [84]. In addition, LAMP can also be integrated with sequencing infrastructures [85]. Up to now, LAMP assay kits have become commercially available. Davidson et al. used a paper-based strategy and RT-LAMP to develop an instant device that visually detected SARS-CoV-2 in saliva in 60 min, with a detection limit of 200 copies/μl [86]. However, LAMP does not show absolute superiority over PCR in clinical scenarios, because there are few suitable devices that can perform sample processing and RNA extraction on-site [80]. Moreover, four to six primers are used during LAMP reactions; as a result, the primers need to be strictly designed to avoid potential cross-interactions. LAMP may also produce false-positive detection of samples as well as carry-over contamination. Compared with LAMP, RPA is a relatively new isothermal amplification method, first reported in 2006 [87]. It has nevertheless experienced exponential growth in popularity and applications due to its fast reaction speed and high sensitivity [88]. During amplification, recombinases are combined with primers to form protein-DNA complexes that can bind specifically to the target genes, initiating strand exchange reactions and DNA replication.
RPA is capable of amplifying as few as 1-10 DNA target copies within 10 min with minimal sample preparation [89]. The amplification is controlled by a specific combination of enzymes and proteins below 37 °C (Fig. 2b). With a reverse transcription process, RT-RPA-based assays can easily be adapted for detecting SARS-CoV-2 [90,91]. One developed assay produced 100% diagnostic sensitivity and specificity with a total run time of 15-20 min when compared to RT-qPCR (n = 20), indicating a viable alternative detection method [76]. Thomas et al. developed an RPA-LF assay based on test strips that detected 35.4 copies/μl of SARS-CoV-2 cDNA of the nucleocapsid (N) gene [91]. However, the prices of RPA kits are relatively high, which limits their clinical application. Moreover, the flexibility in kit formulation, as well as in application, is highly limited compared to other isothermal methods [92]. NEAR belongs to the family of isothermal amplification methods and is mainly used for the detection of short oligonucleotide sequences [40]. With the help of a DNA polymerase, the primer extends in the presence of a target template (Fig. 2c). The extended primer is then cut by the nicking endonuclease, releasing short oligonucleotides whose duplexes are insufficiently stable at 55 °C. Subsequently, the primers are regenerated and undergo another round of extension and cleavage. Based on the NEAR isothermal amplification technique, a rapid detector called ID NOW has been manufactured and authorized by Abbott Co. (Chicago, IL, USA) [93,94]. RCA is an efficient isothermal enzymatic process that mimics the rolling circle replication of natural microbial circular DNA (Fig. 2d) [33,95]. With a DNA polymerase, a single primer can trigger strand displacement synthesis along the circular DNA template to achieve isothermal linear amplification. Owing to its isothermal nature, it is an ideal method with a simple and efficient process. Up to now, RCA-based detection systems and platforms have been successfully applied to test various types of targets [96]. Typically, in a circle-to-circle amplification process, the amplicons of the first round of RCA are converted into multiple circles by monomerization (endonuclease digestion) and ligation. Subsequently, the newly formed circles act as templates for the following round of RCA, thus reaching an ultra-low limit of detection [97]. To further improve the amplification efficiency, some exponential or quadratic amplification formats have been developed and combined with RCA-based approaches, achieving detection limits down to the sub-femtomolar level [98]. Moreover, this significantly simplifies the operation, without depending on time-consuming and labor-intensive procedures. Tian et al. reported a typical RCA-based amplification method for quick and ultra-sensitive detection of SARS-CoV-2, achieving a limit of detection of 0.4 fM [78]. Generally, the amplification template for RCA is required to be circular DNA, or linear DNA that is first circularized. In addition, RCA results so far lack well-established quality control targets, which greatly limits their wide application.
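Because the detection limits above are quoted in mixed units (copies per microliter for LAMP and RPA, femtomolar for RCA), a small worked conversion helps put them on a common scale; this is pure arithmetic on the quoted values, not data from the cited studies.

```python
# Convert the 0.4 fM RCA detection limit quoted above into copies/uL so
# it can be compared with the copies/uL figures quoted for LAMP and RPA.
AVOGADRO = 6.022e23                          # molecules per mole
lod_molar = 0.4e-15                          # 0.4 fM, in mol/L
copies_per_uL = lod_molar * AVOGADRO / 1e6   # 1 L = 1e6 uL
print(f"0.4 fM ~= {copies_per_uL:.0f} copies/uL")  # ~241 copies/uL
```

On this common scale, the 0.4 fM RCA limit is of the same order as the 200 copies/μl reported for the paper-based RT-LAMP device above.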
CRISPR/Cas-based nucleic acid detection technologies, which use a unique group of Cas nucleases, were recently developed for COVID-19 detection [99], demonstrating noticeable advantages in sensitivity, specificity, rapidity and simplicity (Fig. 2e). By using extracted nucleic acids as input, both CRISPR-Cas13- and Cas12-based assays have been developed for SARS-CoV-2 detection [100]. Cas12, an RNA-guided DNase, can cleave ssDNA indiscriminately upon binding its target sequence [101]. Normally, it is combined with isothermal amplification to achieve high detection sensitivity and specificity. Broughton et al. proposed a CRISPR-Cas12-based assay, which performed RT-LAMP at 62 °C for 20-30 min, followed by Cas12 detection of predefined coronavirus sequences at 37 °C for 10 min, visualized on a lateral flow strip [102]. Other isothermal amplification methods, such as recombinase-aided amplification (RAA), have also been used to amplify the extracted RNAs before a CRISPR/Cas12a reaction. When SARS-CoV-2 is present in the system, a quenched ssDNA reporter labelled with a green fluorescent molecule is cleaved by Cas12a, switching on green fluorescence that can be observed directly with the naked eye under 485 nm light [103]. In comparison, Cas13a is a non-specific RNase that remains inactive until it binds its programmed RNA target [104]. Cas13a-based detection is highly programmable and specific, as it relies on complementary base pairing between the target RNA and the CRISPR RNA sequence [105]. A CRISPR-Cas13a-based tool named SHERLOCK (Specific High-sensitivity Enzymatic Reporter unLOCKing) was recently designed by Feng Zhang's group specifically for SARS-CoV-2 diagnosis. This protocol involves RPA and T7 transcription, followed by Cas13-mediated collateral cleavage of a single-stranded RNA reporter. Combined with colorimetric or fluorescent readouts, the assay enabled detection of 10 copies/μL of synthetic RNA [106]. To make the assay less time- and labor-intensive, Arizti-Sanz et al. combined the isothermal amplification, T7 transcription and Cas13-based detection into a single step. Compared to the two-step assay, this single-step SHERLOCK assay could diagnose SARS-CoV-2 with a reduced sample-to-answer time and, under optimized conditions, equal sensitivity [107]. Zhang et al. combined RPA with SHERLOCK to detect the S gene and Orf1ab gene of SARS-CoV-2 [108]. CRISPR-based molecular diagnostics has great potential for POCT, quantitation and digital analysis of SARS-CoV-2, with detection sensitivity comparable to the real-time RT-PCR assay. If coupled with lateral flow readouts, these assays will be an attractive option for easy, at-home testing scenarios. However, as newly emerged methods, more effort should be devoted to guaranteeing their accuracy and preventing aerosol contamination and false positives in clinical trials.

Immunoassays

Serological testing of antibodies is another common method for the detection of COVID-19. Compared to nucleic acid methods, it offers advantages in turn-around time, throughput and workload [109,110]. Although many doctors and experts have noted that the results of immunological methods cannot be considered final diagnoses, since they only indicate previous infection, this does not mean that IgM/IgG serological tests are useless: the immune status of individuals is still important for doctors to know in the subsequent steps of treatment (Table 2) [111]. The clinical interpretation of all possible scenarios that can be encountered when testing a patient with both RT-qPCR and an IgM/IgG immunoassay is illustrated in Table 2.
Table 2 The clinical significance of IgM/IgG serological test results ("+" means "Positive", "−" means "Negative").

Based on the current knowledge about the rise and fall of SARS-CoV-2 antibodies, the relationship between IgM and IgG levels varies between the initial time of infection, the onset of symptoms and the recovery phase. The key takeaway is that the results of nucleic-acid tests and IgM/IgG serological tests do not necessarily need to agree [49]. A disagreement between the two tests, if any, can often be traced to the after-infection time points at which the tests were performed. Since the exact time of infection is often unknown, combining the two tests can further improve the accuracy of COVID-19 diagnosis [112,113]. The enzyme-linked immunosorbent assay (ELISA) is one of the most widely used methods for the detection of protein-based biomarkers [114,115]. Briefly, the ELISA for total antibody detection is based on a double-antigen sandwich immunoassay, using mammalian-cell-expressed recombinant antigens containing the receptor binding domain (RBD) of the SARS-CoV-2 spike protein as both the immobilized and the HRP-conjugated antigen. According to statistics, compared with a single PCR test, the positive detection rate is significantly increased from 51.9% to 98.6% by combining the PCR with the ELISA assay for each patient [116]. Srivatsa et al. used aptamer-functionalized gold nanoparticles to identify SARS-CoV-2 spike proteins, with a limit of detection down to 3540 genome copies/μl [117]. However, cross-reactivity and low antibody titers are among the common factors that limit the detection efficacy of ELISA. It is also critical to execute the serial washing and reagent incubation procedures in ELISA properly to reduce the background noise and amplify the signal. In clinical practice, ELISA results are used to assist in diagnosing the infection. Usually, the timing of requests for serological assays and the interpretation of antibody results are prerequisites of crucial importance for their efficacy [118].
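The reported jump in the positive detection rate when PCR is combined with ELISA can be rationalized with the standard parallel-testing formula. The sketch below assumes, purely for illustration, that the two tests miss cases independently; the ELISA sensitivity used is a hypothetical value chosen to reproduce the quoted combined rate and does not come from the cited study.

```python
# Illustrative parallel-testing arithmetic: call a patient positive if
# EITHER test is positive. Under an (assumed) independence of misses,
# combined sensitivity = 1 - (1 - s1) * (1 - s2).
def parallel_sensitivity(s1: float, s2: float) -> float:
    return 1.0 - (1.0 - s1) * (1.0 - s2)

s_pcr = 0.519   # single-PCR positive detection rate quoted above
s_elisa = 0.97  # hypothetical ELISA sensitivity (not from the paper)
print(f"combined: {parallel_sensitivity(s_pcr, s_elisa):.1%}")  # ~98.6%
```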
Combining chemiluminescence with immunoreactions, the chemiluminescent immunoassay (CLIA) can quantitatively determine the concentrations of the corresponding antigens or antibodies through the intensity of luminescence, with high sensitivity and specificity [119]. Based on automated platforms, CLIA enables high-throughput detection, making an outstanding contribution to early screening. Lyu et al. used ABEI/Co2+ dual-functionalized magnetic beads to perform rapid CLIA detection of the SARS-CoV-2 nucleocapsid protein (NP) [120]. Compared with ELISA and CLIA, the lateral flow immunoassay (LFI) enables direct testing without extraction, which makes it well suited for large-scale, on-site screening [121]. LFI strips typically contain a sample pad, conjugation pad, nitrocellulose membrane, test line, control line and plastic cassette [122]. The mechanism of LFI is based on the hydration and transport of reagents as the specimen moves across the strip via chromatographic lateral flow. If anti-SARS-CoV-2 IgG and IgM antibodies are present in the sample, they are bound by the corresponding antigen-labelled colloidal gold colorimetric reagent fixed on the conjugate pad. As the sample continues to travel up the strip, the IgM antibodies are bound at the M (IgM) line and the IgG antibodies at the G (IgG) line, presenting a reddish-purple line at the test zone [112]. In all valid tests, a control line will appear whether the sample is positive or negative, demonstrating that the fluid has migrated adequately through the device. Under normal conditions, the detection takes at most 15 min to produce results from one drop of various specimens, such as serum, plasma of venous blood or finger-stick blood. Chen et al. developed an LFIA strip based on SERS for anti-SARS-CoV-2 IgM and IgG [123]. However, when the concentration of antibodies is absent or low, there is a risk of missed detection due to false-negative results.

Micro/nano devices for point-of-care testing of SARS-CoV-2

Recent advances in microfluidic technology and nanotechnology have brought us closer than ever to the realization of simple yet highly sensitive and specific devices that can be used in complicated environments without a central laboratory. Based on either nucleic acid testing or immunoassays, micro/nano devices significantly enrich the toolset of POCT for SARS-CoV-2; they have the potential to rapidly diagnose pathogens or antibodies and efficiently monitor infection transmission through self-tests, even at home. They can act as a bridge between laboratory-based testing and home detection. Presently, more and more of these devices are being commercialized as promising platforms for COVID-19 detection. In particular, miniaturized devices that integrate all the steps (nucleic acid extraction, amplification and detection) by fluidic manipulation are conducive to complex real-time diagnosis. Moreover, the introduction of nanomaterials can also significantly increase the detection sensitivity of immunoassays. In this section, we summarize the most recent progress in micro/nano technologies in the field of POCT, hoping to provide useful information and insights for further research in the area.

Micro/nano devices for nucleic acid testing

In a typical diagnostic test, there are two types of inaccuracy: the false-negative result (FNR) and the false-positive result (FPR) [124,125]. Statistically, RT-PCR assays based on nasal or oropharyngeal swabs can produce up to 30% FNR in the clinical diagnosis of COVID-19 [126-128]. From the standpoint of the clinical stage of the disease and the different anatomic sampling sites in virus carriers, the sampling strategy is closely related to the inconsistency of viral load that results in the high FNR [39]. In a traditional sampling process, regular swabs can only provide limited physical interaction with mucosal tissues. As a result, only superficial tissues are collected, which are also readily contaminated by food and drink, resulting in low sampling efficacy [128]. Chen et al. reported a microneedle-based oropharyngeal swab for effective and precise viral sampling. Owing to the soft-tissue penetration capability of these microneedles (MNs), the sampling depth is significantly increased (Fig. 3a). In addition, the MNs are modified with antibodies, which further assist SARS-CoV-2 collection through chemical bonding [128]. Once the preparation process is simplified, the proposed novel swab could become a promising candidate for sampling in diverse oral or respiratory diseases. However, it is also worth noting that medical staff are in close contact with suspected COVID-19 patients during sampling, leading to a high risk of cross-infection. In addition, the swabs themselves are a potential source of infection [129].
To solve this problem, a miniature robot consisting of an active 2-degree-of-freedom (DOF) end-effector was proposed to assist nasopharyngeal swab collection remotely. The operation of the miniature robot has already been verified on a pig nose; subsequent work will focus on pursuing ethical approval for in-vivo tests. The collected nasal or oropharyngeal swabs are then subjected to a series of standard procedures for subsequent lysis and enrichment. Effective DNA extraction and enrichment is the premise of accurate detection [130]. However, this process not only relies on an enhanced-biosafety lab, but also requires skilled personnel and mandatory instruments. Manual operation procedures may raise issues such as operator-induced variation or biased data. Following sample preparation, a variety of nucleic-acid-based methods, ranging from PCR-based methods to various isothermal amplification methods, are available for the diagnosis of infectious diseases. From sample preparation to the assay protocol, the workflow usually requires a few hours [131]. To meet the diagnostic demands for infectious pathogens, especially in resource-limited settings, there is an urgent need for portable and integrated microfluidic platforms or micro/nano devices that can provide fast, accurate and even multiplexed diagnosis at the point of care [132]. Generally, traditional PCR instruments are practical obstacles to wide use in POCT scenarios because they are expensive and bulky [28]. They rely heavily on external electric power to realize rapid increases and decreases in temperature. Owing to their greatly reduced size, micro/nano devices possess many unique properties compared to macro-scale devices, which opens a new perspective for field testing based on PCR. They require fewer reagents during amplification and have higher heat transfer efficiency. Moreover, their improved portability makes them an ideal choice for point-of-care diagnosis of COVID-19 [133,134]. Shi et al. reported a miniaturized, portable and battery-powered heater with thermo-cycling control and passive continuous-flow control as a platform for PCR reactions (Fig. 3b). Integrated with a 3D microreactor, the system can be used for multiplexed detection of clinical-level DNA targets with greater convenience [28]. Microfluidic systems are also well suited for highly automated processes. Recently, a microfluidic device integrating sample treatment, one-step RT-PCR and direct fluorescence detection was developed and verified with influenza-A viruses (Fig. 3c). The device enabled automatic sample lysis and enrichment by using glycan-coated magnetic beads, with capture rates of up to 50% [135]. The sealed microfluidic device, with all reagents pre-loaded, could easily be adapted to detect SARS-CoV-2. Another factor that limits the widespread use of PCR is the detection time. Under normal conditions, a standard PCR procedure takes at least 45 min. Based on microfluidic devices, a compact, high-speed reciprocal-flow RT-qPCR system, GeneSoC®, has become available for specific gene amplification and detection within 15 min [136] (Fig. 3d). The system has one heater for the RT reaction and two heaters for thermal cycling, with two micro-blowers at each end of the flow path for the high-speed shuttling of the PCR solution. It can achieve a limit of detection (LOD) of 1.0 × 10¹ copies/reaction with the use of a single disposable tip per analysis.
Although it has some disadvantages, such as requiring the simplification and refinement of the RNA extraction procedure, it has been demonstrated with clinical samples that the system can be used for the rapid, low-throughput selection of patients with COVID-19. In addition to PCR-based methods, micro/nano biomedical devices that utilize isothermal nucleic acid amplification have also been investigated. Because isothermal amplification does not need thermal cycling, it is much easier to engineer into micro and nano devices, e.g. by coupling with microfluidic technology [137,138]. Researchers have recently developed a range of microfluidic devices with different structures for the detection of bacterial and viral pathogens, which enable multiplexed detection of different targets with corresponding primer sets deposited in each channel (Fig. 4a, Fig. 4b) [139-141]. In general, compared to thermal-cycling-based amplification methods, isothermal amplification methods usually provide lower cost, faster reaction speed, less specialized equipment and easier readout [79]. They can be combined with various fully enclosed micro-structured devices, with less energy consumption needed to maintain a constant temperature [142]. These features greatly simplify the implementation of isothermal amplification in POCT platforms. Compared with microfluidic chips based on PCR, these devices enhance the applicability of rapid detection with fewer instrument requirements during amplification. The isothermal amplification can be triggered on a thermostatic heating plate or even within a thermos [143], which is very promising for providing a sample-in-result-out solution for in-field testing of SARS-CoV-2. Using a smartphone, the fluorescence emitted by the dyes in the device during amplification can be monitored in real time [144]. The image analysis can also provide quantitative results on the time at which amplification occurred (Fig. 4c).
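As a sketch of how such image analysis can turn a fluorescence trace into a quantitative time-to-amplification readout, consider the toy pipeline below; the synthetic sigmoid trace and the threshold rule are illustrative assumptions, not the cited implementation.

```python
# Toy extraction of time-to-amplification from a fluorescence time series.
import numpy as np

t = np.arange(0.0, 40.0, 0.5)                      # minutes
signal = 1.0 / (1.0 + np.exp(-(t - 18.0) / 1.5))   # synthetic sigmoid trace

baseline = signal[:10].mean()                       # early-frame baseline
threshold = baseline + 0.2 * (signal.max() - baseline)

first_idx = int(np.argmax(signal > threshold))      # first frame above threshold
print(f"amplification detected at ~{t[first_idx]:.1f} min")
```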
Yang et al. reported a simple yet efficient isothermal amplification platform containing a custom-fabricated detector and a multiplexed microwell array chip to perform the RT-LAMP assay within 25 min (Fig. 4d). The platform integrated the functions of sample preparation and isothermal amplification, and the results can be read directly with the naked eye [145]. The system has been successfully used to test 130 real clinical samples. In resource-limited settings, the lack of the required infrastructure and facilities for pathogen detection means that infected people are either not detected or, if identified, diagnosed at a late stage. With developments in paper-based microfluidics, several paper-based platforms based on isothermal amplification methods have been proposed for their low cost [146]. An additional feature of paper-based devices, especially for the detection of infectious pathogens, is that they are readily disposable, which prevents potential cross-infection. Xu et al. reported a microfluidic origami-paper-based device for multiplexed detection of malaria from whole blood by using LAMP (Fig. 4e) [147]. All the required steps, including nucleic acid extraction, isothermal amplification and visual detection, were integrated into the device for POCT application. Tang et al. proposed another fully functional paper-based device (Fig. 4f). The device allowed on-chip dried-reagent storage and equipment-free isothermal amplification, further promoting its potential applications in remote settings [148]. Nguyen et al. created a sliding-paper device that combines LAMP with dopamine to detect SARS-CoV-2 DNA in 25 min with a detection limit of 10⁻⁴ ng/μl [81]. ID NOW was launched in 2014 as an advanced molecular diagnostic platform for the detection of influenza A&B, streptococcus A and respiratory syncytial viruses. For the detection of SARS-CoV-2, two primers targeting the RdRp gene are used to trigger the amplification. Combined with fluorescence detection, it allows effective clinical decisions to be made sooner: it can report positive samples within 5 min and negative ones in 13 min (Fig. 4g) [93,149,150]. Thanks to its small size and fast reaction speed, it has already been distributed to numerous medical centers and non-traditional settings where the testee can get detection results in several minutes. However, the sensitivity and specificity of the system rely heavily on the nicking enzyme and modified primers. Differences in the enzyme will lead to different amplification efficiencies, which will ultimately affect the accuracy and repeatability of the results.

Fig. 3 (c) [135]; Reproduced with permission. Copyright 2020, Royal Society of Chemistry. (d) A compact, reciprocal flow PCR system, GeneSoC®, with one heater for the reverse transcriptase reaction and two heaters for thermal cycling, can be used for specific gene amplification and fluorescence detection based on PCR in a very short time (within 15 min) [136]. Reproduced with permission. Copyright 2020, Elsevier.

Micro/nano devices for immunoassays

In addition to nucleic-acid tests, micro/nano technologies have also been extensively investigated for developing immunoassay-based POCT platforms. Presently, commercial IgM-only and combined IgM-IgG LFI tests have been developed by several In Vitro Diagnostic (IVD) companies [51]. These simple yet robust point-of-care LFI tests can simultaneously detect IgM and IgG antibodies at different infection stages [151-153]. However, traditional colloidal-gold-based LFI is usually limited by relatively low sensitivity and is incapable of quantitative measurement [154]. With the aid of nanotechnologies, LFI assays enable more sensitive and rapid detection. Wang et al. reported an LFI assay based on a selenium nanoparticle-modified SARS-CoV-2 nucleoprotein with high sensitivity and detection speed (Fig. 5a) [155]. The assay enabled simultaneous detection of IgG and IgM in human serum within 10 min, with limits of detection of 5 ng/mL and 20 ng/mL, respectively. To obtain more quantitative results, Chen et al. reported another LFI assay for the detection of IgG in human serum based on a recombinant nucleocapsid phosphoprotein and lanthanide-doped polystyrene nanoparticles (LNPs) as the fluorescent reporter [156]. Once the functionalized LNPs are captured at the control or test zone, they produce bright fluorescence with excitation and emission wavelengths of 365 nm and 615 nm, respectively. The proposed assay can be improved from semi-quantitative to accurately quantitative by using an official IgG standard. Based on microfluidic devices, a novel smartphone-based POCT analyzer with a microchannel capillary flow assay platform was developed for the quantitative analysis of a malaria biomarker (Fig. 5b) [157].
The novel analyzer integrated the ultra-high sensitivity of chemiluminescent detection, the high reaction kinetics of the microfluidic spiral-chamber design, and the data-processing capabilities of a smartphone, reaching a limit of detection (LOD) of 8 ng/mL for malaria. The final results were derived from positive and negative controls to decrease the risk of false diagnosis. Furthermore, the quantitative platform could easily be adapted for the detection of IgM and IgG. Essentially, test results from this category of methods cannot confirm the existence of the target virus. Instead, they provide a piece of immunological evidence that helps physicians make the correct diagnosis alongside other tests, as well as establish a treatment strategy. Rapid and early identification of infectious pathogens allows for the effective implementation of disease prevention and treatment measures. Based on micro/nano technologies and the principles of immunoassay, some POCT platforms directly detect infectious pathogens without nucleic acid amplification. Integrating detection devices into wearables can expand opportunities for long-term, noninvasive monitoring of infections [158,159]. Xue et al. reported an intelligent wearable face mask integrated with a flexible immunosensor for highly sensitive screening of exhaled coronavirus aerosols. In addition, other kinds of on-site detection devices, such as electrochemical biosensors, allow the detection of multiple kinds of molecules, including antigens and antibodies, with high sensitivity and specificity [160][161][162][163]. Yakoh et al. reported a paper-based electrochemical biosensor for the label-free detection of SARS-CoV-2 antibodies without specific antibody requirements [164]. With optical assistance, Mohammad et al. designed an electro-optofluidic chip that detected target SARS-CoV-2 RNAs without amplification, with detection spanning 10⁴-10⁹ copies/ml for swab samples [165]. A custom-made fidget spinner that rapidly concentrates pathogens in 1 mL samples of undiluted urine by more than 100-fold for on-device colorimetric detection of bacterial load and pathogen identification was designed and fabricated (Fig. 5c) [166]. The device enabled on-site, naked-eye detection of infection within 50 min in urine samples from 39 patients suspected of having a urinary tract infection (UTI). Although the device was aimed at detecting UTIs in the original report, we believe it could serve as a good, inexpensive handheld point-of-care device for the rapid concentration and detection of SARS-CoV-2 in low-resource countries. The use of nanomaterials or nanostructures can efficiently increase sensitivity to meet detection requirements. Using graphene sheets to modify a field-effect transistor (FET), Park et al. reported an ultra-sensitive sensor for the detection of SARS-CoV-2 in clinical samples [167]; the LOD of the graphene-modified FET sensor reached 1 fg/mL. Yu et al. reported an immunofluorescence microdevice integrated with ZnO nanorods for the highly sensitive detection of avian influenza virus (AIV) [168]. The unique properties of the ZnO nanorods boost the LOD of the device to an ultra-low level, approximately 22 times more sensitive than conventional ELISA (Fig. 5d).
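Reported detection limits such as those above are typically estimated from a calibration curve. A minimal sketch, assuming the common 3.3 x sigma/slope convention (IUPAC-style) and illustrative data, is shown below; the numbers are not taken from any of the cited assays.

import numpy as np

def lod_from_calibration(conc, response, blank_sd):
    # LOD = 3.3 * SD(blank) / slope of the linear calibration curve.
    slope, _intercept = np.polyfit(conc, response, 1)
    return 3.3 * blank_sd / slope

# Hypothetical IgG standards (ng/mL) versus readout (arbitrary units).
conc = np.array([0.0, 5.0, 10.0, 20.0, 40.0])
resp = np.array([2.0, 11.8, 21.5, 41.2, 80.9])
print(lod_from_calibration(conc, resp, blank_sd=0.6))  # ~1 ng/mL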
Summary and perspectives
The pandemic of COVID-19 has had wide-ranging consequences for most low- and middle-income countries, where the infrastructure and core facilities for early detection are insufficient. This review provides a comprehensive summary of micro/nano biomedical devices, as well as the two main categories of technologies for the rapid diagnosis of COVID-19 (nucleic acid-based methods and immunoassays), their working principles, and their viral targets. Among these, immunochromatographic assays are the most applicable to primary screening in such areas. Immunoassays offer advantages such as direct detection in serum/plasma and whole-blood specimens without additional processing steps or expensive equipment. In comparison, nucleic acid-based methods are more accurate for the diagnosis of COVID-19. A series of isothermal amplification techniques based on LAMP, RPA, and NEAR, which do not rely on expensive thermal-cycling instruments, have shown advantages in low cost and simple operation. However, after more than 20 years of development, few instruments or devices based on these isothermal amplification methods have been widely adopted on the market, and considerable optimization work is still needed to avoid problems such as false-positive results. Moreover, CRISPR-based techniques enable promising sensitivity and ultralow detection limits down to a few viral RNA copies, representing an emerging methodology for diagnosis. Clinicians may choose among these methods according to their local circumstances; however, the advantages and disadvantages of each method must be weighed to determine which may be more effective and affordable in a given area. Meanwhile, it is worth noting that, according to previous statistics, the false-negative rate of a single test, even with the gold-standard assay (RT-PCR), is relatively high in the clinical diagnosis of COVID-19. Further research is still needed under the current circumstances. To date, clinically recognized high-throughput testing remains unavailable; as infections continue to emerge, there is an urgent call for the development of multiplexed, high-throughput POCT platforms for on-site detection. The capability of early and rapid diagnosis will increase patients' chances of obtaining proper medical treatment, decrease the risk of infection among medical staff, and support therapeutic drug delivery systems. In addition, instruments that can screen for a variety of viruses simultaneously should be further developed in preparation for future contingencies.

Fig. 5. (a) A highly sensitive lateral flow immunoassay for the detection of IgM and IgG using a selenium nanoparticle-modified SARS-CoV-2 nucleoprotein as the capture antibody [155]; Reproduced with permission. Copyright 2020, Royal Society of Chemistry. (b) A smartphone-based POCT analyzer with a microchannel capillary flow assay platform for the quantitative detection of malaria [157]; Reproduced with permission. Copyright 2020, Nature Publishing Group. (c) A diagnostic fidget spinner device as a versatile bacterial infection diagnostic platform for low-resource settings [166]; Reproduced with permission. Copyright 2020, Nature Publishing Group. (d) An immunofluorescence microdevice integrated with ZnO nanorods for highly sensitive detection of AIV [168]; Reproduced with permission. Copyright 2020, Wiley-VCH.

Data and materials availability
All data needed to evaluate the conclusions in the paper are present in the paper.
Declaration of competing interest
The authors declare that there are no conflicts of interest.
2022-02-12T14:12:20.692Z
2022-02-01T00:00:00.000
{ "year": 2022, "sha1": "467eebc82ce242b23b934c037bf058b625d32df6", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.medntd.2022.100116", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "54fd3c346612c4a2a5281013a3127748c841d985", "s2fieldsofstudy": [ "Medicine", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
21757016
pes2o/s2orc
v3-fos-license
ISLAM AND JIHAD: THE QUEST FOR PEACE AND TOLERANCE
The topic of this writing is Islam and jihad. Its main focus is how the concept of "jihad" is understood in the West, where Islam is labeled "radical Islamist" on the basis of the actions of radical Islamic groups, even though Islam upholds tolerance and peace. The methodology of this writing is a descriptive analysis of the concepts of Islam and jihad. It finds that the idea of jihad has multiple meanings in Islam and is not confined to holy war. It is the lesser jihad that is considered holy war. However, holy war in Islam does not refer to military aggression as understood within the Christian tradition. The values of peace and tolerance are emphasized because Islam is deeply concerned with the sanctity of human life, justice, and humanity. In addition, the history of Islam has shown that in the classical era Muslims could coexist with non-Muslims in harmony and peace. It is clear that Islam is deeply concerned with peace and tolerance, reflected in terms such as silm and assalamu alaikum. Jihad has been misunderstood and distorted in the West and among Muslim radicals, as it tends to be associated with Muslim aggression, holy war, violence, and terrorism. Keywords: Jihad – Terrorism – Fitna – Radical – Justice

I. Introduction
The terrorist attacks and suicide bombings perpetrated under the banner of jihad have destroyed the image of Islam, particularly in the West. This is especially evident after the attacks of September 11, 2001 on the World Trade Center in New York and the Pentagon in Washington, followed by suicide bombings in Bali and Jakarta, Madrid, and Britain, allegedly committed by Osama bin Laden and his al-Qaeda network. As a consequence, Islam appears to be misunderstood and even stigmatized in Western media, films, and publications. At the extreme, Islam is described as a religion that promotes violence, terrorism, and discrimination against other religions. A recent example is the anti-Qur'an movie entitled Fitna, made by the Dutch politician Geert Wilders. The movie depicts the Qur'an as a source of justification for killing and rape. Showing a distorted image of Islam, the movie has sparked many criticisms and reactions from Muslims in many parts of the world, condemning the West as being hostile to Islam.
Islam is misunderstood not only by the West but also by certain Muslim groups who are often labeled "radical Islamists" or "Muslim extremists". This is largely due to their puritan and rigid understanding of Islam as well as their violent actions. According to Fadhl, the radical Islamic groups such as the Taliban and al-Qaeda have been influenced by Wahhabism, which "rejected any attempt to interpret the Divine law from a historical, contextual perspective, and in fact, treated the vast majority of Islamic history as a corruption or aberration from the true and authentic Islam". Furthermore, Wahhabism is not only intolerant of various schools of thought within Islam but also hostile towards non-Muslims, insisting that a Muslim should adopt none of the customs of non-Muslims and should not befriend them either. The actions of the radical Islamic groups tend to confirm the negative portrayal of Islam in the West. In this regard, the actions of a minority of jihadi groups have defined the religion of Islam for non-Muslims in the West. The misconception and misunderstanding of Islam in the West on the one side, and the exclusive understanding of Islam with an anti-West posture on the other, have the potential to justify Samuel Huntington's thesis on the clash of civilizations.

Attempting to bridge the gap between the Western and Islamist conceptions of Islam, this article provides a holistic understanding of Islam as a religion of peace and tolerance by exploring what the basic sources of Islam say about the issues of tolerance, pluralism, and peace. Second, this article traces the historical experience of Islam in developing peaceful co-existence with Christians and Jews. Lastly, as Islam endorses self-defence under special circumstances, the article discusses the idea of jihad, its meanings and abuses by radical Islamists, and examines whether it undermines the nature of Islam as a religion of tolerance and peace.

II. Peace and Tolerance in the Qur'an
Like other major religions, Islam is deeply concerned with and devoted to peace. The etymological root of the word Islam is silm, which means peace. The normal greeting of a Muslim to everyone is Assalam-u-'alaykum: peace be upon you. According to Seyyed Hossen Nasr, the Muslim greeting "salam" can be compared with "shalom" in Judaism and the phrase "shanti, shanti, shanti" in Hinduism, all of which carry a divine message to emphasize and spread peace. To examine the issue of peace and tolerance in Islam, one should read the basic sources of Islam, namely the Qur'an and the prophetic tradition (hadith). In these sources there are numerous references to peace and tolerance. In what follows I will discuss several verses related to Islamic principles that lead to peace and tolerance.

In the holy Qur'an, there are many verses that emphasize the unity of humankind. It is therefore not surprising to find verses declaring that Islam is a religion of peace, harmony, hope, justice, and tolerance, not only for Muslims but for the whole of mankind. The Qur'an further declares that the Prophet Muhammad was sent but as a mercy to the whole universe.
According to Syed Othman and Nik Mustafa, such a declaration lucidly suppresses any distinction given to race or nation; to a "chosen people"; to the "seed of Abraham" or the "seed of David"; to Hindu Arya-varta; to Jew or Gentile; to Arab or 'Ajam (Persian), Turk or Tajik, European or Asian, White or coloured, Aryan, Semitic, Mongolian, or African; to American, Australian, or Polynesian. It implies that Islam advocates tolerance on the basis of universal principles and values, reflecting the mercy that God has promised for all men irrespective of their religion, race, culture, values, norms, languages, and traditions.

There are several verses related to the unity of humankind, as follows: "Mankind was one single nation, and God sent messengers with glad tidings and warnings; and with them He sent the book in truth, to judge between people in matters wherein they differed…" "Mankind was but one nation, but differed (later). Had it not been for a word that went forth before from thy Lord, their differences would have been settled between them." "O humankind, God has created you from a single (pair) of a male and female, and made you into diverse nations and tribes so that you may come to know each other. Verily, the most honored of you in the sight of God is he who is the most righteous." "If thy Lord had willed, He would have made humankind into a single nation, but they will not cease to be diverse… And, for this God created them (humankind)."

The above verses clearly testify that mankind was originally a single nation or a single people but later differed, essentially due to differences in religious faith. Differences in aspects other than religion are not significant at all. In fact, these differences are assets to be preserved in order to build communication with each other. This leads us to the most crucial dimension of Islamic universality, namely the acceptance of religious diversity.

Islam is the youngest of the Abrahamic traditions, after Judaism and Christianity. According to Sachedina, Islam's self-understanding since its inception in the seventh century has included a critical element of pluralism and tolerance, namely its relation to other religions. The Qur'an says "there shall be no compulsion in religion". Instead of denying the validity of other human experiences of transcendence, Islam recognizes and even confirms their salvific efficacy within the wider boundaries of monotheism: "Surely they that believe, and those of Jewry, and the Christians, and those Sabaeans, whoso believes in God and the Last Day, and works righteousness, their wage awaits them with their Lord, and no fear shall be on them, neither shall they sorrow."
The Qur'an clearly sees itself as a critical link in the revelatory experience of humankind, a universal path intended for all. In particular, it regards Jews and Christians as "People of the Book," people who have also received a revelation and scripture from God (the Torah for Jews and the Gospels for Christians). In this respect, the Qur'an and Islam recognize that followers of the three great Abrahamic religions, the children of Abraham, share a common belief in the one God, in biblical prophets such as Moses and Jesus, in human accountability, and in a final judgment followed by eternal reward or punishment. According to Saikal, the three monotheistic faiths not only "embrace a common concept of God and His attributes, but also give equal weight to the sanctity of life as a precious gift from God". In this context, the above verse shows the unique characteristic of Islam in that belief in the oneness of God unites the Muslim community with all humanity, because God is the creator of all humans, irrespective of their religious traditions. For Sachedina, the verse declares that on the Day of Judgment all human beings will be judged, irrespective of sectarian affiliation, on their moral performance as citizens of the world community.

III. Peace and Tolerance in Islamic History
The sources of peace and tolerance in Islam are represented not only by the Qur'an but also by the exemplary acts and behaviour of the Prophet Muhammad. In Islam, the Prophet Muhammad has been the ideal model for Muslims to follow, as he is considered a perfect human being. In fact, his utterances and acts (hadith), which were recorded by his companions, are considered the second source of Islam after the Qur'an.

The period of Medina, in which Muslims were the majority, provides historical experience of how Muslims co-existed with non-Muslims. After the migration (hijra) to Medina, Muhammad implemented the Qur'anic ideals by encouraging cooperation and solidarity among all inhabitants of Medina, which comprised Muslims, Jews, Christians, and others, through the famous Medina Constitution. The constitution, which was put in writing, ensured complete freedom (including freedom of worship), equality, and justice for all groups.

Although Islam became the predominant religious-political power in Medina, Muhammad never imposed Islam on Jews and Christians, unless they particularly wished to accept it, because they had received perfectly valid revelations of their own. As already mentioned above, the Qur'an insists that there shall be no coercion in matters of faith, and it commands Muslims to respect the beliefs of Jews and Christians, whom the Qur'an calls ahl kitab, a phrase usually translated "People of an earlier revelation": "Do not argue with the followers of earlier revelation otherwise than in a most kindly manner, unless it be such of them as are bent on evil, and say: "we believe in that which has been bestowed from on high upon us, as well as that which has been bestowed upon you; for our God and your God is one and the same, and it is unto Him that we (all) surrender ourselves."
According to Rahman, the Medina constitution promulgated by the Prophet Muhammad "guaranteed religious freedom of the Jews as a community, emphasizing the closest possible cooperation among the Muslims (Muhajirin and Ansar), calling on the Jews and the Muslims to cooperate for peace and, so far as general law and order was concerned, ensuring the absolute authority of the Prophet to decide and settle disputes". The Medina constitution has impressed modern scholars because it is the first official political document to put forward the principles of religious and economic freedom. The constitution appeared to reconcile a variety of conflicting interests among the groups. The highlights of the document are as follows: the Muslims are declared one community (ummah) to the exclusion of all men. The bond which unites them is their common faith. Their friendship and their enmity are governed not by considerations of common ties of blood or economy, tribe or family, but by the ideology which unites them in their willingness to suffer and live together and pursue a common way of life, which lends them the consciousness of a community. And yet the Jews in Madinah are accorded equality; the word equality occurs time and again in the treaty. They are not to be wronged, nor are their enemies to be aided. The Muslims have their faith, and the Jews have theirs. Freedom of religion is recognized, and the Jews of Banu Auf are declared one community with the believers.

Moreover, the document lays down general rules of conduct, namely: the Muslims and the Jews are jointly responsible for the maintenance of peace and stability in Madinah, except that whoever does wrong or acts treacherously will be punished accordingly. No 'neighbourly protection' is given to the Quraish and those who help them. All disputes are to be referred to God and to Muhammad.

The discussion on the relationship between Muslims and non-Muslims in the past should include the ahl dhimma. The status of Christians and Jews who submitted to Muslim rule since the Prophet Muhammad's era was called "dhimmi". Under the dhimmi status system, non-Muslims must pay a poll tax in return for Muslim protection and the privilege of living in Muslim territory. In this system, non-Muslims are exempt from military service, but they are excluded from occupying high positions that involve dealing with high state interests, such as being the president or prime minister of the country. Many Western scholars criticize this system as discrimination rather than toleration of non-Muslims. However, this is not really the case if one considers the socio-historical context of the dhimmi system. Fadhl rightly argues that when the Qur'an was revealed, it was common inside and outside of Arabia to levy poll taxes against alien groups. The poll tax on non-Muslims was meant in return for the protection of the Muslim state at that time.
With regard to the ahl dhimma, it is a historical fact that the Prophet condemned oppression of them as a sinful deviation, declaring in no uncertain terms, "On the Day of Judgment I myself will act as the accuser of any person who oppresses a person under the protection (dhimma) of Islam, and lays excessive [financial or other social] burdens on him". There is also a hadith compiled in the Sahih of al-Bukhari that reads: "One should fight for the protection of the ahl al-dhimma and they should not be enslaved". All of this indicates how the Prophet Muhammad emphasized justice, tolerance, and peace toward Christians and Jews in order to bring harmony among different believers.

It is also interesting to consider the Caliph Umar ibn al-Khattab's toleration of non-Muslims. When the Romans conquered Jerusalem, the Jews were expelled and reduced to exiles across the world, in what is known as the Jewish diaspora. The Roman Christians imposed a complete ban on the Jews. However, when Caliph Umar conquered Jerusalem in 637 A.D., Christians and Jews were allowed to stay. The point is that it was a Muslim ruler who allowed them to return and ended their suffering.

The same case is shown in Spain, where Muslims ruled for 800 years. During that time, Jews and Christians stayed within their faiths and lived together with Muslims. However, when Ferdinand and Isabella regained control of Spain, Muslims and Jews who failed to escape to Africa were killed or severely tortured until they accepted Christianity. It should be underlined that it was under 800 years of Muslim rule that Jews and Christians were free to live and practice their religions. This is not to justify the superiority of Islam over other religions in terms of the idea of tolerance, but only to argue that Islam also has historical experience of peaceful co-existence with Christians and Jews.

IV. Jihad and Self-Defence
Although Islam is deeply concerned with peace and tolerance, it allows and even endorses Muslims to fight in self-defence under special circumstances. The notion of self-defence cannot be isolated from jihad in Islam. The word "jihad" has been misunderstood and distorted in the West and among Muslim radicals, as it tends to be associated with Muslim aggression, holy war, violence, and terrorism. This is not surprising, as jihad, a concept with multiple meanings, has been used and abused throughout Islamic history. Historically, jihad has been used by resistance, liberation, and terrorist movements alike to legitimate their cause and motivate their followers. Therefore, we should make clear the meaning of jihad according to Islam and its relationship with the idea of self-defence.
Semantically, the Arabic term jihad has no relation to holy war or even to war in general. The word is derived from the root j.h.d, the meaning of which is to strive, exert oneself, or take extraordinary pains. Jihad is a verbal noun of the third Arabic form of the root jahada, which is defined classically as "exerting one's utmost power, efforts, endeavors, or ability in contending with an object of disapprobation." According to Firestone, such an object is often categorized in the literature as deriving from one of three sources: a visible enemy, the devil, and aspects of one's own self. This definition should also be related to the idea of the lesser and greater jihad from the Prophet's saying. It is said that when Muhammad returned from battle he told his followers, "we return from the lesser jihad to the greater jihad". The Prophet's saying implies that the greater jihad is the more difficult and more important struggle, against one's ego, selfishness, greed, and evil. Based on this definition, it is clear that there are many kinds of jihad, and almost all have nothing to do with warfare except the lesser jihad. However, the lesser jihad has gained more popularity in the Muslim vocabulary throughout history than the greater jihad. There are long debates among Muslim jurists and scholars over whether jihad applies only to self-defence or includes both defensive and offensive war. Both groups find their justifications in the Qur'an. However, if one reads the Qur'an within the social and political contexts in which its verses were revealed, it is apparent that the group that defines the lesser jihad as self-defence, not aggression, has the stronger position.

As noted by Esposito, the earliest Qur'anic verses dealing with the right to engage in a "defensive" jihad were revealed shortly after the hijra (emigration) of Muhammad and his followers to Medina in flight from their persecution in Mecca. At a time when they were forced to fight for their lives, Muhammad is told: "Leave is given to those who fight because they were wronged, surely God is able to help them, who were expelled from their homes wrongfully for saying, 'Our Lord is God'". The defensive nature of jihad is clearly stressed in the verse: "And fight in the way of God with those who fight you, but aggress not: God loves not the aggressors." These verses suggest that it is lawful for Muslims to fight provided that they are in a defensive position, not vice versa. In this respect, war can be fought to avoid persecution and oppression or to preserve religious values and protect the weak from oppression.

Apart from the defensive verses, there are verses in the Qur'an that seem to justify offensive jihad. One of them is: "When the sacred months have passed, slay the idolaters wherever you find them, and take them, and confine them, and lie in wait for them at every place of ambush." This verse is one of a number of Qur'anic verses cited by critics to demonstrate the inherently violent nature of Islam and its scripture. Moreover, these same verses have been selectively used (or abused) by Muslim radicals, including al-Qaeda and Jemaah Islamiyah, to develop a theology of hate and intolerance and to legitimate unconditional warfare against unbelievers.
This verse is actually not appropriate to cite as justification for aggression or offensive jihad. Seen in its political and historical context, the verse was revealed as a response to the unbelievers of Mecca, who betrayed the peace treaty of Hudaybiah. In addition, it was the Meccan unbelievers who initiated attacks against the Muslims. Therefore, this verse is not relevant as a justification for war and aggression against non-Muslims.

In the view of Islamic law, all Shi'ite and most Sunni jurists today believe that jihad is legitimate only as a defence mechanism and cannot be initiated as aggression. They see jihad as a religious duty for individuals and the Islamic community to defend life, land, or faith and to prevent invasion or guarantee the freedom to spread the faith. In Sunni Islam, some jurists historically ordered jihad in an offensive mode based on an argument one might call "the best defense is an offense," but this changed in the 1950s, when they came to an agreement that the only permissible jihad is a defensive one. The implication is that Muslim radicals have no religious basis for initiating attacks and aggression against those they regard as the enemies of Islam. Although Islam justifies war as a self-defence mechanism, it still emphasizes the values of justice, peace, and equality toward the enemy. As highlighted by Esposito, the Qur'an provides detailed guidelines and regulations regarding the conduct of war: who is to fight and who is exempted (48:17, 9:91), when hostilities must cease (2:192), and how prisoners should be treated (47:4). Verses such as Qur'an 2:194 emphasize proportionality in warfare: "whoever transgresses against you, respond in kind." Other verses provide a strong mandate for making peace: "If your enemy inclines toward peace then you too should seek peace and put your trust in God" (8:61). Islam also forbids Muslims to kill non-combatants, as well as women and children and monks and rabbis, who were given the promise of immunity unless they had taken part in the fighting. Some scholars interpret these guidelines and regulations as a concept of just war in the Islamic tradition. All of this means that the actions of radical Muslims who carry out violence and terror against non-Muslims, killing civilians and children, contradict the spirit of Islam.

V. Conclusion
This article has counterbalanced the general opinion in the West that Islam promotes violence and terrorism. Exploring the basic sources of Islam, it has argued that Islam stresses peace and tolerance not only among Muslims but also toward non-Muslims, especially Christians and Jews. It displays and discusses numerous verses in the holy Qur'an and the prophetic tradition that encourage Muslims to uphold justice, pluralism, tolerance, and peace. The values of peace and tolerance are emphasized because Islam is deeply concerned with the sanctity of human life, justice, and humanity. In addition, the history of Islam has shown that in the classical era Muslims could coexist with non-Muslims in harmony and peace. The Prophet Muhammad provided an exemplary model for Muslims of how to interact with non-Muslims and build peace, tolerance, and justice.
This article has explored the idea of jihad, which is generally associated with terrorism and violence in the West. It found that the idea of jihad has multiple meanings in Islam and is not confined to holy war. It is the lesser jihad that is considered holy war. However, holy war in Islam does not refer to military aggression as understood within the Christian tradition. Examining the jihad verses and the Prophet's experience, the article concludes that the lesser jihad refers to the idea of self-defence. Indeed, Islam endorses Muslims' defence of their lives, land, and faith from aggression and oppression. In this respect, war may be carried out if Muslims are attacked or are in a defensive position. Further, Islam instructs Muslims to follow guidelines and regulations in conducting war against the enemy. The rationale lies in the primary argument that Islam respects justice, peace, and equality even in a war situation.
2017-09-07T05:15:06.593Z
2016-12-31T00:00:00.000
{ "year": 2016, "sha1": "17e31390317d18e760b0008b6e037fb8ea1d87d4", "oa_license": "CCBYNC", "oa_url": "http://journal.uin-alauddin.ac.id/index.php/jicsa/article/download/2352/2279", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "17e31390317d18e760b0008b6e037fb8ea1d87d4", "s2fieldsofstudy": [ "Political Science" ], "extfieldsofstudy": [ "Philosophy" ] }
8142539
pes2o/s2orc
v3-fos-license
Toddler physical activity study: laboratory and community studies to evaluate accelerometer validity and correlates
Background
Toddlerhood is an important age for physical activity (PA) promotion to prevent obesity and support a physically active lifestyle throughout childhood. Accurate assessment of PA is needed to determine trends/correlates of PA, time spent in sedentary, light, or moderate-vigorous PA (MVPA), and the effectiveness of PA promotion programs. Because few objective measures have been validated and evaluated for feasibility in community studies, it is unclear which subgroups of toddlers are at the highest risk for inactivity. Using Actical ankle accelerometry, the objectives of this study are to develop valid thresholds, examine feasibility, and examine demographic/anthropometric correlates of MVPA among toddlers from low-income families.
Methods
Two studies were conducted with toddlers (12-36 months). Laboratory Study (n = 24): two Actical accelerometers were placed on the ankle. PA was observed using the Child Activity Rating Scale (CARS, prescribed activities). Analyses included device-equivalence reliability (correlation between the activity counts of the two Acticals), criterion-related validity (correlation between activity counts and CARS ratings), and sensitivity/specificity for the thresholds. Community Study (n = 277 low-income mother-toddler dyads recruited): an Actical was worn on the ankle for at least 7 days (goal: >5 complete 24-h days). Height/weight was measured. Mothers reported demographics. Analyses included frequencies (feasibility) and stepwise multiple linear regression (sMLR).
Results
Laboratory Study: the Acticals demonstrated reliability (r = 0.980) and validity (r = 0.75). The thresholds demonstrated sensitivity (86 %) and specificity (88 %). Community Study: 86 % wore the accelerometer and 69 % had valid data (mean = 5.2 days). The primary reasons for missing/invalid data were refusal (14 %) and wear time ≤2 days (11 %). The MVPA threshold (>2200 cpm) yielded 54 min/day. In sMLR, MVPA was associated with age (older > younger, β = 32.8, p < 0.001), gender (boys > girls, β = −11.21, p = 0.032), maternal MVPA (β = 0.44, p = 0.002), and recruitment location (suburban > urban, β = 19.6, p < 0.001) or race (non-Black > Black, β = 18.5, p = 0.001). There was no association with toddler weight status.
Conclusions
Ankle accelerometry is a valid, reliable, and feasible method of assessing PA in community studies of toddlers from low-income families. Sub-populations of toddlers may be at increased risk for inactivity, including toddlers who are younger, female, or Black, those with less active mothers, and those living in an urban location.
Background
Pediatric obesity is a serious public health problem, beginning early in life. Being overweight/obese in toddlerhood is associated with increased risk for overweight/obesity in kindergarten [1], and excess weight gain before age 5 is maintained through adolescence [2,3], increasing the risk for obesity-related co-morbidities later in life and giving rise to the recommendation that obesity prevention programs begin early in life [4,5]. Effective pediatric obesity prevention strategies should include a focus on physical activity (PA) behaviors [6,7]. In addition to obesity prevention, PA is important in toddlerhood for the development of motor skills, bones/muscles, social skills, and cognitive growth/development [8].
The limited PA research on young children has focused on preschoolers (age 3-5 years), with relatively few studies focusing on toddlers (ages 12-36 months), despite the fact that active play contributes to toddler cognitive, physical, social, and emotional wellbeing [9]. International PA guidelines have been developed for toddlers that recommend 180 min/day of total PA (light, moderate, and vigorous) [10][11][12], progressing toward 60 min of moderate-vigorous PA (MVPA) by age 5 [11]. These guidelines are based on limited empirical data, and studies are needed that objectively evaluate PA among toddlers in community settings in order to provide either support for existing guidelines or evidence of the need for updated guidelines. Accurate assessment of PA is necessary to examine adherence to guidelines, PA trends and correlates, and the effectiveness of PA promotion programs. Accelerometry provides a non-invasive method for objectively assessing PA and avoids proxy-report biases [13], which is important when working with young children, for whom proxy-report would be necessary. To date, accelerometer validation studies have focused primarily on preschool-age children [14][15][16][17][18][19], with limited methodological studies on toddlers [20]. Four studies used Actigraph accelerometry in controlled settings to determine validity and establish MVPA thresholds for toddlers using the hip or wrist placement [21][22][23][24]. Although hip placement provides more valid and reliable estimates of activity energy expenditure compared with ankle or wrist placement [25][26][27], hip placement has raised concerns about data volume and integrity in community studies [28][29][30][31][32]. Ankle placement has not been extensively studied in laboratory or home/community settings, yet it may overcome limitations of hip and wrist placements through continuous, 24-h data collection (without the periodic removal common in hip placement, which often involves removal during sleep, bathing, and swimming [33]) and the ability to capture locomotion (difficult with wrist placement). Due to the limited number of community studies of PA in toddlerhood, little is known about the factors associated with PA in this population. Age is likely to be a primary correlate of toddler PA, given that during the transition from infancy to toddlerhood, children develop increasingly sophisticated gross motor skills, including walking and running. Later in toddlerhood and in the preschool years, children's attentional and cognitive skills increase, leading to a decline in gross motor play and an increase in exploratory and problem-solving play (e.g., puzzles, imaginary play) [34]. Among preschoolers, accelerometry studies have identified demographic correlates of PA including male gender [35][36][37][38][39] and Black race (versus White) [36]. Additionally, healthy-weight preschoolers (versus overweight/obese) have been shown to engage in more PA [36,37,39,40]; PA associations between parents and their preschool-aged children are inconsistent [41,42]; and the parent-toddler PA association is unknown. Little is known about neighborhood influences on young children's PA; a recent review found that school-aged suburban children were more active than urban children [43]. Studies are needed that objectively examine toddler PA in home/community environments to identify factors associated with PA. The Toddler PA Study (TPAS) combines laboratory and community components to examine three objectives.
The first is to determine the device-equivalence reliability and criterion-related validity of, and develop threshold counts [sedentary, light, and moderate-vigorous PA (MVPA)] for, Actical ankle accelerometry among toddlers. The second is to examine the feasibility of ankle accelerometry in a community study of toddlers from low-income families. The final objective is to test the hypothesis that demographic and anthropometric correlates documented previously among preschoolers are related to greater time in objectively measured MVPA among toddlers from low-income families: older age (versus younger), male (versus female), healthy weight (versus obese), greater maternal time spent in MVPA (versus less), suburban location (versus urban), and Black race (versus non-Black).
Laboratory study
Recruitment
Toddlers were recruited through mothers' groups, flyers at daycare centers, and word of mouth to participate in a 1-day PA study. Eligibility criteria included age 12-36 months, independent walking, no health problems interfering with PA, and a parent able to read/understand English. Procedures were approved by the University Institutional Review Board (IRB). Parents provided written informed consent on behalf of their toddler and completed a demographic questionnaire.
Accelerometry
Two Actical accelerometers (Philips Respironics, Minimitter, Bend, OR) were strung side-by-side on one hospital bracelet and fastened superior to the lateral malleolus of the non-dominant ankle (the left ankle was chosen if the parent indicated that the toddler had not demonstrated dominance), per the manufacturer's instruction. One Actical was randomly assigned as primary. Accelerometer counts were collected in 30-s intervals.
Child Activity Rating Scale (CARS)
The CARS [44] was used to assess real-time toddler activity. Although the CARS was originally validated among 3-4-year-old children, it has been used successfully with toddlers [21]. Activity was observed in 30-s intervals. After attending a 1.5-h training, research assistants independently coded seven videos, 3-6.5 min in length, of toddlers engaging in activities (without pausing). Percent agreement and Kappa values were calculated against a gold standard. Reliable raters (Kappa ≥0.75, Spearman correlation ≥0.8, % agreement >90 %) were selected.
Protocol
Protocol activities were chosen through a two-stage process. First, a survey of 13 toddler-typical activities was administered to parents of toddlers and health professionals (n = 10). Respondents indicated the intensity of each activity using the CARS intensity ratings. Nine survey activities, ranging in intensity, were selected for pilot-testing. The final protocol included six toddler-typical activities representing a range of intensities, chosen based on feasibility in the lab setting and the ability to be maintained for approximately 6 min (the target observation time was 5 min; we extended this by one minute in the protocol to allow for truncating when aligning data): (1) watching TV (sitting), (2) listening to a book (sitting), (3) table games, puzzles, and play-doh (standing), (4) imaginary play, kitchen set and train table (walking), (5) ball games (running), and (6) chase/tag (running). One research assistant interacted with each toddler to maintain engagement and activity at a consistent CARS level for 6 min. A second research assistant rated the toddler's activity using the CARS scale.
Data reduction and analysis
Actical software (version 2.12) was used to download accelerometer data.
CARS data were aligned with the accelerometer data (activity counts per 30-s interval from the two Actical accelerometers) by participant and activity, removing times between activities when CARS data were not gathered. Data were reduced to ensure that only the time periods when toddlers were fully engaged in activities were retained, by truncating the first and last 30 s of each activity (retaining ~5 min/activity). Statistical analyses were completed using SPSS version 20.0. Device-equivalence reliability was determined using the intraclass correlation (ICC) between the activity counts from the two Acticals worn concurrently. Criterion-related validity was determined using Spearman correlations between the activity counts from the primary Actical and the CARS values for the 30-s intervals. Using the clean dataset (aligned activity counts/CARS ratings for the six activities), each 30-s interval was assigned an activity intensity based on its CARS rating [18]: sedentary = CARS 1.0; light = CARS 1.1-3.0; MVPA = CARS 3.1-5.0. The distribution of the means and standard deviations of the activity counts by intensity was examined graphically to identify clusters for each activity level [45]. Three researchers with accelerometry experience agreed upon candidate thresholds for sedentary/light and light/MVPA to be tested. The thresholds were applied to the dataset, and sensitivity and specificity were calculated for each intensity level.
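A minimal sketch of this threshold evaluation, assuming per-interval CARS ratings paired with 30-s activity counts, is shown below; the cutpoints match those reported in the Results, but the function names and toy data are illustrative, not the study's actual code.

import numpy as np

def cars_label(cars):
    # Intensity class from a CARS rating, per the study's coding.
    if cars <= 1.0:
        return "sedentary"
    if cars <= 3.0:
        return "light"
    return "mvpa"

def count_label(counts, sed_cut=20, mvpa_cut=1100):
    # Intensity class from a 30-s activity count, given candidate cutpoints.
    if counts <= sed_cut:
        return "sedentary"
    if counts < mvpa_cut:
        return "light"
    return "mvpa"

def sens_spec(true_labels, pred_labels, target):
    # One-vs-rest sensitivity and specificity for a single intensity class.
    t = np.array([x == target for x in true_labels])
    p = np.array([x == target for x in pred_labels])
    sens = (t & p).sum() / t.sum()        # TP / (TP + FN)
    spec = (~t & ~p).sum() / (~t).sum()   # TN / (TN + FP)
    return sens, spec

# Toy data: CARS ratings and activity counts for six 30-s intervals.
true = [cars_label(c) for c in (1.0, 1.0, 2.5, 2.0, 3.5, 4.0)]
pred = [count_label(c) for c in (5, 30, 400, 900, 1500, 2100)]
for cls in ("sedentary", "light", "mvpa"):
    print(cls, sens_spec(true, pred, cls))

Candidate cutpoints can then be varied and the pair with the best balance of sensitivity and specificity retained, which mirrors the procedure described above.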
Community study
Recruitment
Biological mothers and their toddler-age children (age 12-32 months, born at term, birth weight >2500 g, walking independently) were eligible. Subjects were recruited from two sites: a Special Supplemental Nutrition Program for Women, Infants, and Children (WIC) clinic in a suburban mid-Atlantic county and an inner-city pediatric clinic serving low-income families, both of which serve families living in the surrounding communities. Mothers provided written informed consent and completed self-administered, computer-based questionnaires using voice-generating software. This study was approved by the University and State Department of Health Institutional Review Boards. Evaluations were conducted during two visits separated by one week. Mothers reported their toddlers' birth date, gender, and race/ethnicity, and their own birth date, marital status (categorical responses; dichotomized as married versus not married), education (categorical responses for highest degree; dichotomized as some high school versus high school diploma/equivalent or higher), and number of household members/annual household income (used to calculate a poverty ratio based on household income and number of dependents using U.S. Census Bureau 2009 thresholds [46]).
Anthropometrics
Mothers undressed their toddler to a clean diaper/underpants. Weight (kg) was measured in triplicate using a TANITA 1584 Baby Scale (TANITA, Tokyo, Japan). Recumbent length (cm) was measured in triplicate using a Shorr Measuring Board (Shorr Productions, Olney, Maryland). Gender-specific weight-for-length percentiles were calculated using CDC growth charts [47]. Obesity was defined as ≥95th percentile weight-for-length. Maternal height (cm) was measured in triplicate using a Shorr Measuring Board, and body weight (kg) was measured in duplicate (TANITA 300GS, Tanita Corp., Tokyo, Japan). If two measures differed by more than a centimeter (length or height) or differed at all (weight), additional measures were taken, and the final measures were averaged. BMI was calculated as weight (kg)/height (m)², with overweight and obesity defined as BMI 25-29.9 kg/m² and BMI ≥30 kg/m², respectively.
Accelerometry
Both the toddlers and their mothers wore an Actical accelerometer, strung on a hospital bracelet and fastened superior to the lateral malleolus of the non-dominant or left ankle, per the manufacturer's instruction (next to the skin, under socks; once latched, the band cannot be removed unless cut off). The accelerometers were to be worn for at least seven consecutive days without removal as toddlers and mothers engaged in their routine activities. The Actical is small, light, and waterproof, and can be worn during sleep, play, bathing, or swimming. For both mothers and toddlers, activity counts were collected in 1-min intervals to allow direct comparison between mothers and toddlers. During the second visit, the Actical band was removed.
Data reduction and analysis
Accelerometer data for both mothers and toddlers were reduced using Actical software (version 2.12). Days with <80 cpm were treated as incomplete and removed. Only days with complete data (i.e., a full 24-h period) were included; therefore the first and last days of wear time were truncated. If the accelerometer was removed on the second day (≤2 days of wear time), the data were considered invalid and excluded. Data were considered valid if at least one complete 24-h day (12:00 am-11:59 pm) was recorded. For participants with >7 days, data were truncated after the 7th day. Thresholds were applied to the clean dataset to determine time spent in sedentary (including sleep), light, and MVPA for mothers [45] and toddlers (newly developed from the Laboratory Study). The feasibility of toddler ankle accelerometry was determined by the proportion of toddlers with valid accelerometer data. Reasons for incomplete data were recorded. T-test and chi-square analyses were used to compare demographic variables by the presence/absence of valid accelerometry data. The thresholds from the Laboratory Study (based on 30-s intervals) were proportionately converted into 1-min intervals (multiplied by 2) [45]. Time in each PA level was calculated for toddlers using the thresholds from the Laboratory Study and for mothers using a previously validated threshold [45]. Demographic and anthropometric correlates of PA were examined using Spearman correlations, independent t-tests, and chi-square analyses. Variables associated with MVPA were tested for collinearity with the Variance Inflation Factor (VIF). Skewness of minutes in MVPA was examined. Step-wise multiple linear regression was used to identify variables associated with toddler MVPA in a single model.
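The day-level validity rules above can be made concrete with a short sketch. The code below assumes a pandas DataFrame of 1-min epochs with a timestamp and a counts column, and interprets the "<80 cpm" rule as a daily mean; the column and function names are illustrative, not the study's actual pipeline.

import pandas as pd

def valid_days(epochs: pd.DataFrame, max_days: int = 7) -> pd.DataFrame:
    # Keep complete 24-h days (12:00 am-11:59 pm); the partial first and
    # last wear days fail the completeness check and drop out automatically.
    epochs = epochs.copy()
    epochs["date"] = epochs["timestamp"].dt.date
    stats = epochs.groupby("date")["counts"].agg(["count", "mean"])
    # A complete day has all 1440 one-minute epochs; days averaging
    # <80 counts/min are treated as incomplete (an interpretive assumption).
    good = stats[(stats["count"] == 1440) & (stats["mean"] >= 80)]
    keep = sorted(good.index)[:max_days]  # truncate after the 7th valid day
    return epochs[epochs["date"].isin(keep)]

A participant's record would then be treated as valid when at least one complete day remains after this filtering.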
Results
Laboratory study
Twenty-four toddlers participated (mean age: 24.5 months, range 14.7-35.5); 45.8 % were <24 months and 58.3 % were male. All toddlers completed the six activities, with a mean duration of valid data ranging from 5 min 15 s to 6 min 21 s. The selected activities increased in activity counts and CARS values as the motion changed, yielding a wide range of values (Table 1). The device-equivalence reliability of the Actical (ICC between two accelerometers worn concurrently on the same ankle) was 0.980. Criterion-related validity (correlation between the primary Actical accelerometer and CARS ratings) was 0.749. Activity counts (mean ± standard deviation [SD]) were plotted by CARS intensity [18], merging data from all six activities (Fig. 1). As shown in Fig. 1, for the CARS threshold of 1.0 (sedentary), the mean activity count/30 s was 23. The threshold distinguishing sedentary from light activity was set at the approximate mean sedentary activity count (20 counts/30 s). The 20 counts/30 s threshold for sedentary activity had a sensitivity of 81.8 % and a specificity of 77.5 %. The threshold distinguishing light activity from MVPA was set at 1100 counts/30 s, based on the mean + SD for light activity (1055 counts/30 s) and the mean − SD for MVPA (1171 counts/30 s, Fig. 1). Considering the threshold of >20 counts/30 s and <1100 counts/30 s for light activity, sensitivity was 61.7 % and specificity was 84.7 %.
Community study
No differences between toddlers with/without valid data were observed by age, gender, race, or body size (weight-for-length), nor by maternal age, education, body size (BMI), number of household children, or poverty ratio. Toddlers with valid data were more likely to be suburban (76.4 %) versus urban (64.1 %, χ² = 4.7, p = 0.031) and to have mothers with valid data (92.1 % versus 7.9 %; χ² = 58.3, p < 0.001). Demographic characteristics of this sample (n = 191, Table 3) include toddler age from 12.0 to 31.9 months, with most (73.3 %) <24 months; over half (53.9 %) were male, 67.5 % were Black (survey response: "African American or Black"), and 11.5 % were obese. Mothers were, on average, 26.8 years old; the majority were unmarried (71.7 %), most had a high school diploma or equivalent (83.8 %), and the majority were overweight/obese (71.7 %). Mothers engaged in <30 min/day of MVPA (26.5 min/day). Most families lived at or below the poverty threshold (66.0 %), over half were recruited from the urban site (56.0 %), and the average family size was 2.5 children. The accelerometry thresholds applied in the Community Study were 0-40 (sedentary), 41-2200 (light), and ≥2201 (MVPA) counts/min, which, when applied, showed that toddlers engaged in an average of 803 min/day of sedentary activity (including sleep), 582 min/day of light activity, and 54 min/day of MVPA (24-h periods, Table 3). To calculate total PA, light activity and MVPA are combined, yielding an average of 637 min/day. On average, toddlers had 5 days of valid data (ranging from 1 to 7). Few toddlers (5.8 %) had only one day of valid data. No associations between the number of days of valid data and time in MVPA, examined continuously or categorically, were observed (Table 3). Correlates of time spent in MVPA (Table 3) show that older toddlers were more active than younger (p < 0.001; toddlers aged 24-32 months engaged in nearly twice the MVPA of toddlers aged 12-24 months), male toddlers were more active than female (~13 additional minutes of MVPA/day, p = 0.027), and healthy-weight toddlers were more active than obese (~15 additional minutes of MVPA/day, p = 0.027). Non-Black toddlers were marginally more active than Black (~11 additional minutes of MVPA/day, p = 0.062). Maternal time in MVPA was positively associated with toddler MVPA (r = 0.18, p = 0.017), and suburban toddlers were more active than urban (~12 additional minutes of MVPA/day, p = 0.045). In the step-wise multiple linear regression models, minutes/day of MVPA was marginally skewed (skewness = 1.040) and was not transformed, for ease of interpretation. Variables significantly or marginally associated with minutes of MVPA in the bivariate analyses included toddler age, gender, obese status, race, maternal PA, and recruitment location.
The majority (90.7 %) of the urban sample was Black, compared with 38.1 % of the suburban sample (χ² = 59.3, p < 0.001). The VIF indicated marginal collinearity (1.5, tolerance = 0.68); however, given the difference in racial composition by location, two models were run, one including race and one including recruitment location. Both stepwise regression models excluded obese status; the other variables were retained (Table 4). In both, age was entered first, followed by recruitment location or race, then maternal PA, then gender. Each variable was significantly associated with toddler MVPA, in the same direction as the bivariate associations. The final models, with all included variables, had R² values of 0.28-0.29, indicating that nearly 30 % of the variance in MVPA was explained.
Discussion
This paper describes two studies that together validate ankle accelerometry among toddlers and describe PA among toddlers from low-income families. As obesity prevention efforts focus on younger children, for whom little is known about PA, validated objective PA measures and descriptive/correlational studies are needed. The Laboratory Study showed that the ankle accelerometer placement is valid for measuring variations in PA compared with direct observation and can reliably record movement. Second, sensitive and specific thresholds for sedentary, light, and MVPA were created, relying on the CARS as the criterion method [18,19,21]. Varying CARS MVPA thresholds have been used in accelerometer validation studies, including ≥3.0 [21], ≥3.1 [18], and ≥4.0 [19,24]. We used the ≥3.1 threshold for MVPA because a CARS score >3.0 means that a portion of the interval included activity greater than light. By comparing activity counts with simultaneously generated CARS observational activity-intensity data, we demonstrated the criterion-related validity of toddler Actical ankle accelerometry while also generating intensity thresholds. This study adds to the new but growing literature on methods for objectively assessing PA among toddler-aged children [21][22][23][24] by providing support for an alternative accelerometer placement (ankle) and monitor (Actical). The Community Study yielded three findings. First, ankle accelerometry is feasible for measuring PA among toddlers from low-income families, as demonstrated by the high acceptance rate (86.3 %) and proportion of valid data (69.0 %). Most missing data were due to participant noncompliance, including device removal within 48 h. Several strategies were used to reduce refusals and maintain compliance: (1) an incentive for accelerometer return, (2) an accelerometer band decorated with stickers, and (3) mothers and toddlers wearing ankle accelerometers concurrently. Future studies should explore mechanisms for reducing refusals and enhancing compliance in this young age group. Much of the toddler PA research to date has involved only laboratory accelerometer validation studies [21,23,24], with little research conducted in community settings. A study by Van Cauwenberghe et al. examined the feasibility of hip accelerometry in a small sample (n = 47) of toddlers, 12-30 months of age, over ~6 days and found the method to be feasible [22]. In that study, a minimum of 7.5 h/day of wear time was considered valid (average wear time was 9.4 h/day) [22]. The current study adds to the literature by providing evidence for a valid and feasible method of obtaining 24-h accelerometry data among toddlers in community settings. Second, toddlers engaged in an average of 54.1 min of MVPA/day.
Discussion
This paper describes two studies that together validate ankle accelerometry among toddlers and describe low-income toddler PA. As obesity prevention efforts focus on younger children, for whom little is known about PA, validated objective PA measures and descriptive/correlational studies are needed. The Laboratory Study showed that the ankle accelerometry placement is valid for measuring variations in PA compared to direct observation and can reliably record movement. Second, sensitive and specific thresholds for sedentary, light, and MVPA were created, relying on CARS as a criterion method [18, 19, 21]. Varying CARS MVPA thresholds have been used in accelerometer validation studies, including ≥3.0 [21], ≥3.1 [18], and ≥4.0 [19, 24]. We used the ≥3.1 threshold for MVPA because a CARS score >3.0 means that a portion of the interval included activity greater than light. By comparing activity counts with simultaneously generated CARS observational intensity data, we demonstrated criterion-related validity of toddler Actical ankle accelerometry while also generating intensity thresholds. This study adds to the new but growing literature on methods for objectively assessing PA among toddler-aged children [21-24] by providing support for an alternative accelerometer placement (ankle) and monitor (Actical). The Community Study yielded three findings. First, ankle accelerometry is feasible for measuring PA among toddlers from low-income families, demonstrated by the high acceptance rate (86.3 %) and proportion of valid data (69.0 %). Most missing data were due to participant noncompliance, including device removal within 48 h. Several strategies were used to reduce refusals and maintain compliance: (1) an incentive for accelerometer return, (2) accelerometer bands decorated with stickers, and (3) mothers and toddlers wearing ankle accelerometers concurrently. Future studies should explore mechanisms for reducing refusals and enhancing compliance in this young age group. Much of the toddler PA research to date has involved only laboratory accelerometer validation studies [21, 23, 24], with little research conducted in community settings. A study by Van Cauwenberghe et al. examined the feasibility of hip accelerometry among a small sample (n = 47) of toddlers, 12-30 months of age, over ~6 days and found this method to be feasible [22]. For that study, a minimum of 7.5 h/day of wear-time was considered valid (average wear-time was 9.4 h/day) [22]. The current study adds to the literature by providing evidence for a valid and feasible method of obtaining 24-h accelerometry data among toddlers in community settings. Second, toddlers were engaging in an average of 54.1 min of MVPA/day. Recent international PA guidelines set recommendations for total PA for toddlers (light activity and MVPA together) at 180 min/day, with a progression toward 60 min of MVPA/day by age 5 [10-12]. In this study, toddlers from low-income families, on average, were not reaching 60 min of MVPA/day; however, older toddlers, ages 24-32 months, exceeded 60 min/day. Future studies should examine the progression of time spent in MVPA from toddlerhood through the preschool years. Third, activity was associated with older age, male gender, non-Black race, and suburban location. Based on motor skill development, we hypothesized that older toddlers would be more active than younger, which was supported. Older toddlers engaged in nearly twice the amount of MVPA of younger toddlers. We expected age to be a primary correlate of toddler PA, given the developmental changes occurring during this period. The large difference observed in this study warrants further examination, perhaps over a broader age range (i.e., ages 0-5) or by following young toddlers longitudinally. We hypothesized that male toddlers would be more active than female, based on studies of preschool-aged children; this was supported, with male toddlers engaging in nearly 13 more minutes per day of MVPA. A gender difference in PA observed prior to the preschool years is an important finding. The factors that led to this gender difference, and the long-term impact of an early difference in PA by gender, should be examined further. Black toddlers were less active than White; however, most Black toddlers were recruited from the urban location and most White toddlers from the suburban location. Toddlers recruited from the suburban location were more active (~16 additional minutes/day in MVPA) than urban toddlers in adjusted models. There is little research examining the neighborhood environment and PA among young children prior to school age. Based on the findings from this study, additional research is needed to understand how the neighborhood environment relates to toddlers' PA, and the relation between race/residence and PA among very young children. Maternal PA was associated with toddler PA, as hypothesized. Longitudinal studies should examine the direction of this relation, specifically whether being an active mother leads to having active toddlers, whether having an active toddler increases maternal PA, or whether the relationship is transactional. Finally, this study demonstrated that, when adjusting for covariates, there was no difference in PA by toddler weight status. Recently updated PA guidelines from three countries state that toddlers should engage in 180 min of total PA each day (including light activity and MVPA) [10-12]. The TPAS found that, using objective measures in a sample of toddlers from low-income families in the U.S., total PA averaged 637 min/day (over 3.5 times the amount stated in the guidelines). Few community-based studies of toddlers using objectively measured PA have been conducted to inform guidelines. Further, these guidelines describe total PA in toddlerhood as including light activities such as "standing up, moving around" [12] and "moving around the home" [11]. Findings from the TPAS suggest that the current PA guidelines may significantly underestimate the amount of time toddlers already spend in total PA.
Also, a recent review of 14 studies including over 20,000 children aged 4-18 years found that more time in MVPA was associated with cardiometabolic risk factors regardless of sedentary time [48], which suggests that MVPA in toddlerhood should be considered in its own right in future studies. Additional objective, community-based studies of PA in toddlerhood are warranted to inform and generate evidence-based toddler PA guidelines. It has been recommended that the PA of very young children be measured in small intervals (i.e., 15-s epochs) to record short bursts of movement [20]. We chose to record data in 30-s intervals for the Laboratory Study and 1-min intervals in the Community Study for several reasons. First, the Community Study was designed such that mother and toddler PA data were collected together, using the same interval of time (1 min), for comparison. Second, in a prior study we were able to aggregate Actical thresholds up to longer interval lengths [45]; we therefore designed this Laboratory Study to capture Actical and CARS data in smaller increments (30 s), with the intention of aggregating up to 1 min for application in the Community Study. Finally, the standard protocol for the laboratory criterion measure (CARS) prescribes 1-min intervals [44], which we were able to reduce to 30 s while maintaining data quality. There are several strengths and limitations associated with these studies. Strengths include the ankle placement offering continuous, 24-h data collection with minimal participant burden (i.e., no need to remove the device during bathing/sleep) while also capturing movements that involve translocation; however, a limitation of this method is the inability to differentiate sedentary time from sleep, which may be particularly pertinent during toddlerhood. Using accelerometry to determine MVPA time and PA correlates is a strength, given that accelerometry avoids the biases of proxy-reporting (necessary for young children). The Community Study allowed for a unique examination of an array of PA correlates in a sample of toddlers from low-income families, including dual accelerometer comparisons of PA among mothers and their toddler-aged children. The choice to collect toddler data in 1-min epochs, for comparison with maternal accelerometer data, is also a limitation, as current best practice is to measure the PA of very young children in 15-s epochs. The current study, as conducted, is able to provide valuable information about toddler PA (including in relation to maternal PA), albeit with recorded intervals longer than recommended. The MVPA threshold applied to the mothers' accelerometer data was derived from a study of adolescent girls [45] and was not developed specifically for adult women; however, no valid threshold is available specifically for Actical ankle accelerometry among adult women (when applying the thresholds built into the manufacturer's software, time spent in MVPA is dramatically overestimated for the ankle placement [45]). The Community Study sample was exclusively low-income, a population at risk for obesity and inactivity; however, this homogeneity also reduces generalizability to other populations.
Conclusions
This study demonstrated that Actical ankle accelerometry is a valid and feasible method of assessing PA in community studies of toddlers from low-income families.
Sub-populations of toddlers may be at increased risk for inactivity, and these groups should be considered when determining whom to target and how to design PA promotion interventions for toddlers.
Tracking Five Millennia of Horse Management with Extensive Ancient Genome Time Series

In Brief
Genome-wide data from 278 ancient equids provide insights into how ancient equestrian civilizations managed, exchanged, and bred horses and indicate vast loss of genetic diversity as well as the existence of two extinct lineages of horses that failed to contribute to modern domestic animals.

INTRODUCTION
Horses provided humans with the first opportunity to spread genes, diseases, and culture well above their own speed (Allentoft et al., 2015; Haak et al., 2015; Rasmussen et al., 2014). Horses remained paramount to transportation even after the advent of steam locomotion and until the widespread use of motor vehicles (Kelekna, 2009). Horses also revolutionized warfare, pulling chariots at full speed in the Bronze Age, providing the foundation for mounted battle in the early Iron Age, and facilitating the spread of cavalry during Antiquity (Drews, 2004). Today, horses remain essential to the economy of developing countries and to the leisure and racing industries of developed countries (Faostat, 2009).
Genome Dataset
To clarify the origins of domestic horses and reveal their subsequent transformation by past equestrian civilizations, we generated DNA data from 278 equine subfossils with ages mostly spanning the last six millennia (n = 265, 95%) (Figures 1A and 1B; Table S1; STAR Methods). Endogenous DNA content was compatible with economical sequencing of 87 new horse genomes to an average depth-of-coverage of 1.0- to 9.3-fold (median = 3.3-fold; Table S2). This more than doubles the number of ancient horse genomes hitherto characterized. With a total of 129 ancient genomes, 30 modern genomes, and new genome-scale data from 132 ancient individuals (0.01- to 0.9-fold, median = 0.08-fold), our dataset represents the largest genome-scale time series published for a non-human organism (Tables S2, S3, and S4; STAR Methods). Most specimens were genetically confirmed as horses (175 males, 70 females; Table S1; STAR Methods). Six belonged to other equine species, including three hemiones from Chalcolithic, Bronze, and Iron Age sites of Iran and three Roman and Byzantine donkeys (Figure 1A). A total of 27 specimens were genetically assigned to mules (the offspring of a donkey jack and a horse mare), which are difficult to identify in fragmentary fossil records using morphology alone (Schubert et al., 2017). The oldest mules identified are from the La Tène Iron Age site of Saint-Just (France), but they were also found in Roman and medieval Europe as well as Byzantine Turkey.

Changes in Horse Management through Time and Their Impact on Diversity
Previous work comparing the sequence variation present in modern horse genomes and the genomes of 11 ancient horses belonging to the Scythian Pazyryk culture suggested important changes in the management of available genetic resources within the last ~2,300 years (Librado et al., 2017). Our thorough temporal genome sampling allowed us to delineate more precisely when these changes happened. We ensured accurate diversity estimates in ancient horses by only considering genomes sequenced at minimum 1-fold depth-of-coverage and implementing the three following approaches. First, enzymatic treatment against the most prevalent post-mortem DNA damage helped avoid inflating past diversity estimates (STAR Methods). Second, only sites least affected by damage, such as non-CpG dinucleotides and transversion sites, were considered. Third, we checked that diversity measurements were robust both to residual error rates and sequencing depth (Figure S1; STAR Methods). All modern breeds investigated here showed a ~16.4% median drop in individual heterozygosity levels relative to horses that lived prior to ~200 years ago (Wilcoxon test, p value = 2.0 × 10⁻¹³) (Figures 2C and S2; STAR Methods). This contrasts with steady heterozygosity levels during the previous four millennia, reflecting that earlier equestrian civilizations managed and maintained higher levels of genetic diversity. A similar trend was found in autosomal π diversity, which severely declined during the most recent time interval with sufficient data to enable calculations (i.e., the last ~400 years). Autosomal π profiles also supported a demographic expansion from La Tène to Roman Europe, possibly pertaining to the growing demand for horses during Roman times (Figure 2A; STAR Methods).
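As a concrete illustration of the damage-robust filtering just described, the following minimal Python sketch (illustrative only; the genotype encoding and names are invented) estimates individual heterozygosity from transversion sites alone:

TRANSITIONS = {("A", "G"), ("G", "A"), ("C", "T"), ("T", "C")}

def is_transversion(ref, alt):
    # Transversions are unaffected by the C->T/G->A deamination typical
    # of post-mortem DNA damage.
    return ref != alt and (ref, alt) not in TRANSITIONS

def heterozygosity(calls):
    # calls: iterable of (ref, alt, genotype), genotype in {"0/0","0/1","1/1"}.
    # Returns the proportion of heterozygous genotypes among transversion
    # sites; the study's estimates additionally normalize per callable site.
    het = total = 0
    for ref, alt, gt in calls:
        if is_transversion(ref, alt):
            total += 1
            het += gt == "0/1"
    return het / total if total else float("nan")

# Hypothetical example: one heterozygous call out of two transversion sites.
print(heterozygosity([("A", "T", "0/1"), ("C", "T", "0/1"), ("G", "C", "0/0")]))  # 0.5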
The recent declines in autosomal π diversity and heterozygosity suggest a severe reduction in the horse breeding stock within the last few centuries, parallel to the significant changes in agricultural practices underpinning modern studs. This reduction in effective size is expected to have increased mutational loads genome-wide by reducing the efficacy of purifying selection (Cruz et al., 2008; Schubert et al., 2014a). To test this, we calculated conservative estimates of the mutational loads at homozygous sites within protein-coding genes, accounting for possible inbreeding differences (Librado et al., 2017) (calculations at heterozygous sites proved impracticable, in agreement with Pedersen et al. [2017]) (Figures S3A and S3B). As expected, mutational load estimates correlated with reduced selection, as measured from differential diversity patterns at non-synonymous and synonymous sites, and from sites classified as deleterious and non-deleterious on the basis of their evolutionary conservation across multiple vertebrate species (STAR Methods). We found mutational loads increasing during the last ~200 years, parallel to changes in breed reproductive management (~4.6% median load increment; Wilcoxon test, p value = 8.3 × 10⁻¹²) (Figure 2D). [Figure 2D: conservative individual mutational loads from homozygous sites; violin plots contrast the heterozygosity levels and genetic loads present in ancient (pink) and modern (blue) genomes belonging to the DOM2 lineage. See also Figure S3 and Table S5.] Our data therefore support the contention that reproductive strategies implemented in the last few centuries reduced the chance to eliminate deleterious variants from the domestic horse stock.

The Choice of Stallions for Reproduction and Its Impact in the Last 2,000 Years
The Y chromosome diversity is extremely limited in modern horses (Lindgren et al., 2004) but was greater in the past (Librado et al., 2017; Lippold et al., 2011), indicating that specific stallion lines have become increasingly prominent. Previous work suggested that this process started ~900 BCE-400 CE, however on the basis of only four polymorphic SNPs (Wutke et al., 2018). We thus leveraged our 105 stallions and the ~1,500 orthologous polymorphic sites recovered at monocopy regions to gain further temporal resolution on this reduction in Y chromosome diversity (STAR Methods). We considered all past time intervals of 250 years represented by a minimum of 3 males, in Asia and in Europe separately, to limit the impact of geographic structure. This revealed that Y chromosome nucleotide diversity (π) decreased steadily on both continents during the last ~2,000 years but dropped to present-day levels only after 850-1,350 CE (Figures 2B and S2E; STAR Methods). This is consistent with the dominance of a ~1,000- to 700-year-old oriental haplogroup in most modern studs (Felkel et al., 2018; Wallner et al., 2017). Our data also indicate that the growing influence of specific stallion lines post-Renaissance (Wallner et al., 2017) was responsible for as much as a 3.8- to 10.0-fold drop in Y chromosome diversity. We then calculated Y chromosome π estimates within past cultures represented by a minimum of three males, to clarify the historical contexts that most impacted Y chromosome diversity.
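For reference, the diversity statistic used here, and the neutral expectation behind the Y-to-autosomal comparisons reported below, can be written in standard form (a textbook formulation, assuming equal mutation rates on the Y chromosome and the autosomes). For $n$ sequences of length $L$ with $d_{ij}$ pairwise differences,
\[
\pi \;=\; \binom{n}{2}^{-1} \sum_{i<j} \frac{d_{ij}}{L},
\]
and with $N_m$ breeding stallions and $N_f$ breeding mares, the $N_m$ Y-chromosome copies compare with $2(N_m+N_f)$ autosomal copies, so that for $N_m = N_f$
\[
\frac{\pi_Y}{\pi_{\mathrm{auto}}} \;\approx\; \frac{N_m}{2\,(N_m+N_f)} \;=\; \frac{1}{4}.
\]
Ratios near 0.25 are thus consistent with even reproductive success among stallions, whereas ratios far below 0.25 indicate that a few patrilines dominated reproduction.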
This confirmed the temporal trajectory observed above, as Byzantine horses (287-861 CE) and horses from the Great Mongolian Empire (1368 CE) showed limited yet larger-than-modern diversity. Bronze Age Deer Stone horses from Mongolia, medieval Aukštaičiai horses from Lithuania (C9th-C10th [ninth through tenth centuries of the Common Era]), and Iron Age Pazyryk Scythian horses showed similar diversity levels (0.000256-0.000267) (Figure 2A). However, diversity was larger in La Tène, Roman, and Gallo-Roman horses, where Y-to-autosomal π ratios were close to 0.25. This contrasts with modern horses, where marked selection of specific patrilines drives Y-to-autosomal π ratios substantially below 0.25 (0.0193-0.0396) (Figure 2A). The close-to-0.25 Y-to-autosomal π ratios found in La Tène, Roman, and Gallo-Roman horses suggest breeding strategies involving an even reproductive success among stallions, or equally biased reproductive success in both sexes (Wilson Sayres et al., 2014).

Influence of Persian Lines Post-C7th-C9th
We next tracked evidence for animal exchange between past cultures by mapping genetic variation through space and time. We included all samples belonging to a particular archaeological culture, as long as they collectively accumulated a minimal genome depth-of-coverage of 2-fold (n = 186, Table S5). TreeMix reconstructions (Pickrell and Pritchard, 2012) revealed that modern Shetland and Icelandic ponies were most closely related to a group of north European horses including pre-Viking Pictish horses from C6th-C7th Britain, Viking horses, and one C9th-C10th horse from Estonia (Saadjärve) (Figure 3; STAR Methods). [Figure 3, TreeMix phylogenetic relationships: the tree topology was inferred using a total of ~16.8 million transversion sites and disregarding migration; the name of each sample provides the archaeological site as a prefix and the age of the specimen (years ago) as a suffix, and name suffixes (E) and (A) denote European and Asian ancient horses, respectively. See Table S5 for dataset information; see also Figure S7.] This is in line with the historical expansion of Scandinavian seafaring warriors into the British Isles and Iceland between the late C8th and C11th (Brink and Price, 2008). These horses formed a sister clade to mainland European horses spanning the Iron Age to the C7th and a number of cultures, including the La Tène and (Gallo-)Roman periods. Other modern European native breeds (e.g., Friesian, Duelmener, Sorraia, and Connemara) were found to belong to yet another clade, first appearing in Europe at Nuštar, Croatia, in the C9th, but not present at that time in northern Europe (Aukštaičiai, Lithuania). This suggests the introduction of new domestic lineages into the south of mainland Europe between the C7th and C9th, a time strikingly coincident with the peak of Arab raids on the Mediterranean coasts, including Croatia (Skylitzes and Wortley, 2010).
This, and the earliest identification of this clade within two Sassanid Persian horses from Shahr-I-Qumis, Iran (C4th-C5th), supports the growing influence of oriental bloodlines in mainland Europe following at least the C9th. Moving focus to Asia, steppe Iron Age Pazyryk Scythian and Xiongnu horses appear related to Karasuk horses, locally present in the Minusinsk Basin of South Siberia during the late Bronze Age (Mallory and Adams, 1997). This lineage of horses survived at least until the C8th in Central Asia, at Boz-Adyr, Kyrgyzstan. However, Mongolian horses from the Uyghur period (C7th-C9th, Khotont_UCIE2012x85_1291) and the Great Mongolian Empire (C13th) clustered together with C9th horses from Kazakhstan (Gregorevka4_PAVH2_1192 and Zhanaturmus_Issyk1_1143) within the group descending from the two Shahr-I-Qumis Sassanid Persian horses. Therefore, the population shift observed in Europe during the C7th-C9th was also mirrored in Central Asia and Mongolia.

Gait, Speed, and Selection
We next aimed to identify possible differences in the traits selected prior to and after the C7th-C9th transition. Only one subset of horses provided sufficient data for calculating Population Branch Statistics (PBS) (Yi et al., 2010), considering at least 10 individuals above 1-fold depth-of-coverage per archaeological site (Tables S6 and S7; STAR Methods). It consisted of 11 Bronze Age Deer Stone horses (representing the pre-C7th-C9th Asian group), 11 Gallo-Roman horses (pre-C7th-C9th European horses), and 17 Byzantine horses (post-C7th-C9th). Enrichment analyses of the genes overlapping the top 1,000 50-kb windows revealed that functional categories related to cervical and thoracic vertebrae were over-represented in Byzantine horses (adjusted p values ≤0.05) (Figure 4A; STAR Methods; Figure S4). Eleven genes within the HOXB/C clusters, instrumental for the development of the main body plan and the skeletal system (Pearson et al., 2005), featured among the windows showing the strongest PBS values (Figure 4A). [Figure 4A: PBS scan of Byzantine horses compared to 11 Gallo-Roman and 11 Deer Stone horses; the underlying tree topology consists of three groups with sufficient data, representing pre-C7th-C9th horses in Asia and Europe and post-C7th-C9th horses descending from Sassanid Persians. Non-overlapping 50-kb genomic bins were used, and genes underlying enrichment for functional categories related to vertebral changes are indicated; these include Sf3b1 and seven HOXB/C genes. Hoxc11, Hoxb7, Hoxb13, and Hoxc12 are not annotated as related to vertebral modifications but are embedded within the two independent clusters of HOXB/C genes. The MSTN speed gene, one selection candidate in Byzantine horses, is also highlighted. See also Figure S4 and Tables S6 and S7.] These findings were robust to the number of outlier windows considered, and the significance threshold retained was conservative relative to neutral expectations (STAR Methods). Therefore, our results provide evidence for selection toward changes in the skeletal morpho-anatomy of the post-C7th-C9th horses related to Sassanid Persians.
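For reference, the PBS used above follows the formulation of Yi et al. (2010): pairwise $F_{ST}$ values between the three groups are transformed into branch lengths, $T = -\log(1 - F_{ST})$, and combined as
\[
\mathrm{PBS}_{\mathrm{BYZ}} \;=\; \frac{T^{\mathrm{BYZ,GR}} + T^{\mathrm{BYZ,DS}} - T^{\mathrm{GR,DS}}}{2},
\]
where BYZ, GR, and DS denote the Byzantine, Gallo-Roman, and Deer Stone groups, respectively. Large values isolate allele-frequency differentiation accumulated specifically along the Byzantine branch, and the top-scoring 50-kb windows are the selection candidates discussed above.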
We further explored temporal shifts in the traits that are commonly selected by modern breeders. We retraced allelic trajectories at key genomic locations associated with, or causal for, locomotion, body size, and coat-coloration phenotypes. We also tracked known variants underlying genetic disorders through time (Figure S5; STAR Methods). Allele frequencies were calculated every 1,000 years (step size = 250 years) and restricted to the lineage leading to modern domesticates (DOM2) (Figures 4B and 4C). Mutations causing genetic disorders were extremely rare, including the GYS1 H allele responsible for a severe myopathy in Quarter horses and other heavy and saddle horse breeds. This allele was almost absent across all archaeological sites and thus was not particularly advantageous for past breeders, despite the increased muscular glycogen storage capacity it confers under starch-poor diets (McCue et al., 2008). Spotted and dilution alleles also remained at low frequencies, in contrast to the MC1R chestnut coat-coloration allele, which was relatively common except at the end of the Middle Ages (Figures 4B and S6). The DMRT3 allele that causes ambling and improves speed capacity in Icelandic horses (Kristjansson et al., 2014) was first seen in a Great Mongolian Empire horse (TavanTolgoi_GEP14_730) and slowly gained in frequency thereafter (Figure S5). Interestingly, the MSTN 'speed' gene was among the PBS selection candidates in the post-C7th-C9th branch (Figure 4A). We found that a number of alleles involved in racing performance, including at MSTN, PDK4, and ACN9 (Hill et al., 2010), rose in frequency in the last 600-1,100 years (100-1,100 and 600-1,600 years ago) (Figure 4B). Allele frequencies at these three loci also varied significantly more through time than other mutations genome-wide (Figure 4C). Altogether, this supports the view that speed capacity was increasingly selected during the last millennium.

Discovering Two Divergent and Extinct Lineages of Horses
Domestic and Przewalski's horses are the only two extant horse lineages (Der Sarkissian et al., 2015). Another lineage was genetically identified from three bones dated to ~43,000-5,000 years ago (Librado et al., 2015; Schubert et al., 2014a). It showed morphological affinities to an extinct horse species described as Equus lenensis (Boeskorov et al., 2018). We now find that this extinct lineage also extended to Southern Siberia, following the principal component analysis (PCA), phylogenetic, and f3-outgroup clustering of a ~24,000-year-old specimen from the Tuva Republic within this group (Figures 3, 5A, and S7A). This new specimen (MerzlyYar_Rus45_23789) carries an extremely divergent mtDNA found only in the New Siberian Islands some ~33,200 years ago (Orlando et al., 2013) (Figure 6A; STAR Methods) and absent from the three bones previously sequenced. This suggests that a divergent ghost lineage of horses contributed to the genetic ancestry of MerzlyYar_Rus45_23789. However, both the timing and the location of the genetic contact between E. lenensis and this ghost lineage remain unknown. PCA revealed that native Iberian horses (IBE) from the 3rd and early 2nd mill. BCE cluster separately from E. lenensis, Przewalski's horses (and their Botai-Borly4 ancestors), and the lineage leading to modern domesticates (DOM2) (Figure 5A; STAR Methods). This indicates that a fourth lineage of horses existed during the early phase of domestication (Gaunitz et al., 2018; Outram et al., 2009). Members of this lineage possess their own distinctive mtDNA haplogroup (Figure 6A; STAR Methods) and are represented by two Spanish pre-Bell Beaker Chalcolithic settlements (Cantorella and Camino de Las Yeseras) and a Bronze Age village (El Acequión), with archaeological contexts compatible with both wild and domestic status.

Modeling Demography and Admixture of Extinct and Extant Horse Lineages
Phylogenetic reconstructions without gene flow indicated that IBE differentiated prior to the divergence between DOM2 and Przewalski's horses (Figure 3; STAR Methods). However, allowing for one migration edge in TreeMix suggested closer affinities with one single Hungarian DOM2 specimen from the 3rd mill. BCE (Dunaujvaros_Duk2_4077), with extensive genetic contribution (38.6%) from the branch ancestral to all horses (Figure S7B). This, and the extremely divergent IBE Y chromosome (Figure 6B), suggest that a divergent but as yet unidentified ghost population could have contributed to the IBE genetic makeup.
To test this and further assess the underlying population history, we explicitly modeled demography and admixture by fitting the multi-dimensional site frequency spectrum in momi2 (Kamm et al., 2018) (STAR Methods). The two best-supported scenarios (Figure 5C) provided divergence time estimates on par with previous work, first ~113-119 kya for the E. lenensis split (Librado et al., 2015; Schubert et al., 2014a), then ~34-44 kya for that of the Przewalski's horse and DOM2 lineages (Der Sarkissian et al., 2015). In both models, IBE and E. lenensis show strong genetic affinities, with no less than 93.2%-98.8% genetic input from the former into the branch ancestral to E. lenensis, some ~285-333 kya. The magnitude of this pulse could suggest that the two lineages in fact split at that time, but that a more divergent ghost population contributed ~1.2%-6.8% ancestry into IBE, pushing the momi2 estimate for the IBE divergence to deeper times (~539-1,246 kya). The strong genetic affinity between IBE and E. lenensis is consistent with the results of Struct-f4, a new method developed here that leverages all possible combinations of f4-statistics to provide a 3D representation of ancestral population relationships that is robust to lineage-specific genetic drift (Figure 5B; STAR Methods), as opposed to PCA projections. [Figure 5, Genetic Affinities. (A) Principal component analysis (PCA) of 159 ancient and modern horse genomes showing at least 1-fold average depth-of-coverage; the first three principal components summarize 11.6%, 10.4%, and 8.2% of the total genetic variation, and the specimens MerzlyYar_Rus45_23789 and Dunaujvaros_Duk2_4077 discussed in the main text are highlighted (see also Figure S7 and Table S5). (B) Genetic affinities among individuals, as revealed by the struct-f4 algorithm and 878,475 f4 permutations; the f4 calculation was conditioned on nucleotide transversions present in all groups, with samples grouped as in the TreeMix analyses (Figure 3). In contrast to PCA, f4 permutations measure genetic drift along internal branches and are thus more likely to reveal ancient population substructure. (C) Population modeling of the demographic changes and admixture events in extant and extinct horse lineages; the two models presented show the best fit to the observed multi-dimensional SFS in momi2, the width of each branch scales with effective size variation, colored dashed lines indicate admixture proportions and their directionality, the robustness of each model was inferred from 100 bootstrap pseudo-replicates, and time is shown on a linear scale up to 120,000 years ago and on a logarithmic scale above.]

Rejecting Iberian Contribution to Modern Domesticates
The genome sequences of the four ~4,800- to 3,900-year-old IBE specimens characterized here allowed us to clarify ongoing debates about the possible contribution of Iberia to horse domestication (Benecke, 2006; Uerpmann, 1990; Warmuth et al., 2011). Calculating the so-called fG ratio (Martin et al., 2015) provided a minimal boundary for the IBE contribution to DOM2 members (Cahill et al., 2013) (Figure 7A). The maximum of this estimate was found in the Hungarian Dunaujvaros_Duk2_4077 specimen (~11.7%-12.2%), consistent with its TreeMix clustering with IBE when allowing for one migration edge (Figure S7B). This specimen was previously suggested to share ancestry with a yet-unidentified population (Gaunitz et al., 2018). Calculation of f4-statistics indicates that this population is not related to E. lenensis but to IBE (Figure 7B; STAR Methods). Therefore, IBE horses, or horses closely related to IBE, contributed ancestry to animals found at an Early Bronze Age trade center in Hungary from the late 3rd mill. BCE. This could indicate that there was long-distance exchange of horses during the Bell Beaker phenomenon (Olalde et al., 2018). The fG minimal boundary for the IBE contribution into an Iron Age Spanish horse (ElsVilars_UE4618_2672) was still important (~9.6%-10.1%), suggesting that an IBE genetic influence persisted in Iberia, in a domestic context, until at least the 7th century BCE. However, fG estimates were more limited for almost all other ancient and modern horses investigated (median = ~4.9%-5.4%; Figure 7A). Analytical predictions and population modeling with momi2 further confirmed that IBE contributed only minimal ancestry (~1.4%-3.8%) to modern DOM2 horses, and well prior to their domestication (~34-44 kya).
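For reference, the f4- and D-statistics invoked here (and below, for the Goyet specimen) take their standard forms. With $p_X$ the derived-allele frequency in population $X$,
\[
f_4(A,B;C,D) \;=\; \mathbb{E}\big[(p_A - p_B)(p_C - p_D)\big],
\qquad
D \;=\; \frac{\sum_i (\mathrm{ABBA}_i - \mathrm{BABA}_i)}{\sum_i (\mathrm{ABBA}_i + \mathrm{BABA}_i)}.
\]
The fG ratio of Martin et al. (2015) used above divides one such shared-polymorphism statistic by a second one in which the recipient population (here DOM2) is replaced by a second sample of the candidate donor (here IBE), so that the ratio approximates the donor's admixture proportion; as noted above, it is interpreted as a minimal boundary.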
DISCUSSION
Recent advances in ancient DNA research have opened access to the complete genome sequences of past individuals. These have so far mostly improved our understanding of the evolutionary history of our own lineage, based on hundreds of individual whole genomes and genome-scale data from thousands of individuals (Marciniak and Perry, 2017). Our study represents the first effort to apply the available technology at similar scales to a non-human organism. With 129 ancient genomes and genome-scale data from 149 additional ancient animals, our dataset unveils the past complexity of horse evolution, including the recent impact of humans by means of diversity management, selection and hybridization. We genetically identified two mules within the La Tène Iron Age site of Saint-Just (France). Mules represented invaluable animals to past societies, being more sure-footed, more resistant to diseases, and harder working than horses. They are, however, difficult to identify morphologically from fragmentary material. Our work gives definitive proof that mules have been bred since at least ~2,200 years ago, despite considerable cost implications of producing sterile stock (Laurence, 1999). We found that Y chromosome diversity in horses declined steadily within the last ~2,000 years, with male reproductive success becoming skewed following the (Gallo-)Roman period. This indicates that breeders increasingly chose specific stallions for breeding from the Middle Ages onward, consistent with the dominance of a ~700- to 1,000-year-old Arabian haplogroup in most modern studs (Felkel et al., 2018; Wallner et al., 2017). Together with the increasing affinity to Sassanid Persian horses detected in the genomes of European and Asian horses after the C7th-C9th, this suggests that the Byzantine-Sassanid wars and the early Islamic conquests significantly impacted breeding and exchange. The legacy of these historical events has persisted until now, as the majority of the modern breeds investigated here clustered within a phylogenetic group related to Sassanid Persian horses. During the same time period, the horse phenotype was also significantly reshaped, especially for locomotion, speed capacity, and morpho-anatomy. Whether this partly or fully reflects the direct influence of Arabian lines requires further tests.
Most strikingly, we found that while past horse breeders maintained diverse genetic resources for millennia after they first domesticated the horse, this diversity dropped by ~16% within the last 200 years. This illustrates the massive impact of modern breeding and demonstrates that the history of domestic animals cannot be fully understood without harnessing ancient DNA data. Importantly, recent breeding strategies have also limited the efficacy of negative selection and led to the accumulation of deleterious variants within the genome of horses. This illustrates the genomic cost of modern breeding. Future work should focus on testing how much recent progress in veterinary medicine and improvements in animal welfare have contributed to limiting the fitness impact of deleterious variants. In addition to the two extant lineages of horses, we report two other lineages at the far western and eastern ranges of Eurasia, in Iberia (IBE) and Siberia (E. lenensis). Their genomes suggest the presence of other, yet unidentified ghost populations. The IBE and E. lenensis lineages are now extinct but lived at the time horses were first domesticated. None of them, however, contributed significant ancestry to modern domesticates. Interestingly, Upper Paleolithic cave paintings in Europe have often been proposed to depict Przewalski's horses due to striking morphological resemblance (Leroi-Gourhan, 1958). Our sample set included one horse from the Goyet cave, Belgium, dated to ~35,870 years ago. Although characterized at limited coverage (0.49-fold), D-statistics revealed closer genetic affinity to IBE and DOM2 than to the ancestors of Przewalski's horses (−15.5 < Z scores < −2.4). European cave painting is, therefore, unlikely to depict Przewalski's horses. It may instead represent the ancestors of the Tarpan, assuming that this taxonomically contentious lineage represents neither domestic horses turned feral nor domestic-wild hybrids but truly wild horses that went extinct in the late C19th (Groves, 1994). Iberia was suggested as a possible domestication center for horses on the basis of both archaeological arguments (Benecke, 2006) and geographic patterns of genetic variation in modern breeds (Uerpmann, 1990; Warmuth et al., 2011). Previous ancient DNA data were limited to short mtDNA sequences of pre-Bronze Age to medieval specimens (Lira et al., 2010) and remained indecisive regarding the contribution of Iberia to horse domestication. Our work shows that IBE horses have not genetically contributed to the vast majority of the DOM2 domesticates investigated here, ancient or modern alike, excepting one horse in Bronze Age Hungary, possibly following the Bell Beaker phenomenon, and an additional one in Iron Age Iberia. Population modeling also confirmed limited contribution within modern domesticates, largely pre-dating domestication. Therefore, IBE cannot represent a main domestication source. Given that other candidates in the Eneolithic Botai culture from Central Asia do not represent DOM2 ancestors (Gaunitz et al., 2018), the origins of the modern domestic horse remain open. Future work must focus on mapping genomic affinities in the 3rd and 4th mill. BCE, especially at other candidate regions for early domestication in the Pontic-Caspian (Anthony, 2007) and Anatolia (Arbuckle, 2012; Benecke, 2006).
Finer mapping of the Persian-related influence around the time of the Islamic conquest, and of the diversity hotspots in place prior to modern stud breeding, will also improve our understanding of the source(s) and dynamics underlying the makeup of modern diversity.

STAR+METHODS
Detailed methods are provided in the online version of this paper and include the following. For library amplification and sequencing, duplicates of each library were first assessed to determine the required PCR cycle number for amplification. Libraries were subsequently amplified for 4-16 cycles (Table S4), following Gamba et al. (2016). Each PCR reaction included 1 unit of AccuPrime™ Pfx DNA polymerase, 3 to 6 μl of unpurified DNA library, and 1-2 μl of 5 μM custom PCR primers, one of which contained a unique external 6-bp index used for post-sequencing demultiplexing. PCR products were then purified using either MinElute columns (QIAGEN) or Agencourt AMPure beads, eluted in 25 μl of EB (10 mM Tris-Cl, pH 8.5) supplemented with 0.05% Tween, and quantified on TapeStation 2100/4200 or Bioanalyzer instruments (Agilent Technologies). Finally, sequencing was performed at the Danish National High-Throughput DNA Sequencing Centre, either on the Illumina HiSeq2500 for the vast majority of samples (method A) or on the Illumina HiSeq4000 for the samples Cantorella_UE2275x2_4791, ElAcequion_Spain38_4058, ElAcequion_Spain39_3993, and ElsVilars_UE4618_2672 (method B). Sequence trimming, mapping, filtering, and base calibration at damaged sites were carried out following the methodology of Gaunitz et al. (2018).

ACKNOWLEDGMENTS
We thank the reviewers for insightful comments and suggestions that helped improve the manuscript. We thank the staff of the Danish National High-Throughput DNA Sequencing Centre for technical support; Rachel Ballantyne,

[Figure 7. Influence of Native Iberian Horses within DOM2 Domesticates. (A) Estimates of native IBE ancestry in DOM2 horses, based on the fraction of polymorphisms shared between IBE and DOM2 horses relative to Botai and Borly4 horses, and the level of polymorphisms shared between two IBE horses relative to Botai and Borly4 horses. The ratio of these values approximates a minimal boundary for the fraction of genomic ancestry present in DOM2 genomes pertaining to IBE or a closely related lineage. Consistent estimates are retrieved when replacing Botai with Borly4 horses, a ~5,000-year-old group directly descending from Botai.]

DECLARATION OF INTERESTS
The authors declare no competing interests.

CONTACT FOR REAGENT AND RESOURCE SHARING
Further information and requests for resources and reagents should be directed to, and will be fulfilled by, the Lead Contact, Ludovic Orlando (ludovic.orlando@univ-tlse3.fr).

EXPERIMENTAL MODEL AND SUBJECT DETAILS
Following low-depth sequencing, a total of 278 ancient equids reached a nuclear DNA coverage higher than 0.01X and were investigated for sex and species identification using the methodology implemented in Zonkey (Table S1) (Schubert et al., 2017). Additionally, 87 pure horses recovered from museum and/or private collections and from palaeontological and archaeological sites spread across Eurasia, showing moderate to high endogenous DNA content (0.06-0.78), were selected for whole-genome sequencing at a depth-of-coverage higher than 1-fold. The following section describes the archaeological contexts associated with all equids sequenced in this study (Tables S1, S2, and S3). The full name of each specimen is composed of the excavation site, followed by the sample name and age (in years ago from 2017), as estimated from direct radiocarbon dating (Table S1) or inferred from the archaeological context.

Belgium (Goyet A1)
The sample Goyet_Vert311_35870, a proximal fragment of a metatarsus, was unearthed from the first bone horizon (A1) of the third cave of Goyet, which was excavated for the first time in 1868 by Edouard Dupont. The Paleolithic Goyet cave is part of a larger cave system located in the Belgian Mosan basin. Cave bear, reindeer, and wild horse remains are the most commonly represented animals at the site (Germonpré, 2004). Excavations also revealed the samples Goyet_Vert293_UpperPalaeolithic, Goyet_Vert300_31750, and Goyet_Vert304_UpperPalaeolithic.

China
The sample Fengtai_Fen4_2820 originates from the multilayered dwelling of the Kayue site of Fengtai (Qinghai Province), which is located at the rim of a large valley and consists of two phases: an early phase consisting mainly of wooden houses, dated to ~1190-920 BCE, and a later phase composed of mud brick constructions, dated to ~980-750 BCE.
The presence of permanent houses and the substantial amount of remains of domesticated plant grains such as wheat and barley found at the site indicate a relatively advanced, mixed agropastoral economy.

Croatia (Bapska, Nuštar, Otok)
The late Neolithic settlement of Bapska-Gradac is located in Eastern Croatia, 4.5 km south of the Danube river. The site consists of different layers that have been associated with the Sopot and Vinča cultures, and potentially also the Starčevo culture, based on tool and pottery finds. Radiocarbon dating of individual BapskaGradac_BAPSKA_1305 revealed that the remains were intrusive, dating to ~1,305 years ago (C7th-C8th). The late Avar period cemetery of Nuštar is located in continental Croatia and is dated to the C8th and C9th. Samples Nustar_4_1187 and Nustar_5_1187 were unearthed during a rescue excavation in 2011 from one of the only two graves containing both human and horse remains. Both graves were found near the southwest edge of the cemetery and were oriented east-west. Within the burial, the human remains were located on the right side, leaving the horse remains on the left side. Numerous grave goods, such as iron knives, bronze belts decorated with floral motifs, and horse equipment, were found. Some pathologies were detected on Nustar_5_1187, exclusive to the spine, including thoracolumbar transitional vertebrae, spondylitis ankylopoetica (first two lumbar vertebrae), and spondylosis chronica deformans on four thoracic vertebrae and one lumbar vertebra. Further morphological analysis showed that the horse skeleton found in grave 4 (Nustar_4_1187) was probably five and a half to six years old at the time of death and had a slightly larger height at the withers of 143 cm, whereas Nustar_5_1187 reached an age of circa seven years and had an average withers height of 139 cm (Vukičević et al., 2017). The sample Otok_OTOK16_1307 was found at the archaeological site of Otok-Gradina, near the city of Vinkovci, during an excavation carried out in 1970. Based on the grave goods, the site was dated to the late C8th-early C9th. Among the 22 graves found, there were only two burials (Graves 4 and 16) containing the remains of both humans and horses. The sample used in this study was found within grave context 16, alongside a 40- to 50-year-old male skeleton. The morphology of the bones identified the horse as a mare, which was confirmed genetically. Further morphological analyses estimated the withers height of the mare at 139 cm (Vukičević et al., 2017).

Estonia (Otepää hill-fort, Ridala, Saadjärve)
The sample Estonia_Ote2_1184 originates from the archaeological site of Otepää hill-fort, which is located on an eponymous upland area in the south of Estonia. This site covers a time period ranging from the Iron Age to the Middle Ages; however, the majority of the archaeological bone material and artifacts found at the site are associated with the late Iron Age. The site produced numerous tools (needles, knife handles, spinning-whorls, combs), weapons (arrowheads), ornaments (tusks, pendants, a brooch), toys (a die, toggles), and other unidentified objects. Those artifacts were mostly made out of the bones and antler of domestic and wild animals, but teeth (mainly canines) were also used to manufacture pendants. The bone material found at Otepää hill-fort consists of cattle, pig, elk, bird, sheep, goat, and horse remains. Among the horse bone material, there were also two bones showing signs of processing.
The sample Ridala_Rid2_2717 was recovered from a fortified settlement on a moraine ridge close to the coastal zone of Saaremaa Island (west Estonia), which at the time of the settlement was a coastal island. Excavations were carried out in 1961 by Aita Custin and in 1963 by Artur Vassar. The archaeological site covers an area of around 4,500 m² and was dated to the C8th-C7th. Until now, approximately one tenth (435 m²) of the area has been excavated. A total of 2,020 bone fragments have been recovered, 75% of which belong to domestic animals (sheep/goat, pigs, cattle, and horses) and 25% to wild animals (seals). Sheep and goat bone fragments are the most frequent, while horse bones represent the least frequent of all faunal remains. The horse remains recovered from this site belong to eight different individuals. Two were foals, two were slaughtered before the age of three, and the other four were between two and four and a half years old. The presence of remains from exclusively juvenile individuals suggests their use for food consumption (Lang, 2012). Sample Saadjarve_Saa1_1117 was excavated in 1984 at the settlement site of Saadjärve in eastern central Estonia, located 17 km north of the city of Tartu. Next to the remains of elks, cattle, goats, sheep, pigs, foxes, beavers, water voles, pikes, and breams, fourteen horse remains were recovered, together with a range of remains of freshwater fish such as perch and burbot (Lõugas, 1997). La Maladrerie Saint-Lazare, located in Beauvais, Northern France, was a leper colony founded during the late C11th or early C12th. It remained in activity until the French Revolution, when it was closed and then sold to the French State. Horse Beauvais_GVA122_417 was sampled from a petrosal bone excavated in 2013 from a latrine transformed into a waste pit (US 8057). The site from rue de L'Isle-Adam at Beauvais is a former convent, excavated in 1992 and dating back to the C15th. A deep hole containing equine bones, including sample Beauvais_GVA375_567, was found next to the church of the convent. The bone remains include pieces of rachis and various dislocated anatomical parts. Boinville-en-Woevre is an ancient Gallo-Roman villa located in Meuse, France. Fifteen pits containing the remains of some large equids have been found in the pars rustica of the villa. Some of those pits contained several individuals, potentially buried simultaneously. In total, 22 individuals dating back to the C2nd and C3rd have been discovered, including Boinville_GVA125_1817, an 8- to 10-year-old male genetically identified as a donkey (Schubert et al., 2017). Boves « chemin de Glisy » corresponds to a large archaeological area of Northern France, where several settlements dated to the Iron Age and Roman periods have been studied. The excavation of a large pit (6.80 m deep), corresponding to a Gallo-Roman quarry (C3rd), revealed several carcasses of animals (sheep, equids), including individual Boves_GVA191_1717. The site of Roseau, located in Capesterre Belle-Eau (Guadeloupe), is associated with both European remains dating back to the C16th-C17th and pre-colonial Amerindian remains dating back to the C11th-C15th. Individual Capesterre_LIS2_417 is associated with the European settlement. The archaeological site of Démuin is located in Somme, France, and revealed an occupation period extending from the C9th to the Renaissance. The equine bones represent 2.8% of the more than 5,000 remains collected (Jonvel, 2014).
Individuals Demuin_GVA401_917 and Demuin_GVA402_917 were sampled from isolated bones discovered in two grain silos repurposed as refuse pits. The site "Clos-au-Duc, 3 rue de la Libération" in Evreux, France, is a funeral site dated to the C1st to C3rd. Several excavated pits contained parts of equid skeletons, including samples Evreux_GVA130_1817, Evreux_GVA132_1817, Evreux_GVA133_1817, Evreux_GVA135_1817, and Evreux_GVA140_1817. There does not seem to be any ritual connection between humans and equids, since the dead horses have been shown to represent waste; however, the site is connected to a rendering activity. The archaeological site of Longueil-Annel, located in the middle valley of the Oise, between the towns of Noyon and Compiegne, has been associated with different occupations throughout the Neolithic, Bronze Age, and Iron Age. However, individual LongueilAnnel_GVA129_267 comes from a skeleton found in a modern pit dating back to the C18th. The "Rue Rambuteau" site is situated on the outskirts of the antique city of Mâcon, Eastern France, and dates back to the C3rd. The excavated area is a large dirt quarry containing a great amount of animal remains, mainly equids, but also cattle, dogs, and pigs. In total, 1,497 parts of dead equids were thrown into this waste area, representing at least 16 individuals, including Macon_GVA201_1767. Individual Metz_GVA321_492 was unearthed from the "Place de la République" site in Metz, France. The excavation, spreading over 1,375 m², has allowed the identification of numerous archaeological levels and remains, from the C1st to the C19th. The excavations at Saint-Claude "Cité de la connaissance" (Basse-Terre, Guadeloupe) have revealed pre-Columbian occupations and ancient sugar refineries. Some equine remains could be identified in levels dating back to the second half of the C18th, including SaintClaude_GVA381_242, sampled from a petrous bone belonging to a 7- to 10-year-old individual. Actiparc at Saint-Laurent-Blangy, Northern France, is a large area of 300 hectares excavated as part of the construction of a craft activity area. Excavation campaigns revealed several types of settlements, including rural habitats, indigenous farms, and a necropolis. The time periods represented range from the ancient La Tène period of the Iron Age to the Roman period (3rd century BCE-4th century CE). The bones of horses Actiparc_GVA124_2143, Actiparc_GVA307_2127, Actiparc_GVA308_2312, Actiparc_GVA309_2302, and Actiparc_GVA311_2253 all come from waste pits. The Gallic sanctuary of "Les Rossignols" in Saint-Just-en-Chaussée (Oise, France), excavated in 1994-1995, was occupied from the final La Tène period of the Iron Age (D1-D2) up until the Roman High Empire. The most remarkable remains were ditches filled with horse bones, pieces of chariots and harnesses, and some human bones. Samples SaintJust_GVA212_2162, SaintJust_GVA219_2162, and SaintJust_GVA242_2250 were recovered from such ditches. Saint-Quentin is an archaeological site in Aisne, France, from which a large number of bovine and equine samples have been recovered. However, the precise nature of the assemblages has not yet been defined; it is therefore unclear whether the site can be associated with cultural activities or was rather used as a ditch. Individuals SaintQuentin_GVA237_1917 and SaintQuentin_GVA238_1917 were sampled from petrosal bones dated to the C1st-C2nd.
The excavations at Vermand, in the department of Aisne, Northern France, were carried out in 2015 and delivered the remains of an ancient Roman way, as well as the remains of equines, including individual Vermand_GVA199_1742, which was excavated from structure 15 of the site.

Georgia (Dariali)
Tamara fort is situated in the Dariali Gorge of Northern Georgia, next to the Russian border, on a raised landform on the west bank of the Tergi river. It was occupied from the Sasanian period to the Medieval period. Excavations at the site indicate several occupations, mainly between c. 400-1000 CE, and a late reoccupation between the late C13th and early C15th. Individual Dariali_Georgia2_317 most likely dates back to this medieval or a post-medieval period.

Germany (private collections, Schloßvippach)
The samples Mainz_Mzr1_1373 and FrankfurtHeddernheim_Fr1_1863 were both sampled from the private collection of Prof. emer. Helmut Hemmer (Johannes Gutenberg University Mainz). It was compiled between the 1960s and 1980s and consists mainly of stray finds of single loose bones found during excavations of Roman sites of the Rhine-Main area, Germany. Sample Mainz_Mzr1_1373 is a calvarium found at the construction site of the Mainz University clinic and was radiocarbon dated to 1,373 years ago. Sample FrankfurtHeddernheim_Fr1_1863 is a calvarium found in Frankfurt-Heddernheim (Nida), in a soil-filled well shaft, and was radiocarbon dated to 1,863 years ago. Schloßvippach is an Early Bronze Age site located in Germany, dating back to 2200-1600 BCE and composed of a settlement of long dwelling houses and a burial site with a large number of graves. Excavations have revealed ceramics, bronze tools and jewels, as well as animal bone remains, including the horse sample Schloßvippach_Svi6_3917. Addendum to Gaunitz et al. (2018), regarding the Roman horse from Augsburg-Haunstetten (Haunstetten_1979): archaeozoological analysis revealed a 5- to 8-year-old stallion with a withers height of 124 ± 3 cm according to Kiesewalter. The specimen shows a splint, a premature ankylosis between the inside splint bone and the cannon bone of the right foreleg, probably caused by too much exercise too early in life. The teeth show cementum hypoplasias in the occlusal plane and transversal enamel hypoplasia near the dentino-enamel junction; the latter indicates unspecific stress at the end of enamel formation.

Iceland (Berufjörður and Granastaðir)
The sample Berufjordur_VHR102_1067 was excavated in 1898 by Daniel Bruun and Brynjúlfur Jónsson at the site of Berufjörður in Barðastrandasýsla (Westfjords), Iceland, and dates to the Viking Age (ca. 850-1050 CE). Due to the incomplete documentation of the site, the exact association of the horse and human burials at the site is not clear. Only a few horse teeth with fragments of a maxilla were kept from this particular horse burial; the horse was five to seven years old at death. The tooth sampled for this study was a maxillary molar, radiocarbon dated to cal. 890-1015 CE (2σ). Individual Granastadir_VHR031_1057 was sampled from a maxillary molar from Granastaðir, an early Viking farmstead in Northern Iceland. The molar has been radiocarbon dated to cal. 895-1025 CE (2σ). The site was excavated by Bjarni F. Einarsson between 1987 and 1991 (Einarsson, 1995). A collection of animal bones was recovered from the site, both from a midden and from within the excavated buildings, mostly representing domestic animals.
Iran (Belgheis, Kulian Cave, Sagzabad, Shahr-i-Qumis, Tepe Hasanlu, Tepe Mehr Ali)
The citadel of Belgheis is located three kilometers away from the modern city of Esfarayen, in North East Iran. It covers an area of 180 hectares and was occupied from the beginning of the Islamic period until the C18th. Individual Belgheis_TrBWBX116_485 sequenced in this study was sampled from a left third lower molar recovered from Tr BWBX116, Unit 12, Depth 515-235, dated to the Seljukid-Ilkhanid periods (C11th-C14th). Radiocarbon dating, however, further indicated that the specimen belonged to the C16th. Kulian Cave is located near the city of Rawansar, about 52 km northwest of Kermanshah, in west Iran. The site contains Pleistocene and Holocene archaeological deposits (Biglari and Taheri, 2000). The cave is about 20 m long and consists of two chambers. A petrosal bone of a horse sequenced for this study, KulianCave_MV178_1694, was found in the inner chamber. Animal bone accumulation in this chamber most likely originated from carnivore activity and natural death. The equid belongs to a female individual and was dated to the time of the reign of the Sasanian king Shapur II (309-379 CE). Samples Sagzabad_SAGS27_3117 and Sagzabad_SAGxPit22_3117 were excavated from the archaeological site of Sagzabad, located in the central district of Buin Zahra in the Qazvin plain, 140 km west of Tehran, Iran. Multiple archaeological campaigns have evidenced a continuous occupation from the Late Bronze Age to the Iron Age II (Negahban, 1974). The large animal assemblage is composed of more than 10,000 identified bones and shows the importance of domestic herbivores, ovicaprines and cattle, followed by an important contribution of domestic equids. Shahr-i-Qumis is a site in North East Iran consisting of several isolated mounds spread across an area of 28 km. It dates back to the Parthian and Sasanian periods, although recent radiocarbon dating of faunal remains tends to show a longer period of occupation, from the 8th century BCE to the 8th century CE (Hansman et al., 1970). The site has been identified as Hekatompylos, the capital of the Parthian Empire and a major hub of the Silk Road and Great Khorasan Road. Excavations at Shahr-i-Qumis revealed a very large quantity of equine skeletons, including sample ShahrIQumis_AM115_1557 (Hansman and Stronach, 1970). The radiocarbon date obtained for this sample places it either during the reign of Yazdegerd II (438-457 CE) or that of his brother Peroz I (457-484 CE). At the beginning of the C5th, nomadic groups, in particular the Hephthalites or White Huns, attacked Persia several times, invading parts of eastern Persia for several years. These events may also have had an impact on the equine population. A large set of equine bones from Shahr-i-Qumis has been studied in recent years at the British Institute of Persian Studies in Tehran, and a geometric morphometric project on this material is currently ongoing. Tepe Hasanlu is a fortified site located in the Solduz Valley of the Western Azerbaijan province, Northwestern Iran. The site was occupied from the Late Neolithic to the Iron Age and consisted of two distinct parts: a High Mound and a Low Mound (Dyson, 1989). A total of four individuals sequenced in this study originate from this site. Two horse samples were recovered from the citadel of Iron Age II (1,050-800 BCE), associated with the Mannaean kingdom and destroyed by the Urartians during a battle around 800 BCE.
While individual TepeHasanlu_3461_2930 was unearthed from a rough soil deposit, TepeHasanlu_3394_2808 was found together with thousands of artifacts and faunal remains, within the deposit and collapse of buildings likely used as horse stables (Dyson, 1989). Individuals TepeHasanlu_1140_2682, TepeHasanu_3459_2667 and TepeHasanlu_V31E_2667 date back to the Urartian occupation period that followed the destruction of the citadel. After a hiatus, period IIIa relates to the Achaemenid Dynasty (550-330 BCE), for which no substantial architectural remains have been found. The dating of period II is also debated, but it is generally assigned to the Seleucid or Parthian period, post-Achaemenid (Dyson, 1999). These historical periods were very short at Hasanlu, chronologically between 400 and 270 BCE. Four samples, including TepeHasanlu_2327_2352, TepeHasanlu_2529_2352, TepeHsanlu_2689_2352 and TepeHasanlu_3398_2352, belong to this Historic Era. Tepe Mehr Ali is located in the province of Fars, Southwest Iran. The site belongs to the Lapui culture, dated to the Chalcolithic (6th-4th mill. BCE), and shows an over-representation of domestic animals such as cattle, sheep and goats, but also a significant number of wild herbivores, such as hemiones and gazelles (Sheikhi Seno et al., 2012). Individual TepeMehrAli_Trj12x31_CopperAge was genetically identified as a pure hemione specimen (Schubert et al., 2017). Kazakhstan (Belkaragay, Halvai) Belkaragay is a Copper Age site located in the Kostanay region, Kazakhstan. The area of excavation spreads over 1,000 m² and is composed of ten house-like structures, in which many different animal bones were discovered. Those were attributed to various species, including the wolf, the Saiga antelope, the fox, the hemione and the horse, such as samples Belkaragay_NB13_CopperAge and Belkaragay_NB15_CopperAge included in this study (Kosintsev, 2015; Logvin and Shevnina, 2015). Sample Halvai_KSH4_4017 was excavated from Kurgan Halvai 5 (pit number 4), which is located on the left bank of the Tobol branch of the Karatomar Reservoir in Northern Kazakhstan (Kostanay Region), 500 m to the north-east of the Sintashta kurgan Halvai 3. The kurgan was 30 m in diameter and 80 cm in height. Pit number 4, which is associated with the Sintashta culture of the Bronze Age, was located directly in the center of the kurgan. The horse skull was found close to the edge of the southern wall of the pit. In addition to the horse skull, the pit also contained human remains, belonging to a female, and other grave goods, such as a zoomorphic stone altar, stone tips, pebble fragments and fragments of a vessel. Another individual, Halvai_KSH5_2542, was excavated from Kurgan Halvai 3 (pit number 8A). The horse skeleton was found together with the remains of an approximately 50-year-old woman and a sheep. Based on the position of the skeleton and stratigraphic information, the burial can be assumed to have been constructed during the Early Iron Age. Kyrgyzstan (Boz-Adyr) The Boz-Adyr burial ground is located on the slope of the Ak-Bakshy mountain range, in the Issyk-Kul region, Kyrgyzstan. The burials are located in mounds, which is characteristic of the funeral traditions of the nomadic and semi-nomadic populations of Tien Shan and Semirechye, especially during the C12th-C15th. During the excavation in 2014, three burial mounds containing horse skeletons next to human remains were discovered (burial mounds number 10, 16 and 19).
The burial rites associated with those graves are characteristic of the Turkic period of the C6th-C9th. Burial mound 16 was a swampy rock-earthen embankment of circular shape, with a diameter of 5 m. At a depth of 150-160 cm, the remains of a decapitated adult male, accompanied by a horse, were discovered. The skeleton of horse sample BozAdyr_KYRH8_1267 was supported by two boulders, with its legs bent and the head pointing west. The neck was bent, facing north. Alongside the horse remains, a bit, iron stirrups and the remains of a wooden saddle were also discovered. Burial mound number 19 is located 15 m to the west of burial mound number 16. At a depth of 140 cm, the undisturbed remains of a human and a horse, BozAdyr_KYRH10_1267, were discovered. The human skeleton was found lying on its back with the head pointing to the east and the legs bent, pointing to the right. The horse was positioned on it, with the abdomen turned to the right and the head pointing to the west. Lithuania (Marvelė cemetery) Marvelė is a large medieval cemetery located in Kaunas, Lithuania, dating from the C8th to the C11th. It consists of 211 human burials that also contain about 250 horse remains, either whole skeletons, head and forelegs only, or scattered remains (Bertašius and Daugnora, 2001). The vast majority of buried horses are young adults under ten years and can be associated with ritual offerings, which seem to have been common among the Baltic Aukštaičiai people during that era. Individuals Marvele_01_1138, Marvele_02_1138, Marvele_05_1138, Marvele_16_1138, Marvele_18_1189, Marvele_21_1087, Marvele_22_1138, Marvele_27_1138 and Marvele_32_1144 were all excavated from Marvelė cemetery. Russia (Arzhan, Balagansk, Bateni, Derkul, Kokorevo, Krasnaya Gorka, Lebyazhinka IV, Merzly Yar, Oktyabrsky, Potapovka, Sayanogorsk, Sintashta) The Scythian burial mound or so-called kurgan of Arzhan represents the youngest of its kind and can be associated with the Aldy-Bel culture, which dates back to the boundary of the 7th and 6th century BCE. The elite funeral complex is located only nine kilometers away from Arzhan I in the Uyuk hollow, from which ArzhanI_Arz3_2767 was excavated. The undisturbed kurgan is 80 m in diameter and 2 m high and consists of 27 graves, from which individuals ArzhanII_Arz15_2642 and ArzhanII_Arz17_2642 were unearthed. Additionally, a special burial including fourteen harnessed horses was found during the excavation expedition from 2001 to 2003, which includes individuals ArzhanII_Rus9_2500 and ArzhanII_Rus11_2500. Individual Balagansk_Rus19_2017 was excavated from Balagansk, a village located close to the Angara river in the Irkutsk region, flooded by the Bratsk reservoir. It represents a Central Asian trading settlement associated with the Ust-Talka culture. The sample Bateni_Rus16_3318 was retrieved from a right distal metatarsal bone and belongs to the Late Bronze Age Karasuk culture (1,500-800 BCE), which followed the Andronovo culture in the South of Siberia. It covered an area from the Aral Sea to the Yenisei river in the East and to the Altai mountains and Tien Shan in the South. Karasuk communities are known to have been farmers who practiced a mixture of agriculture and stockbreeding of cattle, sheep and horse. They are especially known for their metallurgy, in particular for their daggers and knives (Mallory and Adams, 1997). The bone sample was excavated close to the Bateni settlement, Republic of Khakassiya. The excavation site is not accessible anymore due to the flooding of the Krasnoyarsk Reservoir.
Derkul is a settlement dating back to the Neolithic, situated in the Orenburg region of Russia, from which only one house-like structure has been excavated. Both Neolithic artifacts and animal bones, including remains of individuals Derkul_NB2_Neolithic and Derkul_NB4_Neolithic, were unearthed from this small site. Kokorevo I is a late Upper Palaeolithic site associated with the Kokorevo culture, located by the Upper Yenisey river, Russia. The horse sample Kokorevo_Rus3_14450 was found in level 4 and is thus thought to be slightly older than level 3, which is dated to 14,450 years BP. This Pleistocene horse most likely represents a wild individual. Horse sample KrasnayaGorka_Rus48_1446 was sampled from a left humerus unearthed from the Krasnaya Gorka burial mound, Republic of Tyva, Russia, dating back to the C6th. No further information could be recovered regarding this site. Individual LebyazhinkaIV_NB35_Neolithic was unearthed from the settlement of Lebyazhinka IV, Samara region, Russia. Based on excavated artifacts, this site has been associated with the Neolithic and the Copper Age. Animal bone remains found on site have been associated with beavers, horses, Siberian roe deer, aurochs and elks. The Paleolithic horse MerzlyYar_Rus45_23789 was sequenced from bone fragments recovered from an outcrop in the Todza depression, close to the village of Seiba, Republic of Tuva, Russia. It is not associated with any archaeological context. Oktyabrsky is a village located on the lower Volga river, in the Ustin district, Republic of Kalmykia, Russia. The burial ground is associated with the Sarmat culture but, as is the case for many steppe cemeteries, it contains a number of additional younger graves. Individual Oktyabrsky_Rus37_830 was sampled from a left metatarsal bone of a complete horse carcass, unearthed from grave 1 of burial mound number 16. A second individual, Oktyabrsky_Rus38_659, was unearthed from grave 1 of burial mound 3. These burial grounds are associated with Sarmatian times in general, but some bone remains, including the two individuals included in this study, date back to more recent times, namely the early post-Khazar and the late Golden Horde periods. Sintashta is a late Bronze Age archaeological site located in Chelyabinsk Oblast, Russia, and is associated with the eponymous Sintashta culture. This site includes five cemeteries and no less than 40 graves. The horse samples Sintashta_NB44_3577 and Sintashta_NB45_3577 were recovered from grave number 19, together with a chariot and two other horses. Individual PotapovkaI_1_3900 was sampled from the left mandible of a mare excavated from Kurgan 3 of the settlement of Potapovka I, by the Sok River, in the Samara region, Russia. This site is associated with the Bronze Age Potapovka culture and has revealed a large number of pieces of metalwork. The horse skull was placed over the body of a decapitated woman, potentially as an offering. The city of Sayanogorsk in the Republic of Khakassiya is located in the southern part of the Minusinsk basin, on the left bank of the Yenisei river at the foot of the Sayan Mountains. Sample Sayangorsk_Rus41_2677 originates from the right tibia of a horse unearthed during excavations at the site Ai-Dai 1 (mound 3, grave 1). This site is associated with the Tagar culture (7th to 2nd centuries BCE) in South Siberia, which was preceded by the Karasuk culture.
Slovakia (Sebastovce) Individual Sebastovce_131_1317 was sampled from a tooth excavated from the medieval Avarian-Slavonic cemetery of Sebastovce, Slovakia, dating back to the C7th-C8th. Spain (Camino de las Yeseras, Cantorella, Capote, El Acequión, Els Vilars, Vicerrectorado) The large settlement of Camino de las Yeseras, located in San Fernando de Henares, Madrid, Spain, was occupied between the beginning of the 3rd and the 2nd millennia BCE. Domestic structures such as hut pits have been associated with both the pre-Beaker phase and the Bell-Beaker culture. Because of its remarkable size and strategic location, near rivers and flint mines, it is thought to have played a major role in central Iberia (Blasco et al., 2011). The central area of Camino de las Yeseras is a big structure with a sunken floor of circa 600 m², with more than 50 adjunct structures and a stratigraphy of about 2 m. One of the excavated units yielded 1,772 faunal remains, including both domestic and wild animals. The horse CaminoDeLasYeseras_CdY2_4678 was excavated from the central area of the site and radiocarbon dated to 2,861-2,496 BCE, which is associated with the Pre-Bell-Beaker phase of the site (Liesau, 2017). The Cantorella settlement is situated in the Corb valley, Catalonia, Spain; it was discovered in March 2010 and is dated to 3,670-1,800 BCE. It was inhabited twice in prehistory, once during the Final Neolithic-Chalcolithic and once during the Bronze Age, with a distance of 200 m separating the two settlements. Although the Cantorella faunal archaeological record is scarce compared to Iron Age Catalan sites, horse remains are very abundant relative to other animals, especially in the Late Neolithic-Chalcolithic occupation (Abad et al., 2011). Fifteen silos of that period revealed the presence of Equus sp., which might represent an autochthonous species of horse. Horse remains were present in approximately 50% of the Final Neolithic structures but absent from all but two Bronze Age silos. Cantorella_UE2275x2_4791 was sampled from a petrous bone unearthed from silo SJ-191, which contained two horse skulls from the Final Neolithic-Chalcolithic period. The sample Capote_Cap102_2464 is an upper molar tooth. It was found in a bone depot, inside a house at the hill-fort of Castrejón de Capote, a Celtic fortified village of the Iberian Late Iron Age (4th-1st century BCE). This house, ''HE-A,'' is an archetypical household of two rooms, the first bigger and used for several functions (cooking, eating, etc.) and the second, smaller one used for storing and sleeping. In this pattern of household, the first room usually held a hearth and a quern; here, the large central hearth was stratigraphically dated to the middle of the 2nd century BCE. Below the layer sequence of this depot lies an older sequence dated from the 4th century to the first half of the 2nd century BCE, and above it, a later sequence from the second half of the 2nd century BCE to the first half of the 1st century BCE. The three sequences show similar household remains, but only the intermediate one has faunal remains, mainly from horses, cows and pigs. Although this hill-fort is well-known in Celtic archaeology for an important, very well-preserved shrine, where collective banquets were held by the time of sequence IIb, the charcoal and bone remains of HE-A are believed to belong to the household sphere.
The two individuals belonging to the Bronze Age village of El Acequión were sampled from petrous bones (ElAcequion_Spain38_4055 and ElAcequion_Spain39_3993). El Acequión is located on the margins of an eponymous drained endorheic lake. Excavated features in the inner precinct include huts, pavements and dump yards, from which most of the faunal remains derive. An extraordinary number of horse bones have been found on site, many of them exhibiting chop and hack marks, which gave rise to the hypothesis that, despite its chronology, some of these horses could represent domestic animals (Liesau, 2005). No human remains have been recorded. The fortress of Els Vilars, located in the Segre valley, Catalonia, Spain, is a complex defensive site, occupied between 750 BCE and 325 BCE. The fortress was built and developed throughout four distinct periods: Vilars 0 (775-700/675 BCE), Vilars I (700/675-550 BCE), Vilars II (550-425 BCE), Vilars III (425-375/350 BCE) and Vilars IV (375/350-325 BCE). The pre-Iberian phases (Vilars 0 and I) yielded little faunal archaeological record and very few equids (representing only ~0.8% of animal remains), as they were most likely associated with a pastoral economy based on subsistence activities. Both the quantity of domestic animals and the proportion of horse remains excavated increased throughout the following periods, with horses representing more than 10% of excavated animal bones in Vilars III (Nieto Espinet, 2016). Intriguingly, people inhabiting Els Vilars started burying horse fetuses by Vilars I, an unprecedented ritual practice that seems to have become more common throughout Vilars II. Individual ElsVilars_UE4618_2672 was sampled from an adult bone in a domestic unit associated with Vilars I. The sample Vicerrectorado_VIR175_1717, a superior premolar, comes from an excavation in a plot of the Roman city of Lucus Augusti, capital of the Conventus Lucensis and administrative center of northern Gallaecia in the northwest of the Iberian Peninsula. In this place, a long occupational sequence has been recognized, but the stratigraphic unit (UE-2101) in which this sample was recovered is assigned to the Late Imperial period (C4th), considering the archaeological material present. In addition to equine remains, the fauna corresponds mainly to domestic species (cattle, pigs, sheep, goats, dogs) as well as some wild animals, mainly deer. Sweden (Uppsala) Individual Uppsala_Upps02_1317 was sampled from a tooth excavated from a medieval site dating back to the C7th-C9th and situated in Uppsala, Sweden. Switzerland (Augusta Raurica, Stein am Rhein Charregass, Solothurn Vigier) The Roman archaeological site of Augusta Raurica is located on the Rhine bank 20 km east of Basel, Switzerland, and was an important Roman trade center throughout the C1st, C2nd and C3rd. Individual AugustaRaurica_JG160_1817 was recovered from an underground fountain in Insula 8, an ancient residential and artisanal area of the site (Schmid et al., 2011). The fountain was established around 80 CE, together with an 11 m-deep well and an access tunnel typical of Roman provincial architecture. The well was then likely used as a disposal structure and filled in several stages with artifacts, tools and animal remains throughout the C2nd and C3rd. Individuals AugustaRauricaSchmidmatt_NBxK9279_1717 and AugustaRauricaSchmidmatt_NBxP9261_1782 were unearthed from building Schmidmatt in Augusta Raurica, which represents one of the best preserved Roman buildings of Switzerland.
The site of Stein am Rhein Charregass was a Roman military camp on a hill close to the river Rhine. It was built as a defense at the border of the Roman Empire and used from the end of the C3rd to the end of the C4th. Large amounts of animal bones were retrieved from a ditch around the camp, including sample Charregass_NBxRa849_1667. The site Solothurn Vigier was a Roman vicus and trading post by the river Aare during the C1st-C4th. It is located along the axis between the Roman regional capital Aventicum and Augusta Raurica on the river Rhine. A bridge indicates a changing place for transport animals, which is supported by the presence of cattle and many equine remains, such as SolothurnVigier_NB63_1867, SolothurnVigier_NB175_1817 and SolothurnVigier_NB699_1867. Turkey (Yenikapi) Yenikapi is a site located in present-day Istanbul that used to be the major Byzantine harbor of Theodosius, founded by emperor Theodosius I during the 4th century CE. Yenikapi thus represents the major harbor of the Byzantine period, and a prominent Late Antiquity trade hub in the Mediterranean basin. Excavations revealed 36 wrecked ships, many artifacts and tools, and over 20,000 animal bones, mostly from horses, mules and donkeys, but also cattle, sheep and pigs. Horse skeletons, representing ~32.6% of all animal skeletal remains, mainly belong to young male individuals no older than ten years (Onar et al., 2013). These include 28 stallions sequenced in this study, all sampled from petrosal bones: Yenikapi_Tur140_1289, Yenikapi_Tur141_1430, Yenikapi_Tur142_1396, Yenikapi_Tur145_1156, Yenikapi_Tur146_1730, Yenikapi_Tur150_1443, Yenikapi_Tur167_1443, Yenikapi_Tur168_1443, Yenikapi_Tur169_1443, Yenikapi_Tur170_1443, Yenikapi_Tur171_1689, Yenikapi_Tur173_1443, Yenikapi_Tur175_1443, Yenikapi_Tur176_1443, Yenikapi_Tur181_1443, Yenikapi_Tur189_1443, Yenikapi_Tur191_1443, Yenikapi_Tur193_1443, Yenikapi_Tur194_1360, Yenikapi_Tur206_1443, Yenikapi_Tur229_1443, Yenikapi_Tur243_1443, Yenikapi_Tur244_1443, Yenikapi_Tur246_1443, Yenikapi_Tur271_1443, Yenikapi_Tur273_1443, Yenikapi_Tur276_1443 and Yenikapi_Tur277_1443. United Kingdom (Brough of Deerness, Quoygrew, Whitehall Roman Villa and Witter Place) Brough of Deerness is a Pictish (pre-Viking Age) and Viking Age settlement set atop a roughly 30 m-high sea stack located in the East Mainland of Orkney. All horse samples analyzed in this study (BroughOfDeerness_VHR010_1417, BroughOfDeernees_VHR011_1367, BroughOfDeerness_VHR037 and BroughOfDeerness_VHR062_1417) are of Pictish date (C6th-C7th) and originate from excavation area C (Barrett and Slater, 2009). The sample Quoygrew_VHR017_1117 was excavated from the site of Quoygrew, a rural settlement situated on the island of Westray in Orkney, Scotland, which was occupied from the C10th until the 1930s. The sample was from Context C010 (of C11th-C12th date) within excavation area C of a coastal shell and fish-bone midden (Barrett, 2012). Individual WhitehallRomanVilla_UK08_1667 was sampled from a petrous bone of a horse excavated from a Roman settlement near Nether Heyford, Northamptonshire, United Kingdom, dating back to Roman times (C3rd and C4th). Witter Place is a post-medieval/pre-modern site, dating back to the C17th to the C19th and located just outside the northern walls of the city of Chester, United Kingdom.
Excavated in 2001, the site shows a very large number of animal remains, mainly of cattle and horse (representing 93% of identifiable remains) but also dog, and to a smaller extent goat, sheep, pig and other wild mammal species, such as rabbit or hare. Witter Place most likely represents a butchery site involved in animal-based industry and a possible tanning complex, as suggested by the discovery of tanning pits. Most horses were old individuals over 20 years, with ~3% of them showing hack and chop marks. While all parts of the horse skeleton could be recovered, the distribution of bones was uneven, with many long bones but very few phalanxes. Individuals WitterPlace_UK15_217, WitterPlace_UK16_217, WitterPlace_UK17_267, WitterPlace_UK18_267, WitterPlace_UK19_267 and WitterPlace_UK20_217 were recovered from petrosal bones found on site. Uzbekistan (Yerqorqan/Erkurgan) Sample Yerqorqan_YER28_2853 was recovered from Yerqorqan (also referred to as Erkurgan), an ancient city surrounded by walls, located in Southern Uzbekistan, north of Qarshi. It was first settled during the second half of the 2nd millennium BCE, then occupied until the Sassanian period and destroyed by the Turks during the C6th. Excavations revealed the presence of a large palace located within a citadel, as well as temples and a mausoleum, thus showing the political and religious influence of the ancient city. Moreover, excavated blacksmith tools, pottery and currencies tend to suggest that this site used to be a major trading hub. Museum Individual Museum_Earb6_89 is an English Thoroughbred racehorse stallion, known as Dark Ronald, preserved in a museum in Halle, Germany. He was born in 1905 and shot in 1928, when old and suffering from colic. Early in his life, he fathered many Thoroughbred and sport horses, and has hence had a critical influence on warmblood horse breeding to this day. DNA was sequenced from a petrosal bone of Dark Ronald's skull. Individual Museum_Earb5_105 was sampled from a petrosal bone of an English Thoroughbred stallion, also from a museum in Halle, Germany, that was born in 1891 and died in 1912. METHOD DETAILS DNA extraction and genome sequencing Drilling and DNA extractions of osseous material were carried out in the ancient DNA facilities of the Centre for GeoGenetics, University of Copenhagen (Denmark), and laboratoire AMIS CNRS UMR 5288, Université de Toulouse III Paul Sabatier (France). DNA was extracted from 115-730 mg of bone or tooth powder, following method Y from Gamba et al. (2016), with slight modifications. The powder was first digested for 1 h at 37°C in 4 mL of lysis buffer, composed of 0.45 M EDTA, 0.5% N-lauryl sarcosyl and 0.25 mg/mL Proteinase K. Following this pre-digestion step, the recovered pellets underwent a second digestion overnight at 42°C in an identical fresh lysis buffer. DNA was then concentrated and purified from the supernatant fraction of this second digestion. The vast majority of DNA extracts were also incubated with the USER™ enzyme mix (NEB®, 0.235 units/µL) at 37°C for 3 hours, in order to remove uracil residues and thus reduce the impact of nucleotide mis-incorporations due to post-mortem cytosine deamination, typical of ancient DNA (Briggs et al., 2007). DNA extracts were subsequently constructed into blunt-end DNA libraries, following Meyer and Kircher (2010) as modified in Gamba et al.
(2016) (method A) or, for a limited number of samples (Cantorella_UE2275x2_4791, ElAcequion_Spain38_4058, ElAcequion_Spain39_3993, ElsVilars_UE4618_2672), a slightly different procedure (method B) that differed from method A in the addition of 7-nucleotide-long unique indices within each P5 and P7 adapter prior to ligation to DNA, the sequences of which were obtained from Rohland et al. (2015). A quantitative real-time Polymerase Chain Reaction (qPCR) was then carried out on 20X dilution duplicates of each library to determine the required PCR cycle number for amplification. Subsequently, libraries were amplified for 4-16 cycles (Table S4), following Gamba et al. (2016). Each PCR reaction included 1 unit of AccuPrime™ Pfx DNA polymerase, 3 to 6 µL of unpurified DNA library and 1-2 µL of 5 µM custom PCR primers, one of which contained a unique external 6-bp index used for post-sequencing demultiplexing. PCR products were then purified using either MinElute columns (QIAGEN) or Agencourt AMPure beads, eluted in 25 µL of EB (10 mM Tris-Cl, pH 8.5) supplemented with 0.05% Tween, and quantified on TapeStation 2100/4200 or Bioanalyzer instruments (Agilent Technologies). Finally, sequencing was performed at the Danish National High-Throughput DNA Sequencing Centre, on the Illumina HiSeq2500 for the vast majority of samples (method A), or on the Illumina HiSeq4000 for samples Cantorella_UE2275x2_4791, ElAcequion_Spain38_4058, ElAcequion_Spain39_3993 and ElsVilars_UE4618_2672 (method B). Sequence trimming, mapping, filtering and base calibration at damaged sites were carried out following the methodology from Gaunitz et al. (2018). Multiple independent amplifications were carried out for most DNA libraries to limit PCR duplicates while limiting sequencing costs. In total, one to seven libraries were generated for each sample selected for whole-genome sequencing, among which the vast majority were USER™-treated (322/326, representing ~98.8%). The list of these indexed libraries with corresponding sequencing effort and clonality can be found in Table S4. Radiocarbon dating Unless indicated otherwise, AMS radiocarbon dating of the samples was performed at the Keck Carbon Cycle Accelerator Mass Spectrometry Laboratory, UC Irvine. Bone or tooth pieces between 1.01 g and 2.24 g were sampled in the bone laboratory of the ancient DNA facilities of the Centre for GeoGenetics and sent for subsequent dating of ultrafiltered collagen. Sample preparation backgrounds were estimated, and subtracted, based on measurements of ¹⁴C-free mammoth and whale bone. Calibration was carried out using OxCal Online (Ramsey, 2009) and the IntCal13 calibration curve. Calibrated dates are provided in Table S1. Collagen yields obtained for the samples Ridala_Rid2_2717, Marvele_01_1117, Saadjarve_Saa1_1117, Dariali_Georgia2_317, Halvai_KSH4_4017 and Halvai_KSH5_2542 were insufficient for radiocarbon dating. Their respective ages were thus determined based on their archaeological context. Read alignment, rescaling and trimming We generated one to seven DNA libraries per sample, the majority of which were built on DNA extracts treated with the USER™ enzyme mix (322/326, ~98.8%). This enzyme mix was used to limit the impact of mis-incorporations at deaminated cytosines in downstream analyses (Briggs et al., 2010). For each library, sequencing reads were parsed through PALEOMIX version 1.1.1 (Schubert et al., 2014b) with default parameters, except that seeding was disabled.
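For orientation only, the sketch below mirrors what a stand-alone run of the tools wrapped by PALEOMIX could look like for a single library. File names are hypothetical, the oversized -l seed length is a common way of effectively disabling seeding in bwa aln, and the length/quality thresholds anticipate the filters detailed in the next paragraph; PALEOMIX itself additionally handles duplicate removal, indel realignment and per-library bookkeeping.

```python
import subprocess

# Hypothetical file names; PALEOMIX orchestrates equivalent steps internally.
fastq, ref, prefix = "lib1.fastq.gz", "equCab2_plus_Y.fasta", "lib1"

# 1. Adapter/quality trimming; reads shorter than 25 nt are discarded.
subprocess.run(["AdapterRemoval", "--file1", fastq, "--basename", prefix,
                "--minlength", "25", "--trimns", "--trimqualities", "--gzip"],
               check=True)

# 2. bwa aln with seeding effectively disabled (-l longer than any read).
with open(f"{prefix}.sai", "wb") as sai:
    subprocess.run(["bwa", "aln", "-l", "1024", ref, f"{prefix}.truncated.gz"],
                   check=True, stdout=sai)

# 3. Generate alignments and keep only those with mapping quality >= 25.
samse = subprocess.Popen(["bwa", "samse", ref, f"{prefix}.sai",
                          f"{prefix}.truncated.gz"], stdout=subprocess.PIPE)
subprocess.run(["samtools", "view", "-b", "-q", "25",
                "-o", f"{prefix}.filtered.bam", "-"],
               check=True, stdin=samse.stdout)
samse.stdout.close()
```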
Through this pipeline, reads were trimmed for low-quality termini and known adaptor sequences, then filtered out if shorter than 25 nucleotides, and finally aligned against the horse mitochondrial genome (GenBank accession number NC_001640) (Xu and Arnason, 1994) and the horse nuclear reference sequence (EquCab2) (Wade et al., 2009), appended with 2,797 Y chromosome contigs (Wallner et al., 2017). AdapterRemoval2 (Schubert et al., 2016) was used to trim reads and BWA version 0.5.9-r26-dev (Li and Durbin, 2009) to map reads, excluding alignments with mapping qualities below 25. Finally, PCR duplicates were removed and reads were locally realigned around indels using the IndelRealigner procedure from GATK (McKenna et al., 2010). The software mapDamage2 (Jónsson et al., 2013) was used to check for the presence of nucleotide mis-incorporation profiles characteristic of ancient DNA data at the library level, randomly selecting 100,000 reads. We observed the expected increase of C-to-T (G-to-A) mis-incorporation rates at read starts (read ends) for both USER™-treated and non-USER™-treated data. Furthermore, genomic positions preceding read starts were enriched in purines in non-USER™ read alignments, consistent with post-mortem DNA fragmentation being depurination-driven. In USER™-treated read alignments, these positions were enriched in cytosine residues, in line with the excision of deaminated cytosines by the sequential activities of the Uracil-DNA glycosylase and Endonuclease VIII enzymes present in the USER™ mix. In order to limit the impact of remnant mis-incorporations in downstream analyses, we applied the computational procedure combining end trimming and base quality rescaling based on post-mortem DNA damage profiles, as described in Gaunitz et al. (2018). Uniparental markers Mitochondrial DNA Mitochondrial haplotypes were called following the procedure from Gaunitz et al. (2018), restricting to positions showing at least 3-fold coverage after mapDamage rescaling and trimming, and a minimal base quality score (post-rescaling) of 25. The sequence data were divided into six independent partitions, including first codon positions (partition 1; 3,802 sites), second codon positions (partition 2; 3,799 sites), third codon positions (partition 3; 3,799 sites), the control region (partition 4; 961 sites), ribosomal RNAs (partition 5; 2,555 sites) and transfer RNAs (partition 6; 1,518 sites). We then constructed a dataset including all mitochondrial sequences analyzed by Gaunitz et al. (2018) plus all novel sequences reported in this study (dataset 1, 393 sequences). This dataset was used for phylogenetic reconstruction in RAxML (Stamatakis, 2014), using, for each partition, the GTRGAMMAI substitution model. This model was determined following ModelGenerator v0.85 under the Bayesian Information Criterion (BIC) (Keane et al., 2006) as the best implemented in the software for partitions 1, 4, 5 and 6 (the best models implemented in the software for partitions 2 and 3 were GTRINV and GTRGAMMA, which are nested within the GTRGAMMAI model). Node support was estimated using 100 bootstrap pseudo-replicates. The best-ML phylogenetic tree reconstructed is provided in Figure 6A. Y chromosome Read alignment, rescaling and trimming were carried out following the same methodology as described by Gaunitz et al. (2018). The Y chromosome haplotype was reconstructed from the high-quality reads aligning against the concatenated Y chromosome contigs described by Wallner et al.
(2017) and three non-repetitive parts of the Y chromosome (GenBank accession numbers AC215855.2 and JX565703), following the filtering criteria presented by Gaunitz et al. (2018), relaxed here to include samples showing at least 25% of the total number of sites covered in the most-covered individual (Icelandic_0144A_0). This resulted in the selection of a total number of 139 individuals. In order to ensure orthology, phylogenetic reconstructions on the Y chromosome data were restricted to the set of contigs present as single copy (i.e., excluding collapsed Copy Number Variants), as identified by Wallner et al. (2017). Additionally, singletons as well as sites not covered in at least half of the total number of males were disregarded, in order to reduce the impact of sequencing errors and missing information on the reconstruction, respectively. The phylogenetic analyses were performed using a GTRGAMMA substitution model in RAxML (version 8.2.4) (Stamatakis, 2014). The GTRGAMMA model was the second best-supported model identified in ModelGenerator v0.85 (Keane et al., 2006) under both the Akaike and Bayesian Information Criteria, and the best of those implemented in RAxML. Figure 6B shows an easy-to-read collapsed version of the tree rooted on the domestic donkey, which was based on 4,996 variable sites. The overall topology was in agreement with those reported by Gaunitz et al. (2018) based on a more limited number of samples. Node support was estimated using 100 bootstrap pseudo-replicates. Autosomal and sex chromosomes For all datasets considered in subsequent analyses of autosomal and sex chromosomes, we applied a common set of quality filters using ANGSD (Korneliussen et al., 2014). We also used ANGSD to generate two different datasets based on autosomal sequencing data. The first dataset (a) consists of pseudo-haploid genomes (hereafter referred to as the pseudo-haploid dataset), generated by random sampling of reads, following the methodology used in Gaunitz et al. (2018). Briefly, from the pileup of reads passing all quality filters, we sampled a single read at every genomic site for every horse. This dataset contains 160 samples that, prior to read random sampling, reached a sequencing depth-of-coverage equal to or greater than 1-fold, including E. africanus somaliensis as an outgroup (Table S5). The second dataset (b) was prepared following the filtering of nine samples (Berel_BER04_D_2300, Friesian_0296A_0, Garbovat_Gar3_3574, Berel_BER07_G_2300, Berel_BER12_M_2300, Oktyabrsky_Rus37_830, Saadjarve_Saa1_1117, TachtiPerda_TP4_3604, Yerqorqan_YER28_2853) showing high overall error rate estimates (noCpG error estimate; ε ≥ 0.0005 errors per site) (Figure S1). In contrast to dataset (a), which was based on pseudo-haploid calls, dataset (b) is based on posterior probabilities of genotypes (the 'posterior probability genotype dataset') of 126 modern and ancient samples with a depth-of-coverage equal to or greater than 1-fold. The 126 samples fall within the second domestic clade (DOM2), as defined by both the bootstrapped neighbor-joining tree and the TreeMix results. Allele frequencies of at least 60 out of the 126 individuals were used as prior (-doMajorMinor 1 -doMaf 1 -beagleProb 1 -doPost 1 -GL 2). This dataset was used to estimate individual heterozygosities and autosomal nucleotide diversity profiles across cultures and through time (Table S5).
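As a minimal illustration of the random read sampling underlying dataset (a), the toy sketch below draws one base per covered site from a pileup of quality-filtered reads; the pileup structure and site keys are hypothetical stand-ins for the ANGSD machinery actually used.

```python
import random

def pseudo_haploidize(pileups, seed=42):
    """Draw a single read-supported base per site, mimicking the random read
    sampling used to build the pseudo-haploid dataset (a)."""
    rng = random.Random(seed)
    return {site: rng.choice(bases) for site, bases in pileups.items() if bases}

# Toy pileup: site -> bases observed on reads passing all quality filters.
pileups = {("chr1", 1002): ["A", "A", "G"], ("chr1", 5310): ["T"]}
print(pseudo_haploidize(pileups))
```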
Additionally, we used ANGSD (Korneliussen et al., 2014) to generate pseudo-haploid calls for the two sex chromosomes, conditioning on males with a depth-of-coverage of at least 1-fold. We used these datasets to calculate nucleotide diversity profiles across cultures and through time. We excluded the nine samples with high overall error rates (noCpG error estimate; ε ≥ 0.0005 errors per site) to ensure cross-comparison with the analyses based on dataset (b). Type-specific and overall error rate estimates We estimated type-specific and overall error rates following the procedure developed in Orlando et al. (2013), which leverages a test genome, an outgroup genome (E. africanus somaliensis) and a so-called 'perfect' genome (Icelandic_0144A_0). For every genomic site covered by all three samples, a random allele was sampled from the pile of reads passing the quality filters for each individual. This matrix of counts was then used to estimate the individual error rates. Briefly, the method uses a maximum likelihood approach to estimate the excess of derived mutations in the test genome, compared to the perfect genome. We estimated the error rates for all samples included in dataset (a) showing a depth-of-coverage equal to or greater than 1-fold. Three different filtering approaches were used to estimate the error rates: (i) using all covered genomic sites; (ii) masking CpG dinucleotide sites in the reference genome (EquCab2.0) (Wade et al., 2009); (iii) forcing observed transitions from procedure (i) to zero prior to estimation of the overall error rate, in order to recover an error rate based on transversions only. The distribution of error rates in modern and ancient horses following each of the three filtering approaches is shown in Figure S1A. For every individual in dataset (b), we also estimated type-specific and overall error rates using the same high-quality genome and outgroup. For every site, we sampled an allele according to its corresponding genotype posterior probabilities, used as weights. As for dataset (a), we applied the three filters described above. Distributions of error rates in modern and ancient horses following these filters of dataset (b) are shown in Figure S1B. Genetic Distance and f3-outgroup statistics We followed the methodology in Gaunitz et al. (2018) to assess the amount of shared genetic drift between a pair of individuals using the f3-outgroup statistics (Patterson et al., 2012), in the form f3(X, Y; outgroup), where X and Y represent all possible pairwise permutations of horses. We used dataset (a), which consists of 160 samples (including the ass E. africanus somaliensis as an outgroup) with a sequencing depth-of-coverage equal to or higher than 1-fold, and an Upper Palaeolithic individual, Goyet_Vert311_35780 (Table S5). To limit the possible impact of post-mortem DNA damage, we computed pairwise genetic distances on the basis of nucleotide transversions only (n = 50,757,656) using ngsDist (Vieira et al., 2016) and estimated node support from 100 pseudo-replicate bootstraps. In total, we computed 12,561 f3-outgroup statistics based on those 50,757,656 nucleotide transversion sites (Figure S7A).
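A bare-bones sketch of the f3-outgroup computation on per-site allele frequencies is given below; it omits the block-jackknife machinery used in practice to obtain standard errors, and the toy frequencies are illustrative only.

```python
import numpy as np

def f3_outgroup(p_x, p_y, p_o):
    """f3(X, Y; outgroup): average shared drift between X and Y relative to
    the outgroup (Patterson et al., 2012), over per-site allele frequencies."""
    p_x, p_y, p_o = (np.asarray(p) for p in (p_x, p_y, p_o))
    return np.mean((p_o - p_x) * (p_o - p_y))

# Toy allele frequencies at four transversion sites.
print(f3_outgroup([0.1, 0.9, 0.5, 0.0], [0.2, 0.8, 0.4, 0.1],
                  [0.0, 1.0, 0.0, 0.0]))
```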
Principal Component Analysis (PCA) Using the same dataset as for the calculation of genetic distances (dataset (a)), we conducted a Principal Component Analysis (PCA). We excluded the outgroup sample (Somali_0226A_0) and positions that segregated as singletons, which represented a total of 45,944,755 nucleotide transversions. The remaining 4,812,870 were used to conduct a PCA with plink v1.90b4.9 (Figure 5A). The fraction of the variance explained by the first three components was 11.6%, 10.4% and 8.2%, respectively. Genomic consequences of breeding: an increased genetic load Population genetics predicts that the efficacy of negative selection depends on the s × Ne product, where s is the selection coefficient and Ne the effective population size. This product is often binned into four categories, representing nearly neutral mutations (s × Ne ≤ 1), slightly deleterious (1 < s × Ne ≤ 10), mildly deleterious (10 < s × Ne ≤ 100) and strongly deleterious mutations (s × Ne > 100) (Kousathanas and Keightley, 2013; Williamson et al., 2014). With Ne recently dropping to ca. 100 reproductive horses (Corbin et al., 2010; Hall, 2016), some breeds likely carry a substantial number of alleles with s < 0.01, which behave as nearly neutral owing to a limited efficacy of negative selection (s × Ne = 0.01 × 100 = 1). Such reduced efficacy of natural selection filtering out deleterious alleles results in an accumulation of harmful mutations (Charlesworth, 2009). An increased genomic load was indeed observed for domestic horses, relative to a limited ancient DNA dataset including 14 Scythian horse genomes dating back to ~2.3 kya (Gaunitz et al., 2018; Librado et al., 2017). Leveraging the extensive time-series generated in this study (dataset (b)), we investigated the precise historical context in which the strong demographic declines resulting from breeding started to compromise the efficacy of selection, as potentially reflected by a burst in the genetic load observed in each individual genome. Prior to empirical analyses, however, we first thoroughly evaluated available genomic load estimators. In our previous work, the preferred estimator was calculated from protein-coding sites, following the procedure detailed in Librado et al. (2017): $$\mathrm{Load_{hom}} = \frac{\sum_i P(\mathrm{homozygous\_alternative})_i \times \mathrm{PhyloP}_i}{\sum_i P(\mathrm{homozygous})_i}$$ where P(homozygous_alternative)_i is the probability that site i is homozygous for the non-constrained nucleotide variant, as estimated by ANGSD (Korneliussen et al., 2014). PhyloP_i is the evolutionary score for position i, which was used as a proxy for the phenotypic impact of mutations at this position. PhyloP scores are available at http://hgdownload.cse.ucsc.edu/goldenPath/hg19/phyloP46way/placentalMammals/ (Pollard et al., 2010). Since the denominator normalizes by the total amount of homozygous positions, calculations account for individual differences in inbreeding. This estimator should thus be interpreted as the deleterious load per homozygous site. We similarly define the genetic load at heterozygous positions: $$\mathrm{Load_{het}} = \frac{\sum_i P(\mathrm{heterozygous})_i \times \mathrm{PhyloP}_i}{\sum_i P(\mathrm{heterozygous})_i}$$ We evaluated both statistics by conducting forward simulations with SLiM 3.0 (Haller and Messer, 2017). In particular, we let an ancient population of 50,000 horses evolve during 100,000 generations, until reaching mutation-selection-drift equilibrium. A hundred generations ago, the ancestral horse population split into three additional subpopulations, with Ne = 100, 200 and 500, respectively. This range was chosen to mirror recent population sizes estimated for modern horse breeds. Mutation and recombination rates were assumed to be 7.24×10⁻⁹ mutations per generation and site, and 1×10⁻⁸ crossovers per generation and site, respectively. Deleterious mutations were assumed to represent 2/3 of the protein-coding sites, and their selection coefficients were drawn from an exponential distribution with mean −0.0001 (i.e., an average selection coefficient against mutations of 0.0001).
Five thousand 5 kb-long fragments were simulated (25 Mb in total), representing 5,000 protein-coding genes. Five individuals were sampled from each population. We found that the estimator based on homozygous positions scales well with Ne reductions, for all mutation categories, as expected (Figures S3A and S3B). Although the heterozygosity dropped by ca. 12% (for Ne = 500), 27% (Ne = 200) and 44% (Ne = 100), the load per heterozygous position remained steady. Since the simulations provided certain genotype calls with null error rates, a steady heterozygous load cannot reflect methodological challenges associated with calling heterozygous genotypes. Instead, it suggests that the more random loss or fixation of alleles owing to elevated drift, especially for variants that segregated as slightly deleterious in the ancient horse population, ultimately increased the levels of homozygous load. Therefore, given the limited power provided by heterozygous sites, we estimated the genetic load from homozygous sites (a minimal sketch of this estimator is given below). We further validated the homozygous load estimator by investigating its relation to the composite s × t parameter, where s is the selection coefficient against deleterious alleles and t is the number of generations of selection. Inspired by the McDonald-Kreitman test (McDonald and Kreitman, 1991), we contrasted two classes of sites. The first class was assumed to be neutral, and comprised protein-coding sites with a PhyloP score lower than 1.5. This threshold was shown to best discriminate zero-fold and four-fold codon sites, the latter of which induce synonymous changes in protein-coding regions and are often considered to evolve under near neutrality (Librado et al., 2017). Besides drift and inbreeding, the second class of sites was assumed to be also affected by negative selection, and pertained to protein-coding sites with a PhyloP score greater than or equal to 1.5. The rationale is that elevated drift, reflected as the loss of heterozygous sites, should result in an equally-balanced increase of both homozygous genotypes. With purifying selection, however, the deleterious homozygous genotype is often filtered out, resulting in the preferential inheritance of non-deleterious homozygous genotypes. More specifically, the genotype frequencies were modeled as: $$AA_s = \frac{AA_n}{AA_n + Aa_n(1-sh)^t + aa_n(1-s)^t}$$ $$Aa_s = \frac{Aa_n(1-sh)^t}{AA_n + Aa_n(1-sh)^t + aa_n(1-s)^t}$$ $$aa_s = \frac{aa_n(1-s)^t}{AA_n + Aa_n(1-sh)^t + aa_n(1-s)^t}$$ where the s and n subindices denote selected and neutral sites, respectively, and h represents the dominance coefficient. This implies that the ratio of homozygous genotypes is: $$\frac{aa_s}{AA_s} = \frac{aa_n}{AA_n} \times (1-s)^t$$ The s × t parameter was estimated from high-quality modern horse genomes. We also conditioned on sites that segregated as nucleotide transversions in our extensive panel of ancient horses; i.e., at the time-scale studied, encompassing a few hundred generations, the accumulation of deleterious alleles mainly reflects an increased impact of drift (measured as a loss of heterozygous sites), relative to a presumably recent relaxation of purifying selection. Applying this validation method to the selected positions, we found a negative correlation between s × t and the genetic load at homozygous sites (Spearman correlation; r = −0.8736; p-value = 5.2834×10⁻⁸; Figure S3C). This confirms that a relaxation of negative selection (i.e., a reduction in s × t) drove the load increment observed in modern horse genomes.
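The sketch below assumes that the per-site probabilities of homozygous genotypes (as produced by ANGSD) and the corresponding PhyloP scores have been extracted beforehand; it implements the per-homozygous-site normalization described above.

```python
import numpy as np

def load_per_homozygous_site(p_hom_alt, p_hom, phylop):
    """PhyloP-weighted load carried in the homozygous state, normalized by the
    (probabilistic) total of homozygous positions, so that individual
    differences in inbreeding cancel out."""
    p_hom_alt, p_hom, phylop = (np.asarray(a, float) for a in
                                (p_hom_alt, p_hom, phylop))
    return np.sum(p_hom_alt * phylop) / np.sum(p_hom)

# Toy example over three protein-coding sites.
p_hom_alt = [0.9, 0.1, 0.0]  # P(homozygous for the non-constrained allele)
p_hom     = [1.0, 0.8, 1.0]  # P(homozygous, either allele)
phylop    = [2.5, 0.3, 4.1]  # evolutionary constraint scores
print(load_per_homozygous_site(p_hom_alt, p_hom, phylop))
```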
No relation to the average depth-of-coverage was found (p-value = 0.7796). Similarly, we calculated the substitution rate at functional and nearly-neutrally evolving sites. To simplify calculations, we relied on zero-fold (hereafter, 0d) and four-fold degenerate (4d) codon sites, where nucleotide changes always imply non-synonymous and synonymous changes, respectively. We conditioned on nucleotide transversions to mitigate the impact of post-mortem damage in ancient DNA samples. For each horse in DOM2, we calculated dN − dS, where dN is the genetic distance at 0d sites between the pseudo-haploidized version of a DOM2 genome and the donkey genome, and dS the equivalent distance at 4d sites. The principle of this subtraction is analogous to the well-known dN/dS statistics. While dN − dS scores greater than zero reflect the action of positive selection, values lower than zero reveal evolution under negative selection. The dN − dS values are more robust than the dN/dS statistics to the presence of undetected sequencing errors, which is typically the case for genomes sequenced at low coverage. Assuming the error rate is approximately constant at 0d and 4d sites, modeling the selective pressure as a subtraction allows canceling out the impact of errors on the dN and dS estimates; in contrast, when modeling through dN/dS statistics, a constant error rate added to the numerator and denominator could considerably distort the ratio. Less efficient purifying selection is expected to elevate dN − dS, owing to non-synonymous deleterious mutations being more often fixed. Therefore, if individual deleterious loads increased, we should find such values to be larger in modern DOM2 horses than in horses that lived prior to the C19th. This was indeed observed (Figure S3D). The correlation between the dN − dS statistics and the load estimator was actually significant (Pearson correlation; r = 0.67; p = 0; Figure S3E), despite exploiting considerably different information to estimate either the selective pressure or the accumulation of deleterious mutations, respectively. Altogether, these findings support that the genomic load only increased in the last centuries, following the horse population decline and the population structuration implemented by modern breeding practices. Individual heterozygosity We calculated the individual heterozygosity level for all horses within the second domestic clade (DOM2) using dataset (b). DOM2 members were defined based on TreeMix (Figures 3 and S7B). We summed up the posterior probability of being heterozygous per site per individual, excluding sites with missing data for each corresponding individual. To reduce the possible bias driven by mis-incorporations resulting from deamination in the historic and ancient samples, we conducted a series of analyses (i) including all covered positions, (ii) masking all data observations overlapping a CpG in the reference genome, and (iii) excluding all transitions (Figures S2A and S2B). We then performed the exact same analyses after subtracting mean error rates per site, in order to mitigate the total contribution of sequencing errors (Figures S2C and S2D). We next regenerated the dataset of posterior probabilities of genotypes, restricting the analysis to individuals with a depth-of-coverage greater than or equal to 4-fold, downsampled to 4-fold to control for the possible effect of depth-of-coverage variation across samples.
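The per-individual summation can be sketched as below, assuming per-site posterior probabilities of heterozygosity have been exported from ANGSD; the CpG/transition mask and the mean-error subtraction correspond to the optional corrections described above.

```python
import numpy as np

def individual_heterozygosity(p_het, mask=None, mean_error=0.0):
    """Average per-site posterior probability of being heterozygous for one
    individual, dropping missing/masked sites and optionally subtracting a
    mean per-site error rate as a rough error correction."""
    p = np.asarray(p_het, dtype=float)
    keep = ~np.isnan(p)                      # sites with missing data excluded
    if mask is not None:
        keep &= ~np.asarray(mask, bool)      # e.g. CpG context or transitions
    return float(np.mean(p[keep] - mean_error))

# Toy posteriors at five sites; one missing site, one CpG-masked site.
p_het = [0.02, np.nan, 0.10, 0.00, 0.50]
cpg   = [False, False, False, False, True]
print(individual_heterozygosity(p_het, mask=cpg, mean_error=0.001))
```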
Individual heterozygosity levels with and without subtracting mean error rates per site were consistent (drop of heterozygosity: 18.5% (p-value: 2.63×10⁻¹⁴) prior to subtracting errors, and 13.2% (p-value: 8.365×10⁻¹⁵) after subtracting errors). Nucleotide diversity (π) profiles across cultures We computed the nucleotide diversity (π) of a set of predefined cultures in dataset (b), provided the number of samples was equal to or greater than three (Table S5). We estimated π for the autosomes, the Y chromosome and the mitochondrial DNA, using three different approaches to compute π, depending on the ploidy and depth-of-coverage. For the autosomes, we extracted the individuals related to each culture from the 'posterior probabilities of genotypes' dataset and computed the per-site diversity as: $$\pi = \frac{n}{n-1} \times 2p(1-p)$$ where p is the ancestral allele frequency segregating within the samples of this culture, and n the corresponding sample size. For the Y chromosome, we computed the Site Frequency Spectrum (SFS) from pseudo-haploid genomes for all males per culture (N_males ≥ 3) and computed π from the SFS using an in-house script, following Librado et al. (2017). Finally, the π per culture of the mitochondrial DNA was computed from a multiple sequence alignment of haplotypes. To reduce DNA damage-related biases for ancient samples, we excluded genomic sites in CpG dinucleotide context for both the autosomes and the Y chromosome. Nucleotide diversity (π) profiles through time In addition to calculating π for a set of a priori defined cultures, we computed π through time, following the same approaches as described in section 'Nucleotide diversity (π) profiles across cultures', except that datasets were stratified according to the age of the individuals instead of culture. Samples younger than 400 years were represented within a single group, and from 400 years ago to the oldest sample in the second domestic clade, we grouped samples using a step size of 250 years. Groups with fewer than 3 stallions were excluded from the calculations. To reduce the possible bias introduced by different geographic substructure across the temporal windows, we estimated π in Asia and Europe separately (Table S5). As an additional caution, we computed π using the same set of stallions for the autosomes and the Y chromosome in each time window. To minimize the impact of damage-related biases present in ancient samples, we excluded genomic sites in CpG dinucleotides for the autosomes and the Y chromosome. Lastly, following Wutke et al. (2018), we also stratified the samples in the Asian clade into four time windows (0-400, 400-900, 900-2200, 2200-5000 years ago) and computed π for the autosomes and the Y chromosome (Figure S2E). We applied the same filters as for the time slicing described above.
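A toy version of the per-site autosomal estimator, under the unbiased 2p(1−p) form assumed above, could read:

```python
def pi_per_site(p, n):
    """Unbiased per-site expected heterozygosity, given the ancestral allele
    frequency p in a sample of n sequences (assumed estimator; see text)."""
    return 2.0 * p * (1.0 - p) * n / (n - 1.0)

# Averaging over sites yields the autosomal pi of one culture (toy values).
freqs, n = [0.25, 0.5, 0.0, 0.1], 8
print(sum(pi_per_site(p, n) for p in freqs) / len(freqs))
```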
Selection targets Population Branch Statistic (PBS) We applied the population branch statistic (PBS) (Yi et al., 2010) to a subset of cultures in the DOM2 domestic clade. PBS can identify genomic regions that underwent recent natural selection by comparing allele frequency changes between two sister populations and an outgroup population. We investigated selection targets in Byzantine horses, representative of Persian-related lines, in comparison to Gallo-Roman horses (representing European horses prior to the Islamic conquests) and Deer Stone horses (Asian horses prior to the Islamic conquests). First, using mstatspop v.0.1beta (20180220) (available at https://codeload.github.com/CRAGENOMICA/mstatspop/zip/master), we calculated the fixation index (F_ST) between pairs of cultures in 50 kb genomic windows with a step size of 10 kb, and transformed these into drift units, −log(1−F_ST). Using the computed drift units, we then calculated the population branch statistic following the methodology in Yi et al. (2010), as sketched below. We excluded genomic windows with more than 25% missing sites, as well as regions showing an excess of the transition/transversion (ti/tv) ratio. The upper boundary delineating this ti/tv excess was estimated from the distribution of ti/tv ratios across genomic windows. In particular, windows with a ti/tv greater than the mode of the distribution, plus the distance from the mode to the lowest ratio, were excluded. This was done to minimize the number of false positives due to post-mortem deaminations, which are reflected as nucleotide transitions. From the remaining windows, we selected the top-1,000 genomic windows, according to their highest PBS scores, as candidate regions for positive selection on each given branch/population. We confirmed that this arbitrarily-high threshold was conservative by estimating the distribution of (nearly-)neutral PBS scores from two categories of sites: (1) intergenic sites (defined as located at least 5 kb from the closest gene annotation) and (2) fourfold-degenerate sites within annotated protein-coding regions. Two null distributions of PBS scores under (nearly-)neutral evolution were generated by performing 5,000 bootstrap pseudo-replicates, which consisted of randomly sampling, with replacement, 50k sites pertaining to both categories (Figure S4). We found, for all three branches, that the highest PBS score for neutrally evolving sites was smaller than the lowest PBS score identified among the top-1,000 genomic windows. This indicates that the significance threshold we selected is conservative, and that the top-1,000 genomic windows provide genuine candidate regions for positive selection. The same was true when considering the top-2,000 genomic windows. Genes whose protein-coding exons overlapped the top-1,000 or top-2,000 PBS windows (Table S6) were submitted to functional enrichment analyses (Table S7), provided that they have a 1:1 ortholog relationship (single-copy genes) to human or mouse genes. Orthology was defined according to annotations in Ensembl Genes 92. Human and mouse orthologs were analyzed for functional enrichment, using the WebGestaltR script, on the following functional databases hosted by WebGestalt 2017 (Wang et al., 2017): geneontology_Biological_Process_noRedundant, geneontology_Cellular_Component_noRedundant, geneontology_Molecular_Function_noRedundant, pathway_KEGG, pathway_Panther, pathway_Reactome, pathway_Wikipathway, disease_Disgenet, disease_GLAD4U, phenotype_Human_Phenotype_Ontology. Only functional categories significantly enriched, according to their adjusted p-value < 0.05 (Benjamini and Hochberg, 1995), are reported in Table S7.
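The drift-unit combination itself is compact; a minimal sketch for a single genomic window, with Byzantine as the focal branch, could read as follows (the F_ST values are illustrative):

```python
import math

def pbs(fst_ab, fst_ao, fst_bo):
    """Population branch statistic for focal population A, given pairwise FST
    with its sister population B and the outgroup O (Yi et al., 2010)."""
    t = lambda fst: -math.log(1.0 - fst)  # FST transformed into drift units
    return (t(fst_ab) + t(fst_ao) - t(fst_bo)) / 2.0

# Toy window: Byzantine (A) vs Gallo-Roman (B), with Deer Stone as outgroup.
print(pbs(0.12, 0.15, 0.05))
```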
Mendelian traits We investigated 57 SNPs associated with key phenotypic traits, including coat-color variation, genetic disorders, body size configuration, as well as racing and locomotion skills. We explored their origin and evolution across past equestrian civilizations. We followed the same approach as in Librado et al. (2017) to chart the frequency of the causative allele in all 159 horses comprising dataset (b). Causative alleles supported by a single read were conservatively considered as absent (Figure S5). A few alleles were not detected in ancient horses but only identified in modern domesticates, including those in PDK4 and PON1, improving racing capabilities. The DMRT3 allele, causing ambling gaits, was first identified in a heterozygous horse dating back 730 years ago (TavanTolgoi_GEP13_73), from the Great Mongolian Empire. We next estimated allele trajectories over time, for the 57 SNPs, using a sliding window approach (step = 250 years, span = 1,000 years). For each horse within each time bin, we randomly sampled a single read, provided it passed all quality filters. This enabled us to calculate allele frequencies without genotype uncertainties. This process was repeated 100 times to approximate the sampling variance. The mean over the 100 replicates was plotted, with the shaded area representing twice the standard deviation, which approximately delimits the 95% confidence interval around the mean (Figures 4B and S6). TreeMix population tree We created an extended dataset (c) where specimens were grouped mainly by culture, as shown in Table S5. This dataset consisted of some individuals present in dataset (a) plus a number of other specimens for which genome-scale data could be generated, providing a minimum of 2-fold genome coverage per group considered. In total, dataset (c) included 186 horses and an outgroup. Intra-group allele counts (e.g., within cultures) were calculated by supplying the --within option to plink v1.90b4.9 (Purcell et al., 2007). The resulting output was subsequently transformed into TreeMix input format using the plink2treemix.py script delivered within the TreeMix package. Sites not covered in all groups were disregarded, yielding a total of 16,829,417 nucleotide transversions. TreeMix v1.13 (Pickrell and Pritchard, 2012) was run applying the same parameters as in Gaunitz et al. (2018), with a number of migration edges ranging from 0 to 10. The outgroup was forced to be placed in-between the wild ass and horses. The variance explained by each migration model ranged from 0.9989676 (0 migration edges) to 0.9989983 (10 migration edges), thus indicating a limited improvement for an increasing number of migration edges. For illustrative purposes, the tree with one migration edge was visualized using the R APE package (Paradis et al., 2004). Struct-f4 The D (Green et al., 2010) and related f (Patterson et al., 2012) statistics have proven to be useful tools for unraveling dynamic population structures over the last millennia, including to track introgression from extinct lineages. Methods exploiting f4 statistics have already been implemented in the ADMIXTOOLS and more recent admixturegraph packages (Patterson et al., 2012). These two package implementations often require proposing a model to be fit to all f4 permutations. Proposed models should be based on prior knowledge, in the form of potential hypotheses that can be subsequently contrasted. Owing to reduced sample sizes, however, prior knowledge can be limited or even biased. Leveraging permutations of the f4 statistics (Patterson et al., 2012), we here present an additional approach to infer fine-scale population affinities, in the form of a 3D visual embedding, which is complementary to existing methods. These tools will be included in the Struct-f4 package, available upon request.
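For orientation, the quantity being permuted can be computed naively from per-site allele frequencies as below; the Struct-f4 C implementation additionally derives ABBA/BABA counts, standard errors and Z-scores at scale.

```python
import numpy as np

def f4(p_a, p_b, p_c, p_o):
    """f4(A, B; C, O): mean product of allele-frequency differences along the
    two population pairs (Patterson et al., 2012)."""
    p_a, p_b, p_c, p_o = (np.asarray(p) for p in (p_a, p_b, p_c, p_o))
    return np.mean((p_a - p_b) * (p_c - p_o))

# Toy frequencies at four sites for populations A, B, C and the outgroup.
print(f4([0.1, 0.7, 0.5, 0.9], [0.2, 0.6, 0.5, 0.8],
         [0.0, 0.9, 0.4, 1.0], [0.0, 1.0, 0.0, 1.0]))
```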
TreeMix population tree
We created an extended dataset (c) where specimens were grouped mainly by culture, as shown in Table S5. This dataset consisted of some individuals present in dataset (a), plus a number of other specimens for which genome-scale data could be generated, providing a minimum of 2-fold genome coverage per group considered. In total, dataset (c) included 186 horses and an outgroup. Intra-group allele counts (e.g., within cultures) were calculated supplying the --within option to plink v1.90b4.9 (Purcell et al., 2007). The resulting output was subsequently transformed into TreeMix input format using the plink2treemix.py script delivered with the TreeMix package. Sites not covered in all groups were disregarded, yielding a total of 16,829,417 nucleotide transversions. TreeMix v1.13 (Pickrell and Pritchard, 2012) was run applying the same parameters as in Gaunitz et al. (2018), with a number of migration edges ranging from 0 to 10. The outgroup was forced to be placed in-between the wild ass and horses. The variance explained by each migration model ranged from 0.9989676 (0 migration edges) to 0.9989983 (10 migration edges), indicating limited improvement with an increasing number of migration edges. For illustrative purposes, the tree with one migration edge was visualized using the R APE package (Paradis et al., 2004).

Struct-f4
The D (Green et al., 2010) and related f (Patterson et al., 2012) statistics have proven useful tools to unravel dynamic population structures over the last millennia, including to track introgression from extinct lineages. Methods exploiting f4 statistics have already been implemented in the ADMIXTOOLS and the more recent admixturegraph packages (Patterson et al., 2012). These two implementations often require proposing a model to be fit to all f4 permutations. Proposed models should be based on prior knowledge, in the form of potential hypotheses that can subsequently be contrasted. Owing to reduced sample sizes, however, prior knowledge can be limited or even biased. Leveraging permutations of the f4 statistics (Patterson et al., 2012), we here present an additional approach to infer fine-scale population affinities, in the form of a 3D visual embedding, which is complementary to existing methods. These will be included in the Struct-f4 package, available upon request. The first step in the Struct-f4 pipeline is an efficient C implementation to rapidly calculate millions of f4 permutations from a TreeMix or PLINK file in tped format. The output provides BABA and ABBA counts, alongside f4 values, standard errors and Z-scores. The second program performs the 3D visual embedding. More specifically, the f4 statistic is formally defined as:

f4(A, B; C, O) = E[(pA - pB)(pC - pO)]

where pA, pB, pC and pO represent the allele frequencies in individuals/populations A, B, C and the outgroup, respectively. The term (pA - pB) is then the change in allele frequency between individuals A and B. As a drift path between two populations, it can be visualized as a Euclidean distance:

d(A, B) = sqrt((xA - xB)^2 + (yA - yB)^2 + (zA - zB)^2)

where xA, yA and zA are the x, y and z geometrical coordinates for individual A. Struct-f4 searches for the x, y and z coordinates that minimize the difference between the observed f4 values and those predicted from the 3D geometrical embedding. Higher dimensionality can also be considered. Struct-f4 currently implements a cost function based on weighted least-squares:

cost = sum_i [ (f4_obs,i - f4_pred,i)^2 / s_i^2 ]

where s_i stands for the standard deviation of the corresponding f4 estimate, so that f4 values estimated with large variances contribute less to the overall cost function. This correction makes it possible to include samples sequenced at very low depth-of-coverage. The programs are implemented in the R programming language, with C++ for intensive calculations. Struct-f4 minimizes the cost function in two steps. First, f4_2_3D runs the SANN algorithm, as implemented in the optim R function, for 100,000 MCMC iterations. This provides a sub-optimal approximation to the global maximum. This is followed by ML refinement through the L-BFGS-B algorithm, with stringent convergence criteria (maxit = 50000, factr = 1e1). Figure 5B provides a visualization of Struct-f4.
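For illustration, a compact re-implementation of the embedding step (the released Struct-f4 uses R's optim with SANN followed by L-BFGS-B; here we assume, following the standard geometric interpretation of f-statistics, that the predicted f4 is the inner product of the two branch vectors):

```python
import numpy as np
from scipy.optimize import minimize

def embed_f4(f4_obs, f4_sd, names, seed=0):
    """Find 3D coordinates whose geometry reproduces observed f4 values.
    f4_obs/f4_sd: dicts keyed by (A, B, C, D) quadruples. Illustrative
    only; not the released Struct-f4 implementation."""
    n = len(names)
    idx = {s: i for i, s in enumerate(names)}

    def cost(flat):
        xyz = flat.reshape(n, 3)
        total = 0.0
        for (a, b, c, d), obs in f4_obs.items():
            # predicted f4 = dot product of the two drift vectors (assumed)
            pred = np.dot(xyz[idx[a]] - xyz[idx[b]],
                          xyz[idx[c]] - xyz[idx[d]])
            total += ((obs - pred) / f4_sd[(a, b, c, d)]) ** 2
        return total

    rng = np.random.default_rng(seed)
    res = minimize(cost, rng.normal(size=3 * n), method="L-BFGS-B",
                   options={"maxiter": 50000})
    return res.x.reshape(n, 3)
```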
Modeling IBE contribution to DOM2
We used momi2 (Kamm et al., 2018) to model the population history underlying the main horse lineages characterized, and to further assess the possible contribution of IBE to DOM2. momi2 leverages the multidimensional Site Frequency Spectrum (SFS) to identify the best population model among a pre-defined set, each characterized by a series of demographic changes, split and admixture events. We first sub-selected modern horses to represent the DOM2 and Przewalski's horse lineages. The other lineages considered were represented by one high-coverage sample for E. lenensis (Taymyr_CGG10022_42758) and two ancient specimens sequenced to lower coverage for IBE (Cantorella_UE2275x2_4791 and CaminoDeLasYeseras_CdY2_4678). We prioritized high-coverage modern genomes whenever possible to perform strict genotype calling and avoid inflating split times and population size estimates owing to post-mortem damage and sequencing errors. The coverage and age of the samples considered are provided in Tables S2 and S3. The donkey genome was used as outgroup to polarize alleles as ancestral or derived. Sites uncovered or heterozygous in the donkey genome were thus skipped. To estimate the 4D-SFS, we applied all filters described in section 'Autosomal and sex chromosomes' to calculate genotype probabilities with ANGSD (Korneliussen et al., 2014). Based on these probabilities, we called the most likely genotype. We restricted the analysis to sites covered in all non-IBE samples, provided they did not exceed the depth-of-coverage threshold reported in the PALEOMIX depths file, corresponding to the 99.5% quantile of the per-site depth-of-coverage distribution. This was intended to eliminate regions that may represent undetected CNVs and erroneously inflate local heterozygosity estimates. We also removed heterozygous sites where the alternative allele is supported by a reduced number of reads, namely sites with a heterozygous allele balance lower than or equal to 15%. For instance, in cases where the sequencing depth was eight-fold and only one read supported the alternative allele, the site was skipped as potentially resulting from a sequencing error. The sequencing depth of the two IBE samples was not sufficiently high to perform accurate genotype calling (< 3.68x). Based on the strategy proposed by Consensify (Barlow et al., 2018), we devised a random-sampling approach that limits the impact of sequencing errors in the calculation of the 4D-SFS. Sequencing errors are indeed often reflected as singletons and introduce long terminal branches in the genealogy, which may lead to overestimating both population sizes and split times. Our strategy consisted of randomly sampling three, four and five reads per site and sample. If all three randomly sampled sets agreed, the allele determined by these reads was called; otherwise, the site was set as missing. This strategy seeks to remove errors solely represented by one or two reads, which are usually incorporated into the data when sampling a single read per site. This error-aware sampling strategy is feasible for these two samples because their average sequencing depth is not extremely low (> 2.45x), warranting a sufficient number of sites covered three, four or five times. A pseudo-diploid IBE individual was reconstructed by merging the two pseudo-haploidized IBE samples. After filtering, we retained a total of 1,228,613,653 sites. Of these, 2,183,704 correspond to nucleotide transversions and were used to build the 4D-SFS. For momi2, the transversion mutation rate was fixed to 2.3728 x 10^-9 transversions per site and generation, following previous work (Orlando et al., 2013). We then used momi2 to contrast the fit of the following evolutionary scenarios to the 4D-SFS (a minimal model-specification sketch is given after the list):
- Model A consisted of the topology (IBE, (E. lenensis, (Przewalski, DOM2))), disregarding admixture. Each branch in the population tree was allowed to have a different population size, except for the DOM2 demographic trajectory prior to 33,930 years ago, which was fixed according to the PSMC profiles reconstructed by Librado et al. (2015).
- Model B1 was equivalent to Model A, except that an introgression event from DOM2 into IBE was added, constrained to occur after the split between the Przewalski's horse and DOM2 lineages.
- Model B2 was equivalent to Model B1, except that the directionality of the introgression event was reversed, from IBE into DOM2.
- Model C1 consisted of the topology (Ghost, (E. lenensis, (Przewalski, (IBE, DOM2)))), with gene flow from the early-diverging Ghost population into IBE, as suggested by TreeMix (Figure S7B).
- Model C2 consisted of the topology (IBE, (E. lenensis, (Przewalski, DOM2))), with three gene flow events: (i) from IBE to DOM2; (ii) from IBE to the branch ancestral to (E. lenensis, (Przewalski, DOM2)); and (iii) from DOM2 to Przewalski's horses.
- Model C3 was the same as C2, except that an additional introgression event was introduced from the branch ancestral to the DOM2-Przewalski split into the IBE lineage.
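As referenced in the list above, a minimal momi2-style declaration of a Model B2-like scenario; the dates loosely follow the estimates quoted in the text, while population sizes and the pulse proportion are placeholders, and the calls reflect the momi2 API to the best of our knowledge:

```python
import momi

# Placeholder demography, loosely mirroring Model B2 (IBE -> DOM2 pulse).
model = momi.DemographicModel(
    N_e=1e5,                 # reference effective size (placeholder)
    gen_time=8,              # 8 years per generation, as in the text
    muts_per_gen=2.3728e-9)  # transversion rate (Orlando et al., 2013)

model.add_leaf("DOM2", N=5e4)
model.add_leaf("Przewalski", N=5e3)
model.add_leaf("E_lenensis", N=2e4)
model.add_leaf("IBE", N=1e4)

# Backwards in time: moving DOM2 lineages into IBE at time t represents
# a forward-in-time IBE -> DOM2 introgression pulse (~0.4% under B2).
model.move_lineages("DOM2", "IBE", t=8.5e3, p=0.004)

# Population splits (full merges, backwards in time; years ago).
model.move_lineages("Przewalski", "DOM2", t=4.4e4)
model.move_lineages("E_lenensis", "DOM2", t=1.2e5)
model.move_lineages("IBE", "DOM2", t=5.4e5)
```

Fitting would then proceed by attaching the observed 4D-SFS and optimizing the free parameters, which we omit here.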
Model A consistently recovers previously published population trajectories
To avoid issues related to local minima during likelihood optimization, models were initialized with reasonable starting guesses from previous work (Der Sarkissian et al., 2015), as well as with three sets of random values. The resulting Ne estimates also support a population decay in IBE horses, possibly owing to partial isolation within Iberia, an area that could have served as a horse refugium during the last glacial maximum (Warmuth et al., 2011). That the split times of E. lenensis and IBE are separated by only 40 kya with high Ne implies reduced drift at the time of divergence, and is consistent with their relative positioning on the PCA versus Struct-f4 plots (i.e., IBE is closer to E. lenensis in the latter, as this method is less sensitive to drift post-divergence, whereas the PCA places IBE farther away from all other lineages due to recent population collapses).

Limited but significant introgression from IBE into DOM2
Model B1 marginally improved the likelihood (-9,345,720.5512) and the KL divergence (0.05209) over Model A. Yet, the improvement detected was lower than that observed for Model B2 (likelihood: -9,345,653.5967; KL divergence: 0.05206), suggesting that introgression preferentially occurred from IBE to DOM2, rather than in the reverse direction. Model B2 estimated gene flow from IBE to DOM2 to be ~0.4%, occurring 8,483 years ago. We caution that the estimated date for introgression is likely imprecise, as the amount of underlying gene flow is extremely limited and thus barely impacts likelihood values. To further validate the directionality of gene flow, we polarized shared variants as ancestral (A) and derived (D), in order to efficiently exploit diagnostic allele configurations. We assumed the topology (((((DOM2, Dunaujvaros_Duk2_4077), Botai), E. lenensis), IBE), DONKEY). Note that Dunaujvaros_Duk2_4077 was not included in the momi2 analyses due to its low sequencing depth, but was incorporated here due to its greater affinities to IBE (Figure 7). Assuming an infinite-sites model (one mutation per site, at most), there is only one scenario whereby IBE and DOM2 (or Dunaujvaros_Duk2_4077) could share ancestral alleles, yielding ADDDAA (DADDAA) patterns. This scenario involves derived mutations happening in the lineage immediately ancestral to E. lenensis, and the ancestral allele re-entering Duk2 through introgression from IBE. Leveraging the panel of nucleotide transversions built for TreeMix, and with Sintashta_NB46_4023 as representative of DOM2, we contrasted the amount of DADDAA versus ADDDAA patterns. We indeed found an excess of DADDAA (2,722 versus 1,817), revealing increased affinities between IBE and Dunaujvaros_Duk2_4077 relative to DOM2, consistent with Figure 7. The absolute difference is small, but proved significant across 100 bootstrap pseudo-replicates (p-value < 0.01). This supports gene flow directionality from IBE to Dunaujvaros_Duk2_4077.
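A minimal sketch of the pattern counting and bootstrap just described (the site encoding, one A/D string per transversion site ordered as in the topology above, is an assumption):

```python
import numpy as np

def count_patterns(sites, n_boot=100, seed=1):
    """sites: list of 6-character strings over {A, D}, ordered as
    (DOM2, Duk2, Botai, E_lenensis, IBE, DONKEY)."""
    sites = np.array(sites)
    daddaa = int((sites == "DADDAA").sum())
    adddaa = int((sites == "ADDDAA").sum())
    # bootstrap the excess of DADDAA over ADDDAA across sites
    rng = np.random.default_rng(seed)
    diffs = []
    for _ in range(n_boot):
        boot = rng.choice(sites, size=sites.size, replace=True)
        diffs.append((boot == "DADDAA").sum() - (boot == "ADDDAA").sum())
    p = np.mean(np.array(diffs) <= 0)  # one-sided: excess disappears
    return daddaa, adddaa, p
```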
Detecting the presence of a ghost lineage
That IBE introgressed into Dunaujvaros_Duk2_4077 does not support the TreeMix inference assuming one migration pulse. Under this TreeMix tree, IBE and Dunaujvaros_Duk2_4077 are placed as sister taxa, with input from a divergent unsampled (ghost) population into IBE (Figure S7B). Using momi2, we evaluated such a topology (Model C1), but found no likelihood improvement (-9,351,206.7300). More importantly, under this model all the IBE ancestry was inferred to derive from the ghost population, suggesting that IBE should actually occupy the position of the ghost population, and thus that IBE is basal to all caballine lineages investigated here. Yet, IBE shows an extremely divergent Y chromosome (Figure 6B). Based on the observed versus expected pairwise genetic distances, used as proxies for the residuals of the model fit, we propose an alternative explanation involving two additional admixture events (Model C2). The first consists of gene flow from DOM2 to Przewalski's horses, and the second of gene flow from IBE to the ancestor of all remaining caballine horses. Implementing Model C2 substantially improved the KL divergence and likelihood values, to 0.01899 and -9,273,440.1542, respectively. Optimizing Model C2 parameters estimated that DOM2 branched off from Przewalski's horses 43.8 kya, and from E. lenensis 118.6 kya. Both estimates are more in line with previous results based on F-statistics (Orlando et al., 2013). This model also indicates that DOM2 contributed 22% to the ancestor of Przewalski's horses ca. 9.47 kya, suggesting the Holocene optimum, rather than the Eneolithic Botai culture (~5.5 kya), as a period of population contact. This pre-Botai introgression could explain the Y chromosome topology, where Botai horses were reported to carry two different segregating haplogroups: one occupying a basal position in the phylogeny, the other closely related to DOM2 (Figure 6B). Multiple admixture pulses, however, are known to have occurred along the divergence of DOM2 and the Botai-Borly4 lineage, including a 2.3% post-Borly4 contribution to DOM2, and a more recent 6.8% DOM2 introgression into Przewalski's horses (Gaunitz et al., 2018). Model C2 parameters accommodate all these as a single admixture pulse, likely averaging the contributions of these multiple events. More surprisingly, IBE was inferred to have massively contributed to the ancestor of all caballine horses, 285.3 kya, with 98.8% of its genetic ancestry. The split time between IBE and the rest of the caballine horses is then pushed back to 1.25 mya, which could be compatible with the very divergent Y haplogroup carried by IBE stallions. This 98.8% should be interpreted as if most of the IBE-DOM2 genomic regions coalesced 285.3 kya, while 1.2% originated from an even more divergent population. The massive improvement driven by this admixture event strongly supports the existence of a divergent ghost population, which could have participated in the genetic makeup of the caballine horses investigated here. We caution, however, that the exact split time of 1.25 mya might vary depending on the existence of other unsampled populations. For example, the f4 permutations involving Goyet_Vert_311 revealed that it likely represents a different lineage, with close affinities to both IBE and DOM2 at the same time. Although its low sequencing depth precludes its inclusion in the momi2 analyses, its 35 kya radiocarbon date suggests it could descend from a population ancestral to the Przewalski-DOM2 split. More specifically, comparison of the observed versus predicted genetic distances (residuals) of Model C2 indicated that both Przewalski's and DOM2 horses should be modeled as closer to IBE. We thus incorporated another admixture pulse in Model C3, from the branch ancestral to DOM2 and Przewalski's horses into IBE. We again found a significant improvement, with KL divergence and likelihood values of 0.01738 and -9,269,933.1010, respectively.
Model C3 predicts a 36.7% contribution into IBE, 65.2 kya. The split between IBE and all remaining horse lineages is then reduced to 539.1 kya, more in line with the scale of the Y chromosome tree. The Ne of Przewalski's and IBE horses decreased, to ca. 3,798 and 8,848 reproductive individuals, respectively. Importantly, Model C3 is consistent with the other models investigated, inferring an IBE to DOM2 introgression of only 1.4%.

Analytical predictions corroborate limited IBE contribution into DOM2
We next analytically estimated whether the admixture proportions estimated from the fG parameter (Figure 7A) represent reasonable estimates of the possible genetic contribution of IBE into the DOM2 lineage. The following derivations assume three well-established constraints:
- Under Model C3, momi2 estimated an effective population size of 8,848 for IBE horses, which is slightly larger than the value retrieved for Przewalski's horses, but considerably smaller than those leading to both the E. lenensis and DOM2 lineages.
- Under Model C3, the split time of E. lenensis was estimated to be ~112.7 kya, in line with previous work (Orlando et al., 2013). With a generation time of 8 years, this means that IBE diverged approximately 15,000 generations ago (kga), or earlier.
- As shown in Figure 7B, the introgression from IBE to DOM2 should have occurred after the split of Przewalski's horses and DOM2, some 35.4 kya (~4.425 kga).
Given the distance between the early IBE split and its late contribution to DOM2, we here demonstrate that an admixture fraction greater than 15% is highly incompatible with the f4 values reported in Figure 7B, ranging from -0.00018 for Dunaujvaros_Duk2_4077 to -2.68933 x 10^-5 for Jeju_0275A. Durand and colleagues analytically showed that the expected number of ABBA-BABA events, in a (O,(P3,(P2,P1))) configuration, can be expressed in terms of explicit coalescent parameters (Durand et al., 2011). Respecting their notation (the sign of f4 is reverted because we calculated BABA-ABBA):

-f4 = (ABBA - BABA) / #SNPs = 3 f d (N + tP3 - tGf - t) / #SNPs

Isolating the admixture fraction (f):

f = (#SNPs x (-f4)) / (3 d (N + tP3 - tGf - t))

where d is the probability of pairwise coalescence within P3. The denominator includes (i) N, the size of the population ancestral to P1, P2 and P3; (ii) tP3, the split time of P3; (iii) tGf, the time of admixture between P3 and P2; and (iv) t, the expected time of coalescence within P3, given that both lineages indeed coalesced within P3. Assuming a constant population size N3 within P3, d = 1 - exp(-(tP3 - tGf) / (2 N3)). Our three constraints imply that tGf < 5.6 kga, tP3 > 10 kga and N3 << N. Indeed, while momi2 infers an N3 of 8,848 IBE horses, the PSMC reconstructions 200-100 kya indicate that the horse population exceeded N = 100,000 horses (Librado et al., 2015). Based on this, we evaluated realistic ranges for N (100-200k reproductive horses), N3 (5-50k), tP3 (10-62.5 kga) and tGf (0.5-6.2 kga). Within such a parametric space, an f4 = -0.0001 (Figure 7B) is compatible only with very small admixture proportions. The maximum possible admixture fraction is ~12%, obtained under the least realistic values of N = 100,000, N3 = 45,000, tP3 = 10,000 ga (80,000 ya) and tGf = 6,000 ga (48,000 ya). For other parameter combinations, the predicted admixture fraction is ~1%-2%, in line with the limited fraction modeled in Model C3. Coupled with such constraints, therefore, the small f4 values in Figure 7B corroborate that IBE contributed to DOM2, but cannot represent the major genetic pool leading to the makeup of modern domesticates.
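The parameter scan above can be reproduced with a short grid evaluation of the rearranged expression (a sketch under the stated assumptions; the forms of d and of the mean coalescence time within P3 follow our constant-size derivation, so the exact maximum may differ slightly from the ~12% corner case quoted in the text):

```python
from itertools import product
import numpy as np

N_SNPS = 16_829_417  # transversion panel size (assumed to match the f4 panel)

def admix_fraction(f4, N, n3, t_p3, t_gf):
    """Admixture fraction from the rearranged Durand et al. (2011)
    expression; times in generations, sizes in reproductive individuals."""
    dt = t_p3 - t_gf
    d = 1.0 - np.exp(-dt / (2.0 * n3))                    # P(coalescence in P3)
    t_in = 2.0 * n3 - dt * np.exp(-dt / (2.0 * n3)) / d   # E[T | T < dt]
    t_bar = t_gf + t_in
    return (N_SNPS * -f4) / (3.0 * d * (N + t_p3 - t_gf - t_bar))

# Scan the parameter ranges used in the text
best = 0.0
for N, n3, t_p3, t_gf in product([1e5, 2e5], [5e3, 45e3],
                                 [1e4, 62.5e3], [5e2, 6.2e3]):
    if t_gf < t_p3:
        best = max(best, admix_fraction(-1e-4, N, n3, t_p3, t_gf))
print(f"maximum admixture fraction across grid: {best:.1%}")
```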
Species and sex identification
Preliminary sequence data obtained while screening the DNA libraries, typically considering 1-5 million sequencing reads, were processed with the Zonkey package, part of PALEOMIX (https://github.com/MikkelSchubert/paleomix) (Schubert et al., 2017), in order to identify first-generation equine hybrids and determine the molecular sex of each individual. Among the 278 individuals included in this study, we detected three hemiones, three asses, 27 mules and 245 pure horses. We also identified 196 males (~70.5% of all individuals) and 82 females (~29.5%). Additionally, among pure horses, we identified 175 stallions and 70 mares.

Supplementary figure legends
(A) Histogram of overall error rate estimates for samples in dataset (a) (n = 159), based on random sampling of a single allele per site. All horses were grouped into modern (turquoise) or ancient (red). Three approaches were used to estimate the individual error rates: 'All sites' makes use of all sites for which sequencing data were present; 'All sites except CpGs' masks data in CpG contexts according to the reference genome; and 'Transversions only' excludes all sites observed as transitions. The vertical line represents the maximal allowed error rate (0.0005) used as cut-off for analyses including 'All sites except CpGs'. (B) Histogram of overall error rate estimates for samples in the DOM2 clade (dataset (b), n = 126), with the same filtering procedure as in (A), but based on posterior genotype probabilities. (E) Autosomal and Y chromosome nucleotide diversity (pi) estimates through time. Samples in the Asian clade were stratified into four time windows (0-400, 400-900, 900-2,200 and 2,200-5,000 years ago), following Wutke et al. (2018). Heterozygosity was calculated for the 'All except CpG sites' dataset.
For panels (A) and (B) of the simulation figure, mutations were classified as nearly neutral (0 < Ne x s <= 1), slightly (1 < Ne x s <= 10), mildly (10 < Ne x s <= 100) or strongly deleterious (Ne x s > 100), according to their simulated selection coefficient (s) and population size (Ne). The latter was determined in the ancestral population (referred to as PAnc), prior to the corresponding demographic collapses to Ne = 100 (P100), Ne = 200 (P200) and Ne = 500 (P500). Five individuals were sampled from each population, labeled I1-I5. The homozygous load increased with stronger population declines, especially due to the random fixation of deleterious mutations that, in PAnc, were slightly deleterious (i.e., in PAnc they were less likely to be fixed because the efficacy of negative selection, Ne x s, was higher). Although heterozygosity levels dropped following the Ne reductions, the load per heterozygous site remained steady, indicating that heterozygous load has limited power to identify demographic collapses. (C) Inverse correlation between mutational loads estimated at homozygous sites in modern horse genomes and the accumulated strength of purifying selection over generations, estimated as described in STAR Methods. (D) Individual differences between non-synonymous (dN) and synonymous (dS) substitutions in DOM2 horses. (E) Positive correlation between mutational loads estimated at homozygous sites and the difference between non-synonymous (dN) and synonymous (dS) substitutions.
(A) f3-outgroup statistics, where X and Y represent pairwise combinations (n = 12,561); the statistic was computed including the outgroup species (E. africanus somaliensis) and 50,757,656 nucleotide transversions, using an in-house C++ script. (B) TreeMix phylogenetic relationships assuming one migration edge (weight = 0.386). The tree topology was inferred using a total of ~16.8 million transversion sites. The name of each sample provides the archaeological site as a prefix and the age of the specimen (years ago) as a suffix. Name suffixes (E) and (A) denote European and Asian ancient horses, respectively. See Table S5 for further information on the datasets used in each analysis.
FTIR Spectroscopy to Reveal Lipid and Protein Changes Induced on Sperm by Capacitation: Bases for an Improvement of Sample Selection in ART

Although sperm selection is a crucial step for the success of Assisted Reproduction Technologies (ART), to date it is based only on morphology, motility and concentration. Considering the many possible alterations, there is a great need for analytical approaches allowing more effective sperm selection. Fourier Transform Infrared (FTIR) spectroscopy may represent an interesting possibility, being able to reveal many macromolecular changes in a single, non-destructive measurement. As a proof of concept, in this observational study we used an FTIR approach to reveal features related to sperm quality and the chemical changes promoted by in vitro capacitation. We found indications that α-helix content is increased in capacitated sperm, while high percentages of β-structures seem to correlate with poor-quality spermatozoa. The most interesting observation concerned the lipid composition, measured as CH2/CH3 vibrations (2853/2870 ratio), which turned out to be strongly influenced by capacitation and well correlated with sperm motility. Interestingly, this ratio is higher than 1 in infertile samples, suggesting that motility is related to sperm membrane stiffness and lipid composition. Although further analyses are required, our results support the concept that FTIR can be proposed as a new smart diagnostic tool for semen quality assessment in ART.

Introduction
Selection of good-quality spermatozoa is of crucial importance for the outcome of Assisted Reproduction Technologies (ARTs) such as in vitro fertilization and intracytoplasmic sperm injection. Indeed, despite the worldwide spread of ARTs, their efficiency still has to be improved, since over the years their success rates (i.e., pregnancy rates) have not changed drastically [1]. A critical point is the quality of spermatozoa: in vivo, sperm with greater capability for fertilization and embryo development are selected during their journey through the oviduct [2], whereas in ARTs, morphology and motility after in vitro capacitation [3] are the sole parameters routinely used for qualitative assessment. The choice of the best capacitated sperm is then performed using an optical microscope, although with this approach subtle biochemical and genomic alterations cannot be disclosed [4]. Notably, the risk of fertilizing the oocyte with defective spermatozoa is greatest when they come from men with impaired sperm parameters [5], since such samples have a higher incidence of abnormalities, including DNA fragmentation, that could lead to developmental failure and even affect the offspring in the long term [6]. Different biochemical modifications can also contribute to diminishing sperm fertilizing potential; in particular, DNA fragmentation is a significant concern in the field of ART, receiving increasing attention [7]. Numerous assays have been developed to monitor cell suffering, DNA damage [8] and, in general, various biochemical alterations. However, testing all the potential assays that could be useful for selecting the best cells before ART procedures is impractical, since it implies high costs and long evaluation times. More importantly, the limited quantity of sample usually available in ART does not allow multiple evaluations.
Therefore, the availability of non-destructive analytical procedures that simultaneously monitor different parameters would be highly advantageous for an effective selection of good-quality semen. Advanced setups of vibrational spectroscopy, mainly Raman and Fourier Transform Infrared (FTIR) spectromicroscopy, have recently emerged to address this aim [9-11]. In principle, sperm can be analyzed at the single-cell level with Raman spectromicroscopy, revealing precious information on nuclear DNA status and identifying nucleotide base damage and/or fragmentation [10], while requiring minimal sample preparation. We also recently demonstrated that DNA damage and modifications may be revealed by UV resonant Raman spectroscopy [12], despite the need for highly concentrated and purified nucleic acids [13]. However, Raman approaches are difficult on intact cells, since the laser induces photodamage, limiting their direct use on ART specimens. FTIR spectroscopy is among the most promising and effective tools in biomedical research, allowing the detection of sample vibrational features without external labels or laborious sample preparation, while providing comprehensive information on biochemical composition and macromolecular structure. When performing FTIR spectroscopy on biological materials, the most informative spectral regions are the fingerprint region (900-1350 cm-1) for nucleic acids and sugars, the amide I and amide II (amide I/II) region (1450-1700 cm-1) for proteins, and the region between 2800 and 3500 cm-1, attributable to the stretching vibrations of S-H, C-H, N-H and O-H and to carbon-skeleton fingerprint vibrations mainly belonging to lipids [14]. We have already used FTIR to reveal oxidative-stress-mediated DNA damage, artificially induced by an in vitro Fenton's reaction on hydrated sperm samples. Although the interpretation of DNA damage was partially compromised by experimental artifacts, we showed that strong oxidation of nucleic acids can be revealed through spectral changes in the 1150-1000 cm-1 infrared region, the signature of the phosphate stretching bands. In addition, under the oxidant condition, the analysis of other infrared spectral regions highlighted signs of lipid peroxidation, protein misfolding and aggregation [9]. To further explore the potential of FTIR spectroscopy for sperm characterization, in the present work we used FTIR spectromicroscopy to evaluate the semen of male subjects from couples undergoing ART procedures, with the aim of unraveling the specific macromolecular changes promoted by the in vitro capacitation process and the features corresponding to good quality. Moreover, UV-Raman spectroscopy was exploited on DNA isolated from sperm in order to reveal the status of the nitrogenous bases.

Sperm Preparation
Six freshly ejaculated semen samples were collected at the Assisted Reproduction Unit of the Institute for Maternal and Child Health IRCCS Burlo Garofolo, Trieste, Italy. Samples, obtained through masturbation after at least two days of sexual abstinence, were processed according to World Health Organization (WHO) guidelines [4]. All donors signed an informed consent, and the study was conducted in accordance with the ethical standards of the Declaration of Helsinki (7th version, 2013) and approved by the FVG regional ethical committee (5mille15D1, approval date: 22 October 2019).
Sperm were washed in Sydney IVF Gamete Buffer (Cook Medical, Bloomington, IN, USA) and an aliquot was prepared by swim-up as previously described [3]. Briefly, 1 mL of semen was washed with a double quantity of medium (Quinn's Advantage Medium w/HEPES, SAGE BioPharma, Bedminster, NJ, USA) supplemented with 0.5% human serum albumin (SAGE Assisted Reproduction Products, CooperSurgical, Trumbull, CT, USA) and then centrifuged at 300x g for 10 min. The pellet was resuspended in 0.5 mL of medium in a tube and covered with an additional, gently layered 0.5 mL of medium; the tube was then sloped at an angle of 45 degrees and incubated for at least 45 min at 37 °C. At the end, the tube was gently set upright and two aliquots were collected by gentle aspiration with a Pasteur pipette: the upper interface (fraction 1), corresponding to capacitated semen, and the pellet (fraction 3), corresponding to non-capacitated cells. Washed base samples are referred to in the paper as the control. Small aliquots of the control and of fractions 1 and 3 were examined for sperm concentration and motility.

FTIR Spectroscopy
FTIR measurements were carried out at the Chemical and Life Science branch of the infrared beamline SISSI, Elettra Sincrotrone Trieste (Trieste, Italy), using a Hyperion 3000 Vis-IR microscope (15X condenser/objective) and an MCT (Mercury-Cadmium-Telluride) detector coupled with a Vertex 70v interferometer (Bruker Optics GmbH, Ettlingen, Germany). Live sperm cells were measured upon washing of the sample and resuspension in 0.9% NaCl. FTIR measurements were taken at several points of the samples, kept at room temperature, using a fluidic cell equipped with 0.5 mm CaF2 windows, collecting at least 20 spectra (512 scans per spectrum) with a resolution of 4 cm-1. Background spectra were obtained for each fraction, with the same acquisition parameters, on regions where only the buffer was present. During all measurements the interferometer was kept in vacuum, while the microscope chassis was purged with nitrogen flow.

FTIR Data Processing and Analysis
Raw IR spectra were corrected for the contribution of atmospheric CO2 and water vapor. Additionally, in order to avoid artifacts induced by local thickness variations, vector normalization over the entire spectral range was performed with the Atmospheric Compensation and Vector Normalization routines of the OPUS 7.5 software (Bruker Optics GmbH, Ettlingen, Germany). Absorbance spectra of hydrated spermatozoa were obtained by subtracting the spectrum of the physiological buffer solution, collected close to the measured cell group. Each sperm spectrum underwent standard vector normalization for comparison purposes. The standard deviation of the average spectra was also calculated. A rubber-band baseline was subtracted from the averaged spectra in order to reduce the Mie scattering contribution [15]. Second derivatives of the subtracted cell spectra were computed for the average spectra using OPUS 7.5 (Bruker Optics GmbH) in the 3020-920 cm-1 spectral range (Savitzky-Golay filter, 13 smoothing points). A few spectral regions of interest were selected in order to evaluate possible modifications occurring in lipid content and in protein secondary structure populations. Accordingly, the whole spectra were cut into two spectral regions, 3000-2800 cm-1 (lipids, proteins) and 1750-1480 cm-1 (proteins), respectively.
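In practice these steps were performed in OPUS; for illustration, a minimal sketch of an equivalent chain (buffer subtraction, vector normalization, Savitzky-Golay second derivative) is:

```python
import numpy as np
from scipy.signal import savgol_filter

def preprocess(wavenumbers, absorbance, buffer_spectrum):
    """Buffer subtraction, vector normalization and second derivative,
    mirroring the OPUS-based pipeline described above (illustrative)."""
    spec = absorbance - buffer_spectrum          # remove buffer contribution
    spec = spec / np.linalg.norm(spec)           # standard vector normalization
    # Savitzky-Golay second derivative, 13 smoothing points as in the text
    d2 = savgol_filter(spec, window_length=13, polyorder=3, deriv=2)
    # restrict to the 3020-920 cm-1 range used for the analysis
    mask = (wavenumbers >= 920) & (wavenumbers <= 3020)
    return wavenumbers[mask], spec[mask], d2[mask]
```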
The minima found with the second-derivative approach were used as initial values for the centers of the Gaussian curves used in the fitting procedures, performed with home-built software called "PlotIR". The centers of the Gaussian curves were left free to vary within 4 cm-1 around their initial positions, while the full width at half maximum (FWHM) of the curves was left free to vary within 10 cm-1. The positions of the infrared peaks found in the 1750-1480 cm-1 and 3000-2800 cm-1 regions, respectively, were collected.

UV Resonant Raman (UVRR)
DNA was extracted from the semen samples (base and capacitated sperm) using the salting-out method [16] with some modifications. Briefly, about 1-2 million cells were lysed with proteinase K (0.15 mg/mL final concentration) in 428 µL of lysis buffer (10 mM Tris, 0.01 M EDTA, 0.1 M NaCl and 3% SDS) for 2 h at 37 °C. The samples were then precipitated with 224 µL of 6 M NaCl; after centrifugation, the nucleic acids were precipitated with 1 mL of cold ethanol, and the pellet was resuspended, after centrifugation and washing, in distilled water. Five microliters of DNA solution were deposited by drop casting on an aluminum surface in order to perform the UVRR measurements, carried out at the Elettra Synchrotron Radiation facility. A complete description of the experimental apparatus can be found elsewhere [17]. A 244 nm laser source was employed to excite the samples, with a power of approximately 50 µW. The Raman scattering signal was collected in a backscattering configuration. To avoid photodamage, samples were continuously shaken during acquisition. The Raman instrument consisted of a Czerny-Turner spectrometer with a focal length of 750 mm, a holographic reflection grating of 1800 g/mm and a Peltier-cooled, back-thinned CCD. Raman frequencies were calibrated on cyclohexane spectra, and the spectral resolution was 5 cm-1 [18]. Final spectra were obtained by averaging 12 spectra of 30 s each, for a total integration time of 360 s.

Spermiogram
The sperm motility of the capacitated and pellet fractions of the six patients is shown in Table 1. Motility was calculated as the percentage of total spermatozoa in each fraction that did not remain immobile. Notably, patients 2, 5 and 6 had abnormal motility values, being lower than 40%. The other clinical parameters, including volume, concentration, morphology and the presence of leukocytes and other cells (such as epithelial cells), were in the normal range.

FTIR Analysis
In Figure 1, the FTIR spectrum of a patient's sperm cells is presented; the shadowed boxes highlight the most important spectral regions to be taken into account, namely the fingerprint region (900-1300 cm-1), the amide region (1450-1700 cm-1) and the lipid one at higher wavenumbers (2800-3000 cm-1). The fingerprint region collects vibrational modes arising from cholesterol, DNA and fatty acids, whereas the other two regions are purely assigned to protein and lipid structures [19]. The FTIR fingerprint region of the different fractions seems to be less sensitive to capacitation and, due to the presence of overlapping bands, we could not establish whether minor differences between band intensities correlate straightforwardly with spermatozoa motility [20]. For this reason, we decided to focus the study on the regions corresponding to the amide bands and lipids.
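A minimal sketch of such a constrained multi-Gaussian fit (illustrative; the actual analysis used the home-built "PlotIR" software), with the second-derivative minima supplied as initial band centers:

```python
import numpy as np
from scipy.optimize import curve_fit

def multi_gauss(x, *params):
    """Sum of Gaussians; params = (amp, center, sigma) per band."""
    y = np.zeros_like(x)
    for amp, cen, sig in zip(params[0::3], params[1::3], params[2::3]):
        y += amp * np.exp(-((x - cen) ** 2) / (2.0 * sig ** 2))
    return y

def fit_bands(x, y, centers, window=4.0, fwhm_max=10.0):
    """Fit one Gaussian per initial center, constraining each center to
    +/- 4 cm-1 and each FWHM to at most 10 cm-1, as described above."""
    sig_max = fwhm_max / 2.3548  # FWHM = 2*sqrt(2*ln 2) * sigma
    p0, lo, hi = [], [], []
    for c in centers:
        p0 += [y.max() / len(centers), c, sig_max / 2]
        lo += [0.0, c - window, 1e-3]
        hi += [np.inf, c + window, sig_max]
    popt, _ = curve_fit(multi_gauss, x, y, p0=p0, bounds=(lo, hi))
    # return per-band (amplitude, center, area)
    return [(a, c, a * s * np.sqrt(2 * np.pi))
            for a, c, s in zip(popt[0::3], popt[1::3], popt[2::3])]
```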
Additionally, possible nucleic acid modifications among the fractions were investigated by means of UV Resonance Raman spectroscopy at 244 nm, a wavelength appropriate to predominantly enhance the nucleic acid contributions while avoiding spectra affected by a fluorescence background.

Figure 1. IR spectra: FTIR spectrum of the control sample of patient 4. Colored boxes highlight the spectral regions of interest: the fingerprint region (1300-900 cm-1), that of the amide bands (1700-1480 cm-1) and, finally, that assigned mainly to lipid vibrations (3000-2800 cm-1).

Protein Secondary Structure
Amide bands (1700-1480 cm-1), and in particular amide I (1600-1700 cm-1), represented the most informative spectral features for the characterization of protein secondary structure. Amide I arises mainly from the C=O stretching of the peptide linkage, and its intrinsic broadness derives from its multi-component nature. Interestingly, several minima were observed in the second-derivative spectra in the amide I region; each minimum corresponds to a particular protein arrangement in 3D space. In Table 2, the positions of the sub-bands characterizing amide I are reported with their relative assignments. Despite the greater sensitivity of amide I to modifications of protein secondary structure, amide I and II were fitted simultaneously, to avoid underestimating the contents of protein subpopulations due to the manipulation of the spectra (i.e., cutting them around 1600 cm-1) [19,25].

In Figure 2, the amide bands of the control and of fractions 3 and 1 are reported. The amide I band shape of the controls is similar to that of fraction 3, whereas that of fraction 1 is slightly perturbed by the capacitation process. The observed effect must be considered as an average change of the sperm proteome, rather than a specific protein alteration. All the spectra reported in Figure 2 were fitted with Gaussian curves as described in the Materials and Methods, in order to evaluate both the possible secondary structure modifications provoked by in vitro capacitation and, secondly, whether an average protein composition linked to the motility value could be derived. (In Figure 2, each patient is associated with a specific color: patient 1 is depicted in black, patient 2 in red, patient 3 in blue, patient 4 in pink, patient 5 in orange and patient 6 in green. The insets, aimed at elucidating the major differences in protein secondary structure composition, report the findings concerning two cases characterized by important differences in motility, i.e., extreme phenotypes with very high versus very low motility.)

The overall secondary structure composition of each sample is reported in Figure 3. Control samples presented similar contents of β-structures and α-helix, with a slight predominance of the former in half of the analyzed samples. Fraction 3 samples were characterized by a predominance of β-structures, especially in patients 1, 2, 5 and 6. Differently from fraction 3, fraction 1 samples were mainly marked by a higher content of α-helix in all patients except patient 2. Thus, the secondary structure variability visible in control and fraction 3 samples was reduced by capacitation, since five out of six patients showed a markedly enhanced presence of helix-like secondary structure. In the inset of Figure 3, the mean values of both β-structures and α-helix are presented with their standard deviations. The mean contents of β-structures and α-helix in control samples were 48% ± 5% and 44% ± 3%, respectively, whereas they reached 61% ± 14% and 34% ± 13% in fraction 3 samples. Capacitation overturned these percentages in fraction 1 samples, to 39% ± 7% and 55% ± 7%, respectively.
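Building on the band fit sketched above, per-sample secondary structure percentages can be obtained by pooling fitted band areas by assignment (the assignment dictionary below is illustrative; the assignments actually used are those of Table 2):

```python
# Illustrative mapping of amide I sub-band centers (cm-1) to structures;
# the actual band-to-structure assignments are given in Table 2.
ASSIGNMENT = {1655: "alpha", 1635: "beta", 1684: "beta", 1645: "random"}

def structure_percentages(bands, tol=4.0):
    """bands: list of (amplitude, center, area) from the multi-Gaussian
    fit. Returns the % of total assigned area per structure class."""
    totals = {}
    for _, center, area in bands:
        for ref, label in ASSIGNMENT.items():
            if abs(center - ref) <= tol:
                totals[label] = totals.get(label, 0.0) + area
                break
    s = sum(totals.values()) or 1.0
    return {label: 100.0 * a / s for label, a in totals.items()}
```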
Oxidative Stress: Lipid Peroxidation
In order to observe whether the extent of lipid peroxidation could be considered as alternative evidence of spermatozoa motility, we focused on the 3000-2800 cm-1 infrared region, which is mainly characterized by lipid vibrations [14], especially CH2 and CH3 symmetric and asymmetric stretching. A detailed assignment of the bands is reported in Table 3 [14,26,27]. As reported in Figure 4, the shape of this region was maintained throughout spermatozoa capacitation, but a deeper analysis revealed that the relative intensity of the two bands at 2853 cm-1 and 2870 cm-1 was influenced by the capacitation process. The 2853 cm-1 band corresponds to the symmetric vibration of CH2, while the 2870 cm-1 band corresponds to the symmetric vibration of CH3. Interestingly, the former is considered a marker of the lipidic character of a cellular compartment, while the 2853/2870 cm-1 ratio has been linked to the extent of lipid peroxidation or, more generally, to biophysical modifications of the lipidic chains composing cellular membranes [29,30]. In particular, an increase of this ratio has been related to a higher acyl chain unsaturation level [31]. Accordingly, by fitting the whole spectral region with a sum of Gaussian curves, we could calculate the ratio R = A(2853)/A(2870) between the areas of the two bands. In Figure 5, the R ratios calculated for the control, fraction 1 and fraction 3 samples of all patients are presented. The R value was strongly affected by capacitation, and an increase of the ratio was observed in patients with low-motility spermatozoa. In fact, almost all fraction 3 samples had low motility with respect to the corresponding fraction 1 and an R value higher than or close to 1. Where the sample was characterized by low-motility spermatozoa and the artificial capacitation failed (for example, patient 2), all the fractions analyzed had an R value close to or higher than 1. Differently, patients 1, 3 and 4 were characterized by more motile spermatozoa already in the control fraction, and at least the control and fraction 1 had an R value lower than 1.
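Given the fitted band areas, the R ratio and the motility interpretation proposed here reduce to a few lines (helper names are ours; the threshold of 1 follows the text):

```python
def lipid_ratio(bands, tol=4.0):
    """R = A(2853)/A(2870) from fitted (amplitude, center, area) bands."""
    def area_near(ref):
        return sum(a for _, c, a in bands if abs(c - ref) <= tol)
    return area_near(2853) / max(area_near(2870), 1e-12)  # guard zero area

def classify(r, threshold=1.0):
    """Interpretation proposed in the text: R >= ~1 flags low motility."""
    return "low-motility profile" if r >= threshold else "high-motility profile"
```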
The effectiveness of capacitation was clearly manifested in those cases where low-motility spermatozoa were separated from the rest, remaining in fraction 3 with an R value higher than or close to 1. Additionally, when the motility of spermatozoa in the control fraction was near 40% (patients 5 and 6) and capacitation was able to select only the highly motile ones, fraction 1 had an R value lower than 1, whereas fraction 3 had an R value close to 1. Thus, on the basis of the R value (i.e., of the extent of lipid peroxidation), we could distinguish fractions characterized by spermatozoa with low motility from those with good quality values. The results of our analysis are summarized in Figure 6. Control, fraction 1 and fraction 3 samples with lower motility were characterized by an R value higher than or close to 1. In particular, low-motility samples (i.e., those of patients 2 and 5) had average R values, for the control, fraction 3 and fraction 1, of 1.5 ± 0.7, 1 ± 0.1 and 1.2 ± 0.2, respectively. On the contrary, control, fraction 1 and fraction 3 samples with high motility were characterized by an R value lower than 1. In particular, control samples with high motility (i.e., those of patients 1, 3 and 4) had an average R value of 0.5 ± 0.1, fraction 3 samples of 0.5 ± 0.2 and fraction 1 samples of 0.5 ± 0.2.
Figure 5. Lipid ratio: the 2853/2870 cm-1 peak area ratio (upper panel) and the motility (lower panel) of the six patients. Each patient is associated with a specific color: patient 1 is depicted in black, patient 2 in red, patient 3 in blue, patient 4 in pink, patient 5 in orange and patient 6 in green. Horizontal dashed lines mark where the peak area ratio between the 2853 and 2870 cm-1 bands equals 1 and, in the lower panel, where the motility is 40%, which is considered a threshold of fertility.

Quantification of DNA Damage by UV-Raman
DNA vibrations strongly absorb in the 1250-920 cm-1 region and, generally, an increase in the relative intensity of the 1013-1080 cm-1 bands has been reported as a good marker of gradual DNA damage [32]. However, the overlap of DNA-related bands with the amide III band and with those assigned to carbohydrates and cholesterol makes it difficult to disentangle these contributions in FTIR spectra. To overcome these limitations, we took advantage of UV Resonant Raman spectroscopy, with the aim of selectively enhancing the vibrational signals arising from the nitrogenous bases. This choice selectively enhances only the DNA contribution, especially from adenine and guanine, which completely hides those of proteins and lipids. Figure 7 shows the UVRR spectra of control and fraction 1 samples. Both spectra were characterized by the typical line shape associated with the normal modes of the nitrogenous bases. More specifically, the vibration at 1655 cm-1 was due to thymidine C=O stretching, while the one at 1578 cm-1 derived from both the adenosine NH2 scissoring vibrational mode and guanosine NH2 scissoring and ring vibrations. Additionally, adenosine and guanosine internal ring vibrations give rise to peaks at 1484 cm-1 and 1337 cm-1. A small component at approximately 1250 cm-1 can be assigned to adenine, guanine and thymine internal ring vibrations combined with N-H bending, and the peak near 1530 cm-1 results from a cytosine contribution [33]. From Figure 7 we can see that UVRR did not detect any modification of the chemical conformations of the nitrogenous bases upon in vitro capacitation. In addition, the analyses did not reveal any significant variation at the nitrogenous base level in any of the samples.

Figure 6. Average R value: the average 2853/2870 cm-1 peak area ratio of the control samples and of fractions 1 and 3, for patients with high (on the right) and low (on the left) motility (i.e., motility higher or lower than 40%). Control samples are depicted in black, fractions 1 and 3 in light blue and red. The standard deviation is indicated in black over the bars. Each ratio reported in the graph is the average value obtained from the patients' fractions with a motility lower (on the left) or higher (on the right) than 40%. In particular, all the patients' fractions with a measured motility lower than 40% have a ratio higher than or close to 1, whereas the average value of the fractions with a motility higher than 40% is lower than 1.

Discussion
In ART laboratories, the procedures for spermatozoa sample preparation should be simple and economical in order to fit into routine workflows, and at the same time they must ensure a quick selection of a good number of high-quality cells. In physiological conditions, spermatozoa become hyperactivated and capacitated during their journey towards the oocyte within the woman's reproductive tract. Approaching the oocyte, the spermatozoa prime and modify their plasma membrane to initiate the acrosome reaction. Sperm capacitation is achieved through different factors, such as the presence of cervical mucus, which is rich in capacitation factors, and the interaction with the Fallopian tube epithelium [34].
Lipid membrane modification during spermatozoa maturation and capacitation is one of the first changes to occur, and it is characterized by the removal of cholesterol and desmosterol from the surface by protein acceptor molecules such as albumin, high-density lipoproteins and apolipoproteins. Intriguingly, cholesterol is a decapacitation factor and is required to stabilize the membrane during the spermatozoa's journey and to avoid the interactions responsible for sperm maturation. A low level of oxidative stress is needed for cholesterol efflux [35]. Nevertheless, high levels of oxidative stress and lipid peroxidation, and an abnormal distribution of polyunsaturated fatty acids (PUFAs) along sperm cells, are contributing factors to male infertility, since they slow down or inhibit the natural process of sperm maturation within the male and female bodies [36,37]. In vitro fertilization techniques reproduce what occurs along the woman's reproductive tract: through sperm preparation techniques, the most motile and capacitated sperm are selected for injection into the oocytes. In the swim-up procedure, one of the most used protocols, the incubation of sperm in a medium supplemented with albumin induces an efflux of cholesterol from the membranes and an increase in membrane fluidity [35]. In our study, both basal and capacitated sperm were analyzed using FTIR and UVRR techniques in order to reveal the vibrational spectral features of capacitation and the potential signatures to be monitored for a proper selection of good-quality sperm. After in vitro capacitation of six human sperm samples through the swim-up method, two different fractions of cells were analyzed, i.e., the capacitated sperm in the upper interface (designated fraction 1) and the pellet (designated fraction 3), representing the non-capacitated and low-quality cells. From the FTIR analyses we obtained important clues about the macromolecular changes, mainly lipid and protein modifications, induced by the in vitro capacitation process. Indeed, as shown in Figures 2 and 3, the shape of amide I in the spectra revealed a modification of the protein secondary structures of the sperm samples, with a higher presence of β-structures over α-helix in the pellet cells and the opposite change in the capacitated cells (upper fraction), while in the basal semen the percentages of α-helix and β-structures were similar. The increment of β-sheet in the pellet cells of fraction 3 may be expected, since this conformation is a sign of protein aggregation [38]. Indeed, a high content of β-sheet proteins in cells and tissues, and the conversion from an α-helix-rich protein population towards a β-sheet-rich one, has previously been used as a hallmark of a pathological status [31]. In this view, the increase of β-structures could be correlated with a higher percentage of poor-quality spermatozoa, i.e., low-motility and non-viable cells. In addition, to explain the abundance of α-helix in capacitated sperm, it is possible to speculate that this is related to the modifications that spermatozoa undergo during the capacitation process; indeed, it is widely known that alterations in the expression levels of different proteins occur [39]. Very interestingly, the most important feature detected in this work was related to lipid peroxidation, measured by FTIR as CH2/CH3 vibrations, which proved strongly influenced by capacitation and correlated with sperm motility.
While the shape of the lipid region (3050–2800 cm⁻¹) [14] remained relatively constant throughout spermatozoa capacitation in the different samples, we analyzed the ratio of the symmetric CH2/CH3 vibrations (2853/2870), since these two peaks appeared the most affected. The 2853 cm⁻¹ band corresponds to the symmetric stretching vibration of CH2, while the 2870 cm⁻¹ band corresponds to the symmetric stretching vibration of CH3. As extensively reported in the literature, lipid bands are commonly used to diagnose biophysical modifications of cell membranes such as lipid peroxidation, changes of lipid chain length, oxidative stress and the degree of acyl chain saturation [29]. Noteworthy, this ratio has also recently been used to monitor the beneficial effect of cannabis in the treatment of schizophrenia, where it revealed a minor disruption of lipid composition, a reduction of lipid peroxidation and an increased lipid membrane renewal rate [31]. Our results confirmed that the extent of sperm cell membrane peroxidation (the lower the better), when calculated from the 2853/2870 ratio, can represent a fingerprint of spermatozoa motility, potentially applicable in clinical practice. Noteworthy, the CH2/CH3 ratio could provide important insights into the average length of the aliphatic chains composing lipids and fatty acids. Since this ratio was higher than 1 in infertile sperm cells, it indicates that their membranes were mechanically stiffer and composed of a higher concentration of long-chain CH2 groups, which suggests a weakening of the cell membrane-skeleton structure [30]. In fact, on the basis of the R value we could separate the sperm samples with a low or high percentage of motile cells. Patient samples with an R value higher than or close to 1 displayed a low percentage of motile cells and, intriguingly, in some cases in vitro capacitation was able to recover the R ratio to 1 without significantly increasing motility. Instead, when basal specimens presented an R value below 1 and close to 0.5, the ratio was not further improved by capacitation, while the proportion of motile cells in fraction 1 was highly increased. Alterations in the lipid profile were indeed previously associated with sperm dysfunction resulting in infertility, and previous studies have already identified some of the lipids mainly involved [40]. It is generally known that the plasma membrane contains approximately 70% phospholipids, 25% neutral lipids and 5% glycolipids [41]. In sperm, the definitive lipid pattern is acquired after epididymal maturation and it is mainly constituted by significant levels of PUFAs, with linolenic acid, linoleic acid and oleic acid being the most abundant. From gas chromatography-mass spectrometry experiments on Percoll-selected spermatozoa, Lenzi and colleagues determined that PUFAs make up 36–52% of the total fatty acids in sperm cells and that normal sperm cells possess a higher content of docosahexaenoic acid (DHA) [42,43]. PUFAs contribute to membrane fluidity and flexibility and are precursors of prostaglandins and leukotrienes. Their reduction and, in particular, the reduction of DHA underlies an alteration of the definitive fatty acid pattern with subsequent infertility.
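The R value used throughout this discussion is simply a ratio of two integrated band areas. A minimal sketch of its computation is given below; it assumes a baseline-corrected absorbance spectrum stored as NumPy arrays with an ascending wavenumber axis, and the integration window is our own illustrative choice rather than the exact protocol used in this work.

```python
import numpy as np

def band_area(wavenumbers, absorbance, center, half_width=6.0):
    # Integrate the absorbance over a +/- half_width cm^-1 window around a
    # band center; assumes an ascending wavenumber axis and a
    # baseline-corrected spectrum. The window width is illustrative only.
    mask = np.abs(wavenumbers - center) <= half_width
    return np.trapz(absorbance[mask], wavenumbers[mask])

def r_ratio(wavenumbers, absorbance):
    # R = area(2853 cm^-1, CH2 symmetric stretch) / area(2870 cm^-1, CH3
    # symmetric stretch). In this study, R >= 1 was associated with motility
    # below 40% (CH2-rich, stiffer membranes), R < 1 with motility above 40%.
    a_ch2 = band_area(wavenumbers, absorbance, 2853.0)
    a_ch3 = band_area(wavenumbers, absorbance, 2870.0)
    return a_ch2 / a_ch3
```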
As a matter of fact, when the fatty acid distribution contains a low content of PUFAs and DHA and the efflux of cholesterol is not efficient throughout natural capacitation, sperm are characterized by low motility, low flexibility and fluidity, and by an atypical morphology [42,43]. As reported by Aksoy and colleagues, asthenozoospermic and oligozoospermic samples are characterized by an anomalously higher content of monounsaturated fatty acids (MUFAs) and a lower amount of PUFAs compared to normozoospermic samples, respectively [44]. In addition, both these pathological conditions are characterized by a higher content of saturated fatty acids (SFAs) in sperm cells compared to normozoospermic samples [44]. Similarly, Chen et al. observed lower levels of DHA, as a proportion of the total sperm lipid content, and a higher amount of ω-3 and ω-6 fatty acids in capacitated sperm from individuals with oligozoospermia or asthenozoospermia with respect to healthy individuals, with capacitation obtained by the discontinuous Percoll gradient [45]. Additionally, a low level of DHA and a high level of elaidic acid (a trans fatty acid) are found in patients suffering from varicocele [46]. Furthermore, infertile patients exhibited alterations of stearic acid and polyunsaturated fatty acids in spermatozoa and seminal plasma [47]. A decrement of DHA was also correlated with low sperm motility in individuals affected by retinitis pigmentosa, who manifest this characteristic also in erythrocytes, suggesting that the overall lipid dyshomeostasis of these patients may strongly impact gamete functionality [48]. These observations are consistent also with the negative repercussions that obesity has on fertility [49]. Thus, from the data reported in the literature, we can deduce that a higher content of PUFAs (especially DHA) and a lower amount of SFAs and MUFAs have a positive impact on the fertility of sperm cells. From a structural point of view, SFAs have an unbranched structure with the general formula CH3(CH2)nCOOH, whereas MUFAs contain one and PUFAs several double bonds along the lipid chain. Noteworthy, a lower amount of CH2 (i.e., of SFAs and MUFAs) and a higher content of double bonds (i.e., of PUFAs) along the C-C chains were correlated with a higher motility and fertility of sperm cells, as our FTIR and biochemical analyses revealed. This evidence could explain why pathological conditions of sperm cells are generally related to a low content of PUFAs and a higher content of saturated, CH2-rich fatty acids. Notably, lipids can undergo detrimental modifications such as those induced by reactive oxygen species (ROS) causing lipid peroxidation. Although low levels of ROS are actually crucial for achieving the spermatozoa's fertilizing capability and for their maturation, chemotaxis, acrosome reaction and binding to the zona pellucida, excessive levels of ROS produce toxic effects. Even though spermatozoa have developed a defense system based on enzymatic and non-enzymatic antioxidants [50], in several conditions the balance between the levels of ROS and of the antioxidants is not maintained, mainly leading to a loss of spermatozoa motility. An overexposure of spermatozoa to ROS can result in excess lipid peroxidation, which, in turn, can damage the male germ cells' DNA [50]. We acknowledge a limitation of our FTIR study related to the unspecific identification of the lipid composition of the samples.
Indeed, as a future perspective, a complete metabolomic study, including the lipidome, could better elucidate and complement our findings. Contrary to what was expected based on our previous work [9], the FTIR analyses were insufficient to reveal DNA modifications related to the capacitation process, or significant differences among patients with different sperm motility. The main reason could reside in the high complexity of the fingerprint region (900–1300 cm⁻¹), where vibrational modes derived from cholesterol, DNA, fatty acids and carbohydrates are mixed. Moreover, the small number and poor homogeneity of the patients prevented the identification of promising features to be linked to motility. In order to assess the presence of major DNA damage after capacitation, we performed UVRR analysis on isolated DNA; no modification of the chemical composition of the nitrogenous bases was detected. The analyses performed here involved a small number of patients and cannot be conclusive; however, they are in line with the ample evidence that lipids are important for the performance of gametes [40]. In conclusion, although a statistically robust experimentation with a larger number of enrolled patients is needed in order to clinically validate the use of FTIR, our data indicate that R could be proposed as a promising tool to evaluate the quality of spermatozoa and as a potential strategy to screen semen samples, also in the clinical practice of ART centers. FTIR spectroscopy has been widely employed for the characterization of several cell types and body fluids, including female gametes and semen, and presents the advantage of being nondestructive and of allowing in vivo cell investigations, owing to the negligible sample heating induced by IR sources [9,51]. In addition, since compact and handy devices based on IR spectroscopy are available, we can expect FTIR spectroscopy to be introduced soon as a fast, inexpensive and easy-to-use diagnostic tool for semen quality assessment, based on the biochemical information contained in the IR spectra of both sperm and seminal fluid.
Non-reference image quality assessment and natural scene statistics to counter biometric sensor spoofing

Non-reference image quality measures (IQM) as well as their associated natural scene statistics (NSS) are used to distinguish real biometric data from fake data as used in presentation/sensor spoofing attacks. An experimental study shows that a support vector machine directly trained on the NSS used in the blind/referenceless image spatial quality evaluator provides highly accurate classification of real versus fake iris, fingerprint, face, and fingervein data in a generic manner. This contrasts with using the IQM directly, whose accuracy turns out to be rather dependent on the data set and on parameter choices. While very low average classification error rate values are achieved for complete training data, generalisation to unseen attack types is difficult in open-set scenarios, and the obtained accuracy varies in an almost unpredictable manner. This implies that for each given sensor/attack set-up, the ability of the introduced methods to detect unseen attacks needs to be assessed separately.

Introduction

We have observed a drastic increase in biometric authentication techniques being applied in various applications, ranging from border control to financial services. This is done to either complement or even replace classical authentication techniques based on tokens or passwords. Of course, this increased usage has also caused fraudulent attacks to be mounted more often against biometric systems. Besides injecting fraudulent data into the communication inside a biometric system or attacking the template database, attacks against the proper functioning of the biometric sensor gain increasing importance. Such attacks are usually termed 'presentation' or 'sensor-spoofing' attacks and are conducted by presenting artefacts mimicking real biometric traits to the biometric sensor to be deceived, or by replaying earlier captured biometric sample data on some suited device, thus also attempting to deceive the sensor ('replay attack'). Counter-measures against this type of attack have of course already been considered and are typically termed 'anti-spoofing' or 'presentation-attack detection' measures [1]. In this context, very different approaches have been followed. The first type of anti-spoofing approach targets the liveness of the presented biometric traits in a passive or active manner and is thus termed 'liveness detection'. For example, the pulse can be measured from facial video, or hippus can be determined from temporal high-resolution iris video (in both cases, passive liveness detection is conducted). An example of active liveness detection is to determine the reaction of pupil dilation to illumination changes during data acquisition for facial, periocular, or iris recognition systems. Passive liveness detection is efficiently able to prevent attacks conducted with, e.g., gummy fingers or facial masks; however, it can be fooled by a replay attack, as signs of liveness are also present in recaptured video. Active liveness detection, on the other hand, is able to withstand both types of attacks. The second anti-spoofing approach directly focuses on the replay of previously recorded biometric sample data; as this attack involves the recapturing of previously recorded data by the biometric sensor, the corresponding counter-measure is termed 'recapturing detection'. Techniques in this category include the detection of unnatural movement in video footage as an indication of an attack, e.g.
caused by hand motion when presenting a photo or a display device to the sensor. Other approaches look into the interference between the display refresh rate of the replaying device and the temporal resolution of the video captured by the biometric sensor to detect an ongoing replay attack. Obviously, these methods are not able to detect attacks conducted with artefacts, as they are directly and solely focused on the replay of the data. The third type of anti-spoofing approach is more generic and uses texture properties of real biometric trait data acquired by the biometric sensor to discriminate it from either recaptured data or data resulting from presenting some spoofing artefact to the sensor. In contrast to liveness-based methods, which are specific to the target modality, and to recapturing detection, which is limited and has to be focused on specific sensor/display-type combinations (including print-outs, of course), texture-based methods usually employ generic texture descriptors together with subsequent machine learning techniques to discriminate real biometric data from spoofed variants. Of course, for this purpose, training data is required for classifier training. For example, a large variety of local image descriptors have been compared with respect to their ability to identify spoofed iris, fingerprint, and face data [2], and highly successful texture descriptors like local binary patterns have of course been extensively used for this purpose. However, it is often cumbersome to identify and/or design texture descriptors suited to a specific task in this context. Therefore, learning-based techniques such as deep learning employing convolutional neural networks have also been successfully applied to discriminate real from spoofed biometric data [3,4]. A very different way to identify spoofed data is to look into the quality of the imagery, assuming that the quality of the real biometric data is better than, or at least different from, that of the spoofed data. This can of course be seen as a specific type of texture-based discrimination approach. Related work considers two approaches in this context: first, the approach can be entirely agnostic of the considered modality by using general-purpose image quality measures (IQM) [5,6]; second, image quality metrics can be tailored to the biometric modality under investigation (see e.g. [7], which uses face-specific data quality in order to recognise spoofing attacks against face recognition systems). The major contribution of this paper is to employ general-purpose non-reference IQM (also termed 'blind' IQM) as well as the underlying natural scene statistics (NSS) in biometric spoofing attack/presentation attack detection and to assess their performance in different application settings. Complementing earlier results [5], we (i) use a different and larger set of non-reference IQM (six instead of two) and (ii) do not fuse the results with full-reference IQM values but focus on using one or several fused blind IQM as a generic spoofing detection technique.
Extending our own prior work on using non-reference IQM for presentation attack detection [6,8], (i) we add a support vector machine (SVM) as a second classifier, avoiding data-dependent parameter optimisation in its employment and thus achieving better generalisability of the results, and present an ISO/IEC 30107-3 compliant evaluation, (ii) we directly train the blind/referenceless image spatial quality evaluator (BRISQUE) NSS on our data instead of using IQM output as classification input features, and (iii) we experimentally evaluate a specific type of open-set classification scenario, where our presentation attack detection schemes are confronted with real sample data from different sensors (i.e. looking into cross-sensor spoofing detection) and fake sample data of unseen subjects. Section 2 introduces and explains the blind IQM used in this paper. The databases specifically provided to test presentation attack detection techniques for iris, fingerprint, face, and fingervein recognition used in the present work are described in Section 3. Section 4 presents the corresponding experimental anti-spoofing results in three distinct experimental set-ups, while Section 5 provides the conclusions of this paper.

Non-reference image quality metrics

Non-reference or blind IQM are easier to deploy than full-reference or reduced-reference IQM, as no information about the full-quality reference image is required for their application. On the other hand, this lack of comparison data renders their design much more difficult. There are different ways to design blind IQM, depending on the necessity and type of training data used and on the extent to which they generalise over the admissible distortion types considered. Thus, depending on these design principles, we face some limitations. Among the techniques designed so far, we may distinguish opinion-aware (OA) IQM designs, where the IQM are trained on databases containing distorted imagery for which human quality annotations are available, and opinion-unaware (OU) IQM designs, which only rely on deviations from statistical regularities seen in natural images, without requiring training on human-annotated distortion databases. OA IQM are intrinsically limited, as their assessment is restricted to quality impairments resulting from the distortion types they have been trained on. Examples of the first type, i.e. OA IQM, are the distortion identification-based image verity and integrity evaluation (DIIVINE), the blind image quality index (BIQI), and BRISQUE, while the natural image quality evaluator (NIQE), the blind image integrity notator (BLIINDS-II), and blind image quality assessment through anisotropy (BIQAA) are OU IQM. Systematic comparisons of non-reference or blind IQM (NR IQM), as considered subsequently for spoofing detection, have been published for traditional IQM tasks [9,10]. In both non-trained [9] and specifically trained [10] settings, the correspondence to human vision is highly dependent on the target data set and on the nature of the distortions present in the data. Thus, these studies did not identify a 'winner' among the available techniques concerning the correspondence to subjective human judgement and objective distortion strength.

OU NR image quality metrics

NIQE: The NIQE [11] is a spatial-domain IQM relying on an NSS model. The image is partitioned into patches for which sharpness is determined, and only patches with sufficient sharpness are considered further.
Those patches are pre-processed by local mean removal and divisive normalisation. From these data, 36 NSS features are computed for each patch and fitted to a multivariate generalised Gaussian (MVG) model. This MVG model is then compared to the 'natural' MVG model, which is obtained by conducting the same procedure on natural images of good quality only. The extent of the deviation from this model determines quality.

BLIINDS-II: The BLIINDS-II [12] computes NSS in a local discrete cosine transform (DCT) domain. After partitioning the image into patches, a local 2D DCT is computed on each of the blocks. Subsequently, the DCT domain of each block is partitioned into a low-frequency, a mid-frequency, and a high-frequency DCT subband, respectively. Furthermore, the DCT block is partitioned into three differently oriented subregions. An MVG fit is then computed for each of the DCT subbands defined in this manner. From these parameters, quality is derived by comparison to the corresponding MVG parameters computed from high-quality imagery.

BIQAA: BIQAA [13] is the only NR IQM considered in this work which does not rely on NSS. Instead, BIQAA measures the variance of the expected entropy of the image to be assessed over a set of predefined directions. Entropy is computed on a local basis by using a spatial-frequency distribution as an approximation of a predefined probability density function. For BIQAA, the generalised Rényi entropy and the normalised pseudo-Wigner distribution (PWD) are chosen in the implementation used. In this context, a pixel-by-pixel entropy value is computed, enabling the generation of entropy histograms. The variance of the expected entropy is measured for different directions, and the differences are used to indicate anisotropy. Directional selectivity is achieved by using an orientation-selective one-dimensional (1D) PWD implementation.

OA NR image quality metrics

BRISQUE: BRISQUE [14] operates in the spatial domain and uses virtually the same NSS as NIQE. The major difference to NIQE is the training on distorted images. For this purpose, distortions of the same kinds as present in the LIVE image quality database were introduced into each training image with varying strengths to create a set of distorted images: JPEG 2000, JPEG, white noise, Gaussian blur, and fast-fading channel errors. Subsequently, a mapping from feature space to quality scores is learned, resulting in a measure of image quality; for this purpose, an SVM regressor is used.

DIIVINE: The DIIVINE [15] employs a two-stage framework consisting of distortion identification followed by distortion-specific quality determination. DIIVINE considers three common distortion types, i.e. JPEG compression, JPEG2000 compression, and blur. In order to compute statistics from distorted images, the steerable pyramid decomposition is used. The steerable pyramid is an over-complete wavelet transform offering enhanced orientation selectivity compared to the classical wavelet transform as used, e.g., in BIQI.

BIQI: The BIQI [16] is based on a two-stage framework like DIIVINE and employs a classical wavelet transform over three scales using the Daubechies 9/7 biorthogonal wavelet basis.
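NIQE and BRISQUE share the same pre-processing step: local mean removal and divisive normalisation, yielding so-called mean-subtracted contrast-normalised (MSCN) coefficients, whose statistics are then modelled. The following sketch illustrates this step; the Gaussian weighting (standard deviation 7/6, as in the reference implementations) is approximated here with SciPy's gaussian_filter, so this is an illustration rather than a bit-exact reimplementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn_coefficients(image, sigma=7.0 / 6.0, c=1.0):
    # Local mean removal and divisive normalisation as used by NIQE/BRISQUE.
    # NSS features are subsequently obtained by fitting generalised Gaussian
    # models to these coefficients and to their pairwise products.
    image = image.astype(np.float64)
    mu = gaussian_filter(image, sigma)                  # local mean
    var = gaussian_filter(image * image, sigma) - mu * mu
    sigma_map = np.sqrt(np.maximum(var, 0.0))           # local contrast
    return (image - mu) / (sigma_map + c)               # c avoids division by zero
```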
The computed wavelet subband coefficients are used to compute NSS parameters (again, an MVG fit is conducted). The first step is image distortion classification (which is based on a measure of how the NSS are modified and uses five distortion types: JPEG, JPEG2000, white noise, blur, and fast fading); the second step is quality assessment, using an algorithm specific to the distortion identified.

Natural scene statistics

IQM applied to images result in a single quality score within a certain range ([0, 100] in our set-up) for each IQM. Typically, these scores are obtained by applying machine learning techniques that map NSS to quality scores based on human judgements of distorted images, or on undistorted images only. Thus, the actual quality score delivered by an IQM is neither directly related to, nor does it necessarily fit well, our application case of discriminating real from spoofed biometric data. An alternative solution to this drawback is to avoid the detour via quality scores and to train the NSS directly on the 'real' and 'fake' labels of our data. In doing so, we also avoid the dimensionality reduction to a 1D quality score and retain the full NSS information for training.

Used spoofing/presentation attack databases

ATVS-FIr DB: The real iris samples stem from users of the BioSecure data set [17]. Four samples of each iris were captured in two acquisition sessions with the LG Iris Access EOU3000. Thus, the database holds 800 real image samples (100 irises × 4 samples × 2 sessions). The fake samples were also acquired with the LG Iris Access EOU3000, from high-quality printed images of the original samples. As the structure is the same as for the real samples, the database comprises 800 fake image samples (100 irises × 4 samples × 2 sessions). Fig. 1 displays example images. The data set has been used before in spoofing/presentation attack detection investigations, e.g. [2,5,18].

ATVS-FFp DB: The ATVS-FFp database consists of fake and real images taken from a human's index and middle fingers of both hands. These fingerprints can be divided into two categories: with cooperation (WC) and without cooperation (WOC). 'WC' means that the acquisition assumes the cooperation of the fingerprint owner, whereas images taken 'WOC' are latent fingerprints which had to be lifted from a surface. Independent of the category, four samples of each finger were captured in one acquisition session with three different sensors:
• flat optical sensor Biometrika Fx2000 (512 dpi),
• sweeping thermal sensor by Yubee with Atmel's Fingerchip (500 dpi),
• flat capacitive sensor by Precise Biometrics, model Precise 100 SC (500 dpi).
As a result, the database consists of 816 real/fake image samples (68 fingers × 4 samples × 3 sensors) taken WC and 768 real/fake image samples (64 fingers × 4 samples × 3 sensors) taken WOC. Fig. 2 displays example images from this data set.

IDIAP replay-attack DB [22]: The replay-attack database for face spoofing consists of 1300 video clips of photo and video attack attempts on 50 clients under different lighting conditions. All videos were generated by either having a real client try to access a laptop through its webcam or by displaying a photo/video to the webcam. Real as well as fake videos were taken under two different lighting conditions:
• Controlled: the office light was turned on, the blinds were down, the background is homogeneous.
• Adverse: blinds up, more complex background, office lights out.
To produce the attacks, high-resolution videos were taken with a Canon PowerShot SX150 IS camera.
The attacks can be divided into two subsets: the first subset is composed of videos generated using a tripod to present the client biometry ('fixed'); for the second set, the attacker holds the device used for the attack in his/her own hands ('hand'). In total, 20 attack videos were registered for each client, 10 for each of the attacking modes just described:
• four mobile attacks using an iPhone 3GS screen (with a resolution of 480 × 320 pixels),
• four high-resolution screen attacks using an iPad (first generation, with a screen resolution of 1024 × 768 pixels),
• two hard-copy print attacks (produced on a Triumph-Adler DCC 2520 colour laser printer), occupying the whole available printing surface on A4 paper.
As the algorithms used in our experiments are not compatible with videos, we extracted every Xth frame from each video and used these frames as test data. Fig. 3 displays example images used in the experimentation.

The spoofing-attack finger vein database [23]: This data set is provided by the IDIAP Research Institute and consists of 440 index finger vein images (both real authentications and spoofed ones, i.e. attack attempts) corresponding to 110 subjects. Two different types of samples are available (as shown in Fig. 4): full (printed) images and cropped images, where the resolution of the full images is 665 × 250 pixels and that of the cropped images is 565 × 150 pixels. This data set was released in the context of the '1st Competition on Counter Measures to Finger Vein Spoofing Attacks' [23] and is now the data basis for most research on finger vein sensor spoofing [24-26].

Experimental set-up

For each image in the databases, quality scores were calculated with the IQM described in Section 2. We used the MATLAB implementations from the developers of BIQI, BLIINDS-II, NIQE, DIIVINE, and BRISQUE (all available from http://live.ece.utexas.edu/research/quality/) and of BIQAA (available at https://www.mathworks.com/matlabcentral/fileexchange/30800-blind-image-quality-assessment-through-anisotropy). In all cases, we used the default settings. We normalised the resulting data such that 0 represents good quality and 100 bad quality, which is already the default in all cases except BIQAA. Originally, the BIQAA values lie between 0 and 1; however, the values are so small that we had to define our own limits for the normalisation. A thorough analysis showed that our values all lie between 0.00005 and 0.05; therefore, we used these figures as our limits. Moreover, we had to flip the 'orientation' of the BIQAA quality scores to conform to our definition. Summarising, the following formula (1) was built:

q = 100 × (1 − (BIQAA − 0.00005) / (0.05 − 0.00005))    (1)

4.1.1 Experiment 1: training sensor/setting identical to evaluation sensor/setting: In the first stage of experiment 1, we only consider the distribution of the quality scores. Our aim was to eventually find a threshold between the values of the real data and the fake data for the various IQM. Afterwards, in the second stage, we used the quality scores in a leave-one-subject-out cross-validation (the training data comprises all data except the samples of the current subject to be classified, applied to each subject in turn) to obtain an exact assessment of the classification capability of NR IQM. To classify our data, we used k-nearest neighbours (kNN) as well as SVM classification.
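A possible implementation of this normalisation is sketched below; formula (1) above is reconstructed from the limits and the orientation flip described in the text, so the exact expression used in the original experiments may differ in detail.

```python
import numpy as np

BIQAA_MIN, BIQAA_MAX = 0.00005, 0.05  # empirical limits reported in the text

def normalise_biqaa(score):
    # Map raw BIQAA output to [0, 100] with 0 = good and 100 = bad quality.
    # Raw BIQAA scores grow with quality, so the linear rescaling is flipped;
    # clipping guards against values outside the observed limits.
    score = np.clip(score, BIQAA_MIN, BIQAA_MAX)
    return 100.0 * (1.0 - (score - BIQAA_MIN) / (BIQAA_MAX - BIQAA_MIN))
```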
For kNN, the values of k used were 1, 3, 5, 7, and 9 (denoting the number of images with the closest feature vectors considered), and we exhaustively evaluated all combinations of IQM (i.e. resulting in feature vectors of different dimension and composition). Thus, we combined several quality scores of the different measures into one vector and used this for the kNN classification. The distance for the kNN classification was the distance between the two vectors corresponding to the two images in question. The kNN results presented are the best ones, which means that we introduce a bias in the results here to see what is possible; the best configuration in terms of IQM combination and k will be data-dependent and will probably not generalise. For SVM, we use feature vectors consisting of all IQM scores (i.e. of dimension 6), applying LIBSVM [27] with an RBF kernel for training; thus, no bias is introduced by selecting certain IQM, as all IQM are used. The parameters (c, g) for the support vector classification were searched on a grid in logarithmic space. In order to conduct a fair evaluation in the used cross-validation, (c, g) are optimised within each training fold and then applied to the evaluation data.

The quantitative performance of the different techniques is measured according to the metrics developed in ISO/IEC 30107-3 in terms of: (i) the attack presentation classification error rate (APCER), which is defined as the proportion of attack presentations incorrectly classified as normal (or real) presentations (false-negative spoof detection); (ii) the normal presentation classification error rate (NPCER), which is defined as the proportion of normal presentations incorrectly classified as attack presentations (false-positive spoof detection). Finally, the performance of the overall technique is assessed in terms of the average classification error rate (ACER):

ACER = (APCER + NPCER) / 2

Of course, the lower the values of ACER (as well as APCER and NPCER), the better the performance of the spoofing detection.

Experiment 2: training with BRISQUE NSS: In experiment 2, we applied the BRISQUE NSS data and trained on our labels. As a first option, we applied kNN to the 36-dimensional BRISQUE NSS, again using different values for k and presenting the best result achieved. As a second option, we applied SVM: the BRISQUE software does not only provide a pre-trained model delivering quality scores but also offers the option of training on labels other than quality scores using LIBSVM [27]. This is applied within the cross-validation evaluation.

Experiment 3: training sensor/setting different from evaluation sensor/setting: As correctly pointed out in [28], sensor spoof detection cannot be considered a closed-set problem. This means that, in a real-world scenario, the training data for a specific sensor will never be complete, as in general we do not know which artefacts will be used by an attacker; thus, the classifier should also work on unseen spoof types. This of course raises the question of how to train a classifier based on such incomplete training data, a typical case of open-set binary classification. The fact that the performance of a classifier decreases when testing with samples unseen in training has been well studied in machine learning and pattern recognition, e.g. in relation to the 'over-fitting problem'. This generalisation problem of data-trained classifiers has also been discussed in the context of general image classification [29] and in biometrics (see e.g. gender classification [30]).
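Referring back to the ISO/IEC 30107-3 metrics defined above, the following sketch computes APCER, NPCER, and ACER from ground-truth labels and classifier decisions; the label convention (1 = attack/fake presentation, 0 = normal/real presentation) is our own assumption, not prescribed by the standard.

```python
import numpy as np

def iso_30107_3_metrics(y_true, y_pred):
    # APCER: attacks classified as real; NPCER: real samples classified as
    # attacks; ACER: their average. Labels: 1 = attack/fake, 0 = normal/real.
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    apcer = np.mean(y_pred[y_true == 1] == 0)
    npcer = np.mean(y_pred[y_true == 0] == 1)
    acer = (apcer + npcer) / 2.0
    return apcer, npcer, acer
```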
The general open-set recognition problem has recently been addressed [31-33], and the developed open-set classification techniques have been successfully applied to soft biometrics (mark, scar, and tattoo classification [34]), camera attribution and device linking [35], and fingerprint spoof detection [28]. In the latter work, the emphasis is on also detecting attacks with unseen spoofing artefact fabrication materials. Contrasting to that, one aspect of experiment 3 covers the issue of cross-sensor or inter-database spoofing detection. This means that unseen attacks involve samples acquired with different sensors than those the anti-spoofing system has been trained on, a topic that has gained increasing importance. Recent work has considered this scenario with various presentation attack detection methods for fingerprint [36-38], face [39], speaker [40], and iris [41,42] recognition techniques, respectively.

In experiment 3, we investigated two different settings to simulate open-set scenarios. First, the ATVS-FFp database contains classical fingerprint imprints (WC) and latent fingerprints (WOC). So far, we have strictly separated those two sets, as real and fake versions are available for both types. In order to simulate the open-set scenario, we trained the used classifier with classical imprints, while we evaluated on the latent fingerprint data. This can be seen as a special case of considering unseen fabrication material. As a second setting, we used real sample data captured by different sensors and investigated how the spoof detection techniques trained on the sample data used before react. In this setting, it is not entirely clear what to count as a correct or incorrect decision (i.e. how to define APCER and NPCER): a (real) sample captured by a different sensor could be rated as 'real', as it corresponds to data captured from a real finger; on the other hand, it could be rated as 'fake', as it has been captured by a different sensor and might be the result of a successful injection attack. We follow the first interpretation, also due to the possibility of considering cross- and multi-sensor spoof detection techniques. Thus, a real sample captured by a different sensor should be correctly rated as being 'real', thus accumulating errors (samples rated as 'fake') in the NPCER.

For iris samples, we used the SDUMLA-HMT data. This multimodal data set was collected during the summer of 2010 at Shandong University, Jinan, China. One hundred and six subjects, including 61 males and 45 females aged between 17 and 31, participated in the data collection process, in which five biometric traits (face, finger vein, gait, iris, and fingerprint) were collected for each subject [43]. SDUMLA-HMT is available at http://mla.sdu.edu.cn/sdumla-hmt.html. Every subject provided ten iris images, i.e. five images for each eye. For fingerprint samples, we employed samples of 49 individuals from the CASIA-FingerprintV5 data set (http://biometrics.idealtest.org/dbDetailForUser.do?id=7), also used in [44,45] (the subjects considered differ between the two data sets). Thus, contrasting to the cases before, samples from this data set should be correctly classified as 'fake', and errors (i.e. samples rated as 'real') were counted in the APCER. See Fig. 5 for an example of each data set. Note that in experiment 3, we have an intrinsic separation of training and evaluation data (contrasting to experiments 1 and 2).
Therefore, we did not apply a leave-one-subject-out cross-validation but a direct classification of the evaluation samples based on the training data. In the second setting, involving the additional data sets containing real or fake data only, we only obtain NPCER or APCER results; thus, ACER does not make sense and is omitted.

Experiment 1 - results: In Figs. 6 and 7, we display the distribution of single IQM values for real and fake data. For some cases, we notice a decent separation of the values, almost allowing to specify a separation threshold. In the figures, we have depicted the threshold leading to the lowest ACER and have coloured the correctly classified areas in green. However, for most configurations, this simple strategy does not lead to useful results. In many cases (see e.g. Fig. 7), we could not recognise any separation between the distributions, because they exhibited a similar mean and spread for the real and the fake data. That was the reason for employing training-based classification techniques and fusion techniques. In the case of kNN classification with only one IQM, we already obtain surprisingly good results [6,8]. However, we were not able to identify a single IQM specifically well suited for the target task. In contrast, it seems that the distortions present in the spoofed data are quite specific in terms of their nature and characteristics, which is the only explanation for different IQM performing best on different data sets. In fact, our results confirm the general results on IQM quality prediction performance [9,10] in that which IQM provides the best results is highly data-set- and distortion-dependent.

A further increase in classification accuracy (computed as 100 − (APCER + NPCER)) is obtained by the combination of several IQM. Table 1 shows the best metric combinations in the case of kNN classification for the considered databases, obtained from an exhaustive search. On average, we could improve our results by 7% compared to the single-measure results [6,8], so most of the results are above 90%. From the latter table, we notice a trend of obtaining the best results when combining a larger number of IQM, confirming earlier results in this direction [5]. In order to look into this effect more thoroughly (and to clarify the role of the k parameter in kNN classification), we have systematically investigated the results of the exhaustive classification scenarios in [6,8]. We found that combining more metrics and choosing k large leads to better results on average, whereas the top results are achieved when using three to six metrics, depending on the considered data set. For the optimal values of k, we are not able to give a clear statement, as k was also found to be 1 for three data sets in Table 1.

In Table 2, we display ISO/IEC 30107-3 compliant results comparing kNN and SVM classification. For correctly interpreting these results, it is important to consider that for kNN classification, we present the best result in terms of ACER achieved when considering all admissible values of k and all possible combinations of IQM. For the kNN case (left half of the table), we also provide the corresponding k value and the number of employed IQM in this best configuration. For SVM, we do not introduce any bias, as all six IQM score values are included in the feature vector. The overall trend in terms of ACER is quite comparable for both kNN and SVM, as the databases exhibiting large and small ACER are identical for both techniques.
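The exhaustive kNN search over IQM combinations and k values described above can be sketched as follows; function and variable names are ours, and scikit-learn stands in for the implementations actually used in the experiments.

```python
from itertools import combinations

import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.neighbors import KNeighborsClassifier

def best_knn_configuration(iqm_scores, labels, subjects, ks=(1, 3, 5, 7, 9)):
    # iqm_scores: (n_samples, 6) matrix of normalised IQM values;
    # labels: 1 = fake/attack, 0 = real; subjects: subject id per sample
    # (defines the leave-one-subject-out folds). Returns the subset/k pair
    # with the lowest ACER; this selection is data-dependent by design,
    # which is exactly the bias discussed in the text.
    iqm_scores = np.asarray(iqm_scores, dtype=float)
    labels = np.asarray(labels)
    logo = LeaveOneGroupOut()
    best_subset, best_k, best_acer = None, None, np.inf
    for size in range(1, iqm_scores.shape[1] + 1):
        for subset in combinations(range(iqm_scores.shape[1]), size):
            X = iqm_scores[:, subset]
            for k in ks:
                preds = np.empty_like(labels)
                for train, test in logo.split(X, labels, groups=subjects):
                    knn = KNeighborsClassifier(n_neighbors=k)
                    preds[test] = knn.fit(X[train], labels[train]).predict(X[test])
                apcer = np.mean(preds[labels == 1] == 0)  # fakes rated real
                npcer = np.mean(preds[labels == 0] == 1)  # reals rated fake
                acer = (apcer + npcer) / 2.0
                if acer < best_acer:
                    best_subset, best_k, best_acer = subset, k, acer
    return best_subset, best_k, best_acer
```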
For both kNN and SVM, there is no clear trend whether APCER is usually larger than NPCER or vice versa. Also, there is no clear trend as to which classification approach is better: SVM is superior for three data sets, while kNN is for six data sets. However, given that the kNN results come from a data-dependent parameter optimisation, SVM is strongly preferable, as its results will generalise well since they are not at all fitted to the data. Overall, we face quite significant variations in the magnitude of the achieved ACER, which implies that the methodology cannot be recommended as a general spoof detection approach but is restricted to suited data sets.

Experiment 2 - results: In Table 3, we show the results achieved when using BRISQUE NSS feature vectors instead of IQM ones, for both kNN and SVM classification. We observe identical behaviour with respect to the relation between APCER and NPCER as observed for the IQM feature vectors (no clear trend as to which type of error is more frequent). When considering ACER, there is no clear improvement when changing from IQM to NSS feature vectors in the case of kNN classification. The situation is very different when considering the SVM results. NSS-based ACER values are clearly better than the IQM-based ones for all but a single database (for which the values are identical), partially considerably so. For example, ACER is reduced from 13.14 to 0.64 for the optical fingerprint data set (WC) and from 4.69/5 to 0.1 for the capacitive fingerprint data set (WOC) and the full-sized fingervein data set, respectively. Also, the ACER values for SVM are superior to their kNN counterparts in the case of the NSS feature vectors. This is of particular interest, as the SVM results are expected to be highly generalisable due to the avoidance of data-specific bias. The significant superiority of SVM-NSS compared to kNN-NSS can probably be attributed to the significantly higher dimension of its feature vectors compared to SVM-IQM, for which SVM is much better able to exhibit its strengths than kNN. As a consequence, we propose the employed SVM-NSS technique as a generic and rather accurate spoof detection methodology.

Experiment 3 - results: The last set of experiments is devoted to the open-set topic, i.e. looking into the effects occurring when the type of evaluated samples is not part of the available training set. Table 4 shows the results when classifying real and fake latent fingerprints (denoted as WOC) with classification based on classical fingerprint data (WC). We compare all four considered classification techniques in this table. When comparing the obtained ACER results with the corresponding ones in Tables 2 and 3, we realise that in all four classification cases, the ACER values are clearly worse in the 'open-set' scenario. Interestingly, the worst ACER results are now exhibited by SVM-NSS, the approach clearly performing best in the 'closed-set' scenario. While this is surprising at first sight, it is in fact not: SVM-NSS is able to generate a very accurate model of the training data and thus performs quite well when working on seen spoof data. In contrast, when confronted with unseen data very different from the training data, many errors occur. Interestingly, not a single real sample is incorrectly classified as a fake one; however, almost every second fake sample is misclassified as a real one. This is also a very different behaviour from that seen in the closed-set scenario.
In the open-set scenario, we observe significantly different magnitudes for APCER and NPCER, and the relation depends on the feature vector type: while for the IQM-based feature vectors the NPCER is clearly larger, the opposite is true for the NSS-based ones. Finally, in Table 5, we display the results obtained when confronting our spoof detection methodology with samples from unseen sensors or unseen subjects. As explained earlier, we only present NPCER or APCER values, as the employed data sets contain only real or only fake samples, but not both. Again, we compare the four classification methodologies considered so far. Additionally, kNN-IQM ∅ denotes the NPCER/APCER averaged over all results obtained by varying the number of used IQM exhaustively and taking the minimal value over k = 1, 3, 5, 7, 9. The aim is to show that the average behaviour of kNN may significantly deviate from the best results presented so far. The results in the table clearly confirm this: the average results are clearly worse than the best ones shown in the first column. In some cases, the difference is small (e.g. fingervein full); in other cases, the results change from perfect spoof detection to entirely useless results, as for iris when changing from the best result to the average behaviour. This also implies that the data dependency of kNN is rather high, which leads to poor generalisation potential for this approach.

When looking at the results overall, we hardly observe any general trends, apart from the fact that the results seem to depend strongly on the data sets considered and on the features/classification schemes employed. SVM-NSS, the classification scheme of choice for the closed-set scenario, performs perfectly for fingervein data, thus enabling cross-sensor spoof detection. On the other hand, for iris data, it does not work at all, classifying almost every real sample as a fake one, while for fingerprint data every other real sample is classified as fake. When looking at the actual pictorial data, it seems that fingervein data from different sensors is more similar than iris or fingerprint data from different sensors (e.g. compare Figs. 4 and 5 for the fingervein case). NSS used with kNN classification exhibits the best overall results, with perfect classification for iris, for three out of six fingerprint settings, as well as for correctly detecting fake fingervein samples of unseen users but an identical sensor. Applying SVM to IQM directly leads to consistent misclassifications in many cases; however, for two cases, the classification is almost perfect. The kNN results using IQM underpin the necessity of the k-parameter optimisation in case sensible results are expected. One might expect that corresponding fingerprint sensor types (i.e. optical versus optical) lead to lower error rates than different ones; however, the results do not reflect this behaviour. Overall, it is impossible to explain most effects in a sound manner, like the almost opposite behaviour of kNN and SVM on iris data for both feature vector types, or the single outlier result of kNN-NSS for full fingervein data versus UTFVP. Also, the reasons for the complete failure of the IQM feature vectors, as opposed to the NSS feature vectors, for the full fingervein versus VERA data are hard to figure out.

Conclusion

We have found a high dependency on the actual data set/modality under investigation when trying to answer the question about the optimal settings for using non-reference IQM for biometric spoof detection.
For some data sets, we obtain an almost perfect separation of real and fake sample data, while for others, ACER values of up to 10% can be observed. The situation changes considerably when directly training NSS features (in our experiments, those used by BRISQUE) on our data, especially when using SVM classification. In this setting, the worst ACER values are bounded by 3.8%, with a majority of the computed ACER values being significantly below 1%, which makes this approach an interesting candidate for a generic spoof detection methodology. In case the proposed spoof detection techniques are confronted with data from unseen sensors and/or subjects (modelling a more realistic open-set classification scenario with incomplete training data), many results seem to be rather unpredictable. Thus, it seems advisable to apply recent open-set classification schemes in order to obtain more stable and more generalisable results in case unseen data is to be expected.

Acknowledgments

This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement no. 700259. This work has also been partially supported by the Austrian Science Fund, project no. 27776.
Compounding of Wealth in Proof-of-Stake Cryptocurrencies

Proof-of-stake (PoS) is a promising approach for designing efficient blockchains, where block proposers are randomly chosen with probability proportional to their stake. A primary concern with PoS systems is the "rich getting richer" phenomenon, whereby wealthier nodes are more likely to be elected, and hence reap the block reward, making them even wealthier. In this paper, we introduce the notion of equitability, which quantifies how much a proposer can amplify her stake compared to her initial investment. Even with everyone following the protocol (i.e., honest behavior), we show that existing methods of allocating block rewards lead to poor equitability, as does initializing systems with small stake pools and/or large rewards relative to the stake pool. We identify a geometric reward function, which we prove is maximally equitable over all choices of reward functions under honest behavior, and we bound the deviation under strategic actions; the proofs involve the study of optimization problems and stochastic dominances of Pólya urn processes, and are of independent mathematical interest. These results allow us to provide a systematic framework for choosing the parameters of a practical incentive system for PoS cryptocurrencies.

Introduction

A central problem in blockchain systems is that of block proposal: how to choose which block, or set of transactions, should be appended to the global blockchain next. Many blockchains use a proposal mechanism by which one node is randomly selected as leader (or block proposer). This leader gets to propose the next block in exchange for a token reward, typically a combination of transaction fees and a freshly-minted block reward, which is chosen by the system designers. These reward mechanisms incentivize nodes to participate in the block proposal procedure and are therefore critical to the security and liveness of the system. Early cryptocurrencies, including Bitcoin, overwhelmingly used a leader election mechanism called proof of work (PoW). Under PoW, all nodes execute a computational puzzle. The node that solves the puzzle first is elected leader; she proves her leadership by broadcasting a solution to the puzzle before the other nodes. Over the years, PoW has shown itself to be extremely robust to security threats, but also extremely energy-inefficient. The Bitcoin network alone is estimated to use more energy than some developed nations [1]. An appealing alternative to PoW is called proof-of-stake (PoS). In PoS, proposers are chosen not according to their computational power, but according to the stake they hold in the cryptocurrency. For example, if Alice holds 30% of the tokens, she is selected as the next proposer with probability 0.3. Although the idea of PoS is both natural and energy-efficient, the research community is still grappling with how to design a PoS system that provides security while also incentivizing nodes to act as network validators. Part of incentivizing validators is simply providing enough reward (in expectation) to compensate their resource usage. However, it is also important to ensure that validators are treated fairly compared to their peers. In other words, it is not enough for them to be compensated adequately on average; the variance also matters. This observation is complicated in PoS systems by a key issue that does not arise in PoW systems: compounding.
Compounding means that whenever a node (Alice) earns a proposal reward, that reward is added to her account, which increases her chances of being elected leader in the future and increases her chances of reaping even more rewards. This leads to a rich-get-richer effect, causing dramatic concentration of wealth. For example, consider what would happen if Bitcoin were a PoS system. Bitcoin started with an initial stake pool of 50 BTC, and the block reward was fixed at 50 BTC/block for several years. Under these conditions, suppose a party A starts with 1/3 of the stake. Using a basic PoS model described in Section 2, A's stake would evolve according to a standard Pólya urn process [11], converging almost surely to a random variable with distribution Beta(1/3, 2/3) [16] (blue solid line in Figure 1). In this example, compounding gives A a high probability of accumulating a stake fraction near 0 or 1. This is highly undesirable, because the proposal incentive mechanism should not unduly amplify or shrink one party's fraction of stake. Notice that this is not caused by adversarial or strategic behavior, but simply by the randomness in the PoS protocol, combined with compounding. In PoW, on the other hand, the analogue would be for party A to hold 1/3 of the computational power. In that case, A's stake after T blocks would instead be binomially distributed with mean (50 × T)/3 (black dashed line in Figure 1). Notice that the binomial (PoW) stake distribution concentrates around 1/3 as T → ∞, so if A contributes 1/3 of the stake at the beginning, she also reaps 1/3 of the rewards in the long term. A natural question is whether we can achieve this PoW baseline distribution in a PoS system with compounding. We study this question from the perspective of the block reward function. Most cryptocurrencies today use a constant block reward function like Bitcoin's, which remains fixed over a long timespan (e.g., years). We ask how a PoS system's choice of block reward function can affect the concentration of wealth, and whether one can achieve the PoW baseline stake distribution simply by changing the block reward function.

This paper has five main contributions:

1. We define the equitability of a block reward function, which intuitively captures how much the fraction of total stake belonging to a node can grow or shrink (under that block reward function) compared to the node's initial investment. An equitable block reward scheme should limit this variability. This metric allows us to quantitatively compare reward functions.

2. We introduce an alternative block reward function called the geometric reward function, whose rewards increase geometrically over time. We show that it is the most equitable PoS block reward function, by showing that it is the unique solution to an optimization problem on the second moment of a time-varying urn process; this optimization may be of independent interest to the applied probability community. We further show that geometric rewards exhibit a number of desirable properties, including stability of rewards in fiat value over time. We note that despite optimizing equitability, geometric rewards do not achieve the PoW baseline stake distribution: this is the inherent price paid for the efficiency afforded by PoS compared to (the energy-inefficient) PoW.
The green histogram in Figure 1 illustrates the empirical, simulated stake distribution when geometric rewards are awarded over a duration of 1,000 blocks and the total rewards are the same as in the PoW example (i.e., 50 × 1,000 units). These simulations are run over 100,000 trials.

3. Borrowing ideas from resource pooling in PoW systems, a plausible strategy for participants with small stakes in a PoS system is to collectively form larger stake pools. We quantify exactly the gain of such stake pool formation in terms of equitability, which proves that participating in a stake pool can significantly reduce the compounding effect of a PoS system.

4. We study the effects of strategic behavior (e.g. selfish mining) on the rich-get-richer phenomenon. We find that, in general, compounding can exacerbate the efficacy of strategic behavior compared to PoW systems. However, these effects can be partially mitigated by carefully choosing the amount of block reward dispensed over some time period relative to the initial stake pool size.

5. Our analyses of the equitability of various reward functions provide guidelines for choosing system parameters, including the initial token pool size and the total rewards to dispense in a given time interval, to ensure equitability under a given block reward function. In particular, we show that cryptocurrencies that start with large initial stake pools (relative to the block rewards being disseminated) can mitigate the concentration of wealth, both for constant and geometric reward schemes.

The rest of this paper is organized as follows. In Section 2, we present our model and discuss its relation to real PoS cryptocurrencies; we also precisely define the constant and geometric block reward functions. In Section 3, we compare constant and geometric block reward schemes, showing that geometric rewards exhibit optimal equitability over all reward schemes. The resulting design decisions in choosing practical parameters of PoS block reward schemes are discussed in Section 4. We use Section 5 to study the effects of strategic behavior on equitability; we find that neither constant nor geometric rewards provide robustness against selfish-mining-type attacks. The desired robustness to strategic behavior in PoS systems is perhaps best achieved via suitable incentive (and disincentive) mechanisms, as discussed in Section 6.

Related work

The potential for poor equitability of PoS systems has been explored in some detail in recent forum and blog posts in the cryptocurrency community [23,19,28], but, to the best of our knowledge, no research has formally or quantitatively studied it. In this work, we quantify the concentration of wealth through a new metric called equitability, which enables us to mathematically compare PoS to PoW, as well as different block reward schemes. As we discuss in Section 2, equitability is closely tied to the variance of a block reward scheme. Thus far, researchers and practitioners have reduced the variance in block rewards through two main approaches: pooling resources (e.g., mining or stake pools) and proposing new protocols for disseminating block rewards. Resource pooling is a common phenomenon in cryptocurrencies. For example, since PoW mining requires substantial computational resources, few nodes are independently able to mine profitably. Mining pools democratize this process by allowing many nodes to participate in mining while sharing block rewards among those nodes [26,9].
Related work

The potential for poor equitability of PoS systems has been explored in some detail in recent forum and blog posts in the cryptocurrency community [23,19,28], but to the best of our knowledge, no research has formally or quantitatively studied it. In this work, we quantify concentration of wealth through a new metric called equitability, which enables us to mathematically compare PoS to PoW, as well as different block reward schemes. As we discuss in Section 2, equitability is closely tied to the variance of a block reward scheme. Thus far, researchers and practitioners have reduced variance in block rewards through two main approaches: pooling resources (e.g., mining or stake pools) and proposing new protocols for disseminating block rewards. Resource pooling is a common phenomenon in cryptocurrencies. For example, since PoW mining requires substantial computational resources, few nodes are independently able to mine profitably. Mining pools democratize this process by allowing many nodes to participate in mining while sharing block rewards among those nodes [26,9].

In PoS systems, the analogous concept is stake pooling, where nodes aggregate their stake under a single node; block rewards are shared across the pool. Like mining pools, stake pools allow less wealthy players to participate in network maintenance. In Section 3.3, we show how much one can gain by participating in a stake pool in terms of equitability, and show that the proposed geometric reward function remains the most equitable even if some of the parties involved form stake pools.

Consider a different scenario in which some reward is deterministically dispensed to every participant of a PoS system at each block proposal, according to predeclared rules; in particular, the block proposer is treated no differently from any other participating party. There is no randomness in such a system and hence no compounding effect. Under these assumptions, [5] studies a problem of organic stake pool formation, where any participant is allowed to create a stake pool, acting as the leader of the pool at some cost. The PoS system designer can choose a reward function $r(a, b) : \mathbb{R} \times \mathbb{R} \to \mathbb{R} \times \mathbb{R}$ to be applied to each pool, where $a$ is the total stake of the pool, $b$ is the stake that the leader holds, $r_1(a, b)$ is shared over all participants of the pool in proportion to their stake, and $r_2(a, b)$ is awarded to the leader. The goal of the system designer is to organically induce a fixed target number $k$ of stake pools by choosing the reward function. Our work differs from [5] in three main respects: (1) while our work aims to optimize equitability, [5] aims to incentivize the formation of a target number of stake pools; (2) we study the effects of compounding on concentration of wealth, whereas [5] does not model compounding; and (3) we study the dynamic setting as opposed to the static setting.

A second class of approaches for reducing variance actually changes the protocol for block reward allocation; our work falls into this category. Two main examples of this approach are Fruitchains [21], which spreads block rewards evenly across a sequence of block proposers, and Ouroboros [14], which rewards nodes for being part of a block formation committee, even if they do not contribute to block proposal. Both of these approaches were proposed in order to provide incentive-compatibility for block proposers; they do not explicitly aim to reduce the variance of rewards. However, they implicitly reduce variance by spreading rewards across multiple nodes, thereby preventing the randomized accumulation of wealth. In our work, instead of changing how block rewards are disseminated, we change the block reward function itself.

Models and Notation

We provide a probabilistic model for the evolution of stakes under a PoS system, and introduce a measure of fairness that we call equitability.

A Simple PoS model

We begin with a model of a chain-based proof-of-stake system with $m$ parties: $\mathcal{A} = \{A_1, \ldots, A_m\}$. We assume that all parties keep all of their stake in the proposal stake pool, which is the pool of tokens used to choose the next proposer. We consider a discrete-time system, $n = 1, 2, \ldots, T$, where each time slot corresponds to the addition of one block to the blockchain. In reality, new blocks may not arrive at perfectly-synchronized time intervals, but we index the system by block arrivals. For any integer $x$, we use the notation $[x] := \{1, 2, \ldots, x\}$. For all $i \in [m]$, let $S_{A_i}(n)$ denote the total stake held by party $A_i$ in the proposal stake pool at time $n$.
We let $S(n) = \sum_{i=1}^{m} S_{A_i}(n)$ denote the total stake in the proposer stake pool at time $n$, and $v_{A_i}(n) := S_{A_i}(n)/S(n)$ denote the fractional stake of node $A_i$ at time $n$. For simplicity, we normalize the initial stake pool size to $S(0) = 1$; this is without loss of generality, as the random process is homogeneous in scaling both the rewards and the initial stake by a constant. Each party starts with an $S_{A_i}(0) = v_{A_i}(0)$ fraction of the original stake. At each time slot $n \in [T]$, the system chooses a proposer node $W(n) \in \mathcal{A}$ such that

$$P\big(W(n) = A_i\big) = v_{A_i}(n-1). \quad (1)$$

Upon being selected as a proposer, $W(n)$ appends a block, or set of transactions, to the blockchain, which is a sequential list of blocks held by all nodes in the system. As compensation for this service, $W(n)$ receives a block reward of $r(n)$ stake, which is immediately added to its allocation in the proposer pool. That is,

$$S_{W(n)}(n) = S_{W(n)}(n-1) + r(n),$$

while all other parties' stakes are unchanged. The reward $r(n)$ is freshly minted at each time step, so it causes the total number of tokens to grow. We assume the total reward dispensed in time period $T$ is fixed, such that $\sum_{n=1}^{T} r(n) = R$.

Modeling Assumptions

This model implicitly makes several assumptions. The first is that we assign a single leader (proposer) per time slot. Many cryptocurrencies have leader election protocols that allow more than one proposer to be chosen per time slot (e.g., Bitcoin, PoSv3, Snow White). If two leaders are elected at time $n$, for example, then each leader can append its block to a block at height $n-1$; here the height of a block is its index in the blockchain. However, in these systems, only one leader can win the block reward, since only one fork of the blockchain ultimately gets adopted. Assuming the final winner is chosen uniformly at random from the set of selected leaders, the dynamics of our Markov process remain unperturbed. Other cryptocurrencies (e.g., Qtum, Particl) choose the next proposer(s) as a function of the time slot and the preceding block. Again, this can lead to multiple proposers per time slot. This does not affect our results in the honest setting (for the same reason as above), but it does impact strategic behavior. In most blockchain systems, honest proposers always build on the head of the blockchain. However, in systems where the proposer's identity depends on the previous block, a strategic node can increase its chance of being a leader by appending to a block that is not at the head of the blockchain. If done repeatedly, the strategic player may eventually produce a chain that is longer than the honest chain (causing the honest nodes to switch over) and that contains mostly blocks belonging to the strategic player. This increases the player's reward, and is called a grinding attack. Such PoS systems are more vulnerable to strategic behavior than the system we analyze, where proposer election is a function of only the time slot. Despite this, we find that even our model is drastically vulnerable to strategic behavior; the problem can only be worse in blockchains that use block contents to choose the next leader. We discuss the implications of this in Section 6.2. We have also assumed that users always re-invest their rewards into the proposer stake pool. We maintain that this is a reasonable assumption for two reasons. (1) In PoS systems where users explicitly deposit stake, existing implementations automatically deposit rewards back into the stake pool.
For example, the reference implementation of Casper the Friendly Finality Gadget (a PoS finalization mechanism proposed for Ethereum) automatically re-allocates all rewards back into the deposited stake pool [24]. (2) In other PoS systems, the stake pool is simply the set of all stake in the system, and is not separate from the pool of tokens used for transactions [8]. Hence, as soon as a proposer earns a reward, that reward is used to calculate the next proposer (modulo some maturity period); the user is not actively re-investing block rewards -- it happens naturally. Finally, we have chosen not to explicitly model node unavailability, e.g., due to hardware or network failures; in our context, node unavailability means that a selected proposer may forfeit its chance to propose, even though it was chosen. Assuming such node failures occur i.i.d. across draws from the proposer pool, such events do not alter our model dynamics. If a proposer is offline, the selection process is simply re-run; the slot in question is given to the next node, which is again chosen proportionally to the stake allocation in the proposer pool.

Block reward choices

Many cryptocurrencies have modeled their block reward strategy after Bitcoin's, which fixes the total supply of coins at about 21 million coins. To achieve this, block rewards are halved every 210,000 blocks (approximately four years) [2]. In between halving events, the block reward remains constant. Figure 2 illustrates this reward schedule in terms of our notation; if we let $T_i$ and $R_i$ denote the $i$th block interval and total reward, respectively, we can take $T_i = 210{,}000$ blocks and $R_i = 50 \cdot (1/2)^{i-1} \cdot 210{,}000$. Several cryptocurrencies have similarly adopted block reward schemes that remain constant over extended periods of time, including Ethereum [3], ZCash [10], Dash [7], and Particl [12]. Note that choosing the $T_i$'s and $R_i$'s is not our main focus; these parameters will likely be chosen based on economic considerations (we discuss this in Sections 3, 4 and 5). Below, we aim to provide guidelines on how to choose $r(n)$ once the $T_i$'s and $R_i$'s are fixed. Other cryptocurrencies have experimented with the block reward function. For example, Monero [18] has a block reward that decays with each block; this is intended to be a continuous interpolation of Bitcoin's piecewise-constant reward function [29]. Peercoin [4], one of the first PoS cryptocurrencies, chooses the next leader based on the age and quantity of stake associated with a given public key. The PoS block reward is chosen as 1% of the product of a public key's stake quantity and stake age [15]; this differs from our model, where the block reward amount does not depend on which proposer is selected.

Systematic choice of reward functions

In this paper, we revisit the question of how to choose $r(n)$. A key observation is that $r(n)$ is ultimately an incentive; it should compensate nodes for the resource cost of proposing blocks. Since this cost is roughly constant over time, many cryptocurrencies implicitly adopt the following maxim: on short timescales, each proposed block should yield the same block reward. Notice that this maxim does not specify whether the value of a block reward is measured in tokens or in fiat currency. As illustrated earlier, most cryptocurrencies today measure value in tokens; that is, they give the same number of tokens for each block. We call this approach the constant block reward function:

$$r_c(n) = \frac{R}{T} \quad \text{for all } n \in [T]. \quad (2)$$
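As a sanity check on this notation, the short sketch below (ours, purely illustrative) tabulates Bitcoin's published halving schedule as $(T_i, R_i)$ pairs, together with the per-block constant reward $r_c(n) = R_i/T_i$ within each interval.

    # Bitcoin's piecewise-constant schedule in (T_i, R_i) notation.
    T_i = 210_000                           # blocks per halving interval (~4 years)
    for i in range(1, 5):                   # first four intervals
        per_block = 50 * (1 / 2) ** (i - 1) # BTC per block within interval i
        R_i = per_block * T_i               # total reward dispensed in interval i
        print(f"interval {i}: r_c(n) = {per_block} BTC/block, R_i = {R_i:,.0f} BTC")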
A natural alternative is to measure the block reward's value in fiat currency. This approach depends closely on the cryptocurrency's valuation (and fluctuations thereof) over the time interval $[T]$. However, if we assume that the cryptocurrency's valuation is constant over $[T]$, then the resulting reward function should always give a constant fraction of the total stake at each time slot. We call this the geometric reward function, defined (with $S(0) = 1$) as

$$r_g(n) = (1+R)^{n/T} - (1+R)^{(n-1)/T}. \quad (3)$$

Figure 3 shows geometric block rewards as a function of time if we use the same $T_i$'s and $R_i$'s as those in Figure 2, which were tailored to Bitcoin's block reward schedule. Note that a currency's valuation can change over the course of $T_i = 210{,}000$ blocks; these parameters were chosen simply to ease the comparison with Figure 2. Our assumption that valuation remains constant can be enforced by choosing a small enough value of $T$. We discuss this parameter choice in Section 4, but in short, we envision $T$ being on the order of a day. Since $T$ is measured in units of blocks, this implies anywhere between thousands and tens of thousands of blocks per time interval.

Equitability

To compare different reward functions, we define a metric called equitability. Consider the stochastic dynamics of the fractional stake of a party $A$ that starts with a $v_A(0)$ fraction of the initial total stake $S(0) = 1$. We denote the fractional stake at time $n$ by $v_{A,r}(n)$, to make the dependence on the reward function explicit. One desirable property of a PoS block reward function is that each node's fractional stake should remain constant over time in expectation: if $A$ contributes 10% of the proposal stake pool at the beginning, then $A$ should reap 10% of the total disseminated rewards on average. Since randomness in proposer elections is essential to current PoS systems, this cannot be ensured deterministically. Hence, a straw-man metric for quantifying fairness is the expected fractional stake at time $T$. This metric turns out to be meaningless, because most PoS systems elect a proposer (in Eq. (1)) with probability proportional to the fractional stake; this approach ensures that each party's expected fractional stake is equal to its initial stake fraction, regardless of the block reward function. Formally, for all $n \in [T]$,

$$\mathbb{E}[v_{A,r}(n)] = v_A(0). \quad (4)$$

This follows from the law of total expectation and the fact that $\mathbb{E}[v_{A,r}(n) \mid v_{A,r}(n-1)] = v_{A,r}(n-1)$, i.e., the fractional stake is a martingale. Although all reward functions yield the same expected fractional stake, the choice of reward function can nonetheless dramatically change the distribution of the final stake, as seen in Figure 1. We therefore instead propose using the variance of the final fractional stake, $\mathrm{Var}(v_{A,r}(T))$, as an equitability metric. Intuitively, smaller variance implies less uncertainty and therefore a higher level of equitability. We make this formal in the following definition.

Definition 1. For a positive vector $\varepsilon \in \mathbb{R}^m$, we say a reward function $r : [T] \to \mathbb{R}_+$ is $\varepsilon$-equitable if, for all $i \in [m]$,

$$\frac{\mathrm{Var}(v_{A_i,r}(T))}{v_{A_i}(0)\,(1 - v_{A_i}(0))} \;\leq\; \varepsilon_i. \quad (5)$$

For two reward functions $r_1 : [T] \to \mathbb{R}_+$ and $r_2 : [T] \to \mathbb{R}_+$, we say $r_1$ is more equitable than $r_2$ if $\mathrm{Var}(v_{A,r_1}(T)) \leq \mathrm{Var}(v_{A,r_2}(T))$ when both random processes start with the same initial fraction $v_{A_i}(0)$ at each party.

The normalization in Eq. (5) ensures the left-hand side is at most one, as we show in Remark 1. It also cancels out the dependence on the initial fraction $v_A(0)$, so that the left-hand side depends only on the reward function $r$ and the time $T$, as shown in Lemma 1.

Remark 1. When starting with an initial fractional stake $v_A(0)$, the maximum achievable variance is

$$\sup_{T, r}\, \mathrm{Var}(v_{A,r}(T)) = v_A(0)\,(1 - v_A(0)),$$

where the supremum is taken over all positive integers $T$ and reward functions $r : [T] \to \mathbb{R}_+$.

Proof. We first prove the converse, namely that $\mathrm{Var}(v_{A,r}(T)) \leq v_A(0)(1 - v_A(0))$ for all $T$ and $r$.
This follows from the fact that $\mathbb{E}[v_{A,r}(T)] = v_A(0)$ and that $v_{A,r}(T)$ is bounded below by zero and above by one; the maximum variance is achieved when all probability mass is concentrated on the boundary points zero and one. We prove achievability by constructing a simple constant reward function whose total reward $R = T^2$ increases super-linearly in $T$: from the variance computation for a constant reward function in Eq. (22), the normalized variance then approaches one as $T$ grows.

For the analysis that follows, define $e^{\theta_n} := S(n)/S(n-1)$, the relative growth of the stake pool at time $n$. From the analysis of a time-dependent Pólya urn model, the variance satisfies the following formula (see proof in Appendix A and also [22]).

Lemma 1. For any reward function $r : [T] \to \mathbb{R}_+$,

$$\mathrm{Var}(v_{A,r}(T)) = v_A(0)\,(1 - v_A(0)) \left(1 - \prod_{n=1}^{T} \frac{2e^{\theta_n} - 1}{e^{2\theta_n}}\right).$$

(Proof in Appendix A.) Hence, although Definition 1 applies to an arbitrary number of parties, Lemma 1 implies that it is sufficient to consider a single party's stake. More precisely:

Remark 2. A reward function $r$ is $\varepsilon \cdot \mathbf{1}$-equitable if it is $\varepsilon$-equitable for a single party, with $\mathbf{1}$ denoting the vector of all ones.

As such, the remainder of this paper studies equitability from the perspective of a single (arbitrary) party $A$. We also describe reward functions as $\varepsilon$-equitable as shorthand for $\varepsilon \cdot \mathbf{1}$-equitable. Note that even if the total reward $R$ is fixed, equitability can differ dramatically across reward functions. In the example of Figure 1, the constant reward function is 0.5-equitable. The geometric reward function of Eq. (3), on the other hand, has a smaller chance of losing all of its fractional stake (i.e., $v_{A,r_g}(T)$ close to zero) or taking over the whole stake (i.e., $v_{A,r_g}(T)$ close to one); it is 0.05-equitable in this example.

Equitability under Honest Behavior

In this section, we analyze the equitability of different block reward functions, assuming that every party is honest (i.e., follows protocol) and the PoS system is closed, so no stake is removed from or added to the proposal stake pool over a fixed time period $T$. Each party's stake changes only because of the block rewards it earns and compounding effects. We discuss the effects of strategic behavior in Section 5, and open systems in Section 6.1. The metric of equitability leads to a core optimization problem for PoS system designers: given a fixed total reward $R$ to be dispensed, how should it be distributed over the time horizon $T$ to achieve the highest equitability? Perhaps surprisingly, this optimization has a simple, closed-form solution.

Theorem 1. For all $R \in \mathbb{R}_+$ and $T \in \mathbb{Z}_+$, the geometric reward $r_g$ defined in Eq. (3) is the most equitable among all reward functions that dispense $R$ tokens over time $T$, jointly for all parties.

Intuitively, geometric rewards optimize equitability because they dispense small rewards in the beginning, when the stake pool is small, so a single block reward cannot substantially change the stake distribution. The rewards subsequently grow proportionally to the size of the total stake pool, so the effect of a single block remains bounded throughout the time period. We emphasize that the geometric reward function depends only on $R$, $S(0)$, and $T$; in particular, it does not depend on how the initial stake is distributed among the participating parties. Hence, it is universally most equitable for all parties in the system simultaneously.
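To make the optimal schedule concrete, the following sketch (ours, assuming $S(0) = 1$) evaluates Eq. (3) and numerically verifies the two properties used in the intuition above: the rewards sum to $R$, and each reward is a fixed fraction of the current stake pool.

    def geometric_rewards(R, T):
        """r_g(n) = (1+R)^(n/T) - (1+R)^((n-1)/T) for n = 1..T, with S(0) = 1."""
        return [(1 + R) ** (n / T) - (1 + R) ** ((n - 1) / T) for n in range(1, T + 1)]

    R, T = 10.0, 1000
    r = geometric_rewards(R, T)
    assert abs(sum(r) - R) < 1e-9              # total dispensed reward is exactly R

    # The reward is a fixed fraction of the pool: S(n)/S(n-1) = (1+R)^(1/T).
    S, ratio = 1.0, (1 + R) ** (1 / T) - 1
    for rn in r:
        assert abs(rn / S - ratio) < 1e-9      # r(n)/S(n-1) is constant over n
        S += rn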
Proof of Theorem 1

Lemma 1 and Remark 2 imply that, in order to show joint optimality over all parties, it suffices to show that, for an arbitrary party $A$, $\mathrm{Var}(v_{A,r_g}(T)) \leq \mathrm{Var}(v_{A,r}(T))$ for all $r \in \mathbb{R}^T$ such that $\sum_{n=1}^{T} r(n) = R$ and $r(n) \geq 0$ for all $n \in [T]$. To this end, we prove that $r_g$ is the unique optimal solution to the problem of minimizing $\mathrm{Var}(v_{A,r}(T))$ over this feasible set. Using Lemma 1, we have an explicit expression for $\mathrm{Var}(v_{A,r}(T))$. After an affine transformation and taking the logarithm of the objective (the term $-2\sum_n \theta_n$ is constant on the feasible set), we obtain the equivalent optimization

$$\max_{\theta} \ \sum_{n=1}^{T} \log\left(2e^{\theta_n} - 1\right) \quad \text{s.t.} \quad \sum_{n=1}^{T} \theta_n = \log(1+R), \ \ \theta_n \geq 0.$$

This is a concave maximization over a (rescaled) simplex. Writing out the KKT conditions with multipliers $\lambda$ and $\{\lambda_n\}_{n=1}^{T}$, we get, for all $n \in [T]$,

$$\frac{2e^{\theta_n}}{2e^{\theta_n} - 1} = \lambda - \lambda_n, \qquad \lambda_n \theta_n = 0, \qquad \lambda_n \geq 0.$$

Among the solutions of these conditions, we show that $\theta^* = (\log(1+R)/T)\,\mathbf{1}$ is the unique optimum, where $\mathbf{1}$ is the vector of all ones. Consider a solution of the KKT conditions that is not $\theta^*$. Then we can strictly improve the objective by the following operation. Let $i, j \in [T]$ denote two coordinates such that $\theta_i = 0$ and $\theta_j \neq 0$. We create $\tilde{\theta}$ by mixing $\theta_i$ and $\theta_j$, such that $\tilde{\theta}_n = \theta_n$ for all $n \neq i, j$ and $\tilde{\theta}_i = \tilde{\theta}_j = \theta_j/2$. We claim that $\tilde{\theta}$ achieves a strictly larger objective, as $\log(2e^{\theta_j} - 1) < 2\log(2e^{\theta_j/2} - 1)$; this follows from Jensen's inequality and the strict concavity of the objective function. Hence, $\theta^*$ is the only solution of the KKT conditions that cannot be improved upon. In terms of the reward function, this translates into $S(n)/S(n-1) = (1+R)^{1/T}$ and $r(n) = r_g(n)$ as defined in Eq. (3).

Composition

The geometric reward function does not only optimize equitability for a single time interval. Consider a sequence $(T_1, R_1), \ldots, (T_k, R_k)$ of checkpoints, where $T_i$ is increasing in $i$ and $R_i$ denotes the amount of reward to be disbursed between times $T_{i-1}+1$ and $T_i$ (inclusive). These checkpoints could represent target inflation rates on a monthly or yearly basis, for instance. A natural question is how to choose a block reward function that optimizes equitability over all the checkpoints jointly. The solution (Theorem 2, proved in Appendix B) is to iteratively and independently apply geometric rewards over each time interval, giving a block reward function like the one shown in Figure 3; see the sketch after this paragraph. Notice that when there is only one checkpoint, Theorem 2 simplifies to Theorem 1. This result implies that checkpoints can be chosen adaptively; they do not need to be fixed upfront to optimize equitability. One practical concern with concatenating checkpoints in this manner is that the change in block rewards before and after a checkpoint can be dramatic, as seen in Figure 3. This could cause other problems, such as proposers leaving the system. Hence, a PoS system need not choose its block reward function based on equitability alone; it could also consider smoothness and/or monotonicity constraints, for instance. We leave such investigations to future work. Because of composition, we assume a single checkpoint for the remainder of this paper.
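Under our reading of Theorem 2, the per-interval construction looks as follows (an illustrative sketch; the function name and the representation of checkpoints as (interval length, per-interval reward) pairs are ours). Within each interval, the pool grows by a constant per-block factor determined by that interval's total reward.

    def composed_geometric_rewards(S0, checkpoints):
        """checkpoints: list of (length, R_i) pairs, where `length` is the
        number of blocks in interval i and R_i is the reward dispensed in it.
        Applies the geometric schedule independently within each interval."""
        rewards, S = [], S0
        for length, R_i in checkpoints:
            growth = ((S + R_i) / S) ** (1 / length)  # per-block pool growth factor
            for _ in range(length):
                r = S * (growth - 1)                  # constant fraction of the pool
                rewards.append(r)
                S += r
        return rewards

    # Example: two checkpoints with different inflation targets.
    rs = composed_geometric_rewards(S0=1.0, checkpoints=[(100, 10.0), (100, 5.0)])
    print(f"total dispensed: {sum(rs):.6f}")                      # ~15.0
    print(f"jump at checkpoint: {rs[99]:.4f} -> {rs[100]:.4f}")   # sharp drop

The printed jump at the checkpoint boundary illustrates the practical concern raised above: the reward schedule is discontinuous across checkpoints even though it is optimal for equitability.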
Equitability of Stake Pools

Participants have the freedom to form stake pools, as explored in [26,9,5]. We show in this section that stake pool formation reduces the variance of the fractional stake of every member of the pool, and we characterize exactly how much one gains. Consider a single party $A$ that owns a $v_A(0)$ fraction of the stake at time $t = 0$. We know from Lemma 1 that the variance at time $T$ is

$$\mathrm{Var}(v_{A,r}(T)) = v_A(0)\,(1 - v_A(0))\left(1 - \prod_{n=1}^{T} \frac{2e^{\theta_n}-1}{e^{2\theta_n}}\right). \quad (17)$$

Now suppose the same party participates in a stake pool $P$ holding a $v_P(0)$ fraction of the initial stake (including the contribution of party $A$); every time the pool is awarded a block proposal reward, the reward is shared among the participants of the pool in proportion to their stakes. The stake of party $A$ under this pooling is denoted by $\tilde{v}_A(T)$; since $\tilde{v}_A(T) = (v_A(0)/v_P(0))\, v_{P,r}(T)$, it follows from Lemma 1 immediately that

$$\mathrm{Var}(\tilde{v}_A(T)) = \frac{v_A(0)^2}{v_P(0)^2}\; v_P(0)\,(1 - v_P(0)) \left(1 - \prod_{n=1}^{T} \frac{2e^{\theta_n}-1}{e^{2\theta_n}}\right).$$

Party $A$'s variance therefore reduces by a factor of $\frac{v_A(0)(1 - v_P(0))}{v_P(0)(1 - v_A(0))} \leq 1$ by joining a stake pool of size $v_P(0) \geq v_A(0)$. For example, if everyone in the system forms a single pool, there is no randomness left and the variance is zero. Note that the variance is monotonically decreasing under stake pooling. In practice, stake pools can organically form as long as this gain in equitability exceeds the cost involved in forming them. Applying Definition 1 to a single party $A$, the equitability of a party improves by the same factor by forming a stake pool. Further, the geometric reward function is still the most equitable reward function in the more general setting where proposers are free to form stake pools; this follows from the fact that the effect of pooling is isolated from the effect of the choice of the reward function in Eq. (17).

Practical Parameter Selection

The equitability of a system is determined by four factors: the number of block proposals $T$, the choice of reward function $r$, the initial stake $v_A(0)$ of a party, and the total reward $R$. We previously saw that geometric rewards optimize equitability over choices of the reward function; in this section, we study the dependence of equitability on $T$, $S(0)$, and $R$. Recall that, without loss of generality, we normalized the initial stake $S(0)$ to one. For general choices of $S(0)$, the total reward $R$ should be rescaled by $1/S(0)$: the evolution of the fractional stakes is exactly the same for a system with $S(0) = 2$ and $R = 200$ as for one with $S(0) = 1$ and $R = 100$. Although these parameters may be chosen according to external considerations (e.g., interest rates, proposer incentives), we assume in this section that the system designer is free to choose the total reward $R$, either by setting the initial stake size $S(0)$ and/or by setting the total reward dispensed during $T$. We study how equitability trades off with the total reward $R$ for different choices of the reward function. Concretely, we consider a scenario where $r$ and $v_A(0)$ are fixed and $T$ is a large enough integer, and we ask how many tokens can be dispensed while maintaining a desired level of equitability $\varepsilon$.

Geometric reward function

For $r_g(n)$, we have $e^{\theta_n} = (1+R)^{1/T}$. It follows from Lemma 1 that

$$\mathrm{Var}(v_{A,r_g}(T)) = v_A(0)\,(1 - v_A(0))\left(1 - \left(\frac{2(1+R)^{1/T} - 1}{(1+R)^{2/T}}\right)^{T}\right). \quad (18)$$

When $R$ is fixed and we increase $T$, we distribute small amounts of reward across $T$ slots and achieve vanishing variance. On the other hand, if $R$ increases much faster than $T$, then we are giving out increasing amounts of reward per time slot and the uncertainty grows. This follows from the above variance formula, which we make precise in the following.

Remark 3. For a closed PoS system with a total reward $R(T)$ chosen as a function of $T$ and a geometric reward function $r_g(n) = (1 + R(T))^{n/T} - (1 + R(T))^{(n-1)/T}$, it is sufficient and necessary to set

$$R(T) = e^{\sqrt{T \log\frac{1}{1-\varepsilon}}} - 1$$

in order to ensure $\varepsilon$-equitability asymptotically, i.e., to ensure that the normalized variance converges to $\varepsilon$ as $T \to \infty$. This follows from substituting the choice of $R(T)$ into the variance in Eq. (18), whose normalized value behaves as $1 - e^{-(\log(1+R))^2/T}$ for large $T$. The limiting variance is monotonically non-decreasing in $R$ and non-increasing in $T$, as expected from our intuition. For example, if $R$ is fixed, one can take the initial stake $S(0)$ as small as $\exp(-\sqrt{T}/\log T)$ and still achieve vanishing variance. As the geometric reward function achieves the smallest variance (Theorem 1), the above $R(T)$ is the largest reward that can be dispensed while achieving a desired normalized variance of $\varepsilon$ in time $T$ (with an initial stake of one); it scales as $R(T) \asymp e^{\sqrt{T}}$. We need more initial stake, or less total reward, if we choose to use other reward functions.
Constant reward function

In comparison, consider the constant reward function of Eq. (2). As $e^{\theta_n} = (1 + nR/T)/(1 + (n-1)R/T)$, it follows from Lemma 1 that

$$\mathrm{Var}(v_{A,r_c}(T)) = v_A(0)\,(1 - v_A(0))\,\frac{R^2}{(1+R)(T+R)}. \quad (22)$$

Again, this is monotonically non-decreasing in $R$ and non-increasing in $T$, as expected. The following condition immediately follows from Eq. (22): it is sufficient and necessary to set $R(T) = \Theta(\varepsilon T)$ in order to ensure $\varepsilon$-equitability asymptotically as $T$ grows. By choosing a constant reward function, the cost we pay is in the size of the total reward, which can now only increase as $O(T)$. Compared to the $R(T) \asymp e^{\sqrt{T}}$ of the geometric reward, this is a significant gap. Similarly, in terms of how small the initial stake can be for a fixed total reward $R$, the constant reward function requires at least $S(0) \gtrsim R/T$. This trend becomes even more extreme for a decreasing reward function.

Decreasing reward function

Some cryptocurrencies use continuously decreasing reward functions; for instance, Monero dispenses block rewards according to a smoothly decaying schedule, as discussed in Section 2.3. Its normalized variance is compared with the other reward functions in Figure 4 below.

Comparison of Reward Functions

For a choice of $S(0) = 1$ and $R = 10$, Figure 4 illustrates the normalized variance of the three reward functions as a function of $T$, the number of blocks over which the reward is dispensed. As expected, variance decays with $T$, and geometric rewards exhibit the lowest normalized variance. Similarly, for a fixed desired (normalized) variance level of $\varepsilon = 0.1$, Figure 5 shows how much the total reward can grow as a function of the time $T$. Notice that under constant rewards, the reward allocation grows linearly in $T$, whereas geometric rewards grow subexponentially fast while still satisfying the same equitability constraint. These observations add nuance to the ongoing conversation about how to initialize cryptocurrency tokens that are not considered securities from a regulatory perspective. In the 2018 class action lawsuit of Coffey vs. Ripple [27], one of the primary complaints against Ripple was the fact that "all 100 billion of the XRP in existence were created out of thin air by Ripple Labs at its inception." Our results suggest that in a PoS system, a large initial stake pool can actually help to ensure equitability.
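The qualitative gap in Figure 4 can be reproduced with a direct Monte Carlo estimate of the normalized variance of Definition 1 (a sketch under our model; the paper's exact curves come from the closed-form expressions above, and the trial count here is kept small for speed).

    import random

    def final_fraction(rewards, v0=0.5, S0=1.0):
        """One sample of party A's final stake fraction under a reward schedule."""
        sA, S = v0 * S0, S0
        for r in rewards:
            if random.random() < sA / S:   # A elected proposer for this block
                sA += r
            S += r
        return sA / S

    def normalized_variance(rewards, v0=0.5, trials=5000):
        xs = [final_fraction(rewards, v0) for _ in range(trials)]
        mean = sum(xs) / trials
        var = sum((x - mean) ** 2 for x in xs) / trials
        return var / (v0 * (1 - v0))       # normalization from Definition 1

    R, T = 10.0, 1000
    constant = [R / T] * T
    geometric = [(1 + R) ** (n / T) - (1 + R) ** ((n - 1) / T) for n in range(1, T + 1)]
    print("constant :", normalized_variance(constant))    # larger
    print("geometric:", normalized_variance(geometric))   # smaller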
Strategic Behavior

In reality, proposers can behave strategically to maximize their rewards. The most well-known strategic attack is selfish mining, proposed by Eyal and Sirer in the context of proof-of-work [9] and extended in [25,20]. In selfish mining, adversarial miners who discover blocks do not immediately publish them; rather, they build a private side chain of blocks. By eventually releasing a side chain that is longer than the main chain, the adversary can override blocks that were mined by the honest party. This has two effects: first, it gives the adversary a greater fraction of blocks in the main chain (and hence block rewards) than it would get by mining honestly; second, it forces honest parties to waste effort mining blocks that have a low chance of being accepted in the long term. Although selfish mining refers to a specific strategy designed for a PoW system, the concept of building side chains for increased profit applies to PoS systems as well. In this section, we show that such strategic attacks are exacerbated by the compounding effects of PoS. Contrary to the scenario where everyone behaves honestly, we empirically show that geometric reward functions do not mitigate the effects of compounding when strategic actors are present.

Model

We restrict ourselves in this section to two parties: $A$, which is adversarial, and $H$, which is honest. Note that this is without loss of generality: $H$ can represent the collective set of multiple honest parties, as their behavior is independent of how many parties are involved in $H$, and the adversarial party $A$ can represent the collective set of multiple adversarial parties, since a single adversary $A$ is the worst case in which all adversaries collude. Throughout this section, we use the terms adversarial and strategic interchangeably to refer to the party that strategically deviates from honest behavior. Since $A$ does not always publish its blocks according to schedule, we distinguish the notion of a block slot (indexed by $n \in [T]$) from wall-clock time (indexed by $t \in [T]$). It is still the case that each block slot $n$ has a single leader $W(n)$ -- in practice, this is determined by a distributed protocol -- and a new block slot leader is elected at every tick of the wall clock (i.e., at a given time $t$, $W(n)$ is only defined for $n \leq t$). However, due to strategic behavior (i.e., the adversary can withhold its own blocks and override honest ones), it can happen that no block occupies slot $n$, even at time $t \geq n$; moreover, the occupancy of block slot $n$ can change over time. Thus, unlike in our previous setting, if we wait $T$ time slots, the resulting chain may have fewer than $T$ blocks. This is consistent with the adversarial model considered in PoS systems (like Ouroboros [14]) that elect a single leader per block slot. Other PoS systems, like PoSv3 [8], choose an independent leader to succeed each block; such a PoS model can lead to even worse attacks, which we do not consider in this work.

The honest party and the adversary have two different views of the blockchain, illustrated in Figure 6. Both honest and adversarial parties see the main chain $B_t$; we let $B_t(n)$ denote the block (i.e., leader) of the $n$th slot, as perceived by the honest nodes at time $t$. If block slot $n$ does not have an associated block at time $t$ (either because the $n$th block was withheld or overridden, or because $n > t$), we say that $B_t(n) = \emptyset$. Notice that due to adversarial manipulations, it is possible for $B_t(n) = \emptyset$ and $B_{t-1}(n) \neq \emptyset$, and vice versa. In addition to the main chain, the adversary maintains arbitrarily many private side chains, $\tilde{B}^1_t, \ldots, \tilde{B}^s_t$, where $s$ denotes the number of side chains. The blocks in each side chain must respect the global leader sequence $W(n)$. An adversary can choose at any time to publish a side chain, but we also assume that the adversary's attacks are covert: it never publishes a side chain that conclusively proves that it is keeping side chains. For example, if the main chain contains a block $B$ created by the adversary for block slot $n$, the adversary will never publish a side chain containing a block $\tilde{B} \neq B$ that is also associated with block slot $n$. Each side chain $\tilde{B}^i_t$ with $i \in [s]$ overlaps with the honest chain in at least one block (the genesis block), and may diverge from the main chain after some fork height $f^i_t \in \mathbb{N}_+$ (Figure 6). Different side chains can also share blocks; in reality, the union of side chains is a tree. However, for simplicity of notation, we consider each path from the genesis block to a leaf of this forest as a separate side chain, instead of considering side trees. We use $\ell_t$ and $\tilde{\ell}^i_t$ to denote the chain lengths of $B_t$ and $\tilde{B}^i_t$, respectively, at time $t$. Notice in particular that the set of side chains can grow exponentially in $t$.
In practice, most systems prevent the main chain from being overtaken by a longer side chain that branches more than $\Delta$ blocks prior to the current head $h_t$; allowing this would admit a long-range attack. Hence, we can upper bound the size of the side chain set by imposing the condition that $h_t - f^i_t \leq \Delta$ for all $i \in [s]$. Regardless, the size of the state space is considerably larger than in prior work on selfish mining in PoW [25], where the computational cost of creating a block forces the adversary to keep a single side chain.

Objective. The adversary $A$'s goal is to maximize its fraction of the total stake in the main chain by the end of the experiment, $v_A(T)$. This objective is closely related to the metric of prior work [25], except for the finite time duration.

Strategy space. The adversary has two primary mechanisms for achieving its objective: choosing where to append its blocks, and choosing when to release a side chain. If the honest party $H$ is elected at time $t$, by the protocol it always builds on the longest chain visible to it; since we assume small enough network latency, $H$ appends to block $B_{t-1}(h_{t-1})$. However, if $A$ is elected at time $t$, $A$ can append to any known block in $B_{t-1} \cup \{\tilde{B}^i_{t-1}\}_{i \in [s]}$. The system must allow such behavior for robustness reasons: even an honest proposer may not have received block $B_{t-1}(h_{t-1})$ or its predecessors due to network latency. The adversary can also choose when to release blocks. In our model, $H$ always releases its block immediately when elected. However, an adversarial proposer elected at time $t$ can choose to release its block at any time $\geq t$; it can also choose not to release a given block at all. Late block announcements are also tolerated because of network latency: it is impossible to distinguish between a node that releases its blocks late and a node whose blocks arrive late because of a poor network connection. Notice that if $A$ is elected at time $t$ and chooses to withhold its block, the system advances to time $t+1$ without appending $A$'s block to the main chain. This means that the next proposer $W(t+1)$ is selected based on the stake ratios at time $t-1$. So the adversary may incur a selfish-mining gain from withholding its block, but it loses the opportunity to compound the $t$th block reward. This tradeoff is the main difference between our analysis and prior work on selfish mining attacks in PoW systems. Drawing from [9,25], at each time slot $t$, the adversary has three classes of actions available to it: match, override, and wait.

- The adversary matches by choosing a side chain $\tilde{B}^i_t$ and releasing its first $h_t$ blocks, so that the released chain has the same height as the honest chain. In accordance with [9,25], we assume that after a match, the honest party chooses to build on the adversarial chain with probability $\gamma$, which captures how well-connected the adversarial party is to the rest of the nodes.
- The adversary overrides by choosing a side chain $\tilde{B}^i_t$ and releasing its first $h = h_t + 1$ blocks. The released chain becomes the new honest chain.
- If the adversary chooses to wait, it does not publish anything and continues to build on all of its side chains.

Unlike [9,25], we do not explicitly include an action wherein the adversary adopts the main chain. Because our model allows the adversary to keep an unbounded number of side chains, adopting the main chain is always a suboptimal strategy: it would force the adversary to throw away chains that could eventually overtake the main chain.
The primary nuance in the adversary's strategy is choosing when to match or override (rather than waiting), and which side chain to choose. Identifying an optimal mining strategy through MDP solvers, as in [25], is computationally intractable due to the substantially larger state space of this PoS problem. Hence, in the following sections, we discuss specific strategies that can increase the adversary's reward.

Strategic selfish mining

We show that the adversary can gain significantly by acting strategically, and that this gain is exacerbated by the effect of compounding. As with selfish mining strategies originally introduced in the PoW setting, an adversary can build side chains to potentially take over the main chain. The first critical difference is that an adversary can build arbitrarily many side chains branching from anywhere in the main chain without additional cost (other than the memory required to store them); in PoW, this is prevented by the computational power required to create each additional side chain. Secondly, block rewards are withheld for adversarial blocks that are held aside to build side chains. Under compounding, delaying the rewards of such side chains costs the adversary in the subsequent proposer elections, as the adversary is that much less likely to be elected leader. An adversary therefore needs a strategy that balances the gain from keeping a long side chain, which can potentially overtake a long main chain, against the loss in the intermediate leader elections due to the withheld rewards.

We propose a family of strategies that we call Match-Override-k (MO-k). Under the MO-k strategy, the adversary only keeps side chains of length at most $k$ beyond the main chain. Concretely, the adversary proceeds as follows (see also the sketch after this paragraph). Every time a new honest block is generated, it is appended to the main chain, and the adversary then takes one of the following actions. If there is a side chain $\tilde{B}^i_t$ such that $h_t = f^i_t$ and $\tilde{\ell}^i_t \geq \ell_t$, then the adversary matches with the side chain $\tilde{B}^i_t$ with the smallest $f^i_t$; in this case the main chain length does not change, the side chain $\tilde{B}^i_t$ remains, and all other side chains are discarded. Otherwise, if there is no such chain to match, the main chain remains as is, the adversary waits, and any side chain $\tilde{B}^i_t$ with $\tilde{\ell}^i_t < \ell_t$ is discarded, as such side chains are too short and have little chance of taking over the main chain (this action is known as adopt in the original selfish mining strategy). Every time a new adversarial block is generated, the adversary appends this block to every side chain it is currently managing; it also starts a new side chain branching from the top of the main chain, if there is not one there already. Then one of the following actions is taken. If there is only one side chain $\tilde{B}^i_t$ and it satisfies both $h_t = f^i_t$ and $\tilde{\ell}^i_t \geq \ell_t + k$, then the adversary overrides with this $\tilde{B}^i_t$; in this case the main chain gains one adversarial block, and the side chain $\tilde{B}^i_t$ remains. Otherwise, the main chain remains as is and the adversary waits.
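The MO-k decision rule above can be rendered roughly as follows (a loose sketch of ours; the bookkeeping of actual blocks, fork resolution, and reward accounting is omitted, and side chains are summarized as (fork height, length) pairs).

    def mo_k_action(h_t, chains, new_block_is_adversarial, k):
        """Decision rule of Match-Override-k, mirroring the prose above.
        `chains` is a list of (f, l) pairs: fork height f and chain length l."""
        if new_block_is_adversarial:
            # Override only with a lone side chain forking at the tip, k ahead.
            if len(chains) == 1 and chains[0][0] == h_t and chains[0][1] >= h_t + k:
                return "override"
            return "wait"
        # Honest block arrived: match if a side chain forks at the tip and is
        # at least as long as the main chain; otherwise wait.
        if any(f == h_t and l >= h_t for f, l in chains):
            return "match"
        return "wait"

    # Example: one side chain forked at the tip (height 5) with 9 blocks.
    print(mo_k_action(h_t=5, chains=[(5, 9)], new_block_is_adversarial=True, k=4))
    # -> "override"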
Figure 7 shows how much the adversary can gain in expected fractional stake by using MO-k strategies. As the total reward $R$ increases, the normalized fractional stake approaches 3, which is the maximum achievable value since the expected fractional stake is normalized by $v_A(0) = 1/3$. When the adversary is well connected to the honest nodes ($\gamma = 1.0$), such attacks are effective even with short side chains, e.g., $k = 3$ or $4$; further, there is no distinguishable difference between the reward functions used. On the other hand, when the adversary has only an equal chance of matching honest chains ($\gamma = 0.5$), it is more effective to keep longer side chains. Overall, the effect of strategic behavior is exacerbated by compounding.

Fig. 7: The average fractional stake of an adversary can increase significantly as the total reward $R$ increases. We fix the initial fraction $v_A(0) = 1/3$, $S(0) = 1$, and $T = 10{,}000$ time steps, and show results for two values of the adversary's network connectivity $\gamma \in \{0.5, 1.0\}$ (defined in the strategy space subsection of Section 5.1) and varying total reward $R$.

Upper bound

We assume a constant reward function in which a reward of $c$ is dispensed to a proposer whose block is appended to the main chain. We begin with an upper bound on $v_A(t)$, the fraction of stake that can be achieved by the adversary.

Always-Match-1 (AM-1): To show our upper bound, we analyze a random process called always-match-1 (AM-1). AM-1 is an urn process with state $(X_A(t), X_H(t))$, where, as before, $S_A(t)$ denotes the number of tokens held by party $A$ at time $t$. $X_H$ and $X_A$ can be thought of as the honest and adversarial stake, respectively; compared to $S_A$ and $S_H$, they evolve under different dynamics, described below. We let $v_A(t) := X_A(t)/(X_A(t) + X_H(t))$ denote the fraction of the urn occupied by $X_A$ at time $t$. At each tick of a discrete clock, a draw is won by $X_A$ with probability $v_A(t-1)$ and by $X_H$ otherwise, and the state is updated as follows: if the honest $X_H$ wins the draw, the honest pool gains $c$ units of reward; if the adversarial pool $X_A$ instead wins the draw, it negates $c$ honest units and adds $c$ units to the adversarial pool. The following theorem shows that AM-1 gives a universal upper bound on $v_A(t)$ under arbitrary strategic behavior by the adversary. We refer to Appendix C for a proof.

Theorem 3. Under the constant reward function, for any adversarial strategy resulting in a stake fraction time series $v_A(t)$, the AM-1 random process $\tilde{v}_A(t)$ stochastically dominates it, i.e., $P(\tilde{v}_A(t) \geq a) \geq P(v_A(t) \geq a)$ for all $a \in [0, 1]$ and any $t \in \mathbb{Z}_+$.

Figure 8 (left) shows that, for small values of the total reward ($R \leq 2S(0)$) and when adversaries are well connected to the honest nodes ($\gamma = 1$), the AM-1 upper bound is quite close to the achievable strategy MO-4. The right panel shows that when the adversaries are less connected ($\gamma = 0.5$), strategic behavior takes over less stake. We next analyze an upper bound (inspired by AM-1) which reveals that a PoS system is less vulnerable to strategic attacks when the initial stake $S(0)$ is larger.

Analytical upper bound

We introduce and analyze a new random process called always-match-2 (AM-2), which is an upper bound on AM-1 but has the merit that its expected fractional stake is tractable in closed form.

Always-Match-2 (AM-2): Like AM-1, AM-2 is an urn process with state $(X_A(t), X_H(t))$, where, as before, $S_A(t)$ denotes the number of tokens held by party $A$ at time $t$. At each tick of a discrete clock, the state is updated as in AM-1, except that a winning adversarial draw adds $2c$ units to the adversarial pool (while still removing $c$ honest units). The addition of $2c$ units of adversarial reward keeps the total change in urn size constant across time steps, which simplifies the analysis of this urn process. The following theorem shows that AM-2 gives an upper bound on the AM-1 process; we refer to Appendix D for a proof.

Theorem 4. The fractional stake of the AM-2 process stochastically dominates that of the AM-1 process started from the same initial state.
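The two urn processes are easy to simulate side by side (a sketch under our reading of the update rules above; parameters are chosen so that the total reward $R = Tc$ stays below the honest party's initial stake, so the honest pool cannot be driven below zero).

    import random

    def am_process(v0, S0, c, T, adversarial_bonus):
        """Urn dynamics of AM-1 (bonus=1) and AM-2 (bonus=2):
        honest draw:      X_H += c
        adversarial draw: X_H -= c, X_A += bonus * c"""
        xA, xH = v0 * S0, (1 - v0) * S0
        for _ in range(T):
            if random.random() < xA / (xA + xH):
                xA += adversarial_bonus * c
                xH -= c
            else:
                xH += c
        return xA / (xA + xH)

    random.seed(1)
    T, R = 1000, 0.5                 # R < honest stake 2/3, so X_H stays positive
    c = R / T
    trials = 2000
    am1 = sum(am_process(1/3, 1.0, c, T, 1) for _ in range(trials)) / trials
    am2 = sum(am_process(1/3, 1.0, c, T, 2) for _ in range(trials)) / trials
    print(f"mean adversarial fraction  AM-1: {am1:.4f}  AM-2: {am2:.4f}")  # AM-2 >= AM-1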
We are interested in how much an adversary can gain by acting strategically. The above theorem provides a tool for characterizing an upper bound on any strategy, by analyzing AM-2. This is made formal in the following theorem; we refer to Appendix H for a proof.

Theorem 5. Let $v_A(t)$ denote the fractional stake of the adversary under the AM-2 process, when the total initial stake is $S(0)$, the initial fractional stake of the adversary is $v_A(0)$, and the total reward dispensed over $T$ time steps is $R$. Then

$$\mathbb{E}[v_A(T)] \leq v_A(0)\,(1 + \eta),$$

where $\eta := R/(S(0) + c)$. Under the assumption that $R$ is less than the stake of the honest party (to ensure that the honest party's stake does not vanish to zero), the gain of an adversarial strategy over an honest strategy is bounded by $\mathbb{E}[v_A(T)] - v_A(0) \leq v_A(0)\,\eta$, where we used the fact that when everyone is honest, the mean fractional stake remains $v_A(0)$ for all $t$. This implies that having a small initial stake $S(0)$ relative to the total reward $R$ makes the system vulnerable to adversarial strategies, which justifies the common practice of starting a PoS system with a large initial stake. Further, this analysis allows us to quantify the price of compounding under adversarial strategies. When there is no compounding effect -- either under a PoW system or because rewards are not automatically appended to the stake -- an upper bound for the adversarial strategies we consider in this paper has been analyzed in [25]. Translating that bound into the same notation as Theorem 5, we get that, when there is no compounding, an adversary's fractional stake gain is bounded with high probability, for large enough $T$, by a quantity that remains constant in $\eta$. Compared to Eq. (29), when $\eta$ is large, compounding allows the adversary's gain to grow linearly in $\eta$, whereas the gain is constant in $\eta$ without compounding. This shows that strategic parties can gain significantly over honest parties under PoS systems with compounding effects.

Discussion

There are three main issues related to actually building a chain-based PoS system with geometric rewards. The first is how to choose the relevant parameters $T$ and $R$, which was discussed at length in Section 4. The second is how to deal with changing stake fractions that arise from user-initiated transactions (e.g., users selling their stake), discussed below in Section 6.1. The third is how to handle strategic behavior by block proposers in practice, discussed in Section 6.2.

Dynamic Proposer Stake

One challenge in the analysis of PoS systems is the fact that stake can move rapidly between parties, e.g., if nodes choose to sell their stake. Computing the objective function in the optimization of equitability is tedious when accounting for the dynamic addition and removal of stake, and it is not clear that geometric rewards are robust to rapid stake transactions. However, in practice, PoS systems often restrict the timescale over which stake can be added or removed, precisely to add robustness. For example, Casper FFG constrains users to keep their stake in a validation pool for at least 4 months in order to participate [6,24]. In our system, an analogous stability constraint would be to impose that stake ratios not change during each time interval of $T$ blocks. If this constraint is met, then geometric rewards can be recalculated at each block interval $T$ to account for the dynamically changing stake pool. Theorem 2 implies that this strategy optimizes the overall equitability of the reward scheme, even if the stake transactions are not known a priori. Moreover, if we choose $T$ on the order of days, as suggested in Section 4, this constraint is relatively mild from a user's perspective.
It is important to note that users need not explicitly deposit their funds into a common pot in order to enforce the proposed stability constraints. They can be enforced implicitly by programming the selection mechanism to consider only stake that has been associated with the same public key for some minimum time interval. Such a strategy has been suggested in several proposed PoS systems, including Ouroboros [14], Algorand [17], and Casper [6,24].

Control selfish mining

Strategic behavior is a significant concern in PoW cryptocurrencies [9,20,25], and even more so in PoS systems. In Section 5, we demonstrated the efficacy of a strategic attack through which a rational user can artificially boost her proportion of the block rewards. In a sense, the results of Section 5 are negative: choosing a small reward (with respect to the initial stake) at each time step does not fully solve the problem, and there may be economic reasons to give out larger block rewards within a given time period. Ultimately, we expect that this problem cannot be solved solely by changing the block reward function. Rather, it may be more effective to control the effects of strategic behavior than to identify a scheme under which strategic behavior is equivalent to honest behavior. For instance, the proposer selection protocol could choose only proposers whose fraction of proposed blocks in the last $K$ blocks is commensurate with the proposer's stake (within some statistical error). Such a policy would detect nodes that produce more than their fair share of blocks, and limit their ability to propose more blocks.
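One way to realize such a detection policy (our illustrative rendering; the function name, the binomial model, and the z-score threshold standing in for "statistical error" are all assumptions, not a protocol from the paper) is:

    import math

    def within_fair_share(blocks_proposed, K, stake_fraction, z=3.0):
        """Flag proposers whose block count over the last K slots deviates
        from their stake share by more than z standard deviations, modeling
        honest proposal counts as Binomial(K, stake_fraction)."""
        expected = K * stake_fraction
        stddev = math.sqrt(K * stake_fraction * (1 - stake_fraction))
        return abs(blocks_proposed - expected) <= z * stddev

    # A node with 10% stake proposing 150 of the last 1000 blocks is suspicious:
    print(within_fair_share(150, 1000, 0.10))   # False -> limit its eligibility

The threshold trades false positives (honest nodes that were simply lucky) against detection delay; a real deployment would need to tune $K$ and the tolerance against the adversary's expected gain.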
Conclusion

In this work, we study the effects of compounding and the choice of block reward function on the concentration of wealth in PoS cryptocurrencies. We measure this concentration of wealth through a proposed metric called equitability, which captures the (normalized) variance of parties' stake distributions after a fixed epoch of $T$ blocks. We show that existing block reward functions (such as constant and decreasing rewards) have poor equitability. We introduce a new reward function, which we call geometric rewards, and prove that it is the most equitable block reward function. The negative effects of compounding, i.e., the unfair distribution of wealth, can be further mitigated by choosing initial system parameters judiciously: that is, by ensuring that the total block rewards disseminated in each epoch are small compared to the initial stake pool size.

Several open questions remain. First, our results assume that proposers do not add or remove stake in the middle of an epoch. Such stake dynamics are likely to affect the optimality of geometric rewards and complicate the computation of equitability. Although we can disallow the addition or removal of stake on short timescales (e.g., a day), systems that choose epochs on the order of years will need to deal with dynamic stake pools. Another challenge, which we discuss in Section 6, is that geometric rewards may not be desirable in practice because of the sharp changes in block rewards between epochs. A natural solution is to impose smoothness or monotonicity constraints on the class of reward functions; solving such an optimization is an interesting direction for future work. Finally, a substantial open problem is that of protecting against strategic players. Although strategic players are not specific to PoS systems or compounding, we show here that geometric rewards alone do not protect against strategic players. Designing incentive-compatible consensus protocols for strategic players is a major question in blockchain systems. Some papers that make progress on this front include Fruitchains [21] and Ouroboros Hydra [13]; both works propose reward and consensus mechanisms for which honest behavior is shown to be a $\delta$-approximate Nash equilibrium. As discussed in Section 1.1, the algorithms of these papers may inherently improve equitability by spreading block rewards over multiple parties. Formally analyzing those protocols through the lens of equitability is another direction for future research.

A Proof of Lemma 1

Let $e^{\theta_n} := S(n)/S(n-1)$ and $r(n) = S(n) - S(n-1)$. Conditioned on $v_{A,r}(n-1)$, the proposer at time $n$ is party $A$ with probability $v_{A,r}(n-1)$, so

$$\mathbb{E}\big[v_{A,r}(n)^2 \mid v_{A,r}(n-1)\big] = \frac{2e^{\theta_n} - 1}{e^{2\theta_n}}\, v_{A,r}(n-1)^2 + \frac{(e^{\theta_n} - 1)^2}{e^{2\theta_n}}\, v_{A,r}(n-1).$$

It follows, using $\mathbb{E}[v_{A,r}(n-1)] = v_A(0)$, that

$$\mathrm{Var}(v_{A,r}(n)) = \frac{2e^{\theta_n} - 1}{e^{2\theta_n}}\,\mathrm{Var}(v_{A,r}(n-1)) + \frac{(e^{\theta_n} - 1)^2}{e^{2\theta_n}}\, v_A(0)\,(1 - v_A(0)).$$

Hence, unrolling the recursion over $n \in [T]$ and using $(2e^{\theta}-1)/e^{2\theta} + (e^{\theta}-1)^2/e^{2\theta} = 1$, we obtain the claimed product form.

B Proof of Theorem 2

By the same logic as the proof of Theorem 1, and recalling that $e^{\theta_n} = S(n)/S(n-1)$ with $T_0 := 0$, the optimization of interest constrains the sum of the $\theta_n$ within each interval $[T_{i-1}+1, T_i]$, with the per-interval sums determined by the checkpoint rewards. This optimization problem is separable over the variables in different time intervals, so we can separately solve $k$ optimization problems, one per interval $i \in [k]$. Using the same KKT conditions as in Theorem 1, we get, with $\bar{R}_i := \sum_{j \leq i} R_j$ denoting the cumulative reward up to checkpoint $i$,

$$\theta^*_n = \frac{1}{T_i - T_{i-1}} \log\frac{1 + \bar{R}_i}{1 + \bar{R}_{i-1}},$$

which in turn implies that, for $n \in [T_{i-1}+1, T_i]$, the reward $r(n)$ follows the geometric schedule of Eq. (3) restricted to interval $i$.

C Proof of Theorem 3

We first represent the standard Pólya urn process using a binary tree of state evolution according to who won at each time step. Recall that the winner at each time step is elected with probability proportional to stake, which determines who gets the reward. We need the following notation for the proof. We denote the outcomes of the random winner drawings by, e.g., $W(0:2) = AAH$ if the winner at time 0 is the adversary (meaning that the adversary was elected leader and an adversarial block is generated), at time $t = 1$ the adversary, and at $t = 2$ the honest party. Under this event, we denote the fractional stake of the adversary by $\tilde{x}(AAH)$ and the total stake by $S(AAH)$. We use the term standard Pólya urn process to denote the process with constant reward $c$ at each time step. A strategic behavior consists of a union of the following actions. When elected leader at a certain time $t$, say $t = 1$ in the figure below, the adversary may decide to withhold its currently generated block, along with the reward. This withheld reward (and block) is awarded when the adversary either matches or overrides (based on the realization of the future winner elections). If matched, the reclaimed block also takes away one of the honest blocks (and the corresponding reward). If overridden, the reclaimed block may or may not take away one of the honest blocks. We represent the strategy of the adversary on a single withheld block (generated at time $t = 1$) using a binary decision tree of height $T$ (e.g., $T = 4$). We can encode any strategy of the adversary for when to reclaim the withheld reward on this binary tree. (We hide the part of the tree branching from $W(0) = H$, as it is not affected by the adversary under consideration.) For example, the adversary withholds a block if it is elected leader at time $t = 1$, which is encoded as the orange node. The adversary might choose to reclaim the reward (by publishing the side chain that includes the withheld block of interest) if the next winner is honest ($W(1) = H$), if the next two winners are adversarial and honest in that order ($W(1:2) = AH$), or if the next three winners are adversarial, adversarial, and honest in that order ($W(1:3) = AAH$). These are encoded on the binary tree as blue nodes.
Any binary tree in which every path from an orange node (block withheld) to a leaf contains at most a single blue node (block reclaimed) is a valid strategy, as a withheld reward can only be claimed once. We do not explicitly encode whether an honest block is taken away when an adversarial block is reclaimed, as this does not change the proof, as we show below. In the above example, the orange node at the event $W(0) = A$ encodes the strategy that a unit-$c$ reward is withheld if the adversary wins at time $t = 0$. Hence, the resulting stake at that node is $\tilde{x}(A) = \tilde{x}(0)$, as no reward is claimed, and $S(A) = S(0)$. The blue node at the event $AH$ denotes that the adversary reclaims the withheld reward if the next winner is the honest party. Under this event, the resulting stake at node $AH$ is $\tilde{x}(AH) = \tilde{x}(A) + c/S(A)$, as a $c$-unit reward is given to the adversary and a $c$-unit reward is taken from the honest party, and the total stake $S(AH) = S(A)$ remains unchanged.

The next lemma provides a set of operations on the colored binary tree that we can perform in order to turn it into a more stochastically dominant process. We give a proof in Appendix E.

Lemma 2. Given a representation of a random process with an adversarial strategy as a colored binary tree, each of the following operations results in a new random process that stochastically dominates the old one:
A.1. convert a white leaf node into a blue leaf node;
A.2. convert two blue leaf nodes that are siblings into one blue parent node with two white offspring;
A.3. convert two blue sibling nodes into one blue parent node with two white offspring; and
A.4. convert two blue offspring of an orange node into one parent node that is both blue and orange, with two white offspring.

Note that a blue-and-orange node denotes the combination of an orange node and a blue node, where one unit of reward is given to the adversary and one unit of reward is taken away from the honest party. Applying the above operations in the order A.1, A.2, A.3, A.4, we obtain a random urn process that stochastically dominates any adversarial behavior with a single reward withheld. From the preservation of stochastic dominance by the standard Pólya urn process (as shown in Lemma 4), we can then convert each white node in which the adversary is a winner into a blue-and-orange node, proceeding from top to bottom. The resulting process is exactly the AM-1 process, finishing the proof of stochastic dominance when only a single reward is withheld at time $t = 1$. When a single reward is withheld at time $t > 1$, we need a preservation of stochastic dominance for AM-1; the following lemma justifies the conversion of a white node into a blue-and-orange node when the descendant nodes follow AM-1. We provide a proof in Appendix G.

Lemma 3 (Preservation of stochastic dominance of the AM-1 process). Consider two AM-1 processes with the same initial total stake $S(0)$. One process has a random initial fractional stake $v_A(0)$ that is stochastically dominated by that of the other process, $v'_A(0)$. Then the final fractional stakes preserve the dominance, i.e., $v_A(T)$ is stochastically dominated by $v'_A(T)$.

In general, a strategic behavior consists of multiple rewards withheld at multiple nodes in the binary tree, each with its own strategy for being reclaimed in the future. Lemmas 4 and 3 ensure that the above argument for converting such a strategy into the AM-1 process still holds when multiple rewards are withheld.
This finishes the proof of the claim that AM-1 stochastically dominates any adversarial strategy.

D Proof of Theorem 4

The fact that AM-2 stochastically dominates AM-1 follows immediately from Lemma 3 and converting a binary tree representation of AM-1 into that of AM-2 from top to bottom. We omit this part of the proof, as it is straightforward.

E Proof of Lemma 2

We illustrate the proof using the example adversarial strategy from Appendix C.

A.1. Convert a white leaf node into a blue leaf node. As this change affects only one sample path, only one instance is affected, say $T = 4$ and $W(0:T-1) = AAAA$. The probability of this outcome does not change; only the fractional stake corresponding to this outcome changes, from $\tilde{x}(AAAA)$ to $\tilde{x}'(AAAA) = \tilde{x}(AAAA) + c/S(AAAA)$. As $c/S(AAAA) > 0$, the process after changing the white leaf node into a blue one is strictly stochastically dominant. After this conversion, the node $AAAA$ is blue, and we apply the next operation.

A.2. Convert two blue leaf nodes that are siblings into one blue parent node with two white offspring. Given that $\tilde{x}(AAAH) \leq \tilde{x}(AAAA)$ and $c/S(AAA) > 0$, the resulting process is stochastically dominant. After this conversion, the node $AAA$ is blue, and we apply the next operation.

A.3. Convert two blue sibling nodes into one blue parent node with two white offspring. This change affects many sample paths: all descendants of a single node (the parent of the two blue nodes of interest), say node $AA$, where nodes $AAA$ and $AAH$ are blue. We know from rule A.2 that, conditioned on the event $W(0:1) = AA$, $\tilde{x}'(2)$ stochastically dominates $\tilde{x}(2)$. The rest of the process follows the standard Pólya urn process; hence, the following lemma implies the desired claim. We provide a proof of this lemma in Appendix F.

Lemma 4 (Preservation of stochastic dominance of the standard Pólya urn process). Consider two standard Pólya urn processes with the same initial total stake $S(0)$ and the same constant reward $c$. One process has a random initial fractional stake $v_A(0)$ that is stochastically dominated by that of the other process, $v'_A(0)$. Then the final fractional stakes preserve the dominance, i.e., $v_A(T)$ is stochastically dominated by $v'_A(T)$.

After this conversion, the node $AA$ is blue, and we apply the next operation.

A.4. Convert two blue offspring of an orange node into one parent node that is both blue and orange, with two white offspring. Note that the stakes at time $t = 2$ remain unchanged by the conversion, i.e., $\tilde{x}(AA) = \tilde{x}'(AA)$ and $\tilde{x}(AH) = \tilde{x}'(AH)$; only the corresponding probabilities of the events change. Node $A$ is orange with, say, $\tilde{x}(A) = v_A(0)$, as the reward is withheld at time $t = 1$. Hence $P(AA \mid A) = v_A(0)$, whereas $P'(AA \mid A) = v_A(0) + c/S(A)$ after the conversion. It follows from the facts that $c/S(A) > 0$ and $\tilde{x}(AA) > \tilde{x}(AH)$ (and also Lemma 4) that the conversion results in a stochastically dominant process.

F Proof of Lemma 4

We prove the claim by recursion. Consider the following representation of the standard Pólya urn process: $v_A(t)$ denotes the fractional stake of party $A$ at time $t$, which starts at $v_A(0)$, and $S(t) = S(0) + ct$ denotes the total stake. First, we claim that if $v_A(0) < v'_A(0)$ deterministically, then $v_A(1) \overset{D}{\leq} v'_A(1)$. This follows from the fact that $v_A(1)$ takes the value $(v_A(0)S(0) + c)/S(1)$ with probability $v_A(0)$, and the value $v_A(0)S(0)/S(1)$ otherwise. For these two-valued discrete random variables, not only are both attained values larger for the latter process, but the probability mass on the larger of the two values is also higher for the latter process. Hence $v_A(1) \overset{D}{\leq} v'_A(1)$. Note that assuming only stochastic dominance $v_A(0) \overset{D}{\leq} v'_A(0)$ leads to the same conclusion.
Hence, we can recursively apply the above result to prove the desired lemma.

G Proof of Lemma 3

Consider the following representation of the AM-1 process. Let $v_A(t)$ denote the fractional stake of party $A$ at time $t$, starting at $v_A(0)$, and let $S(t)$ denote the total stake at time $t$, starting at $S(0)$. First, we establish the analogous single-step dominance claim for the AM-1 update. The rest of the proof follows similarly to the proof of Lemma 4 in Section F.