Easy-Made Setup for High-Temperature (Up to 1100 °C) Electrochemical Impedance Spectroscopy

In the following communication, we report an easy-to-assemble Swagelok-like setup for high-temperature electrochemical impedance spectroscopy with good reproducibility, based on robust 1.4841 steel M10 screws joined by a non-conductive ceramic (Al2O3). We analyze sample materials for electrochemical merits (activation energy, charge-carrier density and flatband potential) of well-known standards such as yttria-stabilized zirconia with 8 mol.% Y2O3 (8YSZ), CeO2 and In2O3. The materials' data are compared with literature data acquired on a standard impedance analyzer in conventional commercial high-temperature cells. The symmetrical cell consists of an insulating body (Al2O3 nut) and two steel contacts, connected by PtRh wires whose thermal resistance tolerates temperatures up to 2300 °C. Our high-temperature electrochemical setup withstands temperatures up to 1100 °C and can be easily and mildly cleaned for repeated use. In addition, we present a methodology for producing high-temperature sintered 8YSZ ceramics and evaluate them with our setup. We analyze the internal resistances within the setup, propose a simplified option for introducing various gas atmospheres into the sample's interior, and evaluate the use of a tube furnace for simplicity. We perform equivalent-circuit fitting and present an easy-to-implement approach for reliable high-temperature electrochemistry.

One of the most interesting examples of high-temperature electrochemistry involves dielectric spectroscopy, in particular electrochemical impedance spectroscopy (EIS). EIS is a technique able to non-destructively investigate crucial material parameters at high temperature, such as conductivity, viscosity (acoustic impedance spectroscopy), the activation energy of oxygen vacancies in, e.g., λ-oxygen sensors, the resistance of ionic transport within, e.g., porous ceramics (MacMullin number for solid-state electrolytes), or even the evaluation of charge-carrier density in photovoltaic materials (Mott-Schottky junctions) (Ref 13-16). The current development of high-temperature electrochemistry therefore makes EIS a lucrative and cost-effective method for evaluating a material's behavior at elevated temperatures. The main drawback of applying EIS at both laboratory and pilot scale is the need for a reliable system that can acquire the desired spectra over a dedicated span of alternating current (AC) frequencies (Ref 17, 18). The main parts of such a system are the signal analyzer (a galvanostat for galvanostatic electrochemical impedance spectroscopy (GEIS) and a potentiostat for potentio-electrochemical impedance spectroscopy (PEIS)); most modern instruments (Gamry, BioLogic, Princeton Instruments, CHI, Metrohm) combine both functions in one setup. Another crucial part is obviously the cell containing the material for analysis, which is connected to the signal analyzer (Ref 19-22). The cell needs to be chemically and temperature resistant, and should not, or only minimally and in a controlled fashion, contribute to the internal resistance of the measurement or to the iR drop (Ref 23-25). It should also be reliable, cost-effective and easy to assemble for multiple analyses of, e.g., batches during pilot- or large-scale production (Ref 26).

Fig. 1 Experimental cell setup based on metric M10 1.4841 high-temperature-resistant screws with an Al2O3 nut (96% pure, 30% apparent porosity for gas permeation). The thickness of the PtRh wire is exaggerated for clarity. The cell was placed inside the tube furnace with two inlets for feeding the PtRh wires through (Carbolite, operating temperature up to 1200 °C). The crocodile clips ensure connection to the potentiostat outside the tube furnace (Figure S1, Supporting Information). The cell can be operated up to 1100 °C and is limited by the 1.4841 heat-resistant screws.

In order to fulfill those requirements, we propose an affordable, yet reliable and easy-to-assemble setup for high-temperature (up to 1100 °C) electrochemistry with particular emphasis on EIS. The analysis cell depicted in Fig. 1 consists of three major parts: 1.4841 steel M10 screws with flat electrical contacts, a ceramic non-conductive alumina nut, and high-temperature-resistant, conductive wire connections (PtRh) point-welded to the 1.4841 steel screws by an arc discharge (300 V). The high porosity of the alumina nut allows the gas atmosphere to be varied (e.g., vacuum, synthetic air, argon, neon). The material of interest is placed within a nut of well-defined geometry and is brought into contact with the 1.4841 high-temperature-resistant steel from both sides. This avoids the need to deposit a conductive film by DC sputtering or physical vapor deposition (PVD) on, e.g., ceramic materials in order to create electrical contact, which is often the case in laboratory-scale research. The so-prepared Swagelok-type cell may then be placed in a tube furnace with the desired gas type and flow (Fig. 2, inset A), which allows electrochemical analysis to be performed. This contribution investigates the electrochemical merits obtained on standard materials (8 mol.% yttria-stabilized zirconium oxide, 8YSZ, an oxygen gas sensor material, and catalytic RedOx-active oxides such as CeO2 and In2O3), as well as research-type sintered 8YSZ ceramics. The obtained results are compared with the literature, and the potential of the simplified Swagelok-type cell approach is discussed.

Materials and Methods

Electrochemical measurements of staircase potentio-electrochemical impedance spectroscopy (SPEIS) were acquired with a BioLogic VSP potentiostat (France) equipped with PC-controlled EC-Lab software (version 11.33) and were performed in single-sine mode in a 1 MHz to 1 mHz frequency range. Mott-Schottky analysis was acquired in the −1.5 to +1.5 V range versus open circuit potential (OCP) with 1500 potential steps (dE = 2 mV), 10 mV sine amplitude and 6 points per decade of frequency in logarithmic spacing, with 2 measurements per set used for signal averaging. Each SPEIS spectrum was validated by the Kramers-Kronig relations and numerically fitted by the Z-fit procedure, with details given in the text. The custom-made standardized M10 screw threads were prepared according to ISO 10642 out of steel type 1.4841 (temperature inertness up to 1150 °C in air under standard conditions). PtRh wires (250 µm thick, as confirmed by digital optical microscopy, of 99.9% purity, as confirmed by X-ray photoelectron spectroscopy, each 15 cm long) were point-welded to the screws in house by an arc discharge. The PtRh contribution to the Nyquist plots was neutralized by the iR-drop determination, as only the electrode resistance was affected during the measurement. Standard alligator clips were placed at the ends of the wires for connecting the signals to the potentiostat.
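As an illustration of the kind of equivalent-circuit fitting performed by the Z-fit procedure, the sketch below fits a synthetic impedance spectrum to a simple Rs + (Rct || CPE) circuit in Python. The circuit topology, parameter values and modulus weighting are our assumptions for demonstration only, not the exact model used in this work.

```python
import numpy as np
from scipy.optimize import least_squares

# Assumed equivalent circuit: series resistance Rs in series with a
# parallel combination of a resistance Rct and a constant-phase
# element (CPE) -- a common model for ionic conductors such as 8YSZ.
def z_model(params, freq):
    rs, rct, q, alpha = params
    omega = 2.0 * np.pi * freq
    z_cpe = 1.0 / (q * (1j * omega) ** alpha)      # CPE impedance
    return rs + rct * z_cpe / (rct + z_cpe)        # Rs + (Rct || CPE)

def residuals(params, freq, z_meas):
    z = z_model(params, freq)
    w = np.abs(z_meas)                             # modulus weighting
    return np.concatenate([(z.real - z_meas.real) / w,
                           (z.imag - z_meas.imag) / w])

# Synthetic "measurement": 1 MHz to 1 mHz, 6 points per decade, 1% noise
freq = np.logspace(6, -3, 55)
rng = np.random.default_rng(0)
z_meas = z_model([10.0, 500.0, 1e-6, 0.9], freq)
z_meas *= 1.0 + 0.01 * rng.standard_normal(freq.size)

fit = least_squares(residuals, x0=[1.0, 100.0, 1e-7, 0.8],
                    args=(freq, z_meas),
                    bounds=([0.0, 0.0, 0.0, 0.0],
                            [np.inf, np.inf, np.inf, 1.0]))
print("Rs, Rct, Q, alpha =", fit.x)
```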
The internal resistance of the PtRh wire was established as less than 0.1 mΩ and is therefore excluded from the calculation of the electrochemical merits. The conductivity was calibrated using alumel (99.9% purity, in-house source). The dihedral-width M10 nuts were purchased from Misumi, Japan (CA/NN-M10); they are composed of 96% alumina (with the remaining 4% calcium oxide impurities, as established by X-ray photoelectron spectroscopy in house) and have 30% apparent porosity. The tube furnace from Carbolite Furnaces Ltd (Type MTF 12/25A) with an internal controller was employed (Figure S1, Supporting Information). The temperature within the nut was calibrated with the cell open (one side left unclosed by the steel screw), 45 minutes after reaching the measurement temperature, by a K-type thermocouple with ±1.1 °C measurement uncertainty and a 35 µV/K Seebeck coefficient, to which all temperature measurements in the text are referred (Fig. 2). CeO2, In2O3 and the 8YSZ powders were purchased from Sigma-Aldrich/Merck, Germany. The sintered 8YSZ pellet was prepared by pressing 100 mg of material in a 10-mm piston under 10 tons of hydraulic pressure and subsequent sintering at 1000 °C for 24 hours under atmospheric air. The commercial 8YSZ pellet fitting the M10 metric screw was purchased from Keramik GmbH, Germany. The gas flow into the cell was introduced by inserting the inlets and outlets into the Carbolite tube furnace (Figure S1). The gas flow in sccm was controlled by a home-built Hg manometer. [Table 1 footnotes: 5, at 560 °C, pellet; 6, data not reported in the literature.] The point welding was performed by a direct current (DC) arc discharge at 300 V, without the use of any external metals.

Results and Discussion

We present the design of the measurement cell, the calibration of the tube furnace temperature, and results for the impedance characterization of the materials 8YSZ, CeO2, In2O3 and self-sintered experimental 8YSZ. We estimate the activation energies, flatband potentials and internal resistances of the cell based on equivalent-circuit analysis according to the standardized Z-fit numerical procedure. We present experimental measurement data up to 750 °C and estimate the capability of measurements up to 1100 °C in order to evaluate the cell stability.

Cell Design and its Validation

In the first step, the cell was assembled according to Fig. 1. PtRh wires were attached to crocodile clips, and the measurement cell setup can be compared to the 2-probe measurement of a typical 2-electrode cell. The only contribution to the conductivity is the material under investigation, as Al2O3 is not conductive even at elevated temperatures. Any perturbation arising from the wiring was removed by using PtRh wire with minimal self-resistance. The quality of the PtRh was carefully evaluated by means of XPS. The thermal resistance up to 2300 °C and the excellent electrical conductivity of this alloy allowed for undisturbed measurements. The relatively large porosity of the alumina nut allowed various gaseous environments to be introduced into the interior of the investigation area. An example of such a measurement under 20 sccm of oxygen flow at elevated temperatures is presented in Fig. 4. The underlying defect equilibrium can be written in Kröger-Vink notation as

O_O^x ⇌ ½ O2(g) + V_O^•• + 2e′

Here, the ′ denotes a negative charge, while • is a positive charge; V_O is a vacancy replacing oxygen, and x stands for charge neutrality.
The presence of oxygen vacancies is strongly suggested by the activation energy, which was calculated to be 89.01 kJ/mol (0.92 eV) for the synthesized 8YSZ, 83.21 kJ/mol (0.86 eV) for the commercial 8YSZ ceramic ring (Keramik GmbH, Germany), 91.66 kJ/mol (0.92 eV) for CeO2 and 136.98 kJ/mol (1.38 eV) for In2O3 (Table 1). Owing to the various concentrations of oxygen vacancies, which were additionally investigated by means of staircase potentio-electrochemical impedance spectroscopy (SPEIS) and the Mott-Schottky equation (Eq 2), the value of the activation energy was established to depend on the chemical environment (Ref 27). The results of the activation energy, charge-carrier concentration and flatband potential calculations are gathered in Table 1. We note that the dielectric constants needed for calculating the concentrations of active species were computed according to procedures reported in the literature; in our case, we restrict ourselves to fixed temperatures with the smallest values of resistance, as judged by a Z-fit of the equivalent circuits (Ref 28, 29).

Figure 2 shows a representative plot of the Al2O3 nut interior temperature calibration within the opened measurement cell placed in the tube furnace. Owing to the uniform thermal convection within the tube furnace, the nut interior measured with one side open (i.e., with one screw left out) proved the accuracy of the temperature readout of the furnace display and of the K-type thermocouple placed within the cell's interior.

Measurements of Standards

In order to assess the applicability of the simple measurement cell in Fig. 1, we performed a temperature-dependent determination of oxygen vacancies within CeO2, In2O3, commercial 8YSZ and self-made 8YSZ under various gas atmospheres, by applying the Arrhenius equation

σ = A exp(−E_a / (R T))    (Eq 1)

and the Mott-Schottky equation

1/C² = [2 / (e ε ε₀ A² N_D)] (V − V_FB − k_B T / e)    (Eq 2)

where σ is the conductivity, A the Arrhenius constant, E_a the activation energy, R the universal gas constant, T the temperature, C the capacitance, ε and ε₀ the dielectric constant and the vacuum permittivity, k_B the Boltzmann constant, A the electrode surface area, e the elementary charge (with the number of transferred electrons assumed as 1), N_D the charge-carrier density (in cm⁻³), V the measured potential against the normal hydrogen electrode (NHE, in volt), and V_FB the flatband potential (in volt). According to the hopping mechanism of conductivity proposed in the literature, which depends on the amount of available and addressable (at the given temperature) oxygen vacancies, their saturation by gas flow leads to differences in the impedance spectra and is a well-known phenomenon (Ref 30). Our approach was to utilize the Al2O3 porosity in order to observe the effect of oxygen saturation. The experimental results for oxygen flow (20 sccm) are shown in Fig. 4, insets A/B. The measurements of the commercial standards are in agreement with the literature (Ref 31-35). The equivalent-circuit modeling is shown as an example in Fig. 3, inset A, as a magnified cut-out from the cumulative assembly of EIS spectra in inset B. The spectra show good agreement with the theoretically obtainable results (error analysis of the Z-fit, χ²/|Z| = 0.35), as judged by the 6.7% error yielded by the Kramers-Kronig (KK) relations. The KK analysis represents the mathematical dependence between the imaginary and real parts of the impedance, therefore proving the application potential of the proposed high-temperature measurement system (Ref 36).
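To make the use of Eq 1 and Eq 2 concrete, a minimal sketch of the two extraction steps is given below; the conductivity values, permittivity, electrode area and Mott-Schottky slope are placeholders, not data from this work.

```python
import numpy as np

R = 8.314                       # gas constant, J mol^-1 K^-1
EPS0 = 8.8541878128e-12         # vacuum permittivity, F m^-1
E_CHARGE = 1.602176634e-19      # elementary charge, C

# --- Eq 1 (Arrhenius): sigma = A * exp(-Ea / (R T)) ---
# Placeholder conductivities (S/m) at a few temperatures (K).
T = np.array([573.0, 673.0, 773.0, 873.0])
sigma = np.array([1e-5, 1.2e-4, 8e-4, 4e-3])
slope, _ = np.polyfit(1.0 / T, np.log(sigma), 1)
Ea = -slope * R                 # J/mol
print(f"Ea = {Ea/1000:.1f} kJ/mol ({Ea/96485:.2f} eV)")

# --- Eq 2 (Mott-Schottky): slope of 1/C^2 vs V gives ND ---
eps = 25.0                      # assumed relative permittivity
area = 0.785e-4                 # electrode area, m^2 (10-mm pellet)
V = np.linspace(0.0, 0.5, 20)
inv_C2 = 2e10 * (V + 0.3)       # placeholder linear MS region, F^-2
ms_slope = np.polyfit(V, inv_C2, 1)[0]
ND = 2.0 / (E_CHARGE * eps * EPS0 * area**2 * ms_slope)   # m^-3
print(f"ND = {ND * 1e-6:.2e} cm^-3")
```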
The conductivity of CeO2 and In2O3, governed by the hopping mechanism, was evaluated and found to compare well with literature data (Ref 38, 39). We observed that the accessibility of oxygen vacancies varies with temperature, but considered a temperature of 350 °C for both CeO2 and In2O3 in order to compare the charge-carrier density with literature values (Ref 31-35). The respective Arrhenius plots for CeO2, In2O3 and 8YSZ (both synthesized and commercial) can be found in the Supporting Information. The substitution of Ce⁴⁺ in ceria and In³⁺ in indium oxide by foreign elements, due to implantation or the respective sample preparation (e.g., electrospinning), results in an increased amount of oxygen vacancies, which are treated as the main source of charge carriers (Ref 40). Many of the vacancies will nevertheless be intrinsically built into the structure as point defects, which consequently leads to an improved conductivity (Ref 41). The different accessibility of oxygen vacancies translates into different charge-carrier densities and is not to be attributed to structural changes (ceria, e.g., exhibits the fluorite Fm-3m space group up to its melting point at 2477 °C), but rather to the bulk activation energy needed per mole of the material under investigation (Ref 42, 43). For example, the charge-carrier density of In2O3 can vary, as it depends on the amount of reduced material formed by the thermal RedOx reduction reaction 2In2O3 → 4In⁰ + 3O2, meaning that the material's chemistry is part of a large set of parameters influencing the conductivity. In the case of oxide materials, this may vary from batch to batch (Ref 43, 44).

Measurement of Experimental Research Samples

In order to investigate the potential of the proposed setup, we performed a real-run test on research-type prepared samples. The 8YSZ powder was sintered under air for 24 hours at 1000 °C after pelletization, as described in the Materials and Methods section. The translucent ceramic pellet was carefully placed inside the measurement cell and brought into the tube furnace. We observed a much higher temperature activity of the sample at 660 °C, manifested by the development of a semicircle in the Nyquist plot and therefore an observable temperature-dependent conductivity. The results, aimed at comparing the merits obtained with our cell to literature values, are gathered in Table 1. Figure 4 shows the applicability of the cell for measurements during the respective gas treatments. The spectra in insets A and B show different temperatures; only in the range of 560-566 °C was the peculiar generation of a second semicircle observed, which was attributed to the phase transition from the monoclinic to the tetragonal 8YSZ system (Ref 54). Interestingly, this phase-transition behavior was observed only under oxygen atmosphere. Further discussion of this effect is nevertheless outside the scope of this article, and we refer the reader to the comprehensive review of the matter in the literature (Ref 54). The peculiarity of the interaction between a ceramic solid and oxygen gas is still a matter of research, as conductivity (both electrical and ionic) measurements at higher temperatures are considered challenging (Ref 45).
The dependence of the 8YSZ conductivity on the gas atmosphere (specified in this study by a gas pressure of 1 atm and a flow of 20 sccm), and especially on the saturation of oxygen vacancies, is manifested by the presence of charges in the bulk of the material, which causes the creation of holes within CeO2 and In2O3. The exact character of the holes and the creation of localized Frenkel exciton states within 8YSZ, as well as their exact position and contribution to the final states of the Bloch conduction band, are not fully understood yet and are currently a matter of intensive computational studies (Ref 46-49). In the case of the data presented in Table 1, the conductivity of 8YSZ varies strongly with the atmosphere, while the easily obtainable flatband position relative to the conduction band may provide additional merits explaining the accessibility of excitons and their influence on the conductivity (Ref 50). More careful temperature control should provide better insight into the quantum properties of the ceramics under investigation within our system.

Conclusions

In this study, we have presented an easy method for reliable high-temperature impedance spectroscopy measurements in a cell of well-defined geometry, easy availability of the assembly materials and high durability. The cell consists of 1.4841 M10 steel screws with temperature resistance up to 1150 °C, alumina (96% Al2O3) and PtRh connectors. We have analyzed electrochemical merits such as the activation energy of oxygen vacancies in ceria, In2O3 and yttria-stabilized zirconium oxide, their charge-carrier concentration and flatband potential. We have established that the charge-carrier concentration varies with temperature, which we relate to the activation energy needed to successfully activate the active centers, and we have discussed the availability of Frenkel excitons potentially contributing to the conductivity. The flatband potential was referred to the position of the Bloch conduction band in order to demonstrate the possibility of investigating the quantum electronic structure of both powders and pellets. After characterizing the commercial standards, we demonstrated the possibility of investigating research-grade materials, such as self-prepared 8YSZ. We have also shown the influence of the gas atmosphere on the impedance measurement, enabled by the large (30%) porosity of the alumina used as the powder/pellet vessel. In conclusion, our approach can be used in any tube furnace of adequate size, and our standardized cell design makes high-temperature impedance measurements a straightforward, reproducible and highly accurate task.

Funding Open Access funding enabled and organized by Projekt DEAL. We acknowledge the funding provided by Deutsche Forschungsgemeinschaft (DFG, HA 6128/6-1).

Conflict of interest The authors declare no conflict of interest.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material.
If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Health Systems Approach to Ensure Quality and Safety Amid COVID-19 Pandemic in Pakistan

Ensuring quality and safe care during the coronavirus disease 2019 (COVID-19) pandemic offers a challenge to already strained health systems in low- and middle-income countries (LMICs), such as Pakistan, with less shock-absorbing capacity. There is a dearth of evidence on mechanisms to provide optimum-quality care to COVID-19 patients in a resource-constrained healthcare environment. The lessons learned from the Ebola virus outbreak about deficient health systems and quality improvement are considered in order to propose strengthening the health systems response to deliver quality-assured care to patients during the current pandemic. In this regard, the World Health Organization (WHO) health systems framework can serve as a guiding principle towards providing quality-assured and safe healthcare services during the ongoing pandemic in Pakistan by ensuring the availability of an adequate workforce, medical supplies and equipment, strong governance, an active information system, and adequate health financing to effectively manage COVID-19. Research evidence is needed to be better prepared for an effective and coordinated health systems response to offer quality and safe care to patients.

The COVID-19 pandemic is sweeping across borders, bringing the death toll to 1,453,355 with over 62 million infected cases (as of 30th Nov 2020).1 As evidence keeps evolving around the new severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), people with comorbidities are considered to be at increasingly high risk of contracting COVID-19 with adverse health outcomes. The realities of the poor shock-absorbing capacity of health systems across developed and poor nations alike are coming to the surface. Since the pandemic took hold, scientific evidence has kept flooding in on its epidemiology, infectious disease modelling, sub-specialty clinical care, pharmacologic interventions, co-infections with COVID-19, and its clinical management. Furthermore, the psychosocial health effects of the COVID-19 pandemic on frontline healthcare providers are also increasingly being recognized. A gap remains in the evidence related to healthcare system components in offering quality care to patients during the ongoing pandemic. In the wake of the second spell of the COVID-19 pandemic, it is important to investigate what mechanisms are in place to provide good-quality and safe care to COVID-19 patients in the resource-constrained healthcare environments of low- and middle-income countries (LMICs), such as Pakistan. Like other countries across the globe, the second wave of COVID-19 has also hit Pakistan, a country that spends abysmally little on health (2.89% of GDP).2 Total cases in the country have reached 398,024, of which 86% (341,423) have recovered.3 The death toll stands at 8,025 (as of 30th Nov 2020).3 With an increased positivity rate of 6.9%,3 hospitals are again getting overwhelmed with the treatment and management of COVID-19 patients. The pandemic has increased demands for the health workforce worldwide and thus calls for identifying the gaps in workforce shortages, especially among frontline workers in Pakistan, as the country remains on the list of countries with severe health workforce crises.4
In Pakistan, the density of health workers (doctors, nurses, and midwives) is 1.45 per 1,000 population (for 2017),5 not meeting the WHO minimum required health-worker density (SDG threshold index) of 4.45 doctors, nurses, and midwives per 1,000 population.6 Also, the demand for critical care continues to grow as the country tries to cope with the surge in COVID-19 cases. With no global benchmark for the number of hospital beds in relation to population,7 the functional critical-care bed capacity in Pakistan is 0.71 per 100,000 population, lower than in neighboring countries.8 Findings from a nationwide survey, held during 2017-2018 in 157 intensive care units (ICUs) across the country, revealed that 54% of the ICUs had a one-to-one nurse-to-bed ratio for ventilated patients.8 The availability of a one-to-one nurse-to-bed ratio ranged from 56.5% in Punjab to 0% in Azad Jammu and Kashmir. A ventilator-to-bed ratio of 1:1 was observed in 52% of the ICUs, with variations reported across the provinces.8 Furthermore, gaps in critical-care-trained staff have also been reported.8 In the context of the current pandemic, there is a need to identify workforce gaps, along with training needs and infrastructure for critical care, across all hospitals in Pakistan to adequately manage COVID-19 patients. Reliable and timely information is an essential foundation of the healthcare system. In Pakistan, health information is assembled at the district level from primary and secondary health facilities, outreach services, and vertical programs. The performance of the district health information system (DHIS) is often reported to be hampered by inadequate feedback on reports, low utilization of DHIS reports in planning, insufficient availability of DHIS tools, and gaps in the training of staff handling the DHIS.9 While tackling the pandemic, it is encouraging that COVID-19 dashboards have been created at the national and provincial health departments to provide timely information. With the current challenges in the healthcare system in Pakistan, all-time-low satisfaction is reported among patients using public-sector health facilities.10,11 Epidemics and pandemics weaken health systems' capacity at large, especially in resource-constrained countries such as Pakistan with low investment in the health sector. The lessons learned from the Ebola virus disease (EVD) outbreak about deficient health systems and quality improvement measures must be considered to strengthen health systems towards delivering quality-assured care to patients in need during the current pandemic in Pakistan. Quality improvement (QI) in health is the "pursuit of continuous performance improvement".12 The notion of quality care becomes paramount in the context of exhausted health systems in times of such global health crises. The expectations of patients and family members are presumed to be higher during a crisis situation, as they look to healthcare professionals, the available services, supplies, and other standard-of-care protocols to improve well-being and save lives. The human, material, financial, and logistic deficiencies that surfaced with EVD remain the same to date, as identified with the COVID-19 pandemic.13 Undoubtedly, ensuring quality and safe care during this ongoing pandemic presents a great challenge to the already strained health systems in Pakistan, with their limited shock-absorbing capacity.
Dealing with an outbreak primarily requires skills in virology, serology, intensive care, and other related disciplines. It also demands a whole-systems approach towards virus containment efforts and providing quality care to infected patients. Although countries have stepped up to re-organize processes and improve standards of care amid COVID-19,14 little evidence is available on how the new processes and standards have contributed to the quality of care. Given the surge of COVID-19 in Pakistan, we propose that the WHO health systems framework be used by planners and hospital managers while keeping quality measures at the centre of their activities (Table I). This framework has been adapted from the quality improvement framework proposed for the EVD outbreak to strengthen the health systems response to deliver quality care.13 The proposed health systems framework (Table I) can serve as a guiding principle to ensure quality and safe care to COVID-19 patients not just in Pakistan, but across LMICs, where issues with the quality and safety of care are most commonly documented.15 The framework presents key indicators across the health systems building blocks in the areas addressed in the earlier section and beyond, to manage the COVID-19 pandemic in Pakistan. Weaknesses in any of these building blocks may lead to poor-quality and unsafe care. In conclusion, achieving a better understanding of the indicators while using the WHO health systems approach will help planners and managers to be prepared and to overcome barriers in providing care to patients. Once information on the proposed indicators is collected, it will help in navigating how to strengthen the health systems response in fighting the COVID-19 pandemic in Pakistan. This uncertain time demands adequate planning, strong governance, adequate human, financial and material resources, standard service-delivery measures, and up-to-date information systems in virus containment efforts. While efforts are underway to cope with the global health emergency in Pakistan, we need robust research evidence to be better prepared for an effective and coordinated health systems response to offer quality and safe care to patients.
Changes in Mid-Depth Water Mass Ventilation in the Japan Sea Deduced From Long-Term Spatiotemporal Variations of Warming Trends

The influence of global warming on mid-depth water mass ventilation in the Japan Sea was investigated using both Argo-based and ship-based hydrographic datasets. The Argo-based dataset covering the entire Japan Sea revealed a warming trend during the past two decades in the upper portion of the Japan Sea Proper Water (UJSPW), which lies at intermediate depths from just under the main thermocline to approximately 1000 m. The warming rates in the southern Japan Sea are generally greater than those in the northern sea by a factor of 2-3. Long-term hydrographic data obtained over the last five decades in the northeast and southeast of the sea revealed that the higher warming rates in the southern sea began in 2008, although significant warming in both the northern and southern seas was initiated in the late 1980s. A stagnation in the UJSPW formation from the late 1980s is suggested by a positive shift in the winter sea surface temperature in its formation region and a decreasing trend in dissolved oxygen concentration during the 1990s. In addition, a vertical multi-box model demonstrated that an imbalance between the heating from the upper layer and the cold water supply from its source region induces warming in the UJSPW. We conclude that a significant change in the mid-depth water mass ventilation occurred in the entire Japan Sea in the late 1980s due to a stagnation in the UJSPW formation. Subsequently, a more modest event in the mid-depth water mass ventilation has occurred since 2008. The higher warming rates in the southern sea than in the northern sea during this event suggest a reduction in the cold UJSPW supply to the southern sea from its formation region.

INTRODUCTION

Oceanic thermohaline circulation, an important part of the global climate system, is believed to be significantly influenced by global warming. The freshening of mid-depth water masses in the Pacific and Indian Oceans and the warming of bottom water in the subarctic North Pacific have been considered evidence of the influence of global warming (Wong et al., 1999; Fukasawa et al., 2004). However, little is known about how thermohaline circulation responds to global warming and about the signals that can be detected in the oceans. The Japan Sea is a mid-latitude marginal sea loosely separated from the northwestern North Pacific by the Japanese archipelago (Figure 1). One of the unique features of the Japan Sea is a self-contained thermohaline circulation system that includes deep water formation. The thermohaline circulation system in the Japan Sea is sensitive to global warming and climate change owing to its limited size (Bindoff et al., 2007). In fact, gradual warming and a depletion in the dissolved oxygen (DO) concentration have been reported from the abyssal Japan Sea, suggesting a stagnation in the deep water formation (Gamo et al., 1986; Minami et al., 1999; Gamo, 2011). Therefore, the Japan Sea can be regarded as a "canary in a coal mine" for climate change impacts in the global oceans (Gamo, 2011). A simple view of the thermohaline circulation in the Japan Sea is as follows. The layer under the main thermocline of the Japan Sea is occupied by a water mass called the Japan Sea Proper Water (JSPW), which was previously considered a single water mass because of its narrow temperature (0-1 °C) and salinity (34.06-34.08) ranges (Uda, 1934; Worthington, 1981).
The JSPW is formed by deep convection due to surface cooling in winter in the northwestern Japan Sea (Sudo, 1986; Senjyu and Sudo, 1993, 1994; Seung and Yoon, 1995; Senjyu et al., 2002). The cold and oxygen-rich characteristics (>190 µmol kg⁻¹) of the JSPW, as well as its narrow temperature and salinity ranges, originate from the convective formation process (Senjyu, 2020). Because of the shallow sill depths of the four straits connecting the Japan Sea with other waters, the JSPW is perfectly isolated from the surrounding seas. Hence, the JSPW sunk into the deep layer in the northwestern region gradually upwells during the course of the cyclonic circulation in the abyssal Japan Sea basins (Senjyu et al., 2005b; Senjyu and Yoshida, 2019).

FIGURE 1 | Bottom topography of the Japan Sea. Red squares labeled JW, JE, YB, and TB indicate the areas for the Argo data analysis representing the western Japan, eastern Japan, Yamato, and Tsushima Basins, respectively. Two yellow circles labeled H-4 and PM5 show the long-term hydrographic stations by the JMA. The location of Vladivostok is indicated by a blue circle.

Close examinations of water characteristics have revealed that the JSPW consists of several water masses (e.g., Nitani, 1972; Gamo and Horibe, 1983; Senjyu and Sudo, 1993, 1994, 1996; Kim et al., 2004). Circulation and ventilation systems corresponding to each water mass have been suggested in accordance with this layered water mass system (Hatta and Zhang, 2006). In this study, we implicitly assume three layered water masses: the upper portion of the Japan Sea Proper Water (UJSPW), from just under the main thermocline to approximately 1000 m (Sudo, 1986; Senjyu and Sudo, 1993, 1994); the deep water, lying at approximately 1000-2000 m (Nitani, 1972; Kim et al., 2004); and the bottom water, below approximately 2000 m (Gamo and Horibe, 1983; Gamo et al., 1986). This study discusses the recent influence of global warming on the mid-depth water mass in the Japan Sea (corresponding to the UJSPW) based on hydrographic datasets obtained by Argo floats and research vessels. The UJSPW has been defined as the water mass in the potential density (PD) range 27.31-27.34, with a core density of 27.32, based on the hydrographic data obtained before 1985 (Sudo, 1986; Senjyu and Sudo, 1994). Although the UJSPW is distributed over the entire Japan Sea, the water mass north of the subarctic front is characterized by relatively weak stratification (a pycnostad) with high DO concentration (Senjyu and Sudo, 1993). Since this is a remnant of the deep convection at the UJSPW formation, remarkable signals reflecting global warming are anticipated in its long-term variations. The Argo float is a robotic instrument drifting with the currents at mid-depth and measuring temperature and salinity between a parking depth and the sea surface via periodic movement up and down. As shown in the next section, Argo float data are available over the entire Japan Sea, including regions where frequent observations by research vessels are difficult to conduct, such as North Korean territory and the area south of Peter the Great Bay off Vladivostok in winter, which are potentially important for JSPW formation. Although the Argo float datasets provide oceanographic conditions over a wide area of the Japan Sea, their temporal coverage spans only just over 20 years, as the international Argo Project began in 2000.
For this reason, Argo float data have mainly been utilized to detect the mid-depth circulation pattern in the Japan Sea (e.g., Park et al., 2004, 2010; Choi and Yoon, 2010; Kang et al., 2016). To compensate for the temporal coverage of the Argo data, we analyzed long-term (spanning the last five decades) ship-based hydrographic data obtained in the northeastern and southeastern Japan Sea. The analyses of these complementary datasets revealed changes in the mid-depth water mass ventilation over wide areas of the Japan Sea that occurred in recent decades.

DATA AND METHOD

We analyzed the Advanced Automatic Quality Checked Argo data (AQC Argo data) version 1.2a offered by the Japan Agency for Marine-Earth Science and Technology (JAMSTEC), which provides temperature and salinity profiles with more advanced automatic checks than the real-time quality-controlled data from the Global Data Assembly Center. In total, 20610 temperature and salinity profiles were registered in the Japan Sea area for the period from October 2001 to December 2019. The typical sampling intervals in a profile were 10-30 m. Argo floats in the world's oceans typically park at a depth of 1000 m and measure temperature and salinity between 2000 m and the sea surface by temporarily sinking every 10 days. However, most Argo floats in the Japan Sea have been set to stay at 700-800 m and measure temperature and salinity between the parking depth and the sea surface every 7-10 days. Therefore, the amount of data decreases from more than 200000 points in the upper 200 m to less than 40000 points below a depth of 700 m (Figure 2). However, data from the upper 700 m are sufficient for investigating the oceanic conditions in the UJSPW. First, a screening of the profiles was performed with the criterion that the maximum observation depth was deeper than 500 m and the number of observation layers was greater than 10. Each profile was interpolated every 10 m using the Akima method (Akima, 1970), based on the data for which all the quality flags of pressure, temperature, and salinity labeled by JAMSTEC were normal. Nevertheless, abnormally low or high values were often found in the salinity of the JSPW, although temperature measurements were generally within a reasonable range. It has been reported that the long-term variation in salinity at a depth range of 300-1000 m is small (+0.06 per 100 years) (Kwon et al., 2004). Indeed, ship-based salinity measurements at Stations H-4 and PM5 in the northeastern and southeastern Japan Sea (Figure 1), respectively, showed relatively stable values around 34.07 in the UJSPW since the 1970s, particularly after 1993, when observations with a CTD (conductivity-temperature-depth profiler) were initiated (not shown). Therefore, we applied a simple correction to the salinity in the AQC Argo data, assuming that the salinity in the UJSPW was almost constant during the analysis period. The difference between the reference salinity (34.070) and the observed salinity at the bottom layer of each profile (S_b) was evaluated, and this difference was added throughout the salinity profile; profiles with S_b already in the range 34.065-34.075 were left uncorrected. Profiles showing extremely high or low salinity in the bottom layer (i.e., S_b > 34.10 or S_b < 34.02) were rejected. Following this exclusion process, the number of profiles was reduced to 16115, although the data points were still distributed across the entire sea area (Figure 3T).
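A minimal sketch of the bottom-layer salinity screening and offset correction described above (reference value 34.070, tolerance band 34.065-34.075, rejection outside 34.02-34.10); the function and array layout are our own.

```python
import numpy as np

S_REF = 34.070   # reference UJSPW salinity

def correct_salinity(profile):
    """profile: 1-D salinity array ordered surface -> bottom.
    Returns the corrected profile, or None if the profile is rejected."""
    s_b = profile[-1]                      # bottom-layer salinity
    if s_b > 34.10 or s_b < 34.02:
        return None                        # extreme value: reject profile
    if 34.065 <= s_b <= 34.075:
        return profile                     # within tolerance: no offset
    return profile + (S_REF - s_b)         # shift the whole profile

profile = np.array([34.32, 34.15, 34.09, 34.055])
print(correct_salinity(profile))           # bottom value shifted to 34.070
```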
However, looking at the distribution of the data points in each year, the profiles were concentrated in the southwestern part of the sea in the early stage of the analysis period (Figures 3A-G), because the Argo floats in the Japan Sea were mainly deployed by Korean organizations. They spread annually and were distributed throughout the entire Japan Sea area by 2008 (Figures 3H-S). Considering the data distribution in each year, as well as the bottom topography, we defined four areas representing the western Japan Basin (JW), eastern Japan Basin (JE), Yamato Basin (YB), and Tsushima Basin (TB), shown in Figure 1, and then calculated the yearly mean profiles of potential temperature (PT) and PD in each area from the temperature and corrected salinity profiles. A bimodal distribution in PT (warm and cold modes) was occasionally found throughout the year, particularly in JW. Close examination revealed that the warm mode originated from floats that had been trapped in warm eddies. These warm-mode profiles typically showed PT > 1.0 °C at 400 m. Therefore, we excluded profiles showing PT > 1.0 °C at 400 m from the calculation of the yearly mean PT and PD in each area, as our specific interest is in the JSPW, which is generally colder than 1.0 °C. The yearly mean values and standard deviations for each area were calculated from at least three data points. The degree of spatiotemporal bias in the data distribution in each area can be evaluated from the standard deviations. To investigate longer-period variations in the UJSPW, ship-based hydrographic data (temperature, salinity, and DO) during the period 1971-2019 at Stas. H-4 and PM5, in JE and YB, respectively, were analyzed (Figure 1). Sta. PM5 (37°42′N, 134°42′E) is the hydrographic station in the Japan Sea most frequently observed by the Japan Meteorological Agency (JMA) since the 1960s; in particular, during the period 1972-2009, observations were conducted regularly four times a year, mainly in February, April-May, July, and September, although the frequency has been reduced to once a year (mainly in November) since 2011. Hydrographic observations at Sta. H-4 (40°30′N, 134°40′E) have been conducted at least once a year from 1972 to 2019 by the JMA, with exceptions in 1981, 1988, and 1991. The DO measurements at Sta. H-4 during the period 1986-1996 were not available, except for 1987 and 1993-1995. The yearly mean and standard deviation of PT and PD were calculated for each station if there were multiple (more than two) data points in a year; otherwise, only the mean value was calculated. In all cases, we applied the criterion that profiles with PT > 1.0 °C at 400 m were excluded, as in the Argo data processing. In addition, the sea surface temperature (SST) in JW was analyzed as an indicator of oceanic conditions in the JSPW formation region (Figure 1). The Centennial in situ Observation-Based Estimates of the Variability of SST and Marine Meteorological Variables, version 2 (COBE-SST2), provides global monthly mean SSTs with spatial resolutions of 1° × 1° in latitude and longitude (Hirahara et al., 2014). We downloaded the dataset for the period 1971-2020 from the website of the North-East Asian Regional Global Ocean Observing System (NEAR-GOOS) Regional Real Time Data Base maintained by the JMA.
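The yearly-mean computation, including the warm-eddy exclusion (PT > 1.0 °C at 400 m) and the minimum of three profiles per year used for the Argo data, could be sketched as follows; the data layout and names are assumed.

```python
import numpy as np

def yearly_mean_profiles(years, pt_at_400m, profiles, min_count=3):
    """Yearly mean and standard deviation of PT profiles, excluding
    warm-eddy profiles (PT > 1.0 degC at 400 m).

    years:      (n,) year of each profile
    pt_at_400m: (n,) PT at 400 m for each profile
    profiles:   (n, nz) interpolated PT profiles
    """
    years = np.asarray(years)
    profiles = np.asarray(profiles)
    keep = np.asarray(pt_at_400m) <= 1.0        # warm-eddy exclusion
    result = {}
    for yr in np.unique(years[keep]):
        sel = keep & (years == yr)
        if sel.sum() >= min_count:              # at least three profiles
            result[yr] = (profiles[sel].mean(axis=0),
                          profiles[sel].std(axis=0))
    return result
```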
RESULTS

The time series of Argo-based PT and PD at representative depths (400, 500, 600, 650, and 700 m) in each area are shown in Figures 4 and 5, respectively. Although there were insufficient data in the early stage of the analysis period except for TB, as previously mentioned, a linear warming trend throughout the depths is present in all the evaluated areas. Accordingly, PD shows decreasing trends in all the areas; however, most PD values were within the range of the UJSPW (27.31-27.34) defined by Senjyu and Sudo (1994), except at 400 m. We tested the statistical significance of the trends using Student's t-test. The bold and bold-underlined figures in Tables 1 and 2 indicate statistical significance at the 95 and 99% confidence levels, respectively. The warming and lightening rates in PT and PD were significant at the 99% confidence level throughout the depths in YB and TB. (Tables 1 and 2 also list the changing rates of the vertically averaged PT and PD for the depth range 400-650 m and the rates for the northern (JW and JE) and southern (YB and TB) areas; the time periods of the trends are shown by a red trend line in each panel of Figures 4 and 5.) JE also showed significant warming rates throughout the depths except at 400 m, although the lightening rate in PD at 400 m was significant at the 99% confidence level. In JW, statistically significant warming and lightening rates were found in the depth ranges 600-650 m and 400-600 m, respectively. The changing rates in each region were also estimated based on the vertically averaged PT and PD for the depth range 400-650 m. All the regions showed positive rates in PT and negative rates in PD. The changing rates in PD were statistically significant in all the regions, whereas significant PT rates were found in YB and TB in the southern Japan Sea. It is noteworthy that the warming rates in the southern areas (YB and TB) were generally greater than those in the northern areas (JW and JE). To highlight the contrast, warming rates were evaluated for the northern and southern areas separately (Table 1). Although statistically significant rates were found only at 600 and 650 m in the northern area, the corresponding warming rates in the southern area were approximately two times greater than those in the northern area. In addition, the warming rates in the southern area were statistically significant at the 99% confidence level throughout the depths. Correspondingly, the decreasing rates in PD in the southern area were generally greater than those in the northern area, and the rates were statistically significant in all cases, with an exception at 700 m in the northern area (Table 2). To see the transition of the trends over several decades, time series of the yearly mean PT and PD at 400, 500, 600, and 700 m at Stas. H-4 and PM5 are shown in Figures 6 and 7, respectively, along with the Argo-based yearly mean values in JE and YB. Note that the data points in 2012, 2013, and 2019 were excluded from the time series because of the data-processing criterion. Although PT and PD at each layer fluctuated with considerable amplitude, particularly at Sta. PM5 in the Yamato Basin, where mesoscale eddies are active (Lee et al., 2000; Morimoto et al., 2000; Watanabe et al., 2009; Yabe et al., 2021), the overall trends of the ship-based PT and PD coincide well with those of the Argo-based yearly mean values in both areas.
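A sketch of the trend evaluation used here: a least-squares slope with its t-test p-value (as in the significance testing above), wrapped in the 15-year sliding window introduced in the next paragraph. The implementation details are ours.

```python
import numpy as np
from scipy import stats

def sliding_trend(years, values, window=15):
    """Linear trend per year within each sliding window, together with
    the two-sided p-value of the slope from the regression t-test."""
    out = []
    for i in range(len(years) - window + 1):
        res = stats.linregress(years[i:i + window], values[i:i + window])
        out.append((years[i + window // 2], res.slope, res.pvalue))
    return out

# Synthetic demonstration: a 0.004 degC/yr trend plus noise
years = np.arange(1971, 2020)
rng = np.random.default_rng(1)
pt = 0.004 * (years - 1971) + 0.02 * rng.standard_normal(years.size)
for yr, slope, p in sliding_trend(years, pt)[:3]:
    print(yr, f"{slope:+.4f} degC/yr", f"p={p:.3f}")
```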
It is worth noting that warming appears to have accelerated from the 1990s throughout the depths at both stations. Therefore, the transition of the changing rates in PT and PD was evaluated using a 15-year sliding window based on the yearly mean values. The time series of the 15-year warming rates showed a weak or no warming trend before 1986 at both stations (Figure 8A). By contrast, an acceleration of warming occurred during the period from the late 1980s to the mid-1990s throughout the depths, although the peak of the warming rate at Sta. PM5 lagged that at Sta. H-4 by 2-5 years, suggesting a limited water mass exchange between the Japan and Yamato Basins (Senjyu et al., 2005a, 2013; Senjyu and Aramaki, 2017). After the maximum warming rate, a period of nearly constant warming rate continued until 2007, except for the period 2002-2006 at 400 and 500 m at Sta. PM5. As the positive rate continued over more than 15 years with statistical significance, the warming of the UJSPW from the late 1980s is robust regardless of the selected sliding-window width. The warming rate at Sta. PM5 accelerated slightly again from 2008, particularly below a depth of 500 m. As a result, the warming rates at Sta. PM5 were greater than those at Sta. H-4, consistent with the recent Argo float observations (Table 1). The variation of the PD changing rates basically follows that in PT: positive or no changing rates in the 1980s, an acceleration of lightening from the late 1980s to the mid-1990s followed by a slight deceleration of lightening by 1998, and a second acceleration of lightening from 2006 to 2011 (Figure 8B). Temporal variations in stratification are found in the decadal-mean PT and PD profiles at both stations (Figure 9). The PT and PD profiles for the 1970s and 1980s almost coincided with each other, whereas those after the 1990s exhibited gradual warming and lightening throughout the depths. It is noticeable that the PT profiles at Sta. PM5 show a slight acceleration of warming at each layer, whereas PT below 500 m at Sta. H-4 seems to have increased at a constant rate during the period from the 1990s to the 2010s.

DISCUSSION

The ship-based long-term hydrographic data revealed significant warming initiated in the late 1980s. Put simply, the warming in the UJSPW is caused by an imbalance between the heating from the upper layer and the advection of cold water from the UJSPW formation region. To understand this situation, a vertical one-dimensional multi-box model was introduced. This model is similar to the one used in Minami et al. (1999) and consists of 25 boxes with a thickness Δz = 100 m (Figure 10). The horizontal advection of cold water from the UJSPW formation region is represented by the injection (ventilation) of cold water (PT = θ₀ = 0.1 °C) of volume flux S₀ into Boxes 19-23, corresponding to the UJSPW depth range (500-1000 m). Although new bottom water formation during the analysis period has been suggested (Kim et al., 2002; Senjyu et al., 2002; Talley et al., 2003; Yoon et al., 2018), the cold water supply was imposed on only the intermediate layers because our interest is in the UJSPW variation. As the cold water is assumed to be injected equally into the five boxes, the PT in a box (θᵢ) changes by instantaneous mixing with the injected water to

θᵢ ← (θᵢ Δz + θ₀ (S₀/5) Δt) / (Δz + (S₀/5) Δt).    (Eq 1)
In addition, the lateral injection of cold water induces an upward advection velocity at the top of each box (Wᵢ) to conserve the volume of the box,

Wᵢ = Σ_{j≤i} Sⱼ,

where Sⱼ is the injection into box j (S₀/5 for Boxes 19-23 and zero otherwise) and the boxes are numbered upward from the bottom. The change of PT in each box is controlled by the vertical advection-diffusion equation with a vertical diffusivity K,

∂θ/∂t = K ∂²θ/∂z² − W ∂θ/∂z.    (Eq 2)

We solved Eqs 1 and 2 alternately with a time step Δt = 1 day, under the boundary conditions PT = 1.3 and 0.05 °C at 300 and 2800 m, respectively. The PT profile reaches an equilibrium state after several hundred years. By trial-and-error examination, we attained an equilibrium profile comparable to the observed PT distributions at Sta. PM5 during the 1970s-1980s with S₀ = 7.9 × 10⁻⁷ m s⁻¹ (25 m year⁻¹) and K = 2.0 × 10⁻⁴ m² s⁻¹ after 600 years (Figure 11). However, this result should be considered a possible combination of parameters within reasonable ranges rather than a best-fit model result for the observed PT profile. From this equilibrium state, the cold water injection was reduced to S₀/4 in two manners, an abrupt reduction (Exp. 1) and a linear reduction over 30 years (Exp. 2), and the subsequent change in the PT profile was investigated. In both cases, the PT profiles showed gradual warming throughout the depths due to the imbalance between the downward heat diffusion and the upward cold water advection (Figure 11). However, the evolution of the profiles after the reduction of the cold water supply differed between the two cases. In Exp. 1, the PT profile gradually approached another equilibrium state after a sudden PT increase during the first 10 years. On the other hand, the warming at each depth accelerated with time in Exp. 2. The time series of the PT changing rate evaluated with the 15-year sliding window show this difference in PT changes (Figure 11C). The observed variations in warming rates (Figure 8) and stratification (Figure 9) seem to be explained by a combination of the results from Exps. 1 and 2. As the shape of the equilibrium PT profile is determined by the ratio of the upwelling velocity to the vertical diffusivity (W/K) in this model, an increase in vertical diffusivity, instead of a decrease in advection velocity, would also produce a warming of the PT profile; however, it is unlikely that the vertical diffusivity varies with time over several decades. Therefore, a reduction in the cold water supply is the most probable cause of the significant warming initiated in the late 1980s. A stagnation in the UJSPW formation during the 1990s is suggested by the DO variations at Stas. H-4 and PM5. Time series of the DO concentration at representative depths of the UJSPW showed a decreasing trend after the mid-1990s, although the trends before 1995 were ambiguous due to a superimposed interdecadal oscillation (Senjyu, 2010, 2012) and the long period of missing data at Sta. H-4 (Figure 12). A similar long-term decreasing trend in DO concentration has been observed in the deep and bottom waters of the Japan Sea, and the negative trends in DO have been considered evidence of a stagnation in the JSPW formation due to global warming (Gamo et al., 1986, 2014; Minami et al., 1999; Gamo, 2011; Kumamoto, 2021), although the bio-geochemical DO consumption rates in each basin are unknown.
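A minimal sketch of the 25-box model (Eqs 1 and 2) with the stated parameters (Δz = 100 m, Δt = 1 day, K = 2.0 × 10⁻⁴ m² s⁻¹, S₀ = 7.9 × 10⁻⁷ m s⁻¹, θ₀ = 0.1 °C, boundary values 1.3/0.05 °C). The box indexing (index 0 = 300-400 m), the explicit upwind time stepping and the initial profile are our assumptions.

```python
import numpy as np

N, DZ, DT = 25, 100.0, 86400.0       # boxes, thickness (m), step (1 day)
K = 2.0e-4                           # vertical diffusivity, m^2 s^-1
S0, THETA0 = 7.9e-7, 0.1             # injection (m s^-1) and its PT (degC)
TOP, BOTTOM = 1.3, 0.05              # boundary PT at 300 m and 2800 m

inj = np.zeros(N)
inj[2:7] = S0 / 5.0                  # five boxes spanning 500-1000 m

def step(theta):
    # Eq 1: instantaneous mixing with the laterally injected cold water
    frac = inj * DT / DZ
    theta = (theta + frac * THETA0) / (1.0 + frac)
    # upward velocity at the top of box i = injection at and below i
    w = np.cumsum(inj[::-1])[::-1]
    # Eq 2: explicit advection-diffusion update with boundary values
    ext = np.concatenate(([TOP], theta, [BOTTOM]))
    diff = K * (ext[2:] - 2.0 * ext[1:-1] + ext[:-2]) / DZ**2
    adv = w * (ext[2:] - ext[1:-1]) / DZ     # upwind for upward flow
    return theta + DT * (diff + adv)

theta = np.linspace(TOP, BOTTOM, N)          # initial guess
for _ in range(600 * 365):                   # ~600-year spin-up
    theta = step(theta)
print(np.round(theta, 3))                    # equilibrium PT, 300-2800 m
```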
As the JSPW is formed by deep convection due to surface cooling in winter in the northwestern Japan Sea (Sudo, 1986; Senjyu and Sudo, 1993, 1994; Seung and Yoon, 1995; Kawamura and Wu, 1998; Senjyu et al., 2002), the long-term variation in the winter mean (December-February) SST in the JW region was investigated (Figure 1). Although the areal-mean winter SST in the UJSPW formation region exhibited a distinctive positive trend (+0.02 °C yr⁻¹) over the analysis period, discontinuous changes in the SST field were also examined using the Lepage test (Figure 13). The Lepage test is a non-parametric two-sample test (Lepage, 1971) and has been used to detect a "jump" in time series of climate parameters (Yonetani, 1992a,b). In this study, we tested the statistical difference between the 10-year periods before and after each year. The test statistic (HK) showed significant values exceeding the 95% confidence level during the period 1989-1995, which indicates a discontinuous change between the periods before and after those years. In fact, the mean SSTs during the periods 1979-1988 and 1996-2005 were 1.38 and 1.95 °C, respectively. As the warming explained by the linear trend during 1989-1995 is 0.12 °C, a rapid warming (+0.57 °C) occurred in the winter SST field from the late 1980s to the mid-1990s. This also supports the stagnation in the UJSPW formation from the late 1980s.
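For reference, the Lepage statistic HK (the sum of the squared standardized Wilcoxon and Ansari-Bradley statistics, chi-squared with 2 degrees of freedom under the null hypothesis) can be sketched as below. This assumes no ties and an even combined sample size, as for the 10-year + 10-year windows used here; the demonstration values are placeholders.

```python
import numpy as np
from scipy.stats import rankdata, chi2

def lepage_hk(x, y):
    """Lepage statistic HK = standardized Wilcoxon^2 + Ansari-Bradley^2.
    Assumes no ties and an even combined sample size. HK ~ chi2(2)
    under the null hypothesis of identical distributions."""
    n1, n2 = len(x), len(y)
    n = n1 + n2
    r = rankdata(np.concatenate([x, y]))
    w = r[:n1].sum()                             # Wilcoxon rank sum of x
    ew = n1 * (n + 1) / 2.0
    vw = n1 * n2 * (n + 1) / 12.0
    a = np.minimum(r, n + 1 - r)[:n1].sum()      # Ansari-Bradley scores
    ea = n1 * (n + 2) / 4.0                      # null mean (even n)
    va = n1 * n2 * (n + 2) * (n - 2) / (48.0 * (n - 1))  # null variance
    hk = (w - ew) ** 2 / vw + (a - ea) ** 2 / va
    return hk, chi2.sf(hk, df=2)

rng = np.random.default_rng(2)
before = 1.38 + 0.2 * rng.standard_normal(10)    # placeholder winter SSTs
after = 1.95 + 0.2 * rng.standard_normal(10)
print(lepage_hk(before, after))                  # large HK, small p-value
```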
CONCLUSION
We conclude that a significant change in the mid-depth water-mass ventilation occurred in the Japan Sea in the late 1980s. Since the warming trends in JW and TB in the western sea are similar to those in JE and YB in the eastern sea, respectively (Figure 4), the significant acceleration of warming from the late 1980s appears to have occurred concurrently in the northwestern and southwestern areas, that is, over the entire Japan Sea. Before the mid-1980s, when the influence of global warming in this region was not yet serious, the cold, newly formed UJSPW was supplied to the entire Japan Sea from its formation region via deep circulation (Senjyu et al., 2005b). However, a stagnation of the UJSPW formation occurred in the late 1980s, as suggested by the abrupt SST warming in its formation region and the depression in DO concentration at Stas. H-4 and PM5, and resulted in warming of the UJSPW over the entire Japan Sea through the imbalance between the heating from the upper layer and the advection of cold water from the UJSPW formation region. Subsequently, an acceleration of warming began in 2008 at Sta. PM5, which led to the recent higher warming rates in the southern Japan Sea relative to the northern sea (Figure 8). Considering the nearly constant or weakly decelerating warming rates at Sta. H-4 after 2008, a relatively large volume of cold UJSPW was still supplied to the northern Japan Sea, including the UJSPW formation region, although DO at Sta. H-4 exhibited negative trends (Figure 12). In contrast, in the southern Japan Sea, away from the UJSPW formation area, the transport of newly formed UJSPW was restricted, resulting in greater warming rates than in the northern sea owing to the larger imbalance in the UJSPW heat budget. As the warming occurred only in the southern sea, the water-mass ventilation event after 2008 may be modest compared with that of the late 1980s. The second event after 2008 implies that a remarkable signal of global warming on the thermohaline circulation appears first, as a weakening of water-mass ventilation, in the terminal regions of the deep circulation far from the source region of the deep water. Indeed, a notable warming of deep water has been observed in the Ulleung Interplain Gap in the southwestern Japan Sea, although no significant long-term trend was reported for the deep flows (Chang et al., 2009).
Trans-basin warming in the bottom water of the North Pacific has been reported from the subarctic North Pacific along the 47°N line, which corresponds to a terminal region of the bottom-water pathway from the South Pacific (Fukasawa et al., 2004). We point out the similarity between the warming of the North Pacific bottom water and the recent greater warming rates in the southern Japan Sea, which is away from the UJSPW formation region, although warming trends have already been detected in the northern Japan Sea. In this study, global warming has been considered the main cause of the long-term variations in the UJSPW. However, decadal-scale variations associated with hemisphere-scale atmospheric variations have also been reported in the Japan Sea (Minobe et al., 2004; Senjyu, 2010, 2012; Na et al., 2012). It is meaningful that the significant UJSPW warming from the late 1980s appears to correlate with a regime shift that occurred in the Northern Hemisphere in 1988/1989 and is associated with the Arctic Oscillation (Yasunaka and Hanawa, 2002). This may indicate that decadal-scale climatic variability, as well as the longer-period warming trend, can trigger changes in the mid-depth water-mass ventilation in marginal seas of relatively small volume.
DATA AVAILABILITY STATEMENT
Publicly available datasets were analyzed in this study. The AQC Argo data are available on the JAMSTEC website (http://www.jamstec.go.jp/ARGO/argo_web/argo/?page_id=100&lang=en). The ship-based hydrographic data can be downloaded from the JMA website (http://www.data.jma.go.jp/gmd/kaiyou/db/vessel_obs/data-report/html/ship/ship.php). The monthly COBE-SST2 SST data provided by the JMA are available on the NEAR-GOOS website (http://ds.data.jma.go.jp/gmd/goos/data/pub/JMA-product/).
AUTHOR CONTRIBUTIONS
TS designed the study, performed the data processing and analysis, conducted the model calculation, and wrote the manuscript.
FUNDING
This study was partly supported by JSPS KAKENHI (Grant Numbers JP18H03741 and JP19H04245) and the Environment Research and Technology Development Fund of the Ministry of the Environment, Japan (2-1604).
Feshbach resonances of composite charge carrier states in atomically thin semiconductor heterostructures
Feshbach resonances play a vital role in the success of cold atoms in investigating strongly correlated physics. The recent observation of their solid-state analog in the scattering of holes and intralayer excitons in transition metal dichalcogenides [Schwartz et al., Science 374, 336 (2021)] holds compelling promise for bringing fully controllable interactions to the field of semiconductors. Here, we demonstrate how tunneling-induced layer hybridization can lead to the emergence of two distinct classes of Feshbach resonances in atomically thin semiconductors. Based on microscopic scattering theory, we show that these two types of Feshbach resonances allow tuning of the interactions between electrons and both short-lived intralayer and long-lived interlayer excitons. We predict the exciton-electron scattering phase shift from first principles and show that the exciton-electron coupling is fully tunable from strong to vanishing interactions. The tunability of interactions opens the avenue to explore Bose-Fermi mixtures in solid-state systems in regimes that were previously only accessible in cold-atom experiments.
The past years have seen the advent of few-layer transition metal dichalcogenides (TMDs) as a novel platform to study strongly correlated quantum matter [2, 3], marked by the observation of Mott phases [4, 5], insulating density waves [6-8], excitonic insulators [9], Wigner crystals [4, 10, 11], the quantum anomalous Hall effect [12], and fractional Chern insulators [13]. In TMDs, Bose-Fermi mixtures of excitons and electrons reach for the first time into the previously inaccessible strong-coupling regime [2, 14-17]. While such mixtures have allowed for the optical detection and engineering of novel many-body phases [10], the control over exciton-electron interactions has so far been limited to doping [18] or coupling to optical cavities [19, 20]. Establishing fully tunable interactions of electrons with both short- and long-lived excitons would open up the possibility to explore even richer many-body phases, such as exciton-induced superconductivity [21] or charge-density-wave states [15, 22], and would further establish TMDs as a fully tunable quantum simulation platform on par with ultracold atoms.
In this letter, we show how full control over interactions can be realized in TMDs. Using a quantum-chemistry-inspired microscopic approach, we demonstrate that the presence of trions in both open and closed scattering channels allows for full tunability of the interactions, from the ultra-strong-coupling regime all the way to vanishing interactions, for both intra- and interlayer excitons. The trions in the respective closed channel play a role analogous to that of closed-channel molecules in cold atoms, where they build the foundation of tunable interactions between atomic particles [23]. In contrast to cold atoms, however, a clear scale separation between the energy of the closed-channel bound states and the relevant scattering energies is absent. This renders the three-body nature of the underlying scattering processes a crucial ingredient that cannot be captured by effective theories based on structureless particles [24].
Here we address this challenge by starting from a microscopic model that fully resolves the internal structure of the three-particle complexes underlying Feshbach-enhanced exciton-electron scattering. Our solution of the quantum three-body problem is based on exact diagonalization and reveals the existence of two types of resonances. The first type has the characteristics of broad Feshbach resonances in ultracold atoms. This resonance allows tuning of the interactions between electrons and short-lived intralayer excitons of large oscillator strength, making them ideal for applications in spectroscopy and correlation sensing. The second type of resonance couples long-lived interlayer excitons and electrons. It has the characteristics of a narrow resonance and requires fine-tuning of the external electric field, which represents the control parameter of Feshbach resonances in TMD heterostructures. This novel type of resonance allows the realization of long-lived exciton-electron mixtures at strong coupling and thus enables a new approach to exploring the phase diagram of Bose-Fermi mixtures in regimes that have so far been out of reach in cold atomic systems due to their chemical instability [25].
In the setup we consider, charge carriers can tunnel between the layers (cf. Fig. 1). The layers are separated by a single sheet of hexagonal boron nitride (hBN) of thickness d. The layers define a pseudospin degree of freedom: akin to a cold-atom system, the interactions between particles depend on this pseudospin, and in the TMD setting the pseudospin states are coupled through tunneling. The system is subject to an external perpendicular electric field E = Ez êz. This field allows tuning of the energy difference ΔE = eEz between the pseudospin states, analogous to the magnetic fields employed to realize Feshbach resonances in cold atoms. In an effective-mass approach, the corresponding Hamiltonian (Eq. 1) comprises kinetic, detuning, tunneling, and interaction terms, where Qi = ±e and mi are, respectively, the charges and masses of the electrons (i = 1, 2) and the hole (i = 3). The in-plane distances and momenta of the particles are described by Rij = |Ri − Rj| and pi, respectively. The Pauli matrices τi^(μ=x,y,z) act on the layer (pseudospin) subspace of each particle, with layers labeled a and b. The first term in Eq. (1) represents the kinetic energy of the particles and the second the energy detuning ΔE between the two layers. The third term accounts for tunneling of the hole, with t3 = t and t1 = t2 = 0. This choice of species-dependent tunnel coupling is motivated by experiments [1] and may be seen as originating from the energy offset between the conduction bands of the hBN and TMD layers, which is drastically larger than that of the valence bands. The resulting confinement of each electron to one layer leads to the decoupling of the Hilbert space into invariant subspaces. In the following we focus on the case where the electrons are located in the top layer.
The interaction potential Vab(r) between two charges separated by an in-plane distance r is obtained by solving Poisson's equation for two identical TMD layers separated by a distance d that models the thickness of a monolayer of hBN. The momentum-space potential takes one form for two charges in the same layer (a = b, Eq. 2) and another for charges in different layers (a ≠ b, Eq. 3; for details see the Supplemental Material [26]). The real-space potentials are obtained via Fourier transformation. The screening length is r0 = α2D/2ε0, with the TMD layer's two-dimensional polarizability α2D and the vacuum permittivity ε0. Eqs. (2) and (3) reduce to the Keldysh potential [27] in the ultrathin, monolayer limit [26].
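As a hedged illustration of that monolayer limit, the following sketch evaluates the real-space Rytova-Keldysh potential, V(r) = −(e²/8ε0r0)[H0(r/r0) − Y0(r/r0)] in SI units for a free-standing layer, where H0 and Y0 are the Struve and second-kind Bessel functions. The numerical value of r0 below is an assumed, illustrative screening length, not the DFT parameter used in the paper.

```python
import numpy as np
from scipy.special import struve, y0

# Monolayer (Rytova-Keldysh) limit of the layered potentials discussed above.
# V(r) interpolates between a logarithm at r << r0 and -e^2/(4 pi eps0 r)
# at r >> r0. The value of r0 is an assumption for illustration.
e, eps0 = 1.602176634e-19, 8.8541878128e-12
r0 = 5.0e-9   # screening length r0 = alpha_2D / (2 eps0) [m] (assumed)

def v_keldysh(r):
    x = np.asarray(r) / r0
    return -(e**2 / (8 * eps0 * r0)) * (struve(0, x) - y0(x))   # in joules

r = np.array([0.5, 2.0, 10.0, 50.0]) * 1e-9
print(v_keldysh(r) / e)              # potential in eV
print(-e / (4 * np.pi * eps0 * r))   # unscreened Coulomb in eV, for comparison
```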
Importantly, the three-body model in Eq. (1) does not a priori assume the formation of excitons, nor does it treat them as rigid objects. Instead, excitons appear, on an equal footing with the trions, as eigenstates of the Hamiltonian and can themselves be layer-hybridized due to charge-carrier tunneling.
Feshbach resonances. To understand the exciton-electron scattering physics, we diagonalize the Hamiltonian (1) using a discrete variable representation (DVR) [14, 28]. Exploiting translational invariance, we transform the Hamiltonian into the co-moving frame of the hole and separate the center-of-mass motion [26]. Rotational invariance implies the conservation of the total angular momentum m. In the following we focus on m = 0. The eigenstates of the resulting Hamiltonian are wave functions ψ(r1, r2, θ) that, in addition to the layer degrees of freedom, are parametrized by the relative particle distances r1,2 and the angle θ between the electron coordinates, see Fig. 1. While our approach describes electrons and holes in any TMD heterostructure, we assume two MoSe2 layers and use material parameters obtained from DFT [29]. We expect our results to be quite universal and to apply to other material combinations as well.
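The paper's solver diagonalizes the full three-body wave function ψ(r1, r2, θ) on a DVR grid; that machinery is not reproduced here. As a reduced, hedged illustration of the analogous one-coordinate step, the sketch below diagonalizes the s-wave two-body (exciton) problem with the Keldysh potential from the previous sketch. The reduced mass and r0 are assumed values, not the DFT parameters of Ref. [29].

```python
import numpy as np
from scipy.special import struve, y0
from scipy.linalg import eigh_tridiagonal

# Reduced illustration: s-wave exciton levels from a finite-difference grid.
# mu and r0 are assumed; the paper's actual three-body DVR is not shown.
hbar, e, eps0, me = 1.054571817e-34, 1.602176634e-19, 8.8541878128e-12, 9.1093837015e-31
mu, r0 = 0.25 * me, 5.0e-9
n, rmax = 4000, 60e-9
dr = rmax / (n + 1)
r = dr * np.arange(1, n + 1)
V = -(e**2 / (8 * eps0 * r0)) * (struve(0, r / r0) - y0(r / r0))
# 2D s-wave radial equation for u(r) = sqrt(r) psi(r):
#   -(hbar^2/2mu) u'' + [V(r) - hbar^2/(8 mu r^2)] u = E u
diag = hbar**2 / (mu * dr**2) + V - hbar**2 / (8 * mu * r**2)
off = np.full(n - 1, -hbar**2 / (2 * mu * dr**2))
E, _ = eigh_tridiagonal(diag, off, select='i', select_range=(0, 2))
print(E / e * 1e3, "meV")   # lowest exciton levels; negative = bound
```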
The exact diagonalization gives access to a rich spectrum containing both symmetric and antisymmetric states with respect to electron exchange. In Fig. 2 we show the result for the spatially symmetric states (i.e., unequal spin or valley degree of freedom) and study how the low-energy spectrum depends on the band detuning ΔE. We mark the energies of trions in the absence of tunneling with bold colored lines. Depending on whether the hole is in the top or the bottom layer, two such trion states exist. We denote these as the bare intralayer (blue) and bare interlayer (green) trion. The shaded areas mark the corresponding electron-exciton scattering continua. We find that the hole is always part of the excitonic component of the wave function. Since increasing ΔE decreases the energy of the hole, one is able to bring the originally more weakly bound interlayer exciton into resonance with the intralayer exciton state. The thin black lines in Fig. 2 mark the energies of the eigenstates of the system in the presence of a finite hopping strength, t = 2 meV [1]. They can be expressed as superpositions of the eigenstates for t = 0. The states belonging to the scattering continua appear as sets of discrete states due to the finite size of our system. By following the energy of the bare trions ('closed channel') one can identify points where they cross the exciton-electron scattering threshold ('open channel'). Tunneling of the hole couples these channels. This turns the closed-channel trions into weakly bound Feshbach trions, leading to the emergence of the Feshbach resonances marked by the labels F1 and F2.
In the emergence of exciton-electron Feshbach resonances, layer hybridization plays a key role. In Fig. 3 we show the layer- and θ-averaged probability density n(r1, r2) ∝ ∫ dθ |ψ(r1, r2, θ)|² of the lowest states for the band detunings ΔE labeled I-VI in Fig. 2. For each state we show the degree of layer hybridization as the probability of finding the hole in the top (T) or bottom (B) layer. The uppermost row shows the lowest trion state. As ΔE is increased to a value around ΔE = 140 meV, the deeply bound intralayer trion (label 0 in Figs. 2 and 3) turns into an interlayer trion (label 0′). The second and following rows in Fig. 3 show excited states (labels 1-3). For ΔE = 120 meV (I), the multi-nodal structure of the wave function of one of the electrons shows that these states represent intralayer exciton-electron scattering states. The lowest left subfigure in Fig. 3 shows the interlayer trion (label R1) that is immersed in the intralayer scattering continuum. This trion is subject to layer hybridization, turning it into a metastable, resonant state. It becomes increasingly unstable as the broad Feshbach resonance F1 is approached.
When crossing the resonance F1 at ΔE ≈ 125 meV, we observe a transition of the first excited state (second row in Fig. 3) from the lowest intralayer scattering state into the newly emergent Feshbach trion. Only at larger values of ΔE does this intralayer-dominated trion turn into the interlayer trion, as highlighted by the complete reorganization of the layer hybridization in this state. At the same time, the second intralayer scattering state (showing a binodal structure) turns into the first intralayer scattering state (showing only one node), while the ground-state wave function remains nearly unchanged (upper row). Analogously, higher excited states change their number of radial nodes by one when crossing the resonant state.
A further increase of ΔE beyond F1 leads to the anticrossing of the trions at ΔE ≈ 139 meV (label IV), accompanied by maximal layer hybridization. Due to the bound character of the states, this hybridization is robust and independent of the system size used in our numerical diagonalization. The intralayer trion crosses the interlayer scattering continuum at the second resonance F2 at ΔE ≈ 141.7 meV (label V). At this point it turns into a hybridized, resonant state before recovering its pure intralayer character only at larger ΔE detunings (state R2 in Figs. 2 and 3). We note that the layer hybridization found in Fig. 3 implies a modification of the electric dipole of the excitons and trions. This might give rise to interesting, tunable many-body physics of excitons and trions at finite density.
We now turn to the detailed analysis of the Feshbach resonances by investigating the scattering properties of the lowest scattering state (open channel) as ΔE is tuned across the resonances. Specifically, we determine the phase shift of the electron-exciton scattering, which quantifies the modification of the scattering states due to interactions. To this end we fit the long-distance part of the lowest scattering state to the asymptotic scattering wave function ψn(r) ∝ cos δm(kn) Jm(kn r) − sin δm(kn) Ym(kn r). Here Jm and Ym are the Bessel functions of the first and second kind, respectively, and kn is the momentum of the nth scattering state. From this fit we obtain the 2D scattering phase shift [30, 31]. The low-energy s-wave scattering phase shift δE ≡ δ0(k) (with E = ℏ²k²/2µ) is parameterized in terms of the energy scale Es according to cot δ|E→0 = (1/π) ln(E/Es). The scale Es can be linked to the 2D scattering length a2D, conventionally used in the context of cold-atom experiments, via a2D = ℏ/√(2µEs). It can be interpreted as the characteristic kinetic energy scale at which a many-body system experiences resonant, strong-coupling physics. For instance, in fermionic systems the Fermi energy EF provides such a scale. We show the scattering parameter Es in Fig. 4.
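A hedged sketch of this phase-shift extraction (for m = 0) follows: the long-distance wave function is matched to the asymptotic form at two sampling radii, and Es is then obtained from a straight-line fit of cot δ against ln E. The sampled amplitudes psi1 and psi2 are assumed inputs from the three-body diagonalization, which is not reproduced here.

```python
import numpy as np
from scipy.special import j0, y0

# Match psi_n(r) ~ cos(delta) J0(k_n r) - sin(delta) Y0(k_n r) at two radii.
def phase_shift(k, r1, r2, psi1, psi2):
    M = np.array([[j0(k * r1), -y0(k * r1)],
                  [j0(k * r2), -y0(k * r2)]])
    c, s = np.linalg.solve(M, [psi1, psi2])   # c = A cos(delta), s = A sin(delta)
    return np.arctan2(s, c)

# Low-energy parameterization cot(delta) = (1/pi) ln(E/Es): fit cot(delta_n)
# against ln(E_n) over the lowest scattering states; the slope should come
# out close to 1/pi, and the intercept yields Es.
def extract_Es(E, delta):
    slope, intercept = np.polyfit(np.log(E), 1.0 / np.tan(delta), 1)
    return np.exp(-intercept / slope)
```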
Far away from the resonance F1, Es is approximately constant and takes a value on the order of the respective background trion energy (intralayer to the left and interlayer to the right of F1). Approaching F1 from the left, Es increases and diverges. Correspondingly, at low scattering energies the system becomes effectively non-interacting. Right on resonance, Es jumps abruptly to zero and a bound state appears (note that the respective scattering threshold is itself hybridized and thus lies energetically below the bare scattering threshold). In this regime Es follows precisely the energy of the newly emerging Feshbach-bound trion.
The equivalence of vanishing and diverging scattering lengths at F1 and F2 is a hallmark of 2D scattering physics [32]. In contrast to 3D Feshbach resonances in ultracold atoms, the concept of a resonance width, defined by the distance in magnetic field between the zero crossing of the scattering length and the resonance position [23], is thus not applicable in TMDs. Namely, the zero of a2D coincides precisely with the position of the 2D Feshbach resonance. This scattering regime, which is a unique 2D feature and has remained inaccessible in cold atoms even with confinement-induced resonances [23], can now be realized and studied in atomically thin semiconductors.
The hybridization of intra- and interlayer scattering thresholds in the vicinity of F1 facilitates a large coupling between the open and closed scattering channels. This results in the broad Feshbach resonance at F1. In contrast, the resonance at F2 is much narrower in terms of the change in ΔE required to tune the system across the resonance (if the tunnel coupling were increased, the modification would be stronger). Importantly, the characters of the resonances F1 and F2 differ with respect to the lifetime and oscillator strength of the excitons in the respective open channel. While F1 allows one to manipulate the scattering of electrons and short-lived intralayer excitons of large oscillator strength, the scattering physics of electrons with long-lived interlayer excitons (of small oscillator strength) can be tuned via F2. This allows intralayer excitons, optically injected close to F1, to be used as a tunable probe for correlation sensing of electronic many-body systems, while the resonance at F2 brings tunable interactions to many-body systems comprised of electrons and stable interlayer excitons, paving the way to controllable, long-lived 2D Bose-Fermi mixtures.
Note that in this work we have focused on one specific configuration that is directly relevant for ongoing experiments. Allowing for different configurations (including modified tunneling strength, charge, valley, and spin degrees of freedom) will give rise to an even richer set of Feshbach resonances to be explored.
Conclusion. In this work we have studied the emergence of electrically tunable 2D Feshbach resonances that allow tuning of the scattering properties of short- and long-lived excitons with electrons. In both cases the system can be tuned to the limit of vanishing interactions. Consequently, these 2D Feshbach resonances might pave the way to probing many-body physics using valley-selective interferometric protocols, such as Ramsey or spin-echo schemes, applied to excitons [33]. Our findings may open up the prospect of realizing many-body systems comprised of long-lived, dipolar interlayer excitons and itinerant electrons, where tunable interactions might enable exciton-induced superconductivity [21, 34] or supersolidity in dipolar exciton condensates [15, 22, 35].
(Supplemental Material excerpt) ... which is exactly the monolayer potential [1, 2]. The Fourier transformation to real space is performed numerically. Here it is important to realize that in these results only the first and second components of momentum and position appear, while the third enters only through the concrete value of the layer separation d.
THREE-BODY HAMILTONIAN IN THE RELATIVE COORDINATE FRAME
FIG. 1. (a) Sketch of the bilayer three-body system. The hole (red) can tunnel between the layers with a coupling constant t, whereas the electrons (blue) are confined to the top layer. The lab-frame coordinates of the charges are Ri, i = 1, 2, 3. The relative positions r1 and r2 span an angle θ. (b) Illustration of the band-structure detuning, where the layer index takes the role of a pseudospin degree of freedom |↑⟩, |↓⟩. (c) Effective scattering potentials for excitons and holes, each supporting a trion state.
FIG. 2. Spectrum of three-body states, spatially symmetric with respect to electron exchange, as a function of the band detuning ΔE. Bold colored lines mark trion energies for t = 0, whereas black lines show the eigenenergies (including trion and exciton-electron scattering states) in the presence of hole tunneling. Shaded areas represent the different scattering continua. Blue indicates intralayer and green interlayer configurations, as shown in the pictogram. Feshbach resonances appear at the positions labeled F1 and F2. Symbols I-VI mark ΔE values for which the states labeled 0-3, 0′, R1, and R2 are visualized in Fig. 3.
FIG. 3. Angle- and layer-averaged probability densities of electrons n(r1, r2) ∝ ∫ dθ |ψ(r1, r2, θ)|² in the lowest states at the positions marked I-VI in Fig. 2. The layer hybridization of intralayer (T: hole in top layer) and interlayer (B: hole in bottom layer) configurations is shown for each state.
FIG. 4. Band-detuning dependence of the exciton-electron scattering parameter Es. The broad and narrow Feshbach resonances are labeled F1 and F2, respectively (as also shown in Fig. 2).
Fracture resistance of roots enlarged with various rotary systems and obturated with different sealers
Background. This in vitro study compared the fracture resistance of roots instrumented with either the ProTaper or One Shape rotary system and filled with a silicate-, epoxy resin-, or silicone-based sealer.
Methods. Sixty single-rooted extracted mandibular premolars were decoronated to a length of 13 mm and then randomly divided into two main groups (n=30) according to the rotary system used for preparation. Group 1 samples were instrumented with the ProTaper Universal system up to a master apical file of #F2, while samples in group 2 were enlarged with the One Shape system. The two main groups were then divided into three subgroups according to the sealer used (n=10) and filled with gutta-percha (either F2 or MM-GP points) matching the rotary system and one of the sealers as follows: group 1, BioRoot RCS + ProTaper F2 gutta-percha; group 2, AH Plus + ProTaper F2 gutta-percha; group 3, GuttaFlow + ProTaper F2 gutta-percha; group 4, BioRoot RCS + MM-GP points; group 5, AH Plus + MM-GP points; and group 6, GuttaFlow + MM-GP points. Each specimen then underwent fracture testing with a universal testing machine at a crosshead speed of 1.0 mm/min until the root fractured. Data were statistically analyzed.
Results. Two-way ANOVA showed no significant differences between the groups. One Shape instruments showed significantly better fracture resistance than ProTaper instruments. Statistically, no significant difference was found between the AH Plus, GuttaFlow, and BioRoot RCS sealers.
Conclusion. It can be concluded that the rotary system used for instrumentation has some influence on fracture resistance, while the root canal sealer does not have such an effect.
The fracture resistance of a tooth is proportional to the amount of the remaining tooth structure. During root canal treatment, the possibility of vertical root fracture is higher in over-instrumented teeth. 1,2 Nickel-titanium (NiTi) instruments are generally used in endodontic practice because of their higher reliability and better flexibility and efficiency compared with stainless-steel files. 3 ProTaper Universal (PTU) (Dentsply Maillefer, Ballaigues, Switzerland) is a conventionally used NiTi rotary system that operates in continuous rotation. 3,4 The instrument has a variable taper along its length and a convex triangular cross-section. 5 Another system, One Shape (MicroMega, Besançon, France), is a single-file shaping system recommended for single use without re-sterilization. It has two cutting edges and a triple helical construction: the two cutting edges provide bending resistance, while the triple helical construction resists torsion. One Shape offers three different cross-sectional geometries along its length for added flexibility, and the region closest to the shaft has an "S" cross-section with two cutting angles. 6 Gutta-percha is the most popular root canal filling material because it has many advantages: it is easy to remove from the root canal, biocompatible, non-toxic, and non-allergenic. Gutta-percha alone is not sufficient for a hermetic seal because it does not adhere to the root canal walls. 7 Root canal sealers are needed to fill the voids between the gutta-percha cones and between the gutta-percha cones and the root canal walls. 8 Lateral compaction is the most common root canal filling technique.
However, this technique predisposes the root to vertical fracture because of the force applied to the root, and it is time-consuming. 9 With the use of NiTi rotary systems and the advent of tapered gutta-percha cones, the single-cone technique has become more practical. 10 The resin-based AH Plus root canal sealer (Dentsply DeTrey, Germany) is widely used today because of its many advantages. 11 Another, silicone-based, root canal sealer is GuttaFlow (Coltene Whaledent, Langenau, Germany). GuttaFlow is a flowable filling system that combines root canal sealer and gutta-percha (GP) in a single material; it is biocompatible, has excellent fluidity, and forms a thin sealing layer. 12 A further root canal sealer is the tricalcium silicate-based BioRoot RCS (Septodont, Saint-Maur-des-Fossés, France), which consists of tricalcium silicate, zirconium dioxide, povidone, water, and calcium chloride. 13 In addition, the manufacturer claims that BioRoot RCS can obturate the root canal with or without gutta-percha cones because of its excellent bonding through penetration into the dentin structure. This study aimed to evaluate the fracture resistance of roots instrumented with either the ProTaper or One Shape rotary system and filled with a silicate-, epoxy resin-, or silicone-based sealer, with the teeth obturated either by lateral condensation or by the single-cone technique.
Specimen Preparation
Sixty extracted, caries-free, single-rooted mandibular premolars were decoronated to a length of 13 mm. The teeth were stored in saline before the experiments. Working lengths were determined by placing a #15 K-file (Dentsply Maillefer, Tulsa, OK) in the root canal until it was observed at the apical foramen and then reducing the file length by 1 mm. The teeth were randomly divided into two main groups (n=30) according to the instrumentation system. The samples in group 1 were instrumented with the ProTaper Universal system (Dentsply Maillefer, Ballaigues, Switzerland) according to the manufacturer's instructions, using the SX, S1, S2, F1, and F2 instruments, while the samples in group 2 were enlarged with the One Shape system (MicroMega, Besançon, France). During instrumentation, the root canals were irrigated with 2.5 mL of 5.25% NaOCl between each change of file. After instrumentation, the specimens were irrigated with 5 mL of 17% EDTA to remove the smear layer. The root canals were dried with sterile paper points (Diadent, Diadent Group International, Burnaby, BC, Canada). The two main groups were then subdivided into three subgroups according to the sealer used (n=10) and filled with the gutta-percha (either F2 or MM-GP points) matching the rotary system. The experimental groups were treated as follows: Group 1, BioRoot RCS (Septodont, France) and ProTaper F2 gutta-percha; Group 2, AH Plus (Dentsply, Germany) and ProTaper F2 gutta-percha; Group 3, GuttaFlow (Coltene, Germany) and ProTaper F2 gutta-percha. A ProTaper F2 master gutta-percha cone, corresponding to the final instrument, was used as a single cone. The root canal walls were coated with sealer (BioRoot RCS, AH Plus, or GuttaFlow) using paper points, and the apical portion of the gutta-percha was then coated with sealer and inserted into the root canal.
For groups 4-6, an MM-GP point was used as the master cone, and the root canal walls were coated with sealer (BioRoot RCS, AH Plus, or GuttaFlow) using paper points; the apical portion of the gutta-percha was then coated with sealer and inserted into the root canal, followed by the placement of accessory cones by lateral condensation. The root canal orifices were sealed with Cavit temporary filling material (3M ESPE, Germany). The obturated teeth were stored at 37 °C and 100% humidity for one week to allow complete setting of the sealers.
Mechanical Testing
After one week, 3 mm of each root was embedded in self-cured acrylic resin (Imicryl, Konya, Turkey) using cylindrical molds 15 mm in diameter and 13 mm in height, leaving 9 mm of the root length exposed. The temporary filling material was removed with an excavator. The specimens were mounted on the lower plate of a universal testing machine (Lloyd LRX; Lloyd Instruments Ltd., Fareham, UK) (Figure 1). A compressive load was applied vertically to the coronal surface of the root at a rate of 1 mm/min until vertical root fracture (VRF) occurred. The maximum load at failure was recorded in newtons via data-analysis software (Nexygen-MT, Lloyd Instruments, Fareham, UK). Data were recorded and statistically analyzed with two-way ANOVA.
Results
The means and standard deviations of the fracture loads (N ± SD) of the experimental groups are presented in Table 1 by sealer. Two-way ANOVA showed no significant differences between the groups (P=0.051). One Shape instruments exhibited significantly higher fracture resistance than ProTaper instruments (P=0.002, P<0.05). No significant differences were found between the AH Plus, GuttaFlow, and BioRoot RCS sealers (P=0.782, P>0.05).
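A hedged sketch of the two-way ANOVA reported above follows. Only the design (two rotary systems × three sealers, n = 10 roots per cell) comes from the text; the `loads` array below is placeholder data standing in for the 60 measured failure loads in newtons.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Two-way ANOVA sketch: factors are rotary system and sealer; the response
# is the failure load. The data values are simulated placeholders.
rng = np.random.default_rng(1)
loads = rng.normal(600, 150, 60)                 # placeholder, not study data
df = pd.DataFrame({
    "load": loads,
    "system": np.repeat(["ProTaper", "OneShape"], 30),
    "sealer": np.tile(np.repeat(["BioRoot", "AHPlus", "GuttaFlow"], 10), 2),
})
model = ols("load ~ C(system) * C(sealer)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))           # main effects and interaction
```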
Discussion
In this study, we compared the fracture resistance of roots instrumented with either the ProTaper or One Shape rotary system and filled with a silicate-, epoxy resin-, or silicone-based sealer. No silicone layer was used as an artificial periodontal ligament to counteract the vertical force; even with a simulated periodontal ligament, it remains difficult to apply a force parallel to the long axis of the tooth in a way that properly reproduces the clinical situation. 14 This is one of the limitations of such experiments. The strength test is a method that has been used to examine the effect of obturation materials on the fracture resistance of root canal-filled teeth. 2,15 Stresses generated within the root canal are transmitted along the root surface, where the interfacial adhesion fails. 15 In this study, a single load was applied parallel to the long axis of the teeth with a universal testing machine (Instron Corp., Canton, MA, USA), producing a more uniform stress distribution. Chemomechanical preparation of the root canal system is performed during root canal treatment to remove the infected pulp tissue. Excessive removal of tooth structure during mechanical preparation and excessive forces applied during obturation reduce the fracture resistance of root-filled teeth. 16 Several studies have shown decreased fracture resistance of roots after preparation with different rotary systems. 17,18 A round cross-sectioned root canal results in a more homogeneous stress distribution during obturation, increasing the fracture resistance. 19 Accordingly, the ProTaper NiTi and One Shape rotary systems were used in this study to produce round-shaped root canals. One Shape instruments showed significantly better fracture resistance than ProTaper instruments in the present study. This could be attributed to the One Shape files producing rounder cross-sectional canal shapes during preparation because of their "S" cross-section with two cutting angles. During root canal preparation, a relatively low concentration of NaOCl (2.5%) was used as the irrigant to minimize adverse effects on the mechanical properties of dentin. 20 A primary goal of root canal filling is to strengthen a weakened root against fracture. To achieve an ideal, three-dimensional root canal obturation, gutta-percha cones should be used with a root canal sealer. However, root canal obturation is also known as a major cause of vertical root fracture. In the lateral condensation technique, the spreader laterally compacts the gutta-percha and adapts it to the root canal wall under a consistent vertical load. 21 The lateral condensation technique was nevertheless used in this study because it is a widely recommended classic technique. 21 In another study, Ersoy and Evcil 22 investigated root canal sealers and obturation techniques; the single-cone technique showed significantly higher resistance to fracture than the lateral condensation technique. However, in this study there were no significant differences between the root canals filled with the lateral condensation technique and the other groups, which might be explained by the use of NiTi files. Many studies have shown that epoxy resin-based sealers exhibit better adaptation to root canal dentin than glass-ionomer and ZOE-based sealers. 23 It has been shown that the retention of the filling material can be mechanically improved, thereby strengthening the root canal dentin and increasing the fracture resistance. The effect of AH Plus root canal sealer on fracture resistance has been investigated in numerous studies. 23-25 In a previous study, AH Plus and MTA Fillapex showed significantly higher resistance to fracture than other conventional root canal sealers. 25 Given the adhesive properties and sealing ability of epoxy resin-based sealers, the effect of AH Plus on the fracture resistance of root-filled teeth was compared with that of other sealer types in this study. In our study, there were no significant differences between AH Plus and the other root canal sealers. BioRoot RCS, commonly classified as an MTA-type sealer, is a powder/liquid hydraulic tricalcium silicate-based cement recommended for single-cone or cold lateral condensation root filling. It has lower cytotoxicity than other conventional root canal sealers and may induce hard-tissue deposition. 13,27 Siboni et al. 28 showed that BioRoot RCS has a high calcium ion-releasing ability. A recent study showed that BioRoot RCS has a higher bioactivity than a ZOE sealer on human PDL cells. 29 According to the manufacturer, this sealer aims to integrate the ideal properties of Biodentine (Septodont) into a root canal sealer. 30 The silicone-based sealer GuttaFlow also releases calcium ions. In a previous study, BioRoot RCS and iRoot SP resulted in higher resistance to fracture than MTA Fillapex. In the present study, no significant differences were found between BioRoot RCS and the other conventional sealers.
Also, in that study the authors observed that the lateral condensation (LTC) technique resulted in more resistance to fracture than the single-cone (SC) technique, whereas we observed no significant differences. The differences between the results of studies might be explained by the type of sealer used, the brand of the sealer, and the experience of the practitioner. In the present study, no differences in fracture resistance were found between the roots filled with AH Plus, BioRoot RCS, and GuttaFlow or between the obturation techniques. Furthermore, all the fracture patterns observed after failure were irreparable.
Figure 2. The tested sealers.
Conclusion
While One Shape instruments resulted in significantly better fracture resistance than ProTaper instruments, all three root canal sealers examined in this study strengthened the prepared root canals, increasing their fracture resistance.
Enhanced Mott cell formation linked with IgM Fc receptor (FcμR) deficiency
In previous studies, Mott cells, an unusual form of plasma cells containing Ig-inclusion bodies, were frequently observed in peripheral lymphoid tissues of our IgM Fc receptor (FcμR)-deficient (KO) mouse strain. Because of discrepancies in the reported phenotypes of different Fcmr KO mouse strains, we here examined two additional available mutant strains and confirmed that such enhanced Mott cell formation is a general phenomenon associated with FcμR deficiency. Splenic B cells from Fcmr KO mice clearly generated more Mott cells than those from WT mice when stimulated in vitro with LPS alone or a B-1, but not B-2, activation cocktail. Nucleotide sequence analysis of the Ig variable regions of a single IgMλ+ Mott hybridoma clone developed from splenic B-1 B cells of Fcmr KO mice revealed near (VH) or complete (Vλ) identity with the corresponding germline gene segments and the addition of six or five nucleotides at the VH/DH and DH/JH junctions, respectively. Transduction of an FcμR cDNA into the Mott hybridoma significantly reduced the proportion of cells containing IgM-inclusion bodies, with a concomitant increase in IgM secretion, leading to binding of the secreted IgM to FcμR expressed on the Mott transductants. These findings suggest a regulatory role of FcμR in the formation of Mott cells and IgM-inclusion bodies.
Introduction
Mott cells, also called morular or grape cells, are bizarre plasma cells containing Ig-inclusion bodies, termed Russell bodies, in the cytoplasm [1, 2]. Ultrastructurally, the electron-dense Russell bodies build up within the cisternae of dilated rough ER and represent a cellular response to the accumulation of abundant non-degradable Igs that fail to exit the ER [3-7]. Mott cells are rarely detected in normal tissues but are frequently observed in various pathological conditions, including autoimmune disorders, B-cell neoplasms, and chronic infections [1, 2, 5, 6, 8-14]. Many different causes or factors have been implicated in the formation of Russell bodies and Mott cells. These include (i) structural alterations of the Ig heavy chain (HC), especially truncation of the CH1 domain, preventing its appropriate processing; (ii) impairment of Ig light chains (LCs), which normally prevent Ig HC aggregation, as shown in Ig LC-deficient mice; and (iii) an inability to degrade or export Ig, leading to its aggregation [15-18]. The Fc receptor for IgM (FcμR), the newest member of the FcR family, is a type I transmembrane sialoglycoprotein with an Mr of ∼60 kDa. Unlike FcRs for switched Ig isotypes (e.g., FcγRs, FcεRs, FcαR, Fcα/μR), FcμR is selectively expressed by lymphocytes: B, T, and NK cells in humans and only B cells in mice, although several articles have reported FcμR expression by non-B cells in mice [19-23]. It exclusively binds IgM, either J-chain-containing pentameric or J-chain-lacking hexameric IgM, with a high affinity of ∼10 nM [19, 24]. FcμR binds the Fc portion of IgM more efficiently when the IgM recognizes a membrane component (such as a self-antigen) via its Fab region on the same cell surface (cis engagement) than when it binds the Fc portion of IgM in solution or fluid (trans engagement) [25].
In the mouse thymoma line BW5147 stably expressing human or mouse FcμR, the human receptor binds IgM irrespective of the stage of cell growth (constitutive binding), whereas the mouse receptor binds IgM only before the early log stage of cell growth (transient binding), despite there being no significant changes in receptor levels during cell culture [19, 26]. By taking advantage of this difference in ligand binding, mutational analysis of human FcμR revealed that at least three sites in the Ig-like domain (Asn66 in CDR2, Lys79 to Arg83 in the DE loop, and Asn109 in CDR3) are responsible for its constitutive ligand binding [27, 28]. To determine the in vivo function of FcμR, four different laboratories have developed five different Fcmr-deficient (KO) mouse strains, and at least eight different groups of investigators have examined the resulting phenotypes [see review 29]. Some clear discrepancies have been noted, particularly regarding FcμR functions in non-B cell populations, which appear to be due to various factors, including differences in the Fcmr exons targeted to generate the mutant mice [29]. However, one common feature among these different mutant mice is the impairment of B-cell tolerance, as evidenced by the propensity to produce autoantibodies of both IgM and IgG isotypes [22, 23, 29-32]. In previous studies, Mott cells were increased in the spleen and LN tissues of our Fcmr KO mice [33]. The aim of the present study was to determine whether such enhanced Mott cell formation is a general phenomenon associated with FcμR deficiency or a characteristic unique to our mutant strain.
Enhanced Mott cell formation in three different strains of FcμR-deficient mice
To determine the association of Mott cell formation with FcμR deficiency, we examined the frequency of Mott cells in lymphoid tissues from three available strains of Fcmr KO mice. These mutant mice had been developed by different groups using distinct targeting strategies (see Fig. 1): (i) our strategy (KO-HO) involved germline deletion of Fcmr exons 2-4, encoding the Ig-like domain and the first and second stalk regions, respectively, and most of intron 4 [22, 23]; (ii) the KO-NB strains involved both generalized and B cell-specific conditional deletion of exon 4 (second stalk region) [31]; and (iii) the KO-KHL strain involved B cell-specific conditional deletion of exons 4-7 (second stalk, transmembrane, and first and second cytoplasmic regions, respectively) [32]. Mott cells and their inclusion bodies or extracellular spherons were readily identified by their strong staining with the periodic acid-Schiff (PAS; Sigma-Aldrich) reagent in the splenic red pulp, because PAS does not stain periodate-fixed erythrocytes, so cellular changes in the splenic red pulp were easily detectable (Fig. 2). Mott cells with variable morphologies were scattered in the red pulp and in the medulla and extrafollicular areas of LNs (Figs. 2A and B). They were sometimes clustered (Fig. 2C), suggesting possible local expansion. Mott cells were also often observed in the splenic and nodal serosal fatty tissues (Fig. 2D), consistent with the notion that Mott cells derive from B-1 B cells present in the peritoneal cavity [11, 34, 35]. Mott cells were also found in the Peyer's patches (Fig. 2E) and the medullary cavity of the BM (Fig. 2F); the latter finding raises the possibility that they were either generated in situ or had migrated there.
The frequency of Mott cells in peripheral lymphoid tissues was increased in all three strains of Fcmr KO mice compared with the corresponding control mice (Fig. 3). In 20-week-old female KO-NB mice, both generalized (Cmv/KO) and B cell-specific (Cd19/KO) FcμR deficiency resulted in a significantly higher frequency of Mott cells in the spleen than in WT controls (p < 0.04 and 0.01, respectively). The same was true for Mott cells in the LNs of B cell-specific Fcmr KO mice (p < 0.02). In the younger, 15-week-old female KO-KHL strain, there was also an increasing trend of splenic Mott cell formation, with B cell-specific Fcmr KO (Cd19/KO) > control Cd19-driven Cre expression (Cd19 Cre+/−) > the floxed Fcmr control (Fcmr fl/fl), but these differences, particularly Cd19/KO versus Fcmr fl/fl, were not statistically significant (p = 0.09). In much older (60-week) male KO-HO mice, the frequency of Mott cells in the spleen was markedly increased compared with WT controls (p < 0.001). Quantitative assessment of Mott cells in the BM cavity was unsuccessful because the irregular medullary shapes made the areas too difficult to assess. Collectively, these findings indicated that (i) Mott cells were present in peripheral lymphoid organs (spleen, LNs, Peyer's patches), in serosal fatty tissues, and, to a lesser extent, in the BM; (ii) Mott cell formation was clearly increased in Fcmr KO mice irrespective of the targeted exons and deletion strategies; and (iii) these increases tended to be age-dependent.
Generation of Mott cells in vitro and their immortalization by hybridoma technology
To determine whether Mott cells are generated in vitro upon appropriate stimulation, splenic B cells from Fcmr KO (KO-HO) and WT mice were activated for 4 days at 37 °C with three different stimuli: (i) LPS alone, (ii) LPS/dextran-anti-IgD/IL-4/IL-5 for preferential stimulation of B-1 B cells, and (iii) anti-CD40/dextran-anti-IgD/IL-4/IL-5 for preferential stimulation of B-2 B cells. Significantly fewer IgM-containing cells were seen in Fcmr KO B-cell cultures than in WT control cultures stimulated with LPS alone (p < 0.05) or with the B-1 cocktail (p < 0.05), and a similar, but statistically insignificant, trend was observed with the B-2 cocktail (Fig. 4A, left panel). By contrast, the frequency of cells containing IgM-inclusion bodies, i.e., Mott cells, in mutant B-cell cultures was clearly increased compared with WT control cultures when stimulated with LPS alone (p < 0.001) or the B-1 stimulation cocktail (p < 0.01), but not with the B-2 stimulation cocktail (middle). The proportion of κ+ cells among total IgM+ cells was comparable in all B-cell cultures (right). A similar increase in Mott cells was also observed with sorted B-1, but not B-2, B-cell cultures (not shown). These in vitro findings were thus consistent with the enhanced Mott cell formation in vivo in Fcmr KO mice described above and with the previous findings that Mott cells are of B-1 B-cell origin [11, 35]. Next, to immortalize Mott cells, splenic B-1 B cells (10^6 cells) were enriched from Fcmr KO and WT control mice based on their co-expression of CD19 and CD5 (see Supporting Information Fig. S1) and activated ex vivo with LPS before cell fusion.
Sixteen and six B-1 hybridoma clones, corresponding to approximately 3.3% and 1.2% of the total plated wells, were thus generated from Fcmr KO and WT mice, respectively. Even though Mott cells contain unique inclusion bodies in their cytoplasm, we could not distinguish Mott cell hybridomas from others by inverted phase-contrast microscopy. It is also noteworthy that Mott cells in single-cell suspensions of lymphoid tissues could not be identified by flow cytometry based on their forward- and side-scatter characteristics. The identification of Mott cell hybridomas thus relied on PAS staining and staining of intracytoplasmic Ig in cell smears. Only one hybridoma clone (KO-03, μλ), derived from mutant mice, exhibited Mott cell morphology, characterized by the presence of inclusion bodies strongly positive for PAS staining and for staining with fluorochrome-labeled anti-μ and anti-λ antibodies (Fig. 4B). Of the 16 mutant B-1 hybridomas, the Ig isotype distribution was 10 IgM (8 κ, 2 λ), 1 IgG3κ, 1 IgG2bκ, and 2 Ig non-producing, whereas among the six WT B-1 hybridomas there were 4 IgM (3 κ, 1 λ), 1 IgG2bκ, and 1 Ig non-producing, as determined by both cytoplasmic Ig staining and enzyme-linked immunosorbent assay of culture supernatants. To assess the self-reactivity of these B-1 B-cell hybridomas, Ag8.653 cell smears (fixed with 95% ethanol/5% glacial acetic acid) were used for indirect immunofluorescence analysis of culture supernatants. Four IgMκ-containing supernatants (three from mutant and one from WT mice) reacted with intracellular or plasma membrane components of Ag8.653. Consistent with the finding that most Mott hybridomas secrete small amounts of IgM [6], the KO-03 Mott hybridoma secreted detectable amounts of IgMλ into the culture supernatant (∼0.6 μg/mL), but this IgMλ did not react with Ag8.653 cellular components. These findings indicated the generation of a single Mott hybridoma by fusing splenic B-1 B cells from Fcmr KO mice with the Ag8.653 plasmacytoma line.
Few mutations in the Ig variable regions of the FcμR-deficient Mott cell hybridoma
To determine the nucleotide sequences of the Ig HC and LC variable regions (Ighv and Iglv) of the Mott hybridoma, first-strand cDNA was generated from total RNA by RT-PCR using primer sets for universal VH and Cμ1 and for Vλ2 and Cλ2. Nucleotide sequence analyses of the cloned PCR products revealed that the KO-03 clone utilized V1-55*01, D4-1*01, and J3*01 for its Ighv (Fig. 5A) and V2*02 and J2*01 for its Iglv (Fig. 5B). There were only three nucleotide mutations in the VH region (∼99.0% identity with the germline V1-55*01) and six and five N-nucleotide additions at the VH/DH and DH/JH junctions, respectively. The JH gene segment was identical to the germline J3*01. The variable region of the Ig λ2 chain was 100% identical to the germline V2*02 and J2*01 sequences. The findings of few or no mutations in Ighv and Iglv and of N-nucleotide addition are consistent with the notion that Mott hybridomas are of adult B-1 B-cell origin with few mutations [35, 36].
Reversal of the Mott cell phenotype by introduction of the FcμR
To determine whether expression of the FcμR can revert the Fcmr-deficient Mott hybridoma to a normal plasma cell phenotype, the IgMλ+ Mott hybridoma clone (KO-03) was transduced with a retroviral expression construct containing both mouse FcμR and GFP cDNAs (FcμR/GFP) or only GFP cDNA as an empty-vector control.
After enriching GFP+ cells by fluorescence-activated cell sorting (FACS) and establishing individual stable transductants, the frequency of cells containing IgM-inclusion bodies in their cytoplasm was assessed by immunofluorescence microscopy at 3 weeks post-transduction. As shown in Fig. 6A, the frequency of cells containing IgM-inclusion bodies among the FcμR/GFP KO-03 transductants was significantly diminished compared with the GFP-only KO-03 transductants (p < 0.01) or the original KO-03 Mott hybridoma (p < 0.005). No significant difference in the frequency of cells containing IgM-inclusion bodies was observed between the KO-03 hybridoma and the GFP KO-03 transductants. Interestingly, the concentration of IgM secreted into the culture media was significantly higher in the FcμR/GFP transductants than in the GFP transductants and the Mott hybridoma, although the growth of the three cell lines was essentially similar (Fig. 6B). As expected, cell-surface FcμR was expressed by the FcμR/GFP transductants but not by the control GFP transductants (Fig. 6C). Only the FcμR/GFP KO-03 transductants exhibited weak cell-surface IgM staining (Fig. 6D). Since these Ag8-derived transductants did not express CD79a (Igα)/CD79b (Igβ) (not shown), which are required for cell-surface expression of membrane IgM, the observed surface IgM staining must result from the binding of pentameric IgM secreted by the FcμR/GFP KO-03 transductants to FcμR (cytophilic IgM). Given that the assessment of IgM binding by mouse FcμR is usually difficult, this finding of cytophilic IgM was remarkable. Collectively, these findings strongly suggest a regulatory role of FcμR in the formation of Mott cells containing IgM-inclusion bodies.
Discussion
Conflicting results exist regarding the phenotypes reported in five different FcμR-deficient mouse strains; one possible explanation for such discrepancies is differences in the gene-targeting strategies [29]. The aim of the present study was to determine whether the enhanced Mott cell formation observed in our mutant strain [33] is a generalized phenomenon associated with FcμR deficiency or a phenomenon unique to our strain. Comparative histological analysis was thus performed with three available strains of Fcmr KO mice and their corresponding controls. The results showed a clear association of enhanced Mott cell formation with FcμR deficiency. Significantly higher frequencies of Mott cells were generated from Fcmr KO mice than from WT controls when their splenic B cells or sorted B-1 B cells were stimulated in vitro with LPS alone or with the B-1 stimulation cocktail. By contrast, the B-2 stimulation cocktail did not generate Mott cells from splenic B cells or sorted B-2 B cells of either mouse group, consistent with the previous findings of Mott cells of B-1 cell origin [11, 35]. A single IgMλ+ Mott hybridoma clone was developed from these cultures.
Mott cells containing Ig-inclusion bodies are rarely observed in normal lymphoid tissues but are found in various pathological conditions, including neoplasms, inflammatory diseases, and autoimmune disorders [1, 2, 5, 6, 8, 14]. B-1, but not B-2, B cells were shown to generate Mott cells in vitro in the presence of LPS or IL-5 at a much higher frequency in autoimmune NZB and NZB/W F1 mice than in non-autoimmune NZW mice [11].
By using NZB/W F1 × NZW backcross mice, the locus contributing to Mott cell formation, called Mott-1, was mapped to a satellite marker locus between Mit48 and Mit70 on chromosome 4 of NZB mice [11]. Mott cells were also frequently observed in autoimmune "viable motheaten" mice, which have a defect in the protein tyrosine phosphatase SHP1/PTPN6 on chromosome 6 [5, 6]. B cell-specific deletion of Ptpn6 promoted B-1 B-cell development, systemic autoimmunity, and increased Mott cells, as in motheaten mice [36], suggesting a linkage of SHP1/PTPN6 deficiency with Mott cell formation. Intriguingly, T cells appear to play a role in Mott cell formation, because Mott cells were rare in neonatally thymectomized motheaten mice and in athymic nude NZB/W F1 mice [5, 11]. Thus, multiple genes or loci are apparently involved in the formation of Mott cells and Russell bodies. There is a precedent in a transgenic mouse model in which certain autoreactive B-cell hybridomas accumulate IgM in the Golgi due to the formation of immune complexes between IgM and glycosaminoglycan and release large spherical IgM complexes, termed spherons, of up to 2 μm in diameter [37].
As to the molecular basis for the enhanced Mott cell formation in Fcmr KO mice, given their propensity to produce autoantibodies and the predominance of cis engagement of FcμR on the same cell surface, we proposed the following model [29]. A given B cell expresses an IgM BCR with self-reactivity to an intracellular membrane component but may not interact with the corresponding antigen because of its low affinity. When the cell receives a signal to switch from μm to μs exon usage (e.g., via a Toll-like receptor), along with the synthesis of J chain, the resultant pentameric IgM accumulates inside the vesicles during translocation from the ER to the Golgi, where it binds its cognate membrane antigen via the Fab regions and FcμR via its Fc portion. This cis engagement of self-antigen/secreted pentameric IgM/FcμR within the vesicles prevents further development of such autoreactive B cells, thereby contributing to peripheral tolerance. Mott cells containing Ig-inclusion bodies are byproducts of this process in the absence of FcμR. In the present study, we could not fully validate this model, but we did provide evidence that FcμR regulates the formation of Mott cells and Russell bodies. Our initial concern was that the enhanced Mott cell formation we previously observed was a finding restricted to our mutant strain, because this phenotype had not been described in the other mutant strains. In all three Fcmr KO strains, with different targeted exons (exons 2-4, exon 4 only, or exons 4-7) and deletion strategies (generalized vs. B cell-specific), the incidence of Mott cells in peripheral lymphoid tissues was increased compared with the control mice. The enhanced Mott cell formation is thus a general phenomenon associated with FcμR deficiency. Splenic B cells or B-1 B cells from Fcmr KO mice indeed generated more Mott cells than those from WT control mice upon in vitro activation with LPS or a B-1 stimulation cocktail, in agreement with the previous findings in autoimmune NZB strains of mice [11]. A single IgMλ+ Mott hybridoma clone (KO-03) was developed from LPS-stimulated splenic B-1 B cells (10^6 cells) of Fcmr KO mice and carried a few mutations in Ighv, with several N additions at the VH/DH and DH/JH junctions, and a germline Iglv.
The paucity of somatic hypermutations in the Ig HC variable region (V1-55*01) is characteristic of B-1 B cells, consistent with the findings reported by others [35]. Despite the polyreactive nature of B-1 B cell-derived IgM, reactivity of IgMλ (KO-03) with the cytoplasm of Ag8.653 cells was undetectable by immunofluorescence analysis. Transduction of FcμR cDNA into this Mott hybridoma resulted in a clear reduction of the frequency of cells carrying IgM-inclusion bodies and in the acquisition of secreted IgM through FcμR on the cell surface (cytophilic IgM). Since we have experienced difficulty in clear-cut assessments of IgM ligand binding by mouse FcμR, unlike the human receptor, this finding of cytophilic IgM was noteworthy. The reversal of the Mott phenotype by introduction of FcμR strongly suggests the regulation of Mott cell and Russell body formation by FcμR.

Mice and ethics approval

The Fcmr −/− (Fcmr KO) strain of C57BL/6 (B6) mouse origin, which was originally developed at the laboratory of Dr. Hiroshi Ohno [23] and designated KO-HO, was generated from its B6-backcrossed Fcmr +/− frozen embryos, kindly provided by Dr. Takashi Kanaya (RIKEN Center for Integrative Medical Sciences, Yokohama, Japan), at the MPI for Infection Biology in Berlin. The Fcmr genotype of the resulting offspring was determined by genomic PCR of tail DNA using a diagnostic set of primers: 5′-ctgtagggctgaggctgggctggtgacagg-3′ (forward), 5′-cgatggctaatatggcaatagtatgggatg-3′ (reverse), and 5′-cttctctcccatagtgtgggccatggtggc-3′ (reverse), corresponding to the 5′- and 3′-flanking regions of Fcmr exons 2 and 5, respectively, as described [22]. All studies involving animals were conducted with approval of the Landesamt für Gesundheit und Soziales (LaGeSo) under permission number H 0126/16.

Histopathological analysis

Formalin-fixed, paraffin-embedded tissue blocks (spleen, LNs, intestine, and post-decalcified long bones) were prepared from the Fcmr KO-HO and WT control mice. Similar tissue blocks of spleen and lymph nodes from two additional different strains of Fcmr KO B6 mice, as well as their corresponding controls, were also provided for comparative analyses by each investigator: Dr. Nicole Baumgarth (NB) and Dr. Kyeong-Hee Lee (KHL). The age and sex of the analyzed mice were: KO-HO, 60 weeks old and male [22,23]; KO-NB, 20 weeks old and female [31]; and KO-KHL, 15 weeks old and female [32]. Tissue sections of approximately 4 μm in thickness were cut, deparaffinized, and stained with PAS (Sigma-Aldrich). Strongly PAS-positive plasma-like cells and spheres or Russell bodies were identified as Mott cells using a Keyence Biorevo BZ-9000 microscope (Keyence, Neu-Isenburg), and the frequency of Mott cells in a given area was estimated by computer.

B-1 cell hybridomas

To immortalize Mott cells, similarly sorted CD19+/CD5+ B-1 B cells (10^6 cells) from three Fcmr KO-HO and six WT B6 mice (21-24 weeks, females) were resuspended in 1 mL of 20% FCS complete medium and stimulated with LPS at 50 μg/mL for 1 day. The resultant LPS-activated B-1 cells were fused with a threefold excess number of Ig-nonproducing P3-X63-Ag8.653 cells [38] and were plated into 96-well flat-bottom plates at approximately 2 × 10^3 B-1 B cells/0.2 mL/well along with B6 peritoneal lavage cells as feeders.
For detection of Mott cells, hybridoma cells were cytocentrifuged onto glass slides, and the resultant cell smears were stained for intracytoplasmic Igs with fluorochrome-labeled antibodies specific for each Ig isotype and for inclusion bodies with PAS staining, as described [33].

Sequence analyses of Ig HC and LC variable regions of Mott hybridomas

The nucleotide sequence of the Ig H and L chain V regions of the Mott hybridoma was determined by RT-PCR. In brief, 2 μg of the total RNA isolated from hybridoma cells by Trizol was converted to first-strand cDNA by using the SuperScript™ IV First-Strand Synthesis System (Invitrogen) with oligo(dT)18 primers. The resultant first-strand cDNA was used as a template for amplification of the VH-Cμ1 and Vλ2-Cλ2 regions by using a set of primers [(i) universal coding VH (5′-aggtsmarctgcagsagtcwgg-3′) and noncoding Cμ1 (5′-ggctctcgcaggagacgagg-3′), and (ii) coding Vλ2 (5′-gccatttcccaggctgttgtgactcagg-3′) and noncoding Cλ2 (5′-ggtgagwgtgggagtggacttgggc-3′)] with Platinum SuperFi II DNA polymerase (Invitrogen). In IUPAC nucleotide code, m = a or c; r = a or g; s = g or c; w = a or t. The amplification was performed as follows: denaturation at 94°C for 1 min; 35 cycles of denaturation at 94°C for 20 s, annealing at 66°C (for VH-Cμ1) or 70°C (for Vλ2-Cλ2) for 20 s, and extension at 72°C for 80 s; and final extension at 72°C for 10 min. The amplified PCR products of VH-Cμ1 and Vλ2-Cλ2, with expected sizes of approximately 400 and 360 bp, respectively, were gel purified and subcloned into the pCR-Blunt II-TOPO vector before nucleotide sequence analysis of the inserted PCR products in both strands by using Sp6 and T7 primers. The nucleotide sequence was analyzed with the IMGT/V-Quest [39] and IgBlast (NCBI) programs.

Transduction

To determine the effect of mouse FcμR on the Ig-inclusion bodies in the Fcmr KO B-1 B cell-derived Mott hybridoma, a bicistronic retroviral expression vector pRetroX-sGreen (Takara) containing mouse FcμR cDNA (FcμR/GFP) or no insert cDNA (GFP only) as a control was transfected into a PLAT-E packaging cell line before transduction into Mott hybridoma cells, as described [19]. After enriching GFP+ cells by FACS, Mott hybridomas before and after transduction with mouse FcμR cDNA were examined for their expression of FcμR and IgM on the cell surface by flow cytometry.

Flow cytometric analysis

For surface expression of FcμR and IgM, a mixture of original GFP− KO-03 Mott hybridoma cells and FcμR/GFP or GFP cDNA-transduced KO-03 Mott hybridoma cells was incubated with biotin-labeled mouse anti-mouse FcμR mAb (MM3 clone, γ1κ isotype) or rat anti-mouse μ mAb (RMM-1 clone, γ2aκ), washed, and then incubated with PE-labeled streptavidin. Isotype-matched, irrelevant mAbs were included as controls. Stained cells were examined on a BD FACSCanto II flow cytometer with FACSDiva software (BD Bioscience), and flow cytometric data were analyzed with FlowJo software (Becton Dickinson).

Acknowledgments: We thank John F. Kearney and Shozo Izui for critical reading and suggestions. Open access funding enabled and organized by Projekt DEAL.

Conflict of interest: The authors declare no conflict of interest.

Author contributions: KHL and NB provided the tissue blocks from their Fcmr KO and control mice. UK, PKJ, HK, and FM developed Fcmr KO mice from their frozen Fcmr +/− embryos. KAQ, KH, PMA, and HK conducted histological analysis (Fig. 2 and 3). HK, PKJ, KAQ, and PMA made B-1 B-cell hybridomas (Fig. 4 and Supporting Information Fig. S1), analyzed their nucleotide sequence (Fig.
5), and determined their phenotypes (Fig. 6). HK, FM, and AR wrote the paper and made Fig. 1. All authors contributed to the article and approved the submitted version.

Data availability statement: The data that support the findings of this study are available on request from the corresponding author.
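As a side note on the degenerate primers listed in the Methods, the IUPAC codes defined there (m, r, s, w) can be expanded mechanically into the concrete sequences a degenerate primer encodes. The short Python sketch below is our own illustration of that expansion, not part of the original protocol; only the primer string and the IUPAC definitions are taken verbatim from the text.

```python
from itertools import product

# IUPAC degeneracy codes as defined in the Methods (m, r, s, w), plus the
# four unambiguous bases; other IUPAC codes are omitted because the
# primers above do not use them.
IUPAC = {"a": "a", "c": "c", "g": "g", "t": "t",
         "m": "ac", "r": "ag", "s": "gc", "w": "at"}

def expand(primer):
    """Enumerate every concrete sequence encoded by a degenerate primer."""
    return ["".join(p) for p in product(*(IUPAC[base] for base in primer))]

# Universal coding VH primer from the Methods: 5'-aggtsmarctgcagsagtcwgg-3'
variants = expand("aggtsmarctgcagsagtcwgg")
print(len(variants))   # 5 two-fold degenerate positions -> 2**5 = 32 sequences
print(variants[0])     # first concrete primer sequence
```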
2023-04-27T06:17:30.571Z
2023-04-26T00:00:00.000
{ "year": 2023, "sha1": "e8b3e2b042601979e120c29c4f7a5d5c5c0e8fba", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/eji.202250315", "oa_status": "HYBRID", "pdf_src": "Wiley", "pdf_hash": "cd9cc1c59d31a1000759f9295a89c2f8b816a1df", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
265242087
pes2o/s2orc
v3-fos-license
Anti-tau intrabodies: From anti-tau immunoglobulins to the development of functional scFv intrabodies

Over the last decade, there has been a growing interest in intrabodies and their therapeutic potential. Intrabodies are antibody fragments that are expressed inside a cell to target intracellular antigens. In the context of intracellular protein misfolding and aggregation, such as tau pathology in Alzheimer's disease, intrabodies have become an interesting approach as there is the possibility to target early stages of aggregation. As such, we engineered three anti-tau monoclonal antibodies into single-chain variable fragments for cytoplasmic expression and activity: PT51, PT77, and hTau21. Due to the reducing environment of the cytoplasm, single-chain variable fragment (scFv) aggregation is commonly observed. Therefore, we also performed complementarity-determining region (CDR) grafting into three different stable frameworks to rescue solubility and intracellular binding. All three scFvs retained binding to tau after cytoplasmic expression in HEK293 cells, in at least one of the frameworks. Subsequently, we show their capacity to interfere with either mouse or mutant human tau aggregation in two different primary mouse neuron models and organotypic hippocampal slice cultures. Collectively, our work extends the current knowledge on intracellular tau targeting with intrabodies, providing three scFv intrabodies that can be used as immunological tools to target tau inside cells.

INTRODUCTION

Intracellular tau aggregation is a common feature of several neurodegenerative disorders, collectively termed tauopathies [2,3]. This has led to tau being one of the main therapeutic targets to be explored in the field of AD. As such, many different treatment approaches have been investigated, with immunotherapy as one of the main modalities [4]. Immunotherapeutic strategies aim to halt disease progression by capturing extracellular forms of tau, based on the tau spreading hypothesis [5]. However, tau aggregation is an intracellular phenomenon, thus therapeutic modalities that can act inside the cell may be more effective. In this regard, a small phase I study showed that reducing intracellular tau levels using antisense oligonucleotides (ASOs) was safe, well tolerated, and reduced aggregated tau levels, as measured by positron emission tomography (PET), in mild AD patients [6]. Even though cognitive benefits have not been published yet, these results provide first evidence that intracellular tau targeting strategies may have promise.

A potential alternative modality to ASOs and similar strategies is the use of intrabodies. Intrabodies are antibody fragments that are expressed inside of cells, therefore harnessing the affinity and specificity characteristic of immunoglobulins (IgGs) to target intracellular proteins [7]. As genetically encodable proteins themselves, intrabodies have the advantage that they can be expressed in a cell- or tissue-specific manner and can be developed to target specific conformations and post-translational modifications [8]. Moreover, as the main delivery strategy being evaluated is gene therapy via adeno-associated virus (AAV) vectors, a single administration could be enough for long-term, sustained therapeutic effect [9,15,16]. In the current work, we report on the development of three anti-tau intrabodies in the single-chain variable fragment (scFv) format, derived from monoclonal antibodies (mAbs) PT51, PT77, and hTau21
[17]. These scFvs bind distinct epitopes at the proline-rich domain (PRD) and C terminus of the tau protein, including a phosphorylated epitope (pS199/pS202). We demonstrate that these scFv intrabodies interfere with K18-mediated aggregation of human tau with the P301L mutation in primary mouse cortical neurons and organotypic hippocampal slice cultures (OHSCs). Additionally, scFv intrabody PT77, which binds to an epitope around pS199/pS202, was also able to reduce AD-seed-mediated aggregation of mouse tau in primary neurons. Collectively, our work extends the current knowledge on which regions can be targeted to interfere with intracellular tau aggregation, providing three scFv intrabodies that can be used as immunological tools to target tau inside cells.

RESULTS

mAbs were successfully converted to scFvs

mAbs PT51, PT77, and hTau21 (Figure 1A), all three developed from the same immunization campaign [17], were converted to scFvs. All three scFvs were expressed in the Escherichia coli periplasm or secreted from HEK293 cells. Analysis of E. coli lysates and culture medium from HEK293 cells by western blot did not show major differences in scFv expression levels between the different linkers used (Figures 1B and 1G), with a few exceptions. In the E. coli expression system, when the variable domains were connected with the GS linker, (GGGGS)3, scFv hTau21 was detected mostly as a single band, with only a very faint band of potential dimers. When the GSEK linker (GGSEGKSSGSGSESKSTGGS) is used, scFv hTau21 is detected as multiple bands, with the strongest bands being around 28 and 49 kDa, suggesting that it may be present as dimers. With the GS linker, some potential dimer formation is observed for scFvs PT51 and PT77, albeit with a low signal intensity.

Independent of the expression system, scFvs hTau21 and PT51 retained binding to recombinant tau (Figures 1C, 1D, 1H, and 1I) without exhibiting non-specific binding to a negative control protein (data not shown). After periplasmic expression in E. coli, scFv PT77, which was derived from a phospho-tau-specific mAb, reacted with both recombinant tau and enriched paired helical filaments (ePHFs) (Figures 1E and 1F), with a very pronounced signal observed with the GSEK linker. Reaction with recombinant tau, i.e., indicative of an apparent loss of phospho-specificity, was unexpected. However, upon expression and secretion from HEK293 cells, scFv PT77 bound only to ePHF with no observed cross-reactivity to recombinant tau (Figures 1J and 1K), indicating that phospho-specificity was retained.

scFvs PT51 and hTau21 retain tau binding as an intrabody

After expression as an intrabody in the cytoplasm of HEK293 cells, expression levels were again assessed via western blotting. Intrabodies PT51 and hTau21 were detected in both linker formats, while intrabody PT77 could not be detected (Figure 2A).

Evaluation of the cell lysates by enzyme-linked immunosorbent assay (ELISA) revealed that intrabodies hTau21 with the GSEK linker and PT51 with both linkers retained tau binding on ELISA, while no binding was detected for intrabody PT77 (Figures 2B-2E). Low expression levels for intrabody PT77 could account for the absence of signal on ELISA. On the other hand, intrabody hTau21 with the GS linker was well expressed but did not react on ELISA, illustrating that other factors, such as stability and folding, may also be involved.
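Since the GS and GSEK linker formats recur throughout these results, a minimal sketch may help fix the construct layout in mind. The linker strings below are taken verbatim from the text; the VH/VL domain sequences are hypothetical placeholders, as the actual PT51/PT77/hTau21 sequences are not given here.

```python
# Linker sequences exactly as given in the text.
GS_LINKER = "GGGGS" * 3                    # (GGGGS)3, 15 residues
GSEK_LINKER = "GGSEGKSSGSGSESKSTGGS"       # 20-residue linker of Bird et al. [45]

def build_scfv(vh, vl, linker, orientation="VH-VL"):
    """Assemble a single-chain construct from two variable domains.
    Both the VH-VL and the later VL-VH designs are simple concatenations
    of the two domains around a flexible linker."""
    if orientation == "VH-VL":
        return vh + linker + vl
    return vl + linker + vh

# Placeholder domain stubs (illustration only, not real antibody sequences).
vh = "EVQLVESGGGLVQPGGS"
vl = "DIQMTQSPSSLSASVGD"
print(build_scfv(vh, vl, GS_LINKER))             # original VH-VL design
print(build_scfv(vh, vl, GSEK_LINKER, "VL-VH"))  # redesigned VL-VH format
```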
To evaluate if loss of binding might be the result of misfolding and/or aggregation due to the suboptimal environment of the cytoplasm, cytoplasmic solubility was evaluated via confocal microscopy. Intrabodies PT51 and PT77 showed a punctate staining potentially indicative of aggregation (Figures 3B1, 3B3, and 3B4). A diffuse and even distribution in the cell was observed for intrabody hTau21, independently of the linker used, potentially indicative of soluble expression (Figures 3B2 and 3B5). Linker influence on solubility was seen for intrabody PT77, which appears to be soluble when the GSEK linker is used to connect the variable domains but shows some puncta when the GS linker is used (Figures 3B4 and 3B5).

CDR grafting can rescue intrabody binding and cytoplasmic solubility

To overcome low solubility and stability of the intrabodies in the cytoplasm, the three intrabodies were CDR grafted into different frameworks [19,20]. The intrabodies were redesigned in the VL-VH orientation and only the GSEK linker was used, based on its better performance in ELISA binding. Additionally, versions without the cysteines participating in disulfide bonds were also evaluated. Intrabody PT51 became soluble in the VL-VH orientation, with solubility being maintained for all four CDR grafts (Figure 3B). As for intrabodies PT77 and hTau21, solubility was maintained after switching the orientation of the variable domains and the CDR grafting (Figure 3B). Replacement of the cysteines participating in the disulfide bonds did not seem to affect solubility as assessed by confocal microscopy.

Figure 1. Properties of the selected mAbs and characterization of derived scFvs expressed in the periplasm of E. coli and through the secretory pathway of HEK293 cells. (A) Summary table with Fab (or mAb) affinity against human tau and tau paired helical filaments (PHFs) from AD brains; efficacy in immunoprecipitating (IP) tau with aggregation capabilities from brain homogenates of P301S mice and AD patients. Adapted from Vandermeeren et al. [15]. (B) scFvs were expressed in the periplasm of E. coli and cleared cell lysates were used for characterization. Expression levels were determined by western blotting and arbitrary unit (AU) concentrations were determined based on western blotting quantification. Images are representative of two independent experiments. (C and D) All lysates were tested for binding against recombinant human tau on ELISA, starting from 1 AU of scFv. Detection in western blotting and ELISA was done with an anti-HA tag HRP-labeled antibody. (E and F) ePHF coating was used for phospho-specific scFv PT77. Results are shown as mean of two independent experiments. (G) scFvs were expressed as secreted protein from HEK293 cells and culture medium was used for characterization. Expression levels were determined by western blotting. (H and I) All samples were tested against recombinant human tau on ELISA, starting from undiluted culture medium. (J and K) ePHF coating was used for phospho-specific scFv PT77. Results are shown as mean ± SD of three independent experiments. Detection was done with an anti-FLAG-tag HRP-labeled antibody.

Binding was evaluated on ELISA and using a nuclear translocation assay based on Zhou et al.
[21]. CDR grafting did not improve ELISA binding for intrabody PT51, having a negative impact instead (Figure 4). However, in the nuclear translocation assay (referred to as the NLS assay), three of the PT51 grafted versions (F8, A48-4D5, and A48-4D5 SS− [scFv without the cysteines that participate in disulfide bonds]) showed a clear nuclear localization in the presence of tau-NLS (Figure 5A). With intrabody PT77, the opposite was observed. Virtually all six CDR-grafted variants rescued binding to ePHF on ELISA (Figure 4), while only the 4D5 SS− graft showed some nuclear localization (Figure 5A). Lack of binding in the NLS assay was not due to absence of phosphorylated tau-NLS, as we confirmed its presence with a Meso Scale Discovery (MSD) assay (Figure 5B). Regarding intrabody hTau21, CDR grafting did not improve binding on ELISA (Figure 4) or in the NLS assay (not shown). For this intrabody, only the original construct was positive for binding in both assays (Figures 4 and 5A).

Intrabody PT77 SS− 4D5 interferes with mouse tau aggregation induced by AD-tau seeds

To evaluate if the intrabodies can interfere with tau aggregation, we transduced primary mouse cortical neurons with AAV6-intrabody prior to inducing endogenous tau aggregation by addition of AD-brain-derived tau seeds.

A significant reduction in tau aggregation was seen upon expression of intrabody PT77 SS− 4D5 in a concentration-dependent manner (Figure 6). Importantly, total mouse tau and α-synuclein levels remained stable (data not shown), independent of the presence of AAV or AD-tau seeds, suggesting that the observed decrease in tau aggregation is not a result of neuronal death. The other two intrabodies, PT51 GS and hTau21 GSEK, did not affect aggregation levels in this model (Figure 6). Intrabodies PT77 SS− 4D5, hTau21 GSEK, and the negative control intrabody were all detected by western blotting at least at the highest MOI. Intrabody PT51 could not be detected.

The three intrabodies were also tested in a K18-induced aggregation model. In this model, human tau with the P301L mutation is overexpressed in primary mouse cortical neurons via AAV transduction, followed by induction of aggregation with K18-P301L tau seeds.
All three intrabodies interfered with mutant human tau aggregation in an MOI-dependent manner (Figure 7). At the highest MOI, some decrease in mutant human tau aggregation is also observed with the negative control intrabody, albeit without statistical significance. We believe that this is a non-specific effect of intrabody overexpression, as no dose-dependent decrease is seen with this intrabody. Additionally, the reduction observed with intrabody PT77 SS− 4D5 is significantly greater than with the negative control (p < 0.001). Full-length human tau and total mouse α-synuclein levels remained stable (data not shown), independent of the presence of AAV or K18 seeds, once again indicating that the observed reduction in aggregation is not due to neuronal death. PT77 SS− 4D5, hTau21 GSEK, and the negative control intrabody were all detected by western blotting, at least at the highest MOI, while intrabody PT51 could not be detected.

scFv intrabodies interfere with K18-seeded tau aggregation in organotypic hippocampal slice cultures (OHSCs)

OHSCs were prepared from transgenic mice overexpressing P301S tau, and, similar to the primary mouse cortical neuronal model, K18 seeds were used to induce the aggregation of mutant human tau. scFv intrabody PT77 and the negative control are both well expressed and detected by western blotting, albeit with some variability between slices (Figure 8B). As for scFv intrabodies PT51 and hTau21, expression levels were lower and only a few slices have detectable bands on western blotting (Figure 8B). Aggregated tau and phosphorylated aggregated tau levels were evaluated using sandwich immunoassays that use the same antibody for coating and detection. Aggregated tau was evaluated with two assays, one recognizing the N and another the C terminus of human tau. Phosphorylated aggregated tau was detected with the use of either AT8 or PT3 antibodies. AT8 recognizes tau phosphorylated at S202/T205/S208 [22,23], while PT3 recognizes tau phosphorylated at T212/T217 [24]. When measuring aggregated tau with phosphorylation-independent assays, none of the reductions induced by the intrabodies were significant (Figure 8A, top graphs). However, when looking at phosphorylated tau aggregates, all three intrabodies were able to reduce AT8-phosphorylated aggregates, whereas, for PT3-phosphorylated aggregates, only the reduction with scFv intrabody PT77 was significant (Figure 8A, bottom graphs). It should be noted that, in the case of scFv intrabody PT77, its epitope on tau (S199/S202) partially overlaps with the epitope of the AT8 antibody used in one of the assays to measure phosphorylated, aggregated tau. mAb PT77 interferes with AT8 binding to tau aggregates (Figure 8C); thus, we cannot fully exclude some interference of the scFv intrabody in the AT8 assay. Regarding the PT3 assay, PT77 does not show interference (Figure 8C). Total mouse α-synuclein levels were not affected by the presence of K18 seeds or AAV (data not shown), suggesting that the reductions observed in aggregated tau levels are not due to loss of neurons subsequent to toxicity.

DISCUSSION

We have successfully converted mAbs PT51, PT77, and hTau21 into scFv intrabodies. scFvs PT51 and hTau21 retained binding to tau with at least one of the linkers used, while scFv PT77 retained binding only after CDR grafting into more stable frameworks.

Differences in binding between scFvs expressed in the E.
coli periplasm versus secretion from HEK293 cells were observed. scFv PT77, which is derived from an mAb that specifically recognizes a phosphorylated epitope on tau, lost its phospho-specificity when expressed in E. coli, while phospho-specificity was retained when secreted from HEK293 cells [30,31]. This could then explain the differences in binding observed in our work and stresses the importance of confirming critical properties of a scFv derived from a well-characterized mAb in an appropriate cell system. scFvs PT51 and PT77 showed some degree of aggregation when expressed in the cytoplasm as intrabodies, at least with one of the linkers. This suggested that these two intrabodies were not folding into their correct conformation, likely due to the lack of or mispairing of intradomain disulfide bonds in the reducing environment of the cytoplasm [19,32]. Nevertheless, soluble cytoplasmic expression alone is also not sufficient for intrabody activity, since hTau21 only retained binding with the GSEK linker, even though it is soluble in both versions. Other factors, such as intrinsic sequence stability and binding affinity, also need to be considered for optimal intracellular activity. For instance, in a recent publication where scFvs were also designed from mAbs and compared as a secreted protein versus intrabody, the authors introduced specific mutations in the intrabody sequence to promote intracellular stability, and all three intrabodies retained binding on ELISA [15]. Even though the authors did not refer to how the intrabodies performed before the sequence changes, based on our work it is fair to assume that binding was either very weak or absent.

Intrabody development remains challenging, even though several strategies have been described to improve selection of cytoplasmically stable constructs and engineer scFvs for cytoplasmic function. One such approach consists of designing disulfide-free scFvs, as the inability of these bonds to form in reducing conditions is one of the main factors for scFv intrabody misfolding and aggregation. To this end, the amino acid combination Val-Ala has been successfully employed to obtain disulfide-free scFvs for cytoplasmic expression [19,32]. Additionally, grafting the CDRs from scFvs of interest into frameworks that have been described as stable in the cytoplasmic environment has also been suggested as a potential strategy to improve scFv intrabody stability [33]. However, to our knowledge CDR grafting has not been used for the development of therapeutic scFv intrabodies, with most of its application having been focused on cytoplasmic expression of scFvs for further purification or evaluation of scFv intrabody activity in E. coli and yeast models [18,34,35]. Combining these two strategies, we were able to rescue solubility and binding of intrabody PT77. On the other hand, intrabodies PT51 and hTau21 did not benefit from this strategy, demonstrating that CDR grafting may not be a straightforward universal solution for intrabody development. Moreover, even though donor framework residues identified as important for binding function were kept during CDR grafting, for intrabodies PT51 and hTau21, it appears that other residues from their original frameworks may be important for antigen contact and/or proper loop folding.

Binding was also evaluated with a nuclear translocation assay
[21,37-39]. In most reports, ELISA binding was not directly compared to NLS-assay binding, with other methods, such as co-immunoprecipitation and yeast or phage display, used instead. In all cases, the NLS assay was predictive of binding in other assays. In our case, the results did not always correlate with ELISA binding. We did not investigate further why this was the case, but we speculate that the differences we observed could be related to binding kinetics, scFv folding, and differences in epitope presentation between assays.

To evaluate the effect of the developed intrabodies on tau aggregation, we first used two primary mouse cortical neuron models: one where aggregation of endogenous mouse tau is induced by AD-brain-derived tau seeds [40] and one where human tau with the P301L mutation is overexpressed and its aggregation is induced by K18-P301L seeds [41]. Additionally, we also evaluated our intrabodies in OHSCs, where endogenously expressed human tau with the P301S mutation is aggregated by the addition of K18-P301L seeds [42]. Of the three intrabodies reported here, only PT77 SS− 4D5 was able to interfere with AD-seed-mediated mouse tau aggregation in primary neurons. As for K18-seeded mutant human tau aggregation, all three intrabodies, PT51 GS, hTau21 GSEK, and PT77 SS− 4D5, were able to reduce aggregation, both in primary neurons and OHSCs. Importantly, in the K18-seeded models none of the scFv intrabodies can bind to the K18 seeds, meaning that the observed effect is on de novo aggregation. Intrabody PT77 SS− 4D5 outperformed PT51 GS and hTau21 GSEK in all three models. This was more evident in OHSCs, where PT77 SS− 4D5 leads to a larger reduction in phosphorylated tau aggregates than the other two intrabodies. We have observed that the parental mAb PT77 (150 kDa) competes with AT8 for binding in immunoassays. However, we have not tested this with scFv-PT77 (~25 kDa). Even so, antibody fragments tend to have lower affinity than their mAb counterparts, so we would expect that any interference resulting from the presence of a scFv would not be at the same level as with an mAb. Still, we cannot completely exclude that, if the scFv intrabody is bound to the tau aggregates, it may prevent mAb AT8 from binding and result in an overestimation of the effect on AT8-phosphorylated tau aggregates. Nevertheless, with the PT3 phosphorylation assay, scFv intrabody PT77 does not interfere with PT3 binding and still confirms the strong reduction of phosphorylated tau aggregates by scFv intrabody PT77.

We hypothesize that the differences seen in the ability of scFv intrabodies PT51 GS and hTau21 GSEK to interfere with tau aggregation across models are likely related to structural differences in the aggregates that are formed in each model. The parent mAbs of these scFvs were previously evaluated for their capacity to immunodeplete tau aggregates from AD-brain homogenates and P301S mice brain homogenates. Interestingly, both mAbs were more efficient at removing seeding-competent aggregates from P301S brain homogenates than from AD-brain homogenates [17]. This is in line with what we observed with the intrabodies, where efficacy was higher against mutated human tau aggregation.
Our intrabodies were not coupled to a degron, and no changes were observed in total tau levels in either model. Thus, we hypothesize that the mechanism by which our scFv intrabodies interfere with aggregation is by preventing newly formed aggregates from recruiting monomeric tau via steric hindrance. This could be either by binding to oligomers and small aggregates or to monomeric tau itself. Previous reports of other "naked" anti-tau intrabodies have also observed a reduction in insoluble or phosphorylated tau without affecting soluble tau levels [15,16]. Additionally, another report showed that total tau levels were only reduced when the scFv intrabody was fused to a mutated form of ubiquitin that would target it to either the proteasomal or the lysosomal degradation pathways [14]. To the best of our knowledge, this is the first report showing the effectiveness of anti-tau intrabodies in in vitro neuronal models of tau aggregation as well as in OHSCs; previous anti-tau intrabody studies relied on other cellular models [15,16]. The neuronal models and OHSCs we described here require more hands-on work and provide a smaller throughput. However, they are closer to physiological/disease conditions, as tau aggregation occurs in a neuronal environment, and, in the case of the AD-seed model, a disease-related form of tau is used as template. Additionally, we show the importance of using different models of tau aggregation in parallel, as aggregate structure will differ between models, resulting in different efficacy outcomes.

In tau immunotherapy, the current consensus is that targeting the mid-region of tau is more likely to be effective in preventing extracellular tau spreading and consequent intracellular aggregation [17,43,44]. However, in the case of directly targeting intracellular tau, it seems for now that all tau domains are effective epitopes. Previous publications have reported successful results with intrabodies targeting the N-terminal and microtubule-binding repeats of tau. Our work demonstrates that targeting the PRD and the C terminus of tau with intrabodies can reduce aggregation levels of mutated human tau in vitro. Moreover, wild-type mouse tau aggregation was reduced when targeting pS199/pS202. Side-by-side comparisons between intrabodies targeting different domains are needed to further clarify whether one is more effective to prevent and/or clear intracellular tau aggregates.

To conclude, in our work we have shown that mAbs can be successfully converted into scFv intrabodies, even if some sequence engineering is required. We demonstrate that CDR grafting is a viable approach to rescue unstable scFv intrabodies and highlight the need for thoughtful scFv design as well as selection of the appropriate cellular models.

MATERIALS AND METHODS

scFv selection and design

mAbs were selected from a previously characterized panel of anti-tau antibodies, based on epitope location, mAb and/or Fab fragment affinity, and in vitro and in vivo potency to interfere with tau aggregation.

Previously identified DNA sequences of the variable domains of each mAb were used to design scFvs in the VH-VL orientation. Two variants of each scFv were made using two different linkers to connect the variable domains, the (GGGGS)3 linker and the GGSEGKSSGSGSESKSTGGS linker described by Bird and colleagues [45]. These will be referred to as the GS and GSEK linkers, respectively.

scFv expression in the periplasm of E.
coli

cDNA of each scFv with each linker was cloned into the periplasmic expression vector pADL-22c, which includes an N-terminal His6-HA tag. Overexpression of all constructs was carried out in MC1061F′ E. coli cells (Biosearch Technologies, Novato, CA). Pre-cultures were prepared from glycerol stocks in 2YT medium (Sigma, Y1003) supplemented with 100 μg/mL carbenicillin (Thermo Fisher, 10177012) and incubated at 37 °C overnight with agitation at 400 rpm.

Figure 5. Evaluation of intrabody tau binding in the cytoplasm of HEK293 cells. (A) When the intrabody is co-expressed with tau-NLS, it can be translocated to the nucleus only if it is capable of binding tau in the cytoplasmic environment. Co-expression with human α-synuclein-NLS is used as negative control. (B) Tau-NLS phosphorylated at S199/S202 was detected in cell lysates using a sandwich MSD assay with PT77 as capture antibody. Images are representative of three independent experiments with two biological replicates each. Scale bar, 50 μm.

Subsequently, 50 μL of the pre-culture was used to inoculate 5 mL of 2YT medium supplemented with carbenicillin, followed by incubation for 3 h at 37 °C. Then, 1 mM IPTG (Merck, D48784) was added, and cultures were further incubated overnight and subsequently harvested the next day by centrifugation at 2,200 × g for 15 min. Pellets were quickly frozen on dry ice and thawed in tepid water. Pellets were then resuspended in BugBuster HT Protein Extraction Reagent (Merck, D49036) supplemented with 0.2 mg/mL chicken lysozyme (Sigma, L3790) and left for 30 min with vigorous shaking. Cell debris and insoluble material were removed by a 500 × g spin for 2 min, and the supernatant was collected.

scFv expression in HEK293 cell line

cDNA of all scFvs with each linker was cloned into a mammalian expression vector that was designed internally, pUNDER [46], under the control of the cytomegalovirus promoter. A FLAG tag (DYKDDDDK) was added to the C terminus of each scFv for detection purposes in downstream assays. For cytoplasmic expression, the secretion signal sequence was removed.

Total protein in HEK293 cell lysates was determined with the bicinchoninic acid (BCA) assay (Sigma, BCA1-1KT) prior to western blotting, and 6 μg of total protein was loaded in the gels. Bacterial cell lysates and culture medium from HEK293 cells were loaded undiluted.
ELISA

Nunc MaxiSorp flat-bottom 96-well plates (Thermo Fisher Scientific, 430341) were coated with 50 μL containing 1 μg/mL of full-length recombinant human tau (hTau), 1 μg/mL of full-length recombinant human α-synuclein, or AD-brain-derived tau paired helical filaments (ePHF) diluted 1:500 in coating buffer (10 mM NaCl, 10 mM Tris-HCl, pH 8.6), and left overnight at 4 °C. The next day, the plates were washed five times with 200 μL of wash buffer (0.05% Tween 20 in PBS), followed by a 2-h incubation at RT with 150 μL of blocking buffer (0.1% casein in PBS). After another wash, 50 μL of sample was added in a serial dilution. After a 2-h incubation at RT, the plates were washed and the detection antibody, anti-HA HRP (Abcam, ab1190) or anti-FLAG M2-HRP (Sigma, A8592), diluted 1:2,500 in blocking buffer, was incubated for 2 h at RT. Following incubation, the plates were washed and 50 μL of TMB (Thermo Scientific, 34029) was added to the wells. The enzymatic reaction was stopped with 50 μL of 2 N H2SO4. Plates were read immediately on an EnVision 2102 Multilabel plate reader (PerkinElmer, Waltham, MA) and data were analyzed with GraphPad Prism 9 software.

To correct for concentration differences in bacterial cell lysates, band intensity on western blotting was quantified with ImageQuantTL software and converted to arbitrary units so that each scFv was tested at approximately the same concentration. For HEK293 cell lysates, samples were analyzed in a serial dilution starting at 100 μg/mL total protein. Culture medium samples were not corrected for protein content.

NLS assay

HEK293A cells (QBiogene) were cultured on 96-well plates (Greiner Bio-One, 655090) as described above. Plasmid DNA co-transfection was done 24 h after plating, with Lipofectamine 2000 (Invitrogen, 11668), according to the manufacturer's instructions. At 48 h after transfection, the cells were fixed in 4% paraformaldehyde and permeabilized with TBS containing 0.3% Triton X-100. scFvs were detected with primary antibody anti-FLAG M2 (Sigma, F3165) and secondary antibody goat anti-mouse IgG Alexa Fluor 555 (Thermo Fisher, A-21424). Imaging was done on an Opera Phenix instrument (PerkinElmer, Waltham, MA) equipped with a 40× water immersion objective. Captured images were visually analyzed.

scFv sequence engineering

To improve cytoplasmic solubility and stability, scFv sequences were modified using CDR grafting into two (scFvs PT51 and hTau21) or three (scFv PT77) different scFv frameworks. The selected frameworks were based on antibody germlines with high similarity to previously described scFvs F8, 4D5, and A48-4D5, and this nomenclature was kept to identify the grafts. In order to prevent the formation of disulfide bonds, cysteine-free versions were also made for the CDR-grafted versions by replacing the cysteines with the amino acid combination Ala-Val [32].

Purification of AD seeds and ePHF from human brain

Human brain tissues from histologically confirmed sporadic AD patients with abundant tau pathology (Braak staging V/VI) were provided by the Center for Neurodegenerative Disease Research brain bank at the University of Pennsylvania and by the Newcastle Brain Tissue Resource with informed consent from next of kin.

Purification of AD seeds and ePHF from brain sections was performed as described in Soares et al. [40] and Vandermeeren et al. [17], respectively. Both protocols were executed in accordance with relevant ethical guidelines. ePHF corresponds to the sarkosyl-insoluble fraction of homogenates from non-dissected human brain blocks. AD seeds are purified from gray matter only and correspond to a purer version of sarkosyl-insoluble tau that undergoes sonication.

Generation of K18-P301L seeds

Recombinant K18-P301L (truncated human tau protein corresponding to the longest isoform between residues Q244 and E372) was produced in E. coli. Fibrils were generated by incubating 40 μM K18-P301L protein with 40 μM low-molecular-weight heparin and 2 mM DTT in 100 mM sodium acetate at 37 °C. After 10 days, the solution was centrifuged at 100,000 × g for 1 h at 4 °C. The pellet was resuspended in PBS.

The neuronal aggregation assay using human AD-tau seeds was performed as described by Soares et al. [40]. Briefly, neurons were transduced with AAV-intrabody on day in vitro (DIV) 7, followed by addition of AD-tau seeds on DIV 10. Neurons were kept until DIV 17, after which they were lysed in RIPA buffer.

The neuronal aggregation assay using K18-P301L seeds was based on Guo and Lee [41] with some modifications. Briefly, primary mouse neurons were transduced with AAV-intrabody and AAV-hTauP301L on DIV 1, followed by addition of sonicated K18-P301L seeds on DIV 3.
Neurons were kept until DIV 10, after which they were lysed in RIPA buffer with phosphatase and protease inhibitors. scFv expression levels were evaluated by western blotting as described above. Samples were loaded in the gel undiluted. scFvs were detected using anti-FLAG M2-HRP (Sigma, A8592) 1:500. For β-actin staining, the membranes were stripped with Restore PLUS Western Blot Stripping Buffer (Thermo Scientific, 46430) for 15 min, followed by blocking and incubation with monoclonal anti-β-actin peroxidase (Sigma, A3854) 1:20,000.

OHSCs with K18-induced aggregation

OHSCs were prepared from P301S mice at postnatal day 8. Briefly, pups were sacrificed by decapitation in compliance with protocols approved by the local ethical committee and national institutions adhering to AAALAC guidelines. Brains from pups were bisected into hemi-brains and the hippocampus was isolated and sliced into 410-μm-thick transverse sections. These sections were then transferred to ice-cold MEM (Gibco, 31095-029) supplemented with HEPES (Sigma-Aldrich, H0867), Tris (Sigma-Aldrich, 93350), and penicillin-streptomycin (Sigma-Aldrich, P4333) and separated. After a 1-h incubation at 4 °C, the slices were mounted via the interface method [47], with three slices added per well, in pre-incubated culture plates with warm culture medium (MEM [Gibco, 31095-029] supplemented with HEPES [Sigma-Aldrich, H0867], Tris [Sigma-Aldrich, 93350], penicillin-streptomycin [Sigma-Aldrich, P4333], HBSS [Thermo Fisher Scientific, 20420], sodium pyruvate [Sigma-Aldrich, S8636], NaHCO3 [Sigma-Aldrich, S5761], and horse serum [Gibco, 26050088]). AAV intrabody was added at a concentration of 2 × 10^10 VG/mL to each well and the plates were incubated overnight at 37 °C and 5% CO2. As a negative control, an AAV expressing an scFv binding to α-synuclein was used. The next day, medium was renewed and AAV intrabody at the same concentration was added. Plates were put back at 37 °C until DIV 4, after which the temperature was changed to 35 °C for the remainder of the experiment. To induce the aggregation of tau, 333 μM K18-P301L seeds were added at DIV 14 on top of each slice. At DIV 31, slices were lysed in RIPA buffer with phosphatase and protease inhibitors. Throughout the experiment, culture medium was changed twice a week.

MSD

Ninety-six-well Multi-Array plates (MSD, L15XA-3) were incubated overnight at 4 °C with coating antibodies diluted in PBS. After overnight incubation, plates were washed five times with wash buffer (0.05% Tween in PBS) and then incubated with blocking buffer (0.1% casein in PBS) for 2 h at RT with agitation at 400 rpm. Next, the plates were washed again, and cell lysates were added to the plates in serial dilutions in blocking buffer. Plates were then sealed and incubated overnight at 4 °C. The next day, the plates were washed and incubated with the respective detection antibody diluted in blocking buffer for 2 h at RT with agitation at 400 rpm. After incubation, the plates were washed and 150 μL of MSD Read Buffer T with surfactant (Meso Scale Discovery, R92TC), diluted 2× in distilled water, was added to each well. Plates were immediately read using the MSD SECTOR Imager 6000 (Meso Scale Discovery, Gaithersburg, MD).

All MSD assays used in this work were developed at Janssen R&D using internally developed antibodies. In the total mouse α-synuclein assay, two commercial reagents were sequentially used for detection: biotinylated antibody D37A6 (Cell Signaling, 74184) and sulfo-labeled streptavidin (MSD, R32AD-1).
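Sandwich MSD readouts like those described above are typically converted to analyte concentrations by interpolating against a standard curve. The Python sketch below shows a generic four-parameter logistic (4PL) fit and its inversion, using hypothetical calibrator values; the actual internally developed Janssen assay calibrators are not disclosed in the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """Four-parameter logistic: a = lower asymptote, d = upper asymptote,
    c = inflection point (EC50), b = Hill slope."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# Hypothetical standard curve: calibrator concentration (ng/mL) vs. signal.
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
signal = np.array([120.0, 180.0, 400.0, 1100.0, 3200.0, 6800.0, 9100.0])

popt, _ = curve_fit(four_pl, conc, signal, p0=[100.0, 1.0, 5.0, 10000.0],
                    maxfev=10000)

def interpolate(y, a, b, c, d):
    """Invert the 4PL fit to recover concentration from a measured signal;
    signals outside the asymptotes cannot be interpolated (hence the LLOQ
    substitution mentioned in the Figure 8 legend)."""
    return c * (((a - d) / (y - d)) - 1.0) ** (1.0 / b)

print(interpolate(2500.0, *popt))  # estimated concentration for a test well
```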
Statistical analyses were conducted using R software version 4.2.1.

Figure 2. Characterization of scFvs expressed in the cytoplasm of HEK293 cells. (A) scFvs were expressed in the cytoplasm of HEK293 cells and cleared cell lysates were used for characterization. Intrabody presence in the lysates was determined by western blotting. (B and C) Cell lysates were tested on ELISA in serial dilution starting at 10 μg of total protein against recombinant human tau. (D and E) ePHF coating was used for phospho-specific scFv PT77. Results are shown as mean of two independent experiments. Detection was done with an anti-FLAG-tag HRP-labeled antibody.

Figure 3. Intrabody solubility in the cytoplasm before and after CDR grafting. (A) Schematic representation of the CDR grafting strategy. The CDRs from each chain of the original scFvs (represented in blue) are transferred to a new framework (represented in brown). Framework amino acids identified as important for binding are transferred as well (represented by blue stripes). Additionally, versions where cysteines (indicated by C) are replaced by the amino acid combination Val-Ala (indicated by V and A, respectively) were also designed. Designed with biorender.com. (B) Immunocytochemistry evaluation of intrabody solubility in the cytoplasm. (B1-B6) Original intrabody sequences. (B7-B9) Intrabodies designed in the VL-VH orientation. (B10-B22) CDR-grafted versions, with and without disulfide bonds (SS−). Images representative of three independent experiments with two replicates each. Scale bar, 25 μm. SS−, scFv without the cysteines that participate in disulfide bonds.

Figure 4. Evaluation of intrabody tau binding after CDR grafting into different frameworks. Intrabodies were expressed in HEK293 cells and cell lysates were tested in a serial dilution starting at a 1:3 dilution. Detection was done with an anti-FLAG-tag HRP-labeled antibody. Results are represented as mean ± SD of three independent experiments.

Figure 6. Effect of intrabodies on AD-seed-mediated aggregation. Endogenous mouse tau aggregation was measured on cell lysates with sandwich MSD assays. Results are shown as percentage of the condition without intrabody expression after normalization to total mouse α-synuclein levels, represented as mean ± SD of three independent experiments. Statistical analysis was done with a fitted mixed-effects model with Dunnett correction for multiple comparisons (**p ≤ 0.01). Western blot images representative of three independent experiments.

Figure 7. Effect of intrabodies on hTau-P301L aggregation upon K18-P301L seed addition. hTau-P301L aggregates were measured on cell lysates using sandwich MSD assays. Results are shown as percentage of the condition without intrabody expression after normalization to total mouse α-synuclein levels, represented as mean ± SD of three independent experiments. Expression levels were detected by western blotting using undiluted lysates. β-Actin was used as loading control. Statistical analysis was done with a fitted mixed-effects model with Dunnett correction for multiple comparisons (*p ≤ 0.05; **p ≤ 0.01). Western blot images representative of three independent experiments.
Figure 8. Evaluation of the effect of anti-tau scFv intrabodies in OHSCs. (A) Aggregated and phospho-aggregated tau levels are shown as percentage of the AU of slices treated with K18 and no intrabody (K18 alone), with mean ± SD. Statistical analysis was done with a fitted mixed-effects model with Dunnett's correction for multiple comparisons (*p < 0.05; **p < 0.01; ***p < 0.001). Note that, for the AT8/AT8 assay, no values could be interpolated for scFv intrabody PT77; thus, the values were set to the lowest limit of quantification. (B) scFv intrabody expression levels were evaluated with anti-FLAG staining on western blotting. (C) Tau aggregates isolated from the brain of patients with AD were incubated with increasing amounts of mAb PT77 prior to the measurements.
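For the group-versus-control comparisons quoted in the figure legends, a simplified sketch of Dunnett-adjusted testing is shown below. It uses hypothetical readouts and omits the random-effect structure of the fitted mixed-effects model actually used, so it is an approximation of the reported analysis, not a reproduction of it.

```python
import numpy as np
from scipy.stats import dunnett  # requires SciPy >= 1.11

# Hypothetical aggregated-tau readouts (% of the K18-alone condition),
# three independent experiments per group; not the actual study data.
control = np.array([100.0, 96.0, 104.0])   # negative-control intrabody
pt77 = np.array([41.0, 48.0, 39.0])
pt51 = np.array([78.0, 83.0, 71.0])
htau21 = np.array([69.0, 74.0, 66.0])

# Dunnett's test compares each treatment against the single control group
# while controlling the family-wise error rate, as in the figure legends.
res = dunnett(pt77, pt51, htau21, control=control)
print(res.pvalue)  # one adjusted p-value per comparison vs. control
```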
2023-11-17T16:02:07.300Z
2023-11-01T00:00:00.000
{ "year": 2023, "sha1": "65a4e6c908ad778035df4028f154651d858bfd36", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.omtm.2023.101158", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "aed71428afdbb281f1b1c752d308eebb57b9dc74", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
139260749
pes2o/s2orc
v3-fos-license
Adsorption characterization and CO2 breakthrough of MWCNT/Mg-MOF-74 and MWCNT/MIL-100(Fe) composites

Carbon capture using adsorption processes can significantly mitigate global warming. Mg-MOF-74 is a distinct reticular material amongst other adsorbents owing to its distinguished carbon dioxide adsorption capacity and selectivity in low-pressure applications, while MIL-100(Fe) has a lower CO2 adsorption capacity but extraordinary thermal stability and hydrostability in comparison to many classes of MOFs. In this paper, we present the CO2 adsorption characteristics of new compounds formed by the incorporation of multi-walled carbon nanotubes (MWCNTs) into Mg-MOF-74 and MIL-100(Fe). This was done to improve the thermal diffusion properties of the base MOFs to enhance their adsorption capacities. The new composites have been characterized for degree of crystallinity and for CO2 and N2 equilibrium uptake. The real adsorption separation has been investigated by dynamic breakthrough tests at 297 K and 101.325 kPa. The equilibrium isotherm results showed that Mg-MOF-74 and 0.25 wt% MWCNT/MIL-100(Fe) (MMC2) have the highest CO2 uptake in comparison to the other investigated composites. However, the interesting results obtained from the breakthrough tests demonstrate that good improvements in the CO2 adsorption uptake and breakthrough breakpoint over pristine Mg-MOF-74 have been accomplished by adding 1.5 wt% MWCNT to Mg-MOF-74. The improvements in CO2 adsorption capacity and breakpoint were about 7.35 and 8.03%, respectively. Similarly, improvements in the CO2 adsorption uptake and breakthrough breakpoint over pristine MIL-100(Fe) of 12.02 and 9.21%, respectively, are obtained with 0.1 wt% MWCNT/MIL-100(Fe) (MMC1).

Introduction

Fossil fuel burning processes produce greenhouse gases, including CO2, N2, and CH4. Global warming caused by these gases leads to shore floods, a hotter atmosphere, soil droughts, and damage to the ecosystem. In this scenario, carbon dioxide holds the most significant portion of the flue gases released to the atmosphere [1]. Thus, extensive efforts have been made by scientists, institutions, countries, and environmental organizations to reduce CO2 emissions. The principal source of CO2 is combustion processes that use fossil fuels. However, the utilization of fossil fuel is still essential for satisfying energy demands. Hence, the feasible solution to continue using fossil fuel while mitigating climate change is Carbon Capture and Storage (CCS). A massive number of researchers have already studied CO2 capture using both experimental and simulation approaches, as well as synthesizing novel adsorbents [2]. Using adsorption for CO2 separation is advantageous owing to the ease of regenerating the adsorbent by exposure to heat and/or vacuum [3]. The best-known adsorbents exploited for CO2 separation and storage are activated carbons and zeolites. Zeolites can adsorb a higher quantity of CO2 than activated carbons at low operating pressures (< 20 kPa) [4,5], whereas carbon-based adsorbents are better for CO2 storage applications [5,6]. Conversely, the obvious merits of carbon adsorbents over zeolites are lower cost, hydrostability, lower regeneration energy, and ease of production on a commercial scale [7]. Although zeolite-based adsorbents have the relatively higher CO2 adsorption capacity, especially at lower adsorption pressures (10-30 kPa (abs.)
at T = 30 °C), their CO2 uptake is greatly reduced in the case of a CO2/H2O mixture, and regeneration requires significantly more heat [8,9]. Some 20 years ago, a novel class of reticulate adsorbents was discovered, called metal organic frameworks (MOFs) [10,11]. In this context, the highest CO2 adsorption capacity (1470 mg g−1) was reported for MOF-177 at 35 bar [12]. Over time, a large number of MOFs have been developed by scientists to maximize CO2 uptake and selectivity. MOF-74 is among the currently available MOFs with the highest CO2 adsorption uptake as well as an excellent CO2-over-N2 selectivity [16,17]. More specifically, Mg-MOF-74 was quantitatively identified as the adsorbent with the highest CO2 uptake at low-pressure conditions (350 mg/g at 298 K) [17]. It has also been reported that Mg-MOF-74 shows higher H2O hydrophilicity (593 ml/g at 298 K) than zeolite [18]. Despite the high CO2 uptake of Mg-MOF-74, the existence of H2O reduces its CO2 capture capacity, unlike the HKUST-1, MIL-101(Cr), and MIL-100(Fe) types [19]. The study [19], moreover, showed the reduction of CO2 adsorption at different conditions. For example, at 1 bar and 298 K, the CO2 adsorption capacity of dry Mg-MOF-74 was about 8.4 mmol/g of CO2, while at hydration levels of 6.5 and 13%, the CO2 adsorption capacity values were 6.7 mmol/g and 5.4 mmol/g, respectively [19]. MIL-100(Fe) is remarked as a hydro- and thermally stable adsorbent. Some increase in its CO2 uptake was recorded with increasing RH (up to 105 mg g−1 for CO2 at 40% RH), with a large decrease in adsorption heat [20].

A considerable number of experimental and numerical research attempts on CO2 capture and separation have been conducted so far in terms of breakthrough and pressure and temperature swing adsorption [35-49]. Nevertheless, the majority of this research concentrated chemically on the development of novel adsorbent materials targeted at acquiring high CO2 adsorption capacity and selectivity. At this point, the poor thermal conductivity shown by a vast majority of these adsorbents has been experienced as a major barrier to improving the CO2 uptake of these materials in real applications. In addition, a very limited number of research attempts have addressed the enhancement of CO2 capture and storage via improving the thermal properties of the base adsorbent. For this reason, this paper aims at investigating the effects of incorporating multi-walled carbon nanotubes (MWCNTs) with Mg-MOF-74 and MIL-100(Fe), with the aim of enhancing the thermal properties of the adsorbents, and investigating the influence of the MWCNT addition on the CO2 adsorption capacity and breakpoint of the resulting MWCNT/Mg-MOF-74 and MWCNT/MIL-100(Fe) composites. The synthesized and activated Mg-MOF-74 and MIL-100(Fe) have been physically incorporated with different percentages of MWCNTs. They are characterized for degree of crystallinity, CO2 adsorption isotherms, and CO2 adsorption breakthrough. Moreover, dynamic breakthrough separation tests are carried out to address the actual CO2 separation from a CO2/N2 mixture (20% v/v CO2 and 80% v/v N2 for MWCNT/Mg-MOF-74, and 15% v/v CO2 and 85% v/v N2 for MWCNT/MIL-100(Fe)) and then to compute the level of enhancement in CO2 uptake and separation.

MWCNT/Mg-MOF-74 and MWCNT/MIL-100(Fe) sample preparation

We have followed a successful procedure for synthesizing Mg-MOF-74 as described in [50].
Briefly, 0.337 g 2,5-dihydroxyterephthalic acid and 1.4 g Mg(NO3)2·6H2O were dissolved in a solution of 135 ml dimethylformamide, 9 ml ethanol, and 9 ml water with sonication for 10 min. The resulting stock solution was decanted into twelve 50 ml bottles. The bottles were tightly capped and heated at 398 K for 26 h. The mother liquor was then decanted. Following this, the products were washed with methanol and then left immersed in methanol. The products were combined into one bottle and exchanged into fresh methanol daily for 4 days. The activation process was carried out by evacuating the product to dryness and then heating under vacuum at 523 K for 6 h.

The synthesis of MIL-100(Fe) was performed in accordance with a previously reported procedure [51]. We first dissolved Fe(NO3)3·9H2O (4.04 g, 0.01 mol) in de-ionized water (50.2 ml, 2.8 mol) and the mixture was transferred completely into a 125 ml Teflon liner containing BTC (1.4097 g, 0.00671 mol). After that, the Teflon liner was tightly sealed inside a stainless steel autoclave and kept at 383 K for 14 h. After heating, the autoclave was slowly cooled to ambient temperature, after which the "as-synthesized" dark orange solid was recovered using a centrifuge operated at 8000 rpm for about 45 min. The as-synthesized MIL-100(Fe) was washed with copious amounts of water and ethanol and finally with an aqueous NH4F solution in order to remove any unreacted species. Specifically, the dried solid was first immersed in deionized water (60 ml per 1 g of solid) and the resulting suspension was stirred at 70 °C for 5 h. Again, the suspension was centrifuged and the wash process was repeated using ethanol (60 ml) at 65 °C for 3 h. This two-step purification was repeated until the decanted solvent following centrifugation became completely colorless, after which the solid was immersed in a 700 ml aqueous NH4F solution and stirred at 70 °C for 5 h. The suspension was again centrifuged and the solid was washed 5 times with DI water at 60 °C, and finally dried in air at 75 °C for 2 days followed by 95 °C for 2 days.

Powder X-ray diffraction (PXRD) analysis

To determine the crystallinity of the composites, PXRD patterns of MWCNT/Mg-MOF-74 were collected using a Bruker D8-Advance (Cu Kα, λ = 1.54056 Å). The operating power of the PXRD system was 30 kV/30 mA and the step-counting method (step = 0.02°, time = 3 s) was used to collect data over the range 2θ = 3-45° at 298 K. For MIL-100(Fe), the diffraction data were collected between 3 and 45° (2θ) with a total scan time of 3 h.

Scanning electron microscopy (SEM)

Scanning electron microscopy (SEM) was carried out using a TESCAN LYRA3 FEG microscope to examine the structure of Mg-MOF-74, 1.5 wt% MWCNT/Mg-MOF-74, MIL-100(Fe), and 0.5 wt% MWCNT/MIL-100(Fe). SEM samples were prepared by placing the powder form of the samples on Al tapes and sputter-coating with gold. The images were obtained at a voltage of 20 kV.

Gas physisorption measurements

The first step in the physisorption measurements of CO2 and N2 is sample degassing to remove any guest molecules within the pores of each material. Typically, 50-200 mg of each sample was transferred to a pre-weighed empty sample cell with a 9 mm diameter. Degassing was conducted at 150 °C under vacuum for about 17 h for MWCNT/MIL-100(Fe) and at 220 °C under vacuum for about 5 h for MWCNT/Mg-MOF-74 using an Autosorb degasser (Quantachrome Instruments, Inc.).
Nitrogen adsorption isotherms at 77 K were first recorded to estimate the Brunauer-Emmett-Teller (BET) specific surface area (S_BET), average pore radius, and total pore volume. Equilibrium adsorption isotherms were then recorded for CO2 at 273, 298, and 313 K, and for N2 at ambient temperature (298 K). The CO2 heat of adsorption was evaluated from the adsorption isotherms measured at 273 and 313 K in accordance with the Clausius-Clapeyron equation. Breakthrough experiments of the binary gas mixture (CO2 + N2) A dynamic CO2/N2 breakthrough setup was constructed to separate CO2 from a CO2/N2 mixture (representing a flue gas), as shown in Fig. 1 (scheme of the CO2/N2 adsorption separation breakthrough setup). The home-made setup is composed of a bed column with an inner diameter of 4 mm, an outer diameter of 6 mm, and a length of 7 cm. The column was filled with the MWCNT/Mg-MOF-74 composite (about 0.26 g) or the MWCNT/MIL-100(Fe) composite (about 0.74 g). The system includes CO2 and N2 cylinders, two mass flow controllers (MFCs, calibrated for the CO2 and N2 flow rates), two check valves, and a bypass tube (for calibrating the gas concentrations detected by the mass spectrometer against the inlet gas compositions). It also includes two Bourdon absolute pressure gauges (manufactured by Baumer, accuracy ±1.6%), a mass spectrometer (to measure the composition of the outlet gases leaving the bed), a vacuum pump and an electric heater jacket (for regeneration purposes), and interconnecting stainless steel fittings and tubes to regulate the flow of carrier gas within the system. The full breakthrough capacity for CO2 and N2 was measured by evaluating the ratio of the compositions of the downstream gas and the feed gas. The CO2 adsorption capacity of the adsorbents is evaluated from the CO2 molar flow rates entering and leaving the bed using the expression [42]:

q_CO2 = (1/m) [ ∫_0^t (Q_F C_0 - Q(t) C(t)) dt - ε V C_0 ]    (1)

where q_CO2 (mmol/g) represents the CO2 uptake, m (g) is the adsorbent mass, Q_F and Q(t) (m3/s) are the input and output volumetric flow rates, C_0 and C(t) (mol/m3) are the influent and effluent CO2 concentrations, t (s) is the time, ε is the bed porosity, and V (m3) is the bed volume. The term εVC_0 in Eq. 1, which represents the amount of CO2 still in the bed voids (without being adsorbed), is very small compared with the other terms, so it could be ignored; however, we have considered it in our calculations. Results and discussion Powder X-ray diffraction (PXRD) analysis Figure 2 shows the PXRD patterns of the MWCNT/Mg-MOF-74 compounds as well as of MIL-100(Fe). It can be seen (Fig. 2a) that the PXRD patterns of the MWCNT/Mg-MOF-74 and Mg-MOF-74 samples are in good agreement with the simulated pattern. The incorporation of MWCNTs did not decrease the crystallinity of the framework, as all the peak positions match the Mg-MOF-74 structure. Hence, it can be concluded that the incorporation of up to 1.5 wt% MWCNTs by physical mixing preserves the characteristic lattice structure of the Mg-MOF-74 framework. The same conclusion was drawn for MWCNT/MIL-100(Fe), as reported in recent work [51]. Figure 2b shows that the pattern of the synthesized MIL-100(Fe) is likewise in good agreement with the simulated one. Adsorption equilibrium isotherms of carbon dioxide and nitrogen The N2 equilibrium isotherms for the MWCNT/Mg-MOF-74 and MWCNT/MIL-100(Fe) composites were measured at 77 K. Table 1 lists the important porosity-related parameters estimated from the N2 adsorption data for the MWCNT/Mg-MOF-74 and MWCNT/MIL-100(Fe) composites.
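To make this porosity analysis concrete, the following is a minimal Python sketch of how S_BET can be estimated from a few points of a 77 K N2 isotherm via the standard BET linearization; the isotherm points and the chosen linear range (P/P0 = 0.05-0.30) are illustrative assumptions, not the authors' measured data or code.

    import numpy as np

    # Illustrative N2 adsorption points at 77 K (not measured data):
    p_rel = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.30])   # P/P0
    n_ads = np.array([10.6, 12.2, 13.4, 14.4, 15.5, 16.8])   # mmol/g

    # BET linearization: (P/P0)/[n(1 - P/P0)] = 1/(n_m c) + [(c - 1)/(n_m c)](P/P0)
    y = p_rel / (n_ads * (1.0 - p_rel))
    slope, intercept = np.polyfit(p_rel, y, 1)

    n_m = 1.0 / (slope + intercept)      # monolayer capacity, mmol/g
    c_bet = slope / intercept + 1.0      # BET constant

    # S_BET = n_m * N_A * sigma, with sigma(N2) = 0.162 nm^2 per molecule
    s_bet = n_m * 1e-3 * 6.022e23 * 0.162e-18   # m2/g
    print(f"S_BET = {s_bet:.0f} m2/g, c = {c_bet:.0f}")

With these synthetic points the sketch returns an S_BET of roughly 1200 m2/g, the same order of magnitude as the values in Table 1.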
The measured BET surface areas of the MWCNT/Mg-MOF-74 compounds were close to each other, ranging between 1470 and 1590 m2/g. In addition, the total pore volume measured at 95% relative pressure (P/P0) and the pore size were almost the same for all the samples, at around 0.63-0.71 cc/g and 19 Å, respectively. The Mg-MOF-74 BET surface area and total pore volume values are in good agreement with those reported in the literature [17,50,52]. It can be deduced from the data shown in Table 1 that the addition of MWCNTs does not substantially influence the porosity-related parameters evaluated for the MWCNT/Mg-MOF-74 compounds. Similarly, the incorporation of MWCNTs into MIL-100(Fe) did not change the porosity-related parameters of pristine MIL-100(Fe), apart from a moderate increase in surface area from 1083 m2 g−1 for the base MIL-100(Fe) to 1464 m2 g−1 for MMC2. The total pore volume at 0.95 relative pressure was around 0.61 and 0.69 cc/g for MMC1 and MMC2, respectively, compared with 0.55 cc/g for the pristine MIL-100(Fe). Regarding the pore size, it is clear that the pore diameter for all the MIL-100(Fe) and composite samples was around 20 Å. These porosity values are close to those reported for MWCNT/MIL-100(Fe) composites [51]. However, the MIL-100(Fe) synthesized in the present work is not the highest-quality form of this adsorbent, owing to the synthesis and purification method we followed: the BET surface area and pore volume of MIL-100(Fe) can vary between 1090 and 2050 m2 g−1 and from 0.65 to 1.15 cc/g, respectively [53]. The CO2 adsorption isotherms for the MWCNT/Mg-MOF-74 and MWCNT/MIL-100(Fe) composites, measured at 273, 298, and 313 K, are shown in Figs. 4 and 5. The adsorption uptake increases sharply in the region below 15 kPa and then more gradually at adsorption pressures above 20 kPa. This behavior is a clear advantage for CO2 capture in low-pressure applications, including CO2 separation from flue gas (P_CO2 = 10-20 kPa). As expected, an increase in the measurement temperature had an adverse effect on the recorded uptakes for each material. As is evident from Fig. 4, the highest CO2 uptake was measured for pristine Mg-MOF-74, followed by MFC1, at all the measured temperatures (273, 298, and 313 K). For the MWCNT/MIL-100(Fe) compounds (Fig. 5), the adsorption uptake increased more or less linearly with increasing adsorption pressure. MMC2 showed the highest adsorbed amounts, and MMC1 the second highest, greater even than the pristine MIL-100(Fe) and the MMC3 composite, as shown in Fig. 5a-c. It is worth mentioning that the CO2 uptake of the MWCNT/Mg-MOF-74 composites was much higher than that of the MWCNT/MIL-100(Fe) compounds. Note also that the best reported version of MIL-100(Fe) can adsorb about 1 and 1.52 mmol/g at 0.6 bar after outgassing pretreatments at 150 and 250 °C, respectively [54]. Consequently, the reduced CO2 capacity of the present MIL-100(Fe) (q = 0.6 mmol/g at 0.6 bar and 298 K) is due to both the synthesis method followed (using Fe(NO3)3·9H2O) and the mild activation treatments (75-95 °C). The N2 adsorption isotherms for the MWCNT/Mg-MOF-74 composites, measured at 298 K, are displayed in Fig. 6a. The pristine Mg-MOF-74 exhibited the largest uptake, followed by MFC4, MFC1, MFC6, MFC2, MFC5, and MFC3, respectively.
As shown in Fig. 6b, the maximum uptake measured for N2 was significantly smaller than that measured earlier for CO2; in other words, all the samples exhibited a preferential selectivity for CO2 over N2. To represent the isotherms with a mathematical model, the Toth equation (Eq. 2) was fitted to the data:

q_i = q_m K_eq P / [1 + (K_eq P)^n]^(1/n)    (2)

Here, q_i is the equilibrium adsorption amount (mmol/g) of species i, P is the pressure, and q_m, K_eq, and n are the Toth fitting constants. For instance, the CO2 and N2 isotherms obtained from the Toth fits for MFC6 and MMC1 show excellent agreement with the experimental isotherms, as plotted in Fig. 7. The respective fitting parameters at the 95% confidence level are tabulated in Table 2. The CO2 heat of adsorption was observed to exhibit a more or less curvilinear correlation with the instantaneous CO2 uptake, as shown in Fig. 8a. Experimental adsorption breakthrough tests for MWCNT/Mg-MOF-74 and MWCNT/MIL-100(Fe) composites Breakthrough experiments were performed for the binary gas mixture (CO2/N2) to quantify the improvements in CO2 adsorption uptake and breakpoint resulting from the incorporation of MWCNTs into Mg-MOF-74 and MIL-100(Fe). The particle size used in the breakthrough tests was measured for selected composites, including the pure materials and the composites with the maximum MWCNT content, as shown in Fig. 9. Mg-MOF-74 and MFC6 have closely similar particle size distributions, with particle sizes between 1 and 88 µm; the fraction of large particles decreased on adding 1.5 wt% MWCNT. For the MWCNT/MIL-100(Fe) particle size distributions, it is evident from Fig. 9b that both tested samples (MIL-100(Fe) and MMC3) have a similar distribution (between 1 and 1400 µm) with almost the same percentages. This is attributed to the small percentages of MWCNT added to the adsorbents. For systematic tests, the pressure drop was reduced to around zero, as monitored by the two Bourdon gauges. The resulting breakthrough curves are shown in Fig. 10, where the outlet concentration ratios calculated for each of the two gases are plotted against the measurement time. In general, for all the tested samples, the concentration ratio evaluated for CO2 at the bed outlet remained constant at zero for some time (about 6-7 min for the MWCNT/Mg-MOF-74 and 2-3 min for the MWCNT/MIL-100(Fe) compounds), whereas the concentration ratio for N2 increased up to about 1.3 (almost a molar fraction of 1) owing to the absence of CO2, which was pre-adsorbed in the Mg-MOF-74 or MIL-100(Fe) composite adsorbent bed. After the first minutes of adsorption, the CO2 concentration ratio increased up to 1, whereas the N2 concentration ratio gradually dropped to a value close to 1. For the MWCNT/Mg-MOF-74 composites, the best breakpoint, ie, the time at which the CO2 concentration ratio at the bed outlet is still below 5%, was about 8.16 min (28.4 min/g) for MFC6 against 7.5 min (27.67 min/g) for Mg-MOF-74; this was followed by about 8.1 min for MFC4 and 7.96 min for MFC1 (Fig. 10a). In the same manner, the longest breakpoint among the MWCNT/MIL-100(Fe) samples was obtained for MMC2, at about 3.21 min (4.33 min/g) (Fig. 10b), followed by MMC1 at about 3.19 min (4.32 min/g) and pristine MIL-100(Fe) at about 2.9 min (3.69 min/g). Adsorption breakthrough and separation processes can also be investigated numerically, as described in our previous works [57-59].
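To illustrate how Eq. 1 and the 5% breakpoint criterion are applied to a measured curve, here is a minimal Python sketch under stated assumptions: the sigmoidal C(t)/C0 curve is synthetic, the flow, concentration and bed parameters are plausible round numbers rather than the authors' operating conditions, and a constant volumetric flow Q(t) = Q_F is assumed.

    import numpy as np

    t = np.linspace(0.0, 1200.0, 1200)                    # time, s
    c_ratio = 1.0 / (1.0 + np.exp(-(t - 420.0) / 40.0))   # synthetic C(t)/C0

    q_feed = 5.0e-7            # feed flow, m3/s (about 30 ml/min), assumed
    c0 = 8.1                   # feed CO2 concentration, mol/m3 (20 kPa at 298 K)
    eps, v_bed = 0.4, 8.8e-7   # assumed bed porosity; volume of 4 mm ID x 7 cm bed
    mass = 0.26                # adsorbent mass, g (as for the Mg-MOF-74 bed)

    # Eq. 1: integrate Q_F*(C0 - C(t)) over time, subtract the void term eps*V*C0
    dt = t[1] - t[0]
    adsorbed = np.sum(q_feed * (c0 - c_ratio * c0)) * dt - eps * v_bed * c0  # mol
    q_co2 = adsorbed / mass * 1e3                                            # mmol/g

    # Breakpoint: first time the outlet CO2 ratio exceeds 5% of the feed
    t_break = t[np.argmax(c_ratio > 0.05)] / 60.0                            # min

    print(f"uptake = {q_co2:.2f} mmol/g, breakpoint = {t_break:.1f} min")

With these synthetic numbers the sketch returns about 6.5 mmol/g and a breakpoint near 5 min, the same order as the dynamic uptakes and breakpoints reported above.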
In addition, adsorption breakthrough curves can be represented analytically by fitting the experimental curves using approaches reported in the literature; one such approach is expressed by an analytical breakthrough equation [60,61] in which k and K are fitting constants: k (1/s), called the adsorption time constant, can be used to determine the diffusion coefficient (k = 15D/r^2, where D (m2/s) is the diffusion coefficient and r (m) is the adsorbent particle radius); L (m) is the bed length; and v (m/s) is the flow velocity. The breakthrough time t is taken in seconds. For example, Fig. 11 compares the analytical and experimental adsorption breakthrough curves of carbon dioxide for the MWCNT/Mg-MOF-74 composites. To evaluate the improvement in CO2 adsorption capacity and breakpoint gained by adding MWCNT to Mg-MOF-74 and to MIL-100(Fe), the adsorbed amounts of CO2 were calculated from the experimental breakthrough curves; together with the heat-of-adsorption trend of Fig. 8a, this theoretically implies that each of these composites should not only exhibit higher CO2 uptake values than pristine Mg-MOF-74 but should also require comparatively less energy for the regeneration (recycling recovery) process. Figure 12b shows the improvement in both adsorption capacity and breakpoint due to adding MWCNT to pristine MIL-100(Fe). As is evident, MMC1 exhibited the best improvement, reaching 12.02% and 9.21% for CO2 adsorption capacity and breakpoint, respectively. It was followed by MMC2, with improvements of about 8.74% and 9.47% in measured adsorption uptake and breakpoint, respectively, compared with the base adsorbent (MIL-100(Fe)). On the contrary, the adsorption uptake and breakpoint values evaluated for MFC2, MFC3, and MMC3 were lower than those of the base adsorbents. This indicates that no uniform improvement can be expected from the incorporation of CNTs into MOFs. The detected improvement in CO2 adsorption capacity and breakpoint primarily reflects an improvement in the thermal properties of the Mg-MOF-74 and MIL-100(Fe) frameworks upon the incorporation of MWCNTs [30-32]. As the thermal conductivity of CNTs is very high (2000-5000 W m−1 K−1) [62], the effective thermal conductivity of the MWCNT/Mg-MOF-74 and MWCNT/MIL-100(Fe) composites can accordingly be higher than that of the pristine adsorbents (0.2-0.4 W m−1 K−1). Heat diffusion across the bulk composite is therefore enhanced during adsorption, which helps cool the adsorbent and enhances the CO2 adsorption uptake. Furthermore, the enhanced effective thermal conductivity helps to heat the adsorbent particles quickly during the desorption process, which in turn accelerates CO2 evacuation from the adsorbent. This work records dynamic CO2 uptakes comparable to the published data shown in Table 3. MIL-100(Fe) shows the lowest adsorption values in comparison to AC, 13X and Mg-MOF-74, while Mg-MOF-74 and MFC6 have the highest CO2 uptake. The dynamic CO2 uptake of Mg-MOF-74 was about 5.46 mmol/g, which is greater than the 4.06 mmol/g reported in the literature [17,50] because it was measured here at a 20% CO2 molar fraction. The cost of adding a very small quantity of MWCNT (<1.5 wt%) to the adsorbents is believed to be negligible in comparison to the CO2 separation improvements. In the literature, chemists usually use adsorption isotherm data to compare the CO2 capacities of different adsorbents.
However, by carrying out both adsorption isotherm measurements and adsorption breakthrough experiments, we found that the two can give different rankings of adsorption capacity. It should be kept in mind that adsorption isotherm measurements are taken at constant temperature, while breakthrough measurements are not: the breakthrough bed is allowed to vary its temperature through heat dissipation from the adsorbent to the surrounding environment. Improved thermal diffusion cools the adsorbent quickly, and the cooler the adsorbent, the higher the CO2 uptake, as the isotherms also confirm. The most relevant adsorption capacity, if the material is to be used in a PSA/VSA/TSA process, is therefore that measured in a breakthrough setup. Conclusions Two types of MOFs, Mg-MOF-74 and MIL-100(Fe), were synthesized and incorporated with MWCNTs. In total, seven compounds of Mg-MOF-74 containing 0, 0.1, 0.25, 0.5, 0.75, 1, and 1.5 wt% MWCNTs and four compounds of MIL-100(Fe) containing 0, 0.1, 0.25, and 0.5 wt% MWCNT were characterized for degree of crystallinity, intrinsic porosity, CO2 adsorption capacity and separation, and dynamic adsorption breakthrough. The powder X-ray diffraction patterns as well as the porosity-related parameters of the composites did not show any substantial variation in peak intensities and locations, BET surface area, or pore volume and size, indicating that the crystal lattices of Mg-MOF-74 and MIL-100(Fe) were unaffected by the incorporation of MWCNTs by physical mixing (up to 1.5 wt% MWCNT for Mg-MOF-74 and 0.5 wt% MWCNT for MIL-100(Fe)). Equilibrium adsorption isotherms of CO2 measured at 273, 298, and 313 K, and N2 adsorption isotherms measured at 298 K, confirm that the highest adsorption capacities for these two gases are exhibited by Mg-MOF-74 and by 0.25 wt% MWCNT/MIL-100(Fe) (MMC2). Overall, the MWCNT/Mg-MOF-74 composites have much larger adsorption uptakes than the MWCNT/MIL-100(Fe) composites. The key performance evaluation of the MWCNT/Mg-MOF-74 and MWCNT/MIL-100(Fe) composites was achieved through the measurement of actual time-variant CO2 breakthrough curves, which revealed a good improvement in CO2 adsorption capacity as well as adsorption breakpoint due to the incorporation of MWCNTs in the Mg-MOF-74 and MIL-100(Fe) frameworks. The most optimum combination of these characteristics has been observed
2019-04-30T13:06:43.584Z
2018-01-23T00:00:00.000
{ "year": 2018, "sha1": "39b23c67fbdb29c6d33cf750fec7f4107bd12dee", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s40095-018-0260-1.pdf", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "25d0fb28d74811a59a01a714604a4cf53d8aea0c", "s2fieldsofstudy": [ "Engineering", "Materials Science" ], "extfieldsofstudy": [ "Materials Science" ] }
265275319
pes2o/s2orc
v3-fos-license
Factors associated with house-soiling in Italian cats Objectives The aim of the present study was to identify factors associated with house-soiling in Italian cats. Methods A cross-sectional online survey collected information on respondents' and cats' details and litter management, and whether the cat showed house-soiling. Univariable and multivariable regression models were performed using house-soiling (present/absent) and the type of house-soiling (ie, urinary, faecal, concurrent urinary and faecal) as dummy variables. Results Data from 3106 cats were obtained. The number of dogs and other cats in the household, the cat's age, the number, type and location of the litter boxes, the type of litter, and the frequency of litter scooping and full replacement were retained in the final multivariable regression model for house-soiling (model P <0.001, Akaike information criterion [AIC] = 2454.30). Urinary tract diseases, the type and number of litter boxes and the number of dogs in the household were associated with urinary house-soiling (model P <0.001, AIC = 534.08), and gastroenteric/musculoskeletal diseases, number of litter boxes and litter box location were associated with faecal house-soiling (model P <0.001, AIC = 448.52). Healthy cats, the number of dogs in the household, the type of litter and litter full replacement frequency were retained in the final multivariable regression model for the concurrent expression of urinary and faecal house-soiling (model P <0.001, AIC = 411.47). Conclusions and relevance Meeting cats' preferences for litter and litter box type and location, meeting their behavioural needs and maintaining strict litter hygienic conditions are recommended. Cat owners need to be educated to prevent and manage house-soiling in their cats. Introduction House-soiling is one of the most common behavioural problems reported in cats.1-3 It consists of the deposition of urine (periuria), faeces (perichezia) or both outside the litter box.4,5,8-11 House-soiling is considered to impair the cat-owner relationship, so much so that it is one of the main reasons for cat relinquishment2,12,13 and the second main owner complaint (39%) after aggression (47%).9 Despite owners' concerns, the solution to house-soiling often relies on simply understanding the factors behind it, whether medical or management related, and handling them.14 House-soiling can be manifested through different types of elimination.4,5 If it involves urine, it can be either urine-marking behaviour (ie, spraying) or voiding.15,16 Spraying usually occurs with the cat in a standing position, depositing a small amount of urine on vertical objects/surfaces, and is elicited for territorial, competitive or sexual reasons.4,15,17 Spraying and voiding are considered behavioural problems by owners.14 As there are many reasons why a cat voids its bladder (or bowels, in the case of faecal house-soiling) outside the litter box, house-soiling is considered a multifactorial problem.18
Studies conducted mainly in the USA, the UK and Australia have postulated many risk factors for house-soiling in cats. The characteristics of the litter (eg, individual preference, granule size, new litter, cleaning frequency) and the litter box (eg, covered vs open, location, number),5,19,20 cats' characteristics (eg, breed and age),10,21 anxiety related to significant social or environmental challenges (eg, multi-cat household, presence of other animals),2,4,22-24 negative associations with the litter box (eg, pain during its use)4 and a wide range of medical conditions, including age-related conditions (eg, arthritis, cognitive dysfunction, urinary tract or kidney diseases, neurological diseases and general weakness),2,4,19 have all been associated with this behavioural problem.16-18 Medical conditions causing discomfort at the urinary or gastrointestinal level (ie, cystitis, diarrhoea) have been considered risk factors for its occurrence, as they increase the urgency to evacuate.17 Moreover, it seems that the type of medical condition may lead to a specific type of house-soiling: a history of urinary tract disease could predispose to urinary house-soiling, while osteoarticular, pelvic or gastroenteric diseases could predispose to faecal house-soiling.2,19,22 Although house-soiling is a widespread problem, there is still no clear information regarding the possible risk factors for its occurrence.2 In particular, the studies to date were performed mainly in countries outside Europe, on a limited number of cats presented to veterinary consultations for behavioural problems (ie, with a risk of overestimating the prevalence of house-soiling in the population) or without considering multivariable regression models. Therefore, scientific evidence on house-soiling and its risk factors is still scant. We hypothesised that respondents' details, the cats' intrinsic details, and litter and litter box details would be associated with house-soiling in pet cats. This research aimed to detect the possible factors that may increase or decrease the likelihood of house-soiling, to fill the current gap in knowledge and to enhance both cat-human relationships and cat welfare. Survey Details of the design and distribution of the survey and the description of the study population's demographic characteristics and house-soiling prevalence have been reported previously.25
Briefly, the survey was developed through a process of iterative review by the researchers, piloted by the authors with 20 cat owners and adjusted in response to feedback. The study design followed the key design features that Dean26 and Christley27 suggested for developing a valid questionnaire in veterinary medicine. The survey was digitised using Qualtrics software (Qualtrics; www.qualtrics.com) and was open between March and May 2022. Italians owning one or more domestic cats and having a litter box for them were invited to take part in the survey. Invitation letters in Italian and the link to the survey were disseminated through social media, associations and veterinary institutions. The anonymous survey comprised 18 closed and three open-ended questions asking for the respondents' housing, family and pet details, cats' details, litter details and whether the cat showed elimination outside the litter box (see Table S1 in the supplementary material). If the respondent replied 'Yes' to question 17 (ie, 'Does your cat eliminate outside the litter box?'), a further set of four questions was asked, regarding elimination type, locations of the eliminations, the cat's posture and whether the cat had health problems (see Table S1 in the supplementary material). A power calculation28 determined that 2736 survey responses would be representative of the Italian cat population, which was estimated at 10.1 million in 2022,29 with a 3%30 absolute precision and a 99.9% confidence interval (CI). The survey received 2839 responses. Of the total responses, 2794 met the inclusion criteria (respondents owning one or more domestic cats and providing a litter box for them). Data for 3106 cats were retained, reaching a significant sample size. Data handling and definition of the variables The full description of the data handling has been published previously.25 Categories with an insufficient number of answers (ie, <5% of answers) were combined to avoid unbalanced data in the regression.31 The complete list of the names, descriptions and categories of the variables analysed in this study was reported previously25 and is also given in the supplementary material (Table S2). Statistical analysis Descriptive statistics of all the numeric, categorical and dichotomous variables for the entire data set have been published previously.25 For the subset of respondents who had at least two litter boxes, descriptive statistics of the number of cats, the number of litter boxes, the total number of locations with at least one litter box (ie, the total number of rooms ticked as locations with at least one litter box, inferred from question 13 of the survey) and the number of litter boxes per location were computed (see Table S3 in the supplementary material). The variables 'Number of cats' and 'Number of litter boxes' were initially treated as numeric to calculate the number of litter boxes per cat (ie, the ratio between the number of litter boxes and the number of cats; a worked sketch of this derivation is given below). The number of cats and the number of litter boxes were then transformed into categorical variables, and descriptive statistics of the categorical and dichotomous variables referring to the subset of cats showing house-soiling were computed for this study.28
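As a concrete illustration of this data-handling step, the following is a minimal Python/pandas sketch (the authors performed their analysis in R; the toy data frame and the column names are illustrative assumptions, not the survey data):

    import pandas as pd

    # Illustrative respondent-level records (NOT the survey data)
    df = pd.DataFrame({
        "n_cats":        [1, 3, 2, 5],
        "n_litterboxes": [2, 3, 2, 4],
        "n_locations":   [1, 2, 1, 3],  # rooms with >= 1 litter box
    })

    # Numeric derived variables described in the text
    df["boxes_per_cat"]      = df["n_litterboxes"] / df["n_cats"]
    df["boxes_per_location"] = df["n_litterboxes"] / df["n_locations"]

    # Dummy 'Litter boxes in the same location' for the >= 2 box subset:
    # 1 if all litter boxes share one room, 0 if spread over several rooms
    subset = df[df["n_litterboxes"] >= 2].copy()
    subset["same_location"] = (subset["n_locations"] == 1).astype(int)

    print(subset[["boxes_per_cat", "boxes_per_location", "same_location"]])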
In order to identify the association between the expression of house-soiling and household characteristics, living environment features and litter box/litter characteristics and management, two-step regression models were performed on the entire data set: first univariable and then multivariable regression models for the dichotomous outcome variable 'Eliminates outside the litter boxes' (absence/presence). Moreover, to investigate whether multiple litter boxes located in the same room would increase the likelihood of house-soiling, a new categorical variable (named 'Litter boxes in the same location') was created for the subset of respondents having at least two litter boxes (1: all litter boxes in the same room; 0: litter boxes placed in different rooms) (see Table S3 in the supplementary material). This new variable was used as an independent variable in a univariable logistic regression model with house-soiling as the outcome. Then, to identify the factors associated with the different types of house-soiling (ie, urinary, faecal, and concurrent expression of urinary and faecal), a subset of data containing information for only the cats showing house-soiling was used to perform further regression models. As before, two-step regression models (first univariable and then multivariable) were performed for the dichotomous outcome variables (ie, yes/no) 'Urinary house-soiling', 'Faecal house-soiling' and 'Concurrent expression of urinary and faecal house-soiling'. In both regression analyses, to avoid over- or underestimation of the effects of the explanatory variables due to collinearity, a first step was conducted to exclude multicollinear independent variables. Multicollinearity was tested by calculating variance inflation factors (VIFs) for models containing the independent variables; the calculation of VIFs was performed using the vif function in the car package in the R environment.32 Variables with a VIF value exceeding 5 were considered collinear.33
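For readers who want to reproduce this screening step, below is a minimal Python sketch of the same logic; the authors used R's car::vif and glm, so the use of statsmodels, the toy data frame and the variable names here are illustrative assumptions. The manual backward elimination itself is not reproduced.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.stats.outliers_influence import variance_inflation_factor

    rng = np.random.default_rng(0)
    n = 500

    # Toy predictors (NOT the survey data); sqm_per_cat is built to be
    # strongly related to n_cats, mimicking a collinear pair
    df = pd.DataFrame({
        "n_cats":  rng.integers(1, 6, n),
        "n_boxes": rng.integers(1, 5, n),
    })
    df["sqm_per_cat"] = 80 / df["n_cats"] + rng.normal(0, 2, n)

    # Step 1: VIF screening -- flag variables with VIF > 5 as collinear
    X = sm.add_constant(df)
    vifs = {col: variance_inflation_factor(X.values, i)
            for i, col in enumerate(X.columns) if col != "const"}
    keep = [c for c, v in vifs.items() if v <= 5]

    # Step 2: univariable logistic regressions with Wald P values,
    # reporting odds ratios and 95% CIs as in the paper's tables
    y = rng.integers(0, 2, n)  # toy house-soiling outcome
    for col in keep:
        model = sm.Logit(y, sm.add_constant(df[[col]])).fit(disp=0)
        beta, se, p = model.params[col], model.bse[col], model.pvalues[col]
        or_, lo, hi = np.exp(beta), np.exp(beta - 1.96 * se), np.exp(beta + 1.96 * se)
        print(f"{col}: OR={or_:.2f} (95% CI {lo:.2f}-{hi:.2f}), P={p:.3f}")

In the two-step logic of the paper, only the variables surviving this screening with a univariable P value <0.10 would then enter the backward stepwise multivariable selection.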
Within the group of collinear variables, only the most representative variable was kept (eg, the size of the housing is also representative of the type of housing). The variables 'Housing type', 'Housing size' and 'Garden' were collinear, and thus only 'Housing size' was retained among the independent variables to be tested for association with house-soiling in the subsequent regression models. Similarly, high multicollinearity was noticed among the variables 'Other animals', 'Animals, other than cats', 'Animals, other than cats and dogs' and 'Dogs'; therefore, the variable that was kept was 'Other animals'. Moreover, in the regression analysis for the subset of data referring to the cats showing house-soiling, the association between the variables 'Presence of cat health problems' and 'Cat's age' was tested with a binomial model in which the presence of a feline health problem was the outcome and the cat's age was the independent variable. Cats aged >5 years were 1.7 times more likely to have health problems than younger cats (P <0.001). Since we aimed to investigate the possible risk factors that could determine the occurrence of a specific type of house-soiling, the variable 'Cat's age' was not considered further in these subsequent models, while 'Cat health status' was kept among the independent variables tested for the outcomes 'Urinary house-soiling', 'Faecal house-soiling' and 'Concurrent expression of urinary and faecal house-soiling'. After excluding the collinear variables, the associations between the independent variables and the dichotomous outcomes were tested. As a first step, univariable binary logistic regression models were carried out to test the pairwise associations between each independent variable and each outcome. The P values of each independent variable tested in a univariable binary logistic regression were calculated using the Wald test, and, for each outcome, the variables that showed a P value <0.10 were considered for inclusion in the backward stepwise selection for the multivariable logistic regression models. The backward elimination was run manually. Observations with missing values were automatically excluded from the analyses. Predictive variables were removed until all variables in the final model had a P value ⩽0.10 and the lowest Akaike information criterion (AIC) value for the model was attained. All the univariable and multivariable models were performed using functions belonging to the packages lme4, lmtest, nlme, lsmeans and car in the R environment.32 The results are reported as odds ratios (ORs), CIs and P values. The significance threshold was set at P ⩽0.05, and P values >0.05 and <0.10 were considered trends towards significance. Results The description of the demographic population characteristics, cats' living environment, litter box/litter management and prevalence of house-soiling in Italian pet cats for the entire data set (3106 cats) has been published previously.25 For respondents having at least two litter boxes (n = 1636 cats), the median number of litter boxes was 3 (interquartile range [IQR] 2-4; range 2-30), the median number of cats was 3 (IQR 2-5; range 1-30), the median total number of locations with at least one litter box was 1 (IQR 1-2; range 1-7) and the median number of litter boxes per location was 2 (IQR 1.5-3; range 0.4-18).
House-soiling was shown by one-sixth of the total study population of cats, with a reported prevalence of 16.74%. Most cats eliminating outside the litter box showed urinary house-soiling (54.60%), with faecal house-soiling (24.90%) or concurrent urinary and faecal house-soiling (20.50%) being less frequent. The cats performing house-soiling eliminated mainly in the same spot (64.64%), specifically on objects (31.66%) or near the litter box (28.25%), assuming a squatting posture (35.24%). However, almost one-third of the respondents reported not knowing the posture, since the cats were never observed while eliminating. Most cats that were house-soiling were healthy (80.18%); among those with pathologies, the main ones were gastrointestinal/musculoskeletal (10.91%) or urinary tract diseases (8.91%) (for further details, see Tateo et al25). In cats that were provided with ⩾2 litter boxes, house-soiling reached a prevalence of 22.2% (364/1636). Risk factors for house-soiling In univariable models, house-soiling was associated with housing size (P = 0.038), number of children aged under 7 years (P = 0.027), presence of other animals (P <0.001), number of dogs (P <0.001), number of cats (P <0.001), square metres per cat (P <0.001), cat's breed (P = 0.030), cat's age (P <0.001), number of litter boxes (P <0.001), type of litter box (P <0.001), litter box location in the living room (P = 0.002), litter box location in the bedroom (P = 0.012), litter box location under the stairs (P = 0.017), type of litter (P <0.001), litter scooping frequency (P = 0.004) and litter full replacement frequency (P = 0.007). The full list of Wald test P values for the predictive variables associated with the manifestation of house-soiling is reported in the supplementary material (Table S5). In the univariable model considering only the respondents with at least two litter boxes, whether multiple litter boxes were placed in the same or different locations was not associated with the occurrence of house-soiling (P = 0.218).
The variables retained in the final multivariable regression model for the expression of house-soiling behaviour (model P value <0.001, AIC = 2454.30) are shown in Table 1. The presence of dogs or other cats living in the family was strongly associated with an increased probability of cats showing house-soiling, with cats living with 1 dog or ⩾3 dogs being 1.5-2 times more likely to show this behavioural problem than cats living in a family with no dogs (P <0.001). Similarly, cats living in large groups, with ⩾4 cats, were almost twice as likely to show house-soiling as cats living alone (P = 0.018). Among the cats' intrinsic characteristics, older age was a risk factor for house-soiling, with cats having >1.5-fold higher odds of showing this behavioural problem if they were aged >2 years compared with younger cats (P <0.001). The number and type of litter boxes were also risk factors associated with house-soiling: cats living in an environment with more than one litter box were almost twice as likely to show this behaviour (P <0.001), and the odds of having cats showing house-soiling increased by 1.4 times if the litter box was open compared with a covered box (P = 0.021). The litter scooping frequency was also retained in the final model, as scooping frequencies lower than once a day significantly increased the odds of house-soiling (P <0.001) compared with a litter scooping frequency of twice a day. The litter type was also associated with the expression of house-soiling, with cats being more likely to show this behavioural problem if the litter was of a biodegradable or 'other' type (ie, papers, lentils) compared with the clumping type (P = 0.019). The variables of litter full replacement frequency and litter box location under the stairs were retained in the model, as the AIC obtained by including these variables in the multiple regression model was smaller than the AIC of the model without these factors (AIC = 2493). Therefore, the likelihood of having cats expressing house-soiling seems to increase when full litter replacement happened rarely (ie, 'clean when needed', 'never cleaned') (P = 0.050). Litter box location under the stairs also seems to increase the probability of house-soiling (P = 0.087).
Risk factors for urinary house-soiling In univariable models, urinary house-soiling was associated with housing size (P = 0.048), number of dogs (P = 0.008), square metres per cat (P = 0.011), number of litter boxes (P = 0.029), type of litter box (P = 0.003) and cat health status (P <0.001). The full list of Wald test P values for the predictive variables associated with urinary house-soiling is reported in the supplementary material (Table S6). The variables retained in the final multivariable regression model for the expression of urinary house-soiling behaviour (model P value <0.001, AIC = 534.08) are shown in Table 2. The expression of urinary house-soiling behaviour in cats was strongly associated with the presence of urinary tract diseases, as cats with these health problems were about nine times more likely to urinate outside the litter box than healthy cats (P <0.001). The number and type of litter boxes were also associated with urinary house-soiling: cats with 3 and ⩾4 litter boxes at their disposal were >2 and approximately 4 times more likely to show urinary house-soiling (P = 0.021 and P <0.001, respectively) compared with those living in houses with one litter box, and the odds of cats showing urinary house-soiling more than doubled if the litter box was covered instead of open (P = 0.002). The expression of urinary house-soiling behaviour in cats was not proportionally related to the number of dogs living with them, since cats were four times more likely to show urinary house-soiling when they lived with no dogs or one dog compared with cats that lived with ⩾3 dogs (P = 0.003 and P = 0.002, respectively). Risk factors for faecal house-soiling In univariable models, faecal house-soiling was associated with number of cats (P = 0.009), square metres per cat (P = 0.036), number of litter boxes (P = 0.016), litter box location on the balcony (P = 0.008) and cat health status (P <0.001). The full list of Wald test P values for the predictive variables associated with faecal house-soiling is reported in the supplementary material (Table S7). The variables retained in the final multivariable regression model for the expression of faecal house-soiling behaviour (model P value <0.001, AIC = 448.52) are shown in Table 3. The expression of faecal house-soiling behaviour in cats was strongly associated with the presence of health problems belonging to the category 'other' (ie, gastrointestinal and musculoskeletal problems), as cats with these health problems were more than twice as likely to defecate outside the litter box as healthy cats (P <0.001). The number and location of litter boxes were also important cofactors for faecal house-soiling. Cats were less likely to defecate outside the litter box if they had ⩾4 litter boxes at their disposal compared with cats living in houses with one litter box (P = 0.050), and less likely to perform faecal house-soiling when the litter boxes were located on the balcony or in the bathroom (P = 0.018 and P = 0.029, respectively).
Risk factors for concurrent urinary and faecal house-soiling In univariable models, the concurrent expression of urinary and faecal house-soiling was associated with housing size (P = 0.010), number of dogs (P = 0.001), number of litter boxes per cat (P = 0.029), type of litter box (P = 0.006), type of litter (P = 0.003) and cat health status (P = 0.017). The full list of Wald test P values for the predictive variables associated with concurrent house-soiling is reported in the supplementary material (Table S8). The variables retained in the final multivariable regression model for the expression of concurrent urinary and faecal house-soiling behaviour (model P value <0.001, AIC = 411.47) are shown in Table 4. The number of dogs living with the cats was strongly related to the concurrent expression of urinary and faecal house-soiling, with respondents having 2 or ⩾3 dogs being three times more likely to have a cat showing this behavioural problem than respondents without dogs (P <0.001). Cats with urinary tract diseases were less likely to be reported as showing the concurrent expression of urinary and faecal house-soiling in comparison with healthy cats (P = 0.004). The type of litter and the full litter replacement frequency were also associated with concurrent urinary and faecal house-soiling. In particular, litter types belonging to the category 'other' (ie, papers, lentils) performed worse than silica gel litters, increasing the likelihood of cats urinating and defecating outside the litter box eight-fold compared with silica gel (P = 0.027). Very frequent full litter replacement (more than two/three times a week, or weekly) increased the odds of cats urinating and defecating outside the litter box by >7 and 4 times (P = 0.035 and P = 0.050, respectively) compared with a full replacement frequency of every 10/20 days. Discussion This study documents the prevalence of house-soiling in Italian cats and describes, for the first time, the possible factors associated with it. Surprisingly, our prevalence was lower than that reported in other studies.2,4,8,11,23 This was probably because, in some of those studies, the prevalence was calculated in cats presented at veterinary consultations for behavioural problems, while our sample is representative of the Italian cat population, with and without health and behavioural problems. House-soiling was associated with cats' details, living environment characteristics, and litter and litter box types and management, supporting our hypothesis. Our findings are useful in identifying the factors that may increase or decrease the risk of house-soiling and, consequently, may be useful to enhance cat-human relationships and cat welfare. This study provides evidence that may help practitioners to educate owners about preventing or managing the problem instead of considering cat abandonment, relinquishment to a shelter or euthanasia because of this behavioural problem.18,34 In our multivariable model, the presence of dogs and the number of cats in the household were positively associated with the expression of house-soiling. This may be because, in the presence of dog(s) or other cats, cats may not have easy access to their litter box and/or a safe entry route that avoids an encounter with a potential enemy.16,17,34 Moreover, a cat that has been ambushed by another household pet while using the litter box may be nervous about re-using it.34
In the case of multi-cat households, cats may compete for the same resource, namely the litter box(es), and this may lead some cats to choose safer places to eliminate.17,18 However, the relationship between cat welfare and social and environmental factors is complex. Finka and colleagues24 reported that not only the number of cats in the household per se can represent a stressor, but also its combination with other environmental (eg, outdoor access, indoor space availability, human density) and endogenous (eg, breed, sex, age, neuter status) factors. Cats may prefer more private litter box locations if the litter box is placed in busy areas of the house where they do not feel safe.4,18 In our study, placing litter boxes near or under stairs seemed to increase the likelihood of house-soiling, most probably because stairs are a noisy place where many people, sometimes strangers, pass by.15 This agrees with what was reported by Neilson,34 namely that a cat that is uncomfortable with the presence of strangers/other animals can show litter box aversion due to social anxiety. In addition to placement, the distribution of multiple litter boxes in the house is also an important factor to consider. In fact, locating different litter boxes in the same area has been found to be associated with an increase in the manifestation of house-soiling.23 In our study, distributing multiple litter boxes in the same room was not associated with an increased occurrence of house-soiling. However, from our survey it was not possible to infer the actual arrangement of the litter boxes within the room, such as whether they were close to each other or close to food and water sources. This is a limitation that should be taken into account when interpreting the results, and the design of the survey should be improved in future studies. Some studies do not recommend placing multiple litter boxes within the same room,16,18 especially if they are close together, as they could be considered by the cat as one big box16 and the cat may show aversion to all of them alike. Placing the different litter boxes in different locations is therefore encouraged for cats that are house-soiling.4 In our subset considering only the respondents who had at least two litter boxes, the prevalence of house-soiling in cats was higher than in the total data set. This could be due either to the greater number of cats owned by those respondents with more litter boxes or to a practice put in place to try to reduce the occurrence of house-soiling. Increasing the number of available litter boxes is certainly one way to manage house-soiling, but it should be complemented by other practices, such as offering litter boxes of different types, with different substrates, placed in different locations in the home.4 In addition to litter box location, anxiety related to negative events (eg, pain) can also lead to aversion towards the litter box. For example, older cats may have trouble climbing over the edge of a litter box and may perceive pain during litter box entry or use.16,34 In this way, a classically conditioned aversive association with litter box use may occur.
4,16 In our study, older cats were more likely to show house-soiling than younger cats (aged <2 years). These results agree with other studies2,16,34 reporting age as a risk factor for house-soiling in cats, especially if the cat has arthritis or other musculoskeletal problems. In fact, older cats are more experienced and may have learned to associate certain litter and litter box characteristics with negative experiences.22 The household pet group and the cat's behavioural needs and characteristics must be considered crucial factors in the prevention and management of house-soiling. Decreasing the pet population density in multi-pet households, allowing outdoor access or access to different parts of the house, and moving the litter box away from busy places are therefore recommended.4,17,19 However, whether litter box characteristics are a risk factor for house-soiling is still a matter of debate. While Barcelos et al2 found no statistical association between litter box attributes and house-soiling, multiple litter box attributes were significantly associated with house-soiling in our study. Our results showed conflicting associations between the type of litter box and house-soiling in cats. This is in line with the literature, where a real preference for, or aversion to, open vs covered litter boxes has not been reported, as long as each type of litter box is clean and appropriate for the size of the cat.16,18,35 In our study, more than one-third of the respondents did not observe the cats performing house-soiling and only found soiled materials near the litter box (see Tateo et al25). This suggests that the house-soiling, especially concerning faeces, could be related not to real house-soiling behaviour but to an accidental drop of faeces as the cat left the litter box (eg, faeces stuck to the fur and then dropped) or to litter box characteristics precluding an old cat from comfortably entering or posturing to defecate. An open litter box could be more prone to material dispersion, especially if the cat performs very marked burying behaviour. Moreover, the cat could perceive an open-type litter box as 'less protective' from external stressors, especially in multi-pet households,16,17 and then choose elimination places where it feels safer. In this study, the number of litter boxes was positively associated with house-soiling expression, with a higher number of litter boxes increasing the likelihood of having cats that were house-soiling. This is contrary to the generally reported rule of thumb of having at least one litter box per cat in the household plus one more.4,16,18 However, as mentioned before, a higher number of litter boxes should not be considered a possible cause of house-soiling but rather a possible attempt to minimise the problem. Our study confirms the best practices already suggested, but not statistically validated, by Olm and Houpt,4 namely increasing the number of litter boxes per cat and trying to use different sizes and types (covered and open) of litter boxes to make them as attractive to the cat as possible and to re-establish their use in cats performing house-soiling.4
The type of litter and the litter box cleaning frequency have been identified as risk factors, since they can determine substrate aversion in cats.16,17,34 Litter material should meet cats' needs rather than human preferences (eg, aromatic litter).18,19,36 Our findings confirm that biodegradable or other types of litter (ie, papers, lentils) increased the likelihood of cats showing house-soiling compared with the clumping type of litter. However, beyond the type of litter, an even more critical point is litter cleanliness. A study conducted by Ellis et al20 found that cats prefer clean litter boxes to dirty ones; in particular, it seems to be the physical presence of urine and faeces in the litter that deters the cat from reusing the litter box.20 Moreover, Lawson et al23 found an association between less frequent cleaning of the litter box of both urine and faeces and the occurrence of house-soiling. In our study, we found that highly frequent scooping (ie, twice a day) decreased the likelihood of cats showing house-soiling. The same was true for the frequency of full litter replacement, with cats whose litter boxes were cleaned rarely (eg, 'clean when needed', 'never cleaned') being more likely to develop house-soiling. Our results agree with many studies on the absolute necessity of providing cats with a clean litter box to avoid house-soiling problems.4,17,18,23 Therefore, a suggested best practice to guarantee high hygienic standards is to scoop the litter box at least once a day and to fully clean it every 1-4 weeks.4,17,18 There are three types of house-soiling, namely urinary, faecal and both.4,5 Urinary house-soiling was associated with the number of dogs in the household, litter box type and number, and health problems. In agreement with Barcelos et al,2 the association with the number of dogs may be explained by the social dynamics established in a group of pets, rather than by the number of pets per se. Covered litter boxes were positively associated, probably because the dirtiness of a covered litter box is less noticeable to owners than that of an open one.18 Moreover, it has been reported that, when the litter box is dirty, cats seem to prefer a litter box soiled with faeces rather than urine.20 This may have motivated them, in the case of dirty litter, to perform urinary house-soiling. However, as mentioned before, our findings regarding the association between house-soiling and type of litter box need to be interpreted with caution. It is worth highlighting that, in agreement with the literature,19,22 urinary house-soiling is strongly associated with the presence of urinary tract disease. Feline lower urinary tract disease (FLUTD) includes several pathologies affecting the urinary system, and house-soiling is considered a typical clinical sign of FLUTD.4,16,18 There are many reasons why a cat with FLUTD might eliminate outside the litter box: cats may develop an aversion to using the litter box because of negative associations due to painful elimination,4 may have a decreased ability to retain urine4 or may prefer to urinate on cool surfaces (eg, sinks, bathtubs).37
When managing urinary house-soiling, the cat's medical history should therefore be taken into account, with urinary tract diseases considered among the first possible causes of this behavioural problem and the presence of such disease excluded. Behavioural problems may indeed be the manifestation of an underlying health problem, and veterinary input becomes essential for the management of the disease, which automatically becomes the management of the behavioural problem. Faecal house-soiling was also associated with health problems, with cats with gastrointestinal and musculoskeletal problems being more likely to develop it. Diarrhoea and constipation are recognised in other studies as possible medical causes.15,16,18 Pain or incontinence, as for urinary house-soiling, may be associated with faecal house-soiling. Moreover, a cat that has diarrhoea and gets its paws dirty while eliminating in the litter box may develop an aversion to eliminating in the same litter box again.15 Musculoskeletal problems can elicit pain in cats.16,34 For this reason, as already discussed for senior cats, cats with musculoskeletal problems may have difficulty reaching the litter box or may associate a negative experience with using it and therefore avoid it.4,16 The location of the litter boxes seems to be more important for faecal than for urinary house-soiling. Finally, from our results, concurrent urinary and faecal house-soiling seems to be more related to the cat's behavioural preferences/aversions. Consequently, appropriate locations and management of the litter boxes are strongly recommended to prevent and minimise concurrent urinary and faecal house-soiling, and a veterinary examination is suggested as the first step in the case of either urinary and/or faecal house-soiling.
Our findings need to be interpreted with caution, since this study has several limitations. First, our findings are affected by the common limitations of every survey-based study,27 and many associations must not be interpreted as causes but as practices already in place to manage house-soiling. Moreover, given the design of our survey, it is unfortunately impossible to differentiate between voiding and spraying. Similarly, from our questions it was impossible to know whether some faeces were dropped accidentally by the cats outside the litter box (ie, stuck on their fur and then dropped) or voluntarily eliminated in different locations. There was also no question about the cats' hair length, and the variable 'length of cat hair' was assigned to each cat based on the cat's breed. Consequently, there may be uncertainty about the findings related to this variable, and this limitation should be addressed to improve the design of the survey. There were also no questions regarding the arrangement of the litter boxes when placed in the same room; this may explain the lack of association found in our study between house-soiling occurrence and litter boxes placed in the same location. Moreover, there were no questions about inter-cat relationships in multi-cat households; therefore, we do not know whether tensions between cats in the same household could have increased the risk of house-soiling compared with multi-cat households without tensions. Finally, it should be acknowledged that the information related to the cats' health problems was obtained only from the owners claiming to have a cat that was house-soiling, so the presence of health problems could not be tested as a factor for all respondents. Future surveys should add specific questions to address these limitations, including a question related to hair length and open-ended questions where the respondents could provide more detail about the history of the cats. Notwithstanding these limitations, this is, to the authors' knowledge, the study with the largest population ever investigated to identify the factors associated with house-soiling in cats. Our findings provide evidence to prevent and manage this unhygienic behavioural problem and may enhance cat welfare and cat-owner relationships. Conclusions The occurrence of house-soiling in Italian cats was 16.74% and was associated with household composition, litter type and litter box management, and the cat's intrinsic characteristics, such as age and pre-existing health problems. In the case of urinary house-soiling, it seems crucial to check whether the cat has a urinary tract disease, which could be the cause of the behavioural problem and needs to be treated. Faecal house-soiling could also be related to gastroenteric and musculoskeletal disorders, while concurrent urinary and faecal house-soiling seems to be more linked to a cat's behavioural preferences/aversions and litter box management. Meeting cats' preferences for litter type and litter box type and location is recommended, as well as strict cleanliness of the litter and litter boxes. Overall, cat owners need to be educated on this matter when they acquire a kitten or adopt an adult cat, to prevent the development of this behavioural problem or to deal with it.
Acknowledgements The authors would like to thank all associations, social media groups and people who helped in sharing the survey, and all respondents who took the time to fill it in. The authors would also like to thank Chiara Zangoli and Elisa Gelli for helping with the data cleaning. The paper falls within the framework of the programmatic initiatives of the ASPA Commission for the Breeding and Feeding of Companion Animals and the ASPA Commission for Animal Welfare.
Table 1: Multivariable regression model for the dichotomous dependent variable of house-soiling in Italian cats.
Table 2: Multivariable regression model for the dichotomous dependent variable of urinary house-soiling in Italian cats.
Table 3: Multivariable regression model for the dichotomous dependent variable of faecal house-soiling in Italian cats.
Table 4: Multivariable regression model for the dichotomous dependent variable of concurrent expression of urinary and faecal house-soiling in Italian cats.
For all tables: P values in bold refer to the statistical significance or trend towards significance of the predictive variable in the model; the significance of a category against the reference is reported in regular font. CI = confidence interval; OR = odds ratio; N/A = not applicable due to high standard error estimates; Ref = reference category; SE = standard error.
Searching for Continuous Gravitational Waves with Pulsar Timing Arrays: Detection and Characterization

Gravitational Waves (GWs) are tiny ripples in the fabric of space-time predicted by Einstein's theory of General Relativity. Pulsar timing arrays (PTAs) offer a unique opportunity to detect low frequency GWs in the near future. Such a detection would be complementary to both LISA and LIGO GW efforts. In this frequency band, the expected sources of GWs are Supermassive Black Hole Binaries (SMBHBs), and they will most likely form an ensemble creating a stochastic GW background, with the possibility of a few nearby/massive sources that will be individually resolvable. A direct detection of GWs will open a new window into the fields of astronomy and astrophysics by allowing us to constrain the coalescence rate of SMBHBs, providing further tests of the theory of General Relativity, and giving us access to properties of black holes not accessible by current astronomical techniques. Here we will discuss the development of a robust detection pipeline for single resolvable GW sources that is fully tailored to the unique aspects of PTA data analysis. (JE is partially funded by the Wisconsin Space Grant Consortium and the NSF through PIRE award number 0968126.)

Introduction In the next few years pulsar timing arrays (PTAs) are expected to detect gravitational waves (GWs) in the frequency range $10^{-9}$ Hz-$10^{-7}$ Hz. Potential sources of GWs in this frequency range include supermassive black hole binary systems (SMBHBs) Sesana et al. (2008), cosmic (super)strings Olmez et al. (2010), inflation Starobinsky (1979), and a first order phase transition at the QCD scale Caprini et al. (2010). The community has thus far mostly focused on stochastic backgrounds produced by these sources; however, sufficiently nearby single SMBHBs may produce detectable continuous waves with periods on the order of years and masses in the range $10^{8} M_{\odot}$-$10^{9} M_{\odot}$ Wyithe & Loeb (2003); Sesana et al. (2009); Sesana & Vecchio (2010). The concept of a PTA, an array of accurately timed millisecond pulsars, was first conceived of over two decades ago Romani (1989); Foster & Backer (1990). Twenty years later three main PTAs are in full operation: the North American Nanohertz Observatory for Gravitational waves (NANOGrav; Jenet et al. (2009)), the Parkes Pulsar Timing Array (PPTA; Manchester (2008)), and the European Pulsar Timing Array (EPTA; Janssen et al. (2008)). The three PTAs collaborate to form the International Pulsar Timing Array (IPTA; Hobbs et al. (2010)). Many authors have focused on determining the parameter accuracy that we may hope to extract from a future detection of a continuous GW from a SMBHB. Corbin & Cornish (2010) have developed a Bayesian Markov Chain Monte-Carlo (MCMC) data analysis algorithm for
parameter estimation of a SMBHB system in which the perturbation due to the GW at the pulsar is taken into account in the detection scheme, thereby increasing the signal-to-noise ratio (SNR) and improving the accuracy of the GW source location on the sky. Recently, Lee et al. (2011) have developed parameter estimation techniques incorporating the pulsar term and have placed limits on the minimum detectable amplitude of a continuous GW source. In this article, we will briefly review detection and characterization techniques developed for analysis of real PTA data (Ellis, 2013).

Methods The GW signal from a SMBHB in a circular orbit measured at the earth can be described by 8 parameters: 2 intrinsic to the binary and 6 that are extrinsic and depend on our line of sight to the binary. The intrinsic parameters are the total mass and orbital separation (or, equivalently, the period of the binary through Kepler's 3rd law) of the binary system, and the extrinsic parameters are the sky location of the binary, the initial phase at the time of observation, the distance to the binary, and the orientation of the binary in the sky projected onto our line of sight. Since we are using the pulsars as our GW detector, we must know the distance to the pulsars in order to correctly measure the GW parameters. However, typical pulsar distance uncertainties are on the order of tens of percent (Verbiest et al., 2012), so in order to attain phase coherence in our search algorithm we must allow the pulsar distance to vary as a search parameter as well. Our parameter space will be at least 9-dimensional for a PTA comprised of one pulsar, and we will gain another parameter for every pulsar that is used in the search. For typical PTAs (20 pulsars), this means that the parameter space will be $\sim 28$-dimensional. For this reason we have chosen to use a Markov Chain Monte-Carlo (MCMC) algorithm to perform our search and parameter estimation. MCMC is a stochastic sampling method that will efficiently explore large parameter spaces and map out the probability distribution function (pdf) for the model parameters. This is accomplished through the Metropolis-Hastings algorithm, which allows the sampler to focus on high probability areas of parameter space while still exploring the entire prior volume. Next we will describe how this algorithm is used to search the parameter space for the maximum likelihood values, map out the pdfs of all parameters, and make statements about the detection of GWs in our data set.
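As an illustration of the Metropolis-Hastings step described above, here is a minimal sketch; it is not the pipeline's actual implementation, and the toy Gaussian posterior stands in for the much higher dimensional PTA posterior.

import numpy as np

def metropolis_hastings(log_post, x0, n_steps, step_scale=0.1, seed=0):
    """Minimal random-walk Metropolis-Hastings sampler.

    log_post   : callable returning the log posterior density of a point.
    x0         : starting parameter vector.
    step_scale : width of the Gaussian proposal distribution.
    """
    rng = np.random.default_rng(seed)
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    lp = log_post(x)
    chain = np.empty((n_steps, x.size))
    n_accept = 0
    for i in range(n_steps):
        prop = x + step_scale * rng.standard_normal(x.size)
        lp_prop = log_post(prop)
        # Accept with probability min(1, exp(lp_prop - lp)), done in log space.
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = prop, lp_prop
            n_accept += 1
        chain[i] = x
    return chain, n_accept / n_steps

# Toy stand-in for the 9+ dimensional PTA posterior: a 2-d unit Gaussian.
chain, acc = metropolis_hastings(lambda x: -0.5 * np.sum(x**2),
                                 x0=[3.0, -3.0], n_steps=20000, step_scale=0.5)
print(f"acceptance rate {acc:.2f}; posterior mean {chain[5000:].mean(axis=0)}")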
Characterization Our goal is to measure both intrinsic and extrinsic parameters of the SMBHB through GW observations. To do this we must explore the large parameter space of the GW parameters in addition to the pulsar distances themselves. To efficiently locate the global maximum in the parameter space we make use of parallel tempering. Parallel tempering involves running several MCMC chains in parallel, each evaluating the likelihood function of our data raised to some power 1/T, where T ranges from 1 to T_max; T = 1 represents the true likelihood function, and higher temperatures give flatter versions of the true likelihood function. This allows the hotter chains to explore the likelihood surface much more quickly, and the algorithm then communicates information from the hotter chains back down to the T = 1 chain. Further discussion of the setup of parallel tempering is beyond the scope of this document; suffice it to say that this step is critical in locating the maximum likelihood quickly. We have simulated a moderately strong GW source and have run our analysis. Figure 1 shows that the algorithm quickly locates the maximum likelihood and the injected parameters for an SNR = 20 injection: the parallel tempering scheme allows us to locate the global maxima of the log-likelihood and all parameters within the first $\sim 6 \times 10^{4}$ steps. Once we have located the maximum likelihood we can begin to collect samples of the pdfs of the model parameters. This phase of the algorithm is called the characterization phase. During this phase we will learn about any correlations among parameters or about any multimodal structure in the likelihood surface. As an example, we show the 2-d pdfs of the sky location, and of the mass and distance to the source, in Figure 2. Here we see that we can recover the sky location with a smaller error box for louder GW signals. In addition, with a louder signal we can break the degeneracy between the mass of the system and the distance to the source.
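The parallel tempering scheme described above can be sketched as follows; this is a toy illustration rather than the authors' code, with the temperature ladder, step size and the bimodal stand-in likelihood chosen arbitrarily. Tempering only the likelihood (not the prior) and swapping adjacent chains with probability min(1, exp[(beta_c - beta_{c+1})(lnL_{c+1} - lnL_c)]) are the standard choices.

import numpy as np

def parallel_tempering(log_like, log_prior, x0, n_steps, temps, step_scale=0.5, seed=0):
    """Toy parallel-tempering sampler.

    Each chain c samples the tempered posterior L(x)^(beta_c) * prior(x) with
    one random-walk Metropolis update per iteration, after which a swap
    between a randomly chosen adjacent pair of chains is proposed.
    """
    rng = np.random.default_rng(seed)
    betas = 1.0 / np.asarray(temps, dtype=float)   # beta = 1/T; temps[0] should be 1
    n_chains, ndim = betas.size, len(x0)
    x = np.tile(np.asarray(x0, dtype=float), (n_chains, 1))
    logl = np.array([log_like(xi) for xi in x])
    cold = np.empty((n_steps, ndim))
    for step in range(n_steps):
        for c in range(n_chains):
            prop = x[c] + step_scale * rng.standard_normal(ndim)
            ll_prop = log_like(prop)
            dlp = betas[c] * (ll_prop - logl[c]) + log_prior(prop) - log_prior(x[c])
            if np.log(rng.uniform()) < dlp:
                x[c], logl[c] = prop, ll_prop
        c = int(rng.integers(n_chains - 1))        # swap proposal for chains c, c+1
        if np.log(rng.uniform()) < (betas[c] - betas[c + 1]) * (logl[c + 1] - logl[c]):
            x[[c, c + 1]] = x[[c + 1, c]]
            logl[[c, c + 1]] = logl[[c + 1, c]]
        cold[step] = x[0]                          # the T = 1 chain carries the posterior
    return cold

# Toy bimodal stand-in for a multimodal likelihood surface, flat prior in a box.
log_like = lambda x: np.logaddexp(-0.5 * np.sum((x - 3.0)**2),
                                  -0.5 * np.sum((x + 3.0)**2))
log_prior = lambda x: 0.0 if np.all(np.abs(x) < 10.0) else -np.inf
samples = parallel_tempering(log_like, log_prior, x0=[0.0],
                             n_steps=5000, temps=[1.0, 2.0, 4.0, 8.0])
print(f"fraction of T=1 samples near +3: {np.mean(samples > 0):.2f}")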
Detection Above, we have shown that we are able to characterize the parameters of the SMBHB source if it is loud enough in our data. In this section we will review how we evaluate the evidence for the presence of a GW in our PTA data set. In Bayesian statistics we directly compare the evidence for a model with and without a GW source. In practice, computing the Bayes factor is quite difficult because it involves integrating the full-dimensional probability distribution function over all model parameters. As mentioned above, in our case the parameter space can be up to 28 dimensions or even higher. We have made use of the parallel chains that we have run in the characterization phase to compute the evidence via a thermodynamic integration scheme (see e.g. Littenberg & Cornish (2009) and references therein). After we have computed the evidence for a model with and without a GW signal, we can then construct the ratio of the GW model to the non-GW model; this is known as the Bayes factor. In many cases a Bayes factor larger than 100 is considered decisive evidence, and we will adopt that convention here. Figure 3 shows the log of the Bayes factor as a function of the injected signal-to-noise ratio (SNR) for the same noise realization. Here we see that the log Bayes factor increases with injected SNR as expected and that a detection is claimed around the SNR $\sim 5$ mark. However, note that this curve can change dramatically based on the noise realization.

Conclusion In this document, we have reviewed recent progress in the development of a pipeline for detection and characterization of continuous GW sources in PTA data. This algorithm quickly locates the global maximum in parameter space in the search phase, characterizes the GW parameters in the characterization phase, and evaluates the evidence and Bayes factor in the final evaluation stage. In the future we plan on optimizing this algorithm in order to get the quickest possible run time. We also plan to include the possibility of multiple continuous GW sources, as opposed to the current default of assuming only one source.

Figure 1: Trace plots for the measurable parameters (the inclination angle, initial phase and polarization angle are not well constrained for this realization) for an SNR = 20 injection for the first $10^{5}$ steps. In all cases the green line represents the injected parameters and the blue is the chain trace. We can see that the parallel tempering scheme has allowed us to locate the global maxima of the log-likelihood and all parameters within the first $\sim 6 \times 10^{4}$ steps.

Figure 2: Marginalized 2-D posterior pdfs in the sky coordinates (θ, φ) and in the log of the chirp mass and distance ($\log \mathcal{M}$, $\log D_L$) for injected SNRs of 7, 14, and 20, shown from top to bottom. Here the injected GW source is in the direction of the Fornax cluster with chirp mass $\mathcal{M} = 7 \times 10^{8} M_{\odot}$. The distance to the source is varied to achieve the desired SNR. Here the "×" marker indicates the injected parameters, and the solid, dashed and dot-dashed lines represent the 1, 2, and 3 sigma credible regions, respectively.

Figure 3: Log of the Bayes factor plotted against injected SNR for the same signal and noise realization. The green horizontal line is the threshold in the log of the Bayes factor at which we can claim a detection, and the blue points are the log Bayes factors calculated from thermodynamic integration.
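The thermodynamic integration step used above to compute the evidence can be sketched as follows; the formula ln Z = integral over beta in [0, 1] of <ln L>_beta is standard (see Littenberg & Cornish (2009)), but the beta-ladder and the <ln L> curves below are placeholders, not results from the paper.

import numpy as np

def log_evidence_ti(betas, mean_logl):
    """Thermodynamic integration: ln Z = int_0^1 d(beta) <ln L>_beta,
    approximated here with the trapezoid rule over the temperature ladder."""
    betas = np.asarray(betas, dtype=float)
    mean_logl = np.asarray(mean_logl, dtype=float)
    order = np.argsort(betas)
    return np.trapz(mean_logl[order], betas[order])

# Placeholder ladder and <ln L>_beta curves (illustrative numbers only):
betas = np.linspace(0.0, 1.0, 16)
mean_logl_signal = 60.0 * betas**0.8   # hypothetical GW-model chain averages
mean_logl_noise = 45.0 * betas**0.8    # hypothetical noise-model chain averages

log_B = log_evidence_ti(betas, mean_logl_signal) - log_evidence_ti(betas, mean_logl_noise)
print(f"log Bayes factor ~ {log_B:.1f}; decisive if > ln(100) ~ 4.6")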
Scattering Amplitudes: Celestial and Carrollian

Recent attempts at the construction of holography for asymptotically flat spacetimes have taken two different routes. Celestial holography, involving a two dimensional (2d) CFT dual to 4d Minkowski spacetime, has generated novel results in asymptotic symmetry and scattering amplitudes. A different formulation, using Carrollian CFTs, has been principally used to provide some evidence for flat holography in lower dimensions. An understanding of flat-space scattering has been lacking in the Carroll framework. In this work, using ideas from Celestial holography, we show that 3d Carrollian CFTs living on the null boundary of 4d flat space can potentially compute bulk scattering amplitudes. 3d Carrollian conformal correlators have two different branches, one depending on the null time direction and one independent of it. We propose that it is the time-dependent branch that is related to bulk scattering. We construct an explicit field theoretic example of a free massless Carrollian scalar that realises some desired properties.

Introduction The Holographic Principle has been one of our primary routes to a theory of quantum gravity, formulated in terms of a lower dimensional field theory. Although there has been a great deal of success in understanding holography in Anti de Sitter spacetime through AdS/CFT, a similar understanding for the apparently more straightforward case of asymptotically flat spacetimes is lacking. There is, however, a concerted recent effort at rectifying this situation. There are two principal avenues of addressing this problem: Celestial Holography and Carrollian Holography. Bondi-Metzner-Sachs (BMS) [1] symmetries, which arise as asymptotic symmetries of flat spacetimes based on the null boundary, are important to both approaches. Celestial holography has grown out of the basic observation due to Strominger [2] that in asymptotically flat spacetimes, soft theorems for S-matrix elements can be thought of as Ward identities for asymptotic symmetries [3, 4, 8-14]. This correspondence is holographic in nature. The fundamental claim of celestial holography is that there is a two dimensional (2d) CFT on the celestial sphere which computes the scattering amplitudes for processes taking place in the four dimensional asymptotically flat spacetime. This computation is facilitated by writing the S-matrix in boost eigenstates [15-19], in which the Ward identities for asymptotic symmetries take the well known form of Ward identities in a 2d CFT. This CFT is known as the Celestial CFT.1 This approach to flat space holography has already produced many novel results about asymptotic symmetries and scattering amplitudes [20-30, 32] in four dimensions. The reader is pointed to the excellent recent reviews [35-37] for more details on Celestial holography. Another school of thought has been the attempt to build duals of asymptotically flat spacetime in terms of a field theory in one lower dimension that enjoys BMS symmetry. These field theories are conformal theories living on the null boundary of spacetime and can be understood as Carroll contractions of usual relativistic CFTs, which take the speed of light c to zero [39, 40]. We shall call this approach Carroll holography.
The success of this formulation has principally been in the three dimensional bulk and two dimensional field theories, where various checks have been performed between the boundary and the bulk, including the matching of entropy [41-43], stress-tensor correlations [44], and entanglement entropy [45-47]. Some other important advances are [48-53], and higher dimensional explorations include [54-56]. Crucially, the understanding of scattering processes has been lacking in this formulation. In this paper, we will provide a bridge between the two formulations. We will show that, using BMS or Conformal Carroll symmetries in a 3d field theory living on null infinity, one can formulate the scattering problem in 4d asymptotically flat spacetimes. We will further demonstrate the plausibility of our proposal by constructing an explicit realisation of Carrollian CFTs in terms of a 3d massless Carroll scalar with some desired features.

Note added: When this paper was being readied for submission, [57] appeared on the arXiv. Although both papers attempt to link Carroll and Celestial holography, our approaches are complementary.

BMS and Carroll CFTs As is now well known, and has been known since the 1960s, the symmetries of interest in asymptotically flat spacetimes in d = 4 actually extend beyond the Poincare group to an infinite dimensional group discovered initially by Bondi, van der Burg, Metzner and Sachs [1]. The BMS symmetry algebra of 4d flat spacetime at its null boundary $\mathcal{I}^{\pm}$ is given by

$[L_m, L_n] = (m - n) L_{m+n}, \qquad [\bar{L}_m, \bar{L}_n] = (m - n) \bar{L}_{m+n},$
$[L_m, M_{r,s}] = \left(\tfrac{m+1}{2} - r\right) M_{r+m,\,s}, \qquad [\bar{L}_m, M_{r,s}] = \left(\tfrac{m+1}{2} - s\right) M_{r,\,s+m}, \qquad [M_{r,s}, M_{t,u}] = 0. \quad (2.1)$

Here $M_{r,s}$ are the generators of infinite dimensional angle dependent translations at $\mathcal{I}^{\pm}$ known as supertranslations. The original BMS group was given by these infinite dimensional supertranslations on top of the usual Lorentz group, denoted here by the generators $\{L_0, L_{\pm 1}, \bar{L}_0, \bar{L}_{\pm 1}\}$. Following [5, 6], there has been an effort to consider the full conformal group on the sphere at infinity, and hence all modes of the $L_n$ generators, the so-called super-rotations.2 In the 2d Celestial CFT, superrotations or local conformal transformations on the celestial sphere are generated by a stress tensor which is the shadow transform of the subleading soft graviton [7, 8, 13]. After the shadow transformation, the subleading soft graviton theorem [7] becomes the well known Ward identity for the stress tensor in a 2d CFT. Let us now discuss 3d Carrollian CFT. We are interested in defining a 3d conformal field theory on $\mathcal{I}^{+}$, which is topologically $\mathbb{R}_u \times S^2$, where $\mathbb{R}_u$ is a null line and $S^2$ is the sphere at infinity. The null line makes the induced metric of $\mathcal{I}^{+}$ degenerate. Hence the Riemannian structure is replaced by so-called Carrollian structures on the intrinsic geometries of these hypersurfaces. CFTs living on $\mathcal{I}^{\pm}$ are naturally expected to be invariant under the conformal isometries of these Carrollian structures. We refer the reader to Appendix B for more details on Carrollian and conformal Carrollian isometries. Rather intriguingly, conformal Carrollian symmetries have been shown to be isomorphic to BMS symmetries in one higher dimension [39, 50]. Hence a 3d Carrollian CFT naturally realises the extended infinite-dimensional BMS$_4$ symmetry. These 3d Carrollian CFTs will be our field theories of interest, which we will show to be a potential candidate for a holographic description of scattering amplitudes in 4d asymptotically flat spacetimes. For these 3d theories, a particularly useful representation of the vector fields to consider is [54]

$M_{r,s} = z^{r} \bar{z}^{s}\, \partial_u, \qquad L_n = z^{n+1} \partial_z + \tfrac{n+1}{2}\, z^{n}\, u\, \partial_u, \qquad \bar{L}_n = \bar{z}^{n+1} \partial_{\bar{z}} + \tfrac{n+1}{2}\, \bar{z}^{n}\, u\, \partial_u. \quad (2.2)$

Here $(z, \bar{z})$ are stereographic coordinates on the sphere.
We will label the Carroll conformal fields Φ living on $\mathcal{I}^{+}$ with their weights $(h, \bar{h})$ under $L_0$ and $\bar{L}_0$. We will assume the existence of Carrollian primary fields living on $\mathcal{I}^{+}$. The primary conditions are [33, 54] that $L_n$ and $\bar{L}_n$ with $n > 0$ annihilate the primary, and that, for the supertranslations, $M_{r,s}$ with any one of r or s greater than zero also annihilates the primary field. In particular, it is important to stress that this last condition is an additional requirement on these fields, unlike in a 2d CFT. The transformation rules of the three dimensional Carrollian primary fields $\Phi_{h,\bar{h}}(u, z, \bar{z})$ at an arbitrary point on $\mathcal{I}^{+}$ under the infinitesimal BMS transformations are given by (2.6); there is a similar relation for the antiholomorphic piece. Let us now discuss how the structure of a Carrollian CFT that we have described above fits into the framework of Celestial Holography.

Relation to 4d scattering amplitudes via Celestial Holography As we have stressed above, one of the main reasons for studying Carrollian CFTs is that their symmetries are the same as the extended BMS algebra. So potentially Carroll CFTs can be a holographic dual of the quantum theory of gravity in asymptotically flat spacetime. Now we know, from general considerations, that the only observables in a quantum theory of gravity in asymptotically flat spacetime are the S-matrix elements. Therefore, given a holographic dual, one should be able to compute the spacetime S-matrix from it. Moreover, if the dual is a field theory, or at least looks like one, then presumably the S-matrix elements should be somehow related to the correlation functions of the field theory. This is the point of view that we adopt in this paper. In the next section, we will focus on the correlation functions of the Carrollian CFTs. We will find that there are two kinds of correlation functions, or two branches. In one branch, the correlation functions are independent of the null time direction,3 while in the other branch they depend on it. Which of the two branches, if either, makes contact with bulk physics? In order to answer this question, we use ideas from Celestial holography. (For a quick recap of the essential features of Celestial holography, the reader is pointed to Appendix A.) In Celestial holography the dual theory is conjectured to be a 2d (relativistic) CFT which lives on the celestial sphere. The important point for our purpose is that the correlation functions of the celestial CFT are given by the Mellin transform of the 4d scattering amplitudes [15-19]. Let us briefly describe this. For simplicity, let us consider only massless particles, and parametrize the four momentum of a massless particle in terms of an energy ω and a point $(z, \bar{z})$ on the celestial sphere as in (3.1). We also introduce a symbol ε which is equal to +1 (−1) if the particle is outgoing (incoming). Using this parametrization, the Mellin transformation can be written as (3.2) [16, 17], where S is the S-matrix element for n massless particle scattering; here we have also defined $h_i = \tfrac{\Delta_i + \sigma_i}{2}$ and $\bar{h}_i = \tfrac{\Delta_i - \sigma_i}{2}$. One can show [16, 17], using the Lorentz transformation property of the S-matrix, that the object $\mathcal{M}$ on the LHS indeed transforms like the correlation function of n primary operators of weight $(h, \bar{h})$ in a 2d CFT.4 After the Mellin transformation, the coordinates $(z, \bar{z})$ can be interpreted as the stereographic coordinates of the celestial sphere and physically represent the direction of motion of the massless particle. For our purpose, however, we will use a modification [18, 19] of (3.2) such that the correlation function $\widetilde{\mathcal{M}}$ is now defined on a 3d space with coordinates $(u, z, \bar{z})$. This space can be interpreted as (future) null-infinity, with u the retarded time and $(z, \bar{z})$ the stereographic coordinates of the celestial sphere.
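The display equations (3.1), (3.2), (3.6) and (3.7) referred to in this section are not reproduced above. As a sketch, hedging on normalization conventions, which vary across the celestial literature [15-19], their standard forms are:

\[
p^{\mu} = \omega\left(1 + z\bar{z},\; z + \bar{z},\; -i(z - \bar{z}),\; 1 - z\bar{z}\right), \tag{3.1}
\]
\[
\mathcal{M}_n(\{z_i, \bar{z}_i\}) = \prod_{i=1}^{n} \int_{0}^{\infty} d\omega_i\; \omega_i^{\Delta_i - 1}\; S\big(\{\epsilon_i \omega_i, z_i, \bar{z}_i, \sigma_i\}\big), \tag{3.2}
\]
\[
\widetilde{\mathcal{M}}_n(\{u_i, z_i, \bar{z}_i\}) = \prod_{i=1}^{n} \int_{0}^{\infty} d\omega_i\; \omega_i^{\Delta_i - 1}\, e^{-i\epsilon_i \omega_i u_i}\; S\big(\{\epsilon_i \omega_i, z_i, \bar{z}_i, \sigma_i\}\big), \tag{3.6}
\]
\[
\phi^{\epsilon}_{h,\bar{h}}(u, z, \bar{z}) = \int_{0}^{\infty} d\omega\; \omega^{\Delta - 1}\, e^{-i\epsilon \omega u}\; a(\epsilon\,\omega, z, \bar{z}, \sigma), \tag{3.7}
\]

with $\Delta = h + \bar{h}$ and $\sigma = h - \bar{h}$.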
One can show [18-20, 33] how the amplitude transforms under supertranslations, and similarly under superrotations or local conformal transformations (with the primed coordinates defined in (3.5)). The modified transformation [18, 19] takes the form (3.6), and one can show [18-20, 33], using the celebrated Soft Theorem-Ward Identity correspondence [2-4, 8-13], that $\widetilde{\mathcal{M}}$ transforms covariantly under the extended BMS$_4$ transformations. In Celestial holography the modified Mellin transformation (3.6) is used to compute the graviton celestial amplitudes in general relativity, because the original Mellin transformation integral (3.2) is not convergent due to the bad UV behaviour of graviton scattering amplitudes in GR. It turns out that when (3.6) is used instead, the time coordinate u acts as a UV regulator, and as a result $\widetilde{\mathcal{M}}$ is finite. For more details the reader is referred to [20, 33, 34]. Now it is useful to write the modified celestial amplitude $\widetilde{\mathcal{M}}$ as a correlation function of fields defined on null infinity. So, following [18], we define the field $\phi_{h,\bar{h}}(u, z, \bar{z})$ as in (3.7), where $a(\omega, z, \bar{z}, \sigma)$ is the momentum-space annihilation (creation) operator of a massless particle with helicity σ when ε = 1 (ε = −1). In terms of these fields, we can write $\widetilde{\mathcal{M}}$ as a correlation function on null infinity. Now, the field $\phi_{h,\bar{h}}(u, z, \bar{z})$ transforms covariantly under the extended BMS$_4$ transformations: under superrotations [18-20, 33], with the primed coordinates defined in (3.5), and similarly under supertranslations, one obtains the transformation rules (3.9) and (3.10). It is easy to see that for infinitesimal BMS$_4$ transformations, (3.9) and (3.10) reduce to the equations (2.6) written in terms of the primaries of a Carrollian CFT. Therefore, it is not unreasonable to wonder whether one can identify the Carrollian primaries with the primaries $\phi_{h,\bar{h}}(u, z, \bar{z})$ of Celestial Holography. If this is true, then this will open the road towards connecting the Carrollian CFT correlation functions with bulk scattering amplitudes, because the field $\phi_{h,\bar{h}}(u, z, \bar{z})$ is directly related to standard creation-annihilation operators by (3.7).

The central claim Our central claim in this paper is the following: it is natural to identify the time-dependent correlation functions of primaries in a Carrollian CFT with the modified Mellin amplitude $\widetilde{\mathcal{M}}$. In other words, the time-dependent correlators of a 3d Carrollian CFT compute the 4d scattering amplitudes in the Mellin basis. We would like to emphasize that we are not saying that every Carrollian CFT computes a spacetime scattering amplitude; but if a specific Carrollian CFT does so, then it does so in the modified Mellin basis (3.6). Now the reader might think that this identification is kinematical, because both objects transform in the same way under the relevant symmetries. While this is correct, the dynamics enters non-trivially when we choose one of the branches of the conformal Carrollian correlation functions. Before we end this section, we would like to emphasize a few points. First of all, Celestial holography, as it stands, requires the existence of an infinite number of conformal primary fields with complex scaling dimensions. So any Carrollian CFT which can compute 4d scattering amplitudes should also have this feature. The second point concerns the symmetry group of a Carrollian CFT. Over the last few years, the study of tree level massless scattering amplitudes using the framework of celestial holography has revealed a much larger asymptotic symmetry group than the extended BMS$_4$. For example, the SL(2) current algebra at level zero turns out to be a symmetry algebra [20] of tree level graviton scattering amplitudes.
In fact it has been shown that $w_{1+\infty}$ is a symmetry algebra [22-25, 28] for massless scattering amplitudes. This also holds at the loop level in some special cases. Therefore the asymptotic symmetry algebra for flat spacetime is expected to be far richer than the extended BMS$_4$ algebra. The current Carrollian framework has to be extended in order to accommodate these additional symmetries.

Correlation functions in Carrollian CFT and different branches Having already revealed the main punchline of our paper, we now go ahead and show the existence of two different branches of correlation functions for 3d Carroll CFTs. We are interested in computing the two point vacuum correlation functions of primary fields in these 3d Carroll CFTs. We will see that, just like in relativistic CFTs, it is possible to completely determine (up to constant factors) the two and three-point functions using symmetry arguments. We demand that the correlation functions are invariant under the Poincare subalgebra ($\{M_{l,m}, L_n, \bar{L}_n\}$ with l, m = 0, 1 and n = 0, ±1) of the BMS$_4$ algebra. Consider the two point function $G = \langle \Phi_{h,\bar{h}}(u, z, \bar{z})\, \Phi_{h',\bar{h}'}(u', z', \bar{z}') \rangle$. Imposing Carroll boost invariance ($u \to u + bz + \bar{b}\bar{z}$)5 and combining the resulting constraints, one finds two independent solutions, which give rise to two different classes of correlators, as referred to in the section above.6 The first class of correlators corresponds to the choice of solution in which the u-dependence drops out. Using invariance under the subalgebra $\{L_{0,\pm 1}, \bar{L}_{0,\pm 1}\}$ of BMS$_4$, this gives rise to the standard 2d CFT 2-point correlation function [54]. For our discussions in this paper, we will not be interested in this particular branch. The second class of solution corresponds to the choice in which the correlator is supported at coincident points on the sphere; this gives rise to the correlation functions in which we are interested [18]. We will call this the delta-function branch.7 By demanding invariance under the remaining subalgebra, with $\Delta = h + \bar{h}$ the scaling dimension and $\sigma = h - \bar{h}$ the spin, the solution of the resulting equations is, with C a constant factor,

$G(u, z, \bar{z};\, u', z', \bar{z}') = C\, \frac{\delta^{2}(z - z')}{(u - u')^{\Delta + \Delta' - 2}}\; \delta_{\sigma + \sigma', 0}. \quad (4.11)$

Once the correlator has the form (4.11), the equation which imposes $M_{1,1}$, i.e. the transformation $u \to u + z\bar{z}$, is trivially satisfied. Notice that, very unlike a relativistic CFT 2-point function, here one does not need equal scaling dimensions for the fields to get a non-zero answer. Thus this branch cannot be accessed by taking a limit of relativistic CFT correlation functions. Let us now discuss how one can obtain the same two point function from the modified Mellin transformation (3.6) of scattering amplitudes [18]. Of course, in the case of the two point function, the scattering amplitude is trivial and is given by the free inner product of one-particle states. Here the notation is standard, except that we label the helicity of an external particle as if it were an outgoing particle. Using the parametrization (3.1), the inner product is proportional to $\delta(\omega_1 - \omega_2)\, \delta^{2}(z_1 - z_2)$; in our notation $\delta^{2}(z) = \delta(x)\delta(y)$. Performing the modified Mellin transformation then gives the two point function, and we can see that, modulo the constant normalization, it has the same structure as the time dependent two point function of the Carrollian CFT.

5. Carroll boosts are identical to spatial translations in 4d Minkowski spacetime. For more details, see Appendices A and B. 6. This was noticed independently in [58].
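As a worked version of this last step (a sketch, with the overall normalization hedged), the energy delta function in the free inner product collapses the two Mellin integrals of (3.6) to a single standard Gamma-function integral:

\[
\widetilde{\mathcal{M}}_2 \;\propto\; \delta^{2}(z_1 - z_2)\, \delta_{\sigma_1, \sigma_2} \int_{0}^{\infty} d\omega\; \omega^{\Delta_1 + \Delta_2 - 3}\, e^{-i\omega(u_1 - u_2 - i0^{+})}
\;=\; \delta^{2}(z_1 - z_2)\, \delta_{\sigma_1, \sigma_2}\; \frac{\Gamma(\Delta_1 + \Delta_2 - 2)}{\big[i(u_1 - u_2) + 0^{+}\big]^{\Delta_1 + \Delta_2 - 2}},
\]

which reproduces the $\delta^{2}(z - z')\,(u - u')^{-(\Delta + \Delta' - 2)}$ structure of the delta-function branch (4.11).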
More importantly, we can see that the presence of the spatial delta function $\delta^{2}(z_1 - z_2)$ in the Carrollian two point function has the (dual) physical interpretation that the momentum direction of a free particle in the bulk spacetime does not change. In the same way, following [18], one can show that the time dependent three point function in the Carrollian CFT is zero. This has the physical interpretation that in Minkowski signature the scattering amplitude of three massless particles vanishes due to energy-momentum conservation. Therefore, the peculiarities of the time dependent correlation functions of a Carrollian CFT are precisely what we need to connect to the spacetime scattering amplitudes of massless particles. This is the main message of our paper.

Massless Carrollian scalar field In this section, our primary goal is to provide a concrete example of a 3d quantum field theory which is invariant under the BMS$_4$ algebra and gives us correlation functions in the delta-function branch. We will now focus on a particularly simple example, that of a massless Carroll scalar field.

The Action The minimally coupled massless scalar field on Carrollian backgrounds is described by an action (5.1) built out of $(\tau^{\mu}\partial_{\mu}\Phi)^{2}$, where the $\tau^{\mu}$ are the vector fields defined in Eq. (B.1). We shall work with a flat Carroll background by fixing $\tau^{\mu} = (1, 0)$ and $g_{ij} = \delta_{ij}$. On the flat Carroll background the action takes the simple form of a pure time-kinetic term,

$S = \frac{1}{2} \int du\, d^{2}x\; (\partial_u \Phi)^{2}. \quad (5.2)$

The two dimensional version of this action (5.1) has been extensively used to study the tensionless limit of string theory, where the BMS$_3$ algebra replaces the two copies of the Virasoro algebra as worldsheet symmetries (see e.g. [59]). We will see that here this simple 3d action carries the seeds of a potential dual formulation of 4d gravity in asymptotically flat spacetimes. The above action (5.1) also arises as the leading piece in the Carroll expansion (speed of light c → 0) of the action of the free relativistic massless scalar field theory in (2+1) dimensions.

Two-point function of the Carroll scalar Now we turn our attention to the construction of the two point correlation function of the Carroll scalar. We shall do this in two ways, first by a Green's function method and then by canonical analysis. We will see that we end up on the delta-function branch elucidated by our general analysis in the previous sections.

Green's function We first compute the two point correlation function of the massless Carroll scalar by computing the Green's function. The Green's function equation for this theory would be $\partial_u^{2}\, G(u, x^{i}) = \delta(u)\, \delta^{2}(x)$, which can be solved in the usual way by going to Fourier space, where it takes the form $\tilde{G}(\omega, k^{i}) = -1/\omega^{2}$. Transforming back into position space, the spatial integral simply produces $\delta^{2}(x)$; as the equation of motion does not have any spatial derivatives, the frequency integral diverges at small ω, and we regulate it by throwing away the troublesome infinite (u-independent) piece, yielding, up to normalization, $G(u, x^{i}) \propto |u|\, \delta^{2}(x)$. For scalar fields, the spin σ = 0 and the conformal weights of $\Phi(u, z, \bar{z})$ are $(h, \bar{h}) = (1/4, 1/4)$, i.e. Δ = 1/2 (e.g. from the action (5.2)). Hence the answer obtained by the Green's function method is in perfect agreement with the 2-point function (4.11) derived from symmetry arguments in the earlier section: with Δ = Δ' = 1/2, the exponent there is Δ + Δ' − 2 = −1, so (4.11) is precisely $\propto |u - u'|\, \delta^{2}(z - z')$.

Canonical approach We will now rederive the scalar two point correlation function by taking recourse to canonical quantisation. To proceed, we will put the free scalar theory on the round sphere and then take the radius-to-infinity limit to recover our answer in plane coordinates. The scalar field action on a manifold with topology $\mathbb{R} \times S^{2}$ is the corresponding curved-background version of (5.2); here k is related to the radius of the sphere R by $k = \frac{1}{2R}$.
The metric on the sphere is denoted by $q_{ij}$ and is given by (B.5). The Euler-Lagrange equation of motion for this action is

$\ddot{\Phi} + \frac{k^{2}}{2}\, \Phi = 0. \quad (5.9)$

Generic real solutions are oscillator modes in u, built from mode operators C and their conjugates; the canonical commutation relations between the C fields and the Hamiltonian take the standard oscillator form. The Hamiltonian contains the unphysical zero point energy in the form of a $\delta^{2}(0)$. The physical part of the Hamiltonian then implies that the time translation symmetric ground state should be annihilated by C. It is therefore straightforward to calculate the 2-point function (keeping in mind that there are no zero modes):

$G(u, u', z^{i}, z'^{i}) = \langle 0 |\, T\, \Phi(u, z, \bar{z})\, \Phi(u', z', \bar{z}')\, | 0 \rangle. \quad (5.14)$

Now, taking u > u' and then the limit R → ∞, or k → 0, the physical part of the two point function becomes proportional to $|u - u'|\, \delta^{2}(z - z')$, up to an additive divergent constant. We read off the scaling dimension of Φ as Δ = 1/2; since the field is spin-less, the physical part of the 2-point function (5.16) matches exactly with the one computed using symmetry arguments.

Discussions In this paper, we have provided evidence that the correlation functions of 3d Carrollian CFTs encode scattering amplitudes for 4d asymptotically flat spacetimes, specifically in the Mellin basis. The Carroll correlators have two distinct branches, and one of these, the one with explicit Carroll time dependence, which we called the delta-function branch, was the one relevant for the connection to flat space scattering. There are a number of intriguing questions that arise from our considerations in this paper. Originally, the version of flat space holography envisioned in connection with Carroll CFTs was one which emerged as a systematic limit from AdS/CFT. It is clear that the correlation functions that we focused on in this work cannot emerge as a Carroll limit of standard relativistic 3d CFT correlators in position space, since e.g. the CFT 2-point function would vanish for unequal-weight primaries, and in the time-dependent Carroll branch this does not happen. Hence it would seem that the formulation of Carroll holography we would require for the connection to scattering amplitudes is disconnected from AdS/CFT. While this makes sense, because flat space and AdS are fundamentally different, how this fits in with e.g. the programme of attempting to find flat space correlations from AdS/CFT (see e.g. [60-62]) remains to be seen. Our construction, and specifically the emergence of two different branches of correlation functions, is also reminiscent of recent advances in the tensionless regime of string theory, where three distinct quantum theories appear from a single classical theory [63, 64]. Recent findings of different correlation functions in these theories [65, 66] are an indication that perhaps there is an interesting non-trivial quantum vacuum structure underlying the Carrollian theories we have discussed in our work. It would be of interest to figure out what relation, if any, the 2d CFT branch of the Carroll correlation functions bears to 4d flat-space physics. We would also like to understand if it is possible to construct explicit examples of 3d Carroll CFTs exhibiting 2d CFT correlation functions. As mentioned before, in order to reproduce flat-space scattering, as in Celestial CFTs, Carroll CFTs need an infinite number of primary fields with complex scaling dimensions. Moreover, the celestial CFT enjoys a much larger symmetry [20-30, 32] than the extended BMS$_4$ algebra, and these additional symmetries play a central role in the holographic computation of scattering amplitudes.
We would ideally like to construct an explicit example of such a theory to provide a concrete proposal relating a gravitational theory in asymptotically flat spacetimes and a Carroll CFT with additional symmetries living on its null boundary. But it is obvious that this is presently a distant goal.

APPENDICES

A Brief review of Celestial or Mellin amplitudes for massless particles In this appendix, we provide a lightning review of Celestial amplitudes for massless particles for ready reference. The Celestial or Mellin amplitude for massless particles in four dimensions is defined as the Mellin transformation of the S-matrix element in the energies $\omega_i$ (A.1) [16, 17], where $\sigma_i$ denotes the helicity of the i-th particle, the on-shell momenta are parametrized as in (3.1), and the scaling dimensions $(h_i, \bar{h}_i)$ are defined as $h_i = \tfrac{\Delta_i + \sigma_i}{2}$, $\bar{h}_i = \tfrac{\Delta_i - \sigma_i}{2}$. The Lorentz group SL(2, C) acts on the celestial sphere as the group of global conformal transformations, and the Mellin amplitude $\mathcal{M}_n$ transforms accordingly: this is the familiar transformation law for the correlation function of primary operators of weight $(h_i, \bar{h}_i)$ in a 2d CFT under the global conformal group SL(2, C). In Einstein gravity, the Mellin amplitude as defined in (A.1) usually diverges. This divergence can be regulated by defining a modified Mellin amplitude (A.5) [18, 19], where u can be thought of as a time coordinate and $\epsilon_i = \pm 1$ for an outgoing (incoming) particle. Under a (Lorentz) conformal transformation, the modified Mellin amplitude $\widetilde{\mathcal{M}}_n$ also transforms covariantly. Now, in order to make manifest the conformal nature of the dual theory living on the celestial sphere, it is useful to write the (modified) Mellin amplitude as a correlation function of conformal primary operators. So let us define a generic conformal primary operator as in (A.8), where ε = ±1 for an annihilation (creation) operator of a massless particle of helicity σ. Under a (Lorentz) conformal transformation, the conformal primary transforms like a primary operator of scaling dimension $(h, \bar{h})$; similarly, in the presence of the time coordinate u we have the transformation (A.10). In terms of (A.8), the Mellin amplitude can be written as the correlation function of conformal primary operators, and similarly, using (A.10), so can the modified Mellin amplitude.

B Conformal Carroll symmetry We will briefly summarize the main points relating to Conformal Carroll symmetry in this appendix.

Geometric structures Throughout the paper, we have been interested in defining quantum field theories that live on the null boundary $\mathcal{I}^{\pm}$ of asymptotically flat spacetimes. The null boundary is topologically $\mathbb{R}_u \times S^{2}$, with $\mathbb{R}_u$ being null. The induced metric of $\mathcal{I}^{\pm}$ is degenerate, and Carroll structures replace the usual Riemannian structures on it. Carrollian manifolds are endowed with a degenerate symmetric metric $g_{\mu\nu}$ and its kernel vector field $\tau^{\mu}$. The isometry algebra of this structure,

$\mathcal{L}_{\xi}\, \tau^{\mu} = 0, \qquad \mathcal{L}_{\xi}\, g_{\mu\nu} = 0, \qquad \tau^{\mu} g_{\mu\nu} = 0, \quad (B.1)$

generates the Carroll algebra, which for flat Carroll manifolds reduces to the group that is obtained from the Poincare group by sending the speed of light to zero [54]. We shall see the algebra and the contraction below. We have dealt with CFTs on these Carroll backgrounds. These theories are naturally expected to be invariant under the conformal isometries of these Carrollian structures. The Conformal Killing equations on these backgrounds involve the so-called dynamical exponent N, which encapsulates the relative scaling of the space and time directions.
For N = 2, space and time dilate homogeneously. This is the case we will be interested in, as we wish to connect the conformal Carroll structures to the asymptotic structure of flat spacetimes, where of course space and time scale in the same way. In (2+1) dimensions, the set of vector fields that solves the above equation for N = 2 is parametrized by an arbitrary function $\alpha(x^{i})$ of the sphere coordinates together with vector fields $f^{i}(x^{i})$ that need to satisfy the conformal Killing equation on $S^{2}$,

$D_i f_j + D_j f_i = q_{ij}\, D_k f^{k}, \quad (B.4)$

where q is the determinant of the metric $q_{ij}$ on $S^{2}$ and D is the connection compatible with $q_{ij}$; the explicit form of these vector fields is sketched below. Choosing stereographic coordinates $(z, \bar{z})$ on the 2-sphere, such that

$ds^{2} = \frac{2\, dz\, d\bar{z}}{(1 + z\bar{z})^{2}}, \quad (B.5)$

the above equation for the components of f is solved by holomorphic and anti-holomorphic functions, i.e. $f^{z} \equiv f^{z}(z)$ and $f^{\bar{z}} \equiv f^{\bar{z}}(\bar{z})$. The algebra of these vector fields (B.3) is clearly infinite dimensional and, interestingly, closes to form the BMS$_4$ algebra (2.1) [50]. The connection between Carrollian conformal symmetries and BMS symmetries in arbitrary dimensions was clarified in [50], following closely related observations in [39].

Carroll contractions of the Conformal algebra The Carrollian limit of a relativistic CFT is reached by performing an Inönü-Wigner contraction on the relativistic conformal generators. The corresponding contraction of the spacetime coordinates for a 3d CFT is $x^{i} \to x^{i}$, $t \to \epsilon\, t$ with $\epsilon \to 0$, which is equivalent to taking the speed of light c → 0. Conformal Carroll generators are obtained from their relativistic counterparts in this limit. These generate the 3d finite Conformal Carrollian algebra, which is iso(3, 1) and hence isomorphic to the Poincare algebra in d = 4 [54]. In the contracted algebra, the sub-algebra consisting of the generators $\{J_{ij}, B_i, P_i, H\}$ forms the Carrollian algebra, the c → 0 limit of the 3d Poincaré algebra. The generators $\{J_{ij}, P_i, D, K_i\}$ form the conformal algebra of the celestial sphere, or equivalently the 4d Lorentz algebra.

3d Finite Conformal Carroll = 4d Poincare One of the most crucial observations that our work is based on is the fact that the 3d finite Conformal Carroll algebra is isomorphic to the 4d Poincare algebra. In this dictionary, the 4d Poincare generators map to the finite Conformal Carroll generators: we see that the 4d spacetime translations $M_{r,s}$ (with r, s = 0, 1) arrange themselves into the Hamiltonian, the Carroll boosts, and the temporal part of the Carroll SCT, while the Lorentz generators $L_n, \bar{L}_n$ (n = 0, ±1) become the global conformal generators on the sphere, as expected.
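Since the display equation (B.3) is not reproduced above, the following is a sketch of the standard form of the N = 2 conformal Carroll (BMS$_4$) vector fields on $\mathcal{I}^{+}$, hedged as to sign and normalization conventions (cf. [50, 54]):

\[
\xi \;=\; \Big[\alpha(z, \bar{z}) + \frac{u}{2}\left(\partial_z f^{z} + \partial_{\bar{z}} f^{\bar{z}}\right)\Big]\, \partial_u \;+\; f^{z}(z)\, \partial_z \;+\; f^{\bar{z}}(\bar{z})\, \partial_{\bar{z}},
\]

where α generates the supertranslations and the (anti)holomorphic functions $f^{z}$, $f^{\bar{z}}$ generate the superrotations; expanding α and f in modes reproduces the generators $M_{r,s} = z^{r}\bar{z}^{s}\partial_u$, $L_n$ and $\bar{L}_n$ of (2.2).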
Comprehensive Characterization of Lignans from Forsythia viridissima by UHPLC-ESI-QTOF-MS, and Their NO Inhibitory Effects on RAW 264.7 Cells

Lignans are known to be an important class of phenylpropanoid secondary metabolites. In the course of our studies on the chemodiversity of lignans, the necessity arose to develop a method for the fast detection and identification of bioactive lignan subclasses. In this study, we detected 10 lignan derivatives in different extracts of F. viridissima by UHPLC-ESI-QTOF-MS. Lignan glycosides (1 and 2), lignans (3 and 4), and lignan dimers (5-10) were identified by analysis of their exact masses and MSe spectra, along with the characteristic mass fragmentation patterns and molecular formulas. We further investigated the NO inhibitory effects of F. viridissima fractions and their major lignan derivatives to evaluate their anti-inflammatory effects. The methylene chloride fraction of F. viridissima, as well as compounds 8 and 10, showed potent dose-dependent NO inhibitory effects on RAW 264.7 cells. Corresponding to the NO inhibition by compounds 8 and 10, lipopolysaccharide (LPS)-induced inducible nitric oxide synthase (iNOS) expression was notably reduced by both compounds. Our combined data from the bioactivity results and the component analysis by UHPLC-ESI-QTOF-MS suggest that the methylene chloride fraction of F. viridissima roots could be a potential anti-inflammatory agent, and that these effects are related to major lignans, including dimeric dibenzylbutyrolactone lignans.

Introduction Lipopolysaccharide (LPS), the major cell wall component of gram-negative bacteria, induces inflammatory responses when administered to cells or animals. It induces the production of inflammatory mediators, including nitric oxide (NO), prostaglandin E2, and proinflammatory cytokines [1-4]. Although enhanced production of inflammatory mediators is important for host defense against external stimuli including LPS, excess production of inflammatory mediators causes severe inflammatory diseases, including septic shock, rheumatoid arthritis, systemic lupus erythematosus (SLE), and inflammatory bowel disease (IBD) [5-7]. Therefore, an agent that alleviates excess amounts of inflammatory mediators could be applied to treat various inflammatory diseases. Although various anti-inflammatory drugs, such as non-steroidal anti-inflammatory drugs (NSAIDs), have been developed, new candidates remain under evaluation for the development of new anti-inflammatory drugs due to the severe adverse effects of NSAIDs [8].
'Yeon-kyo' is the fruit of Forsythia viridissima (Oleaceae) and F. suspensa, listed in the 11th edition of the Korean Pharmacopoeia (KP11). Arctigenin is known as an indicator component for the quantification of F. viridissima. Lignan is one of the representative classes of secondary metabolites in F. viridissima. Other lignans, such as matairesinol, arctiin, and matairesinoside [9], have been reported as constituents of F. viridissima, along with phenylethanoid glycosides, flavonoids, and triterpenoids [10,11]. Traditionally, 'Yeon-kyo' has been used for antiviral, anti-inflammatory, diuretic, antimicrobial, detoxification, and antipyretic activities. Among the different biological activities, anti-inflammatory activity is a representative effect that has been shown by numerous in vivo and in vitro studies with F. suspensa and F. viridissima [12,13]. Arctigenin, a phytochemical marker of F. viridissima [14], and other lignans [15] are also known to exert anti-inflammatory activities. Previously, we reported six new dimeric lignans and one new lignan glycoside, along with nine known lignans, from the roots of Forsythia viridissima [16]. Based on our previous research, this study aimed to investigate the profiles of the lignan dimers, lignans, and lignan glycosides with the help of a mass spectrometric technique and to assess the anti-inflammatory activities of isolated compounds from the roots of F. viridissima.

Results and Discussion Dried roots of F. viridissima (2.7 kg) were extracted with 80% aqueous MeOH (3 times × 4 L, 90 min, 25 °C) by ultrasonication, and the crude extracts were diluted in H2O and partitioned successively with n-hexane, CH2Cl2, and n-BuOH. Compounds 1-10 for the in vitro assay were isolated from the CH2Cl2 fraction of the roots of F. viridissima by using chromatographic methods, including HPLC and MPLC over a C18 RP column, as previously described [16]. The specific lignans of the CH2Cl2 and n-BuOH fractions were identified by UHPLC-ESI-QTOF-MSe analysis in both positive and negative ion modes. Lignan glycosides (1 and 2), lignans (3 and 4), and lignan dimers (5-10) were identified in the total, CH2Cl2, and n-BuOH fractions in both positive and negative ion modes. A total of 10 compounds was detected; their structures are shown in Figure 1, and their retention time (Rt), error (ppm), molecular ions, and molecular formulas are shown in Table 1.

HR-MS Characterization of Lignan and Lignan Glycosides The lignans present in F. viridissima were identified by calculation of the molecular formula from the exact mass and MSe spectra, together with the characteristic mass fragmentation patterns. The lignan glycosides matairesinoside (1) and arctiin (2) were detected mainly in the n-BuOH fractions of F. viridissima (Figure S1), with sodiated ions at m/z 543.1844 and 557.2019 [M + Na]+, respectively (Table 1); the corresponding fragmentation (Figure S5) was also observed, in agreement with [17,18]. Both viridissimaol A (6) and B (7), detected in the dichloromethane fraction, were lignan dimers with matairesinol and arctigenin moieties and therefore showed similar fragment ion peaks. For viridissimaol A (6), the protonated ion and the sodium adduct were detected at m/z 729.2905 and 751.2716, respectively, and the loss of one or two water molecules corresponded to m/z 711.2792 and 693.2686 (Figure 2 and Figure S…).
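As a small illustration of the exact-mass bookkeeping behind Table 1, the sketch below back-calculates the neutral monoisotopic mass of viridissimaol A (6) from its measured [M + H]+ ion and checks the internal consistency of the measured [M + Na]+ adduct in ppm; only the two measured m/z values come from the text, and the ion masses used are standard constants.

PROTON = 1.007276   # mass of H+ (Da)
SODIUM = 22.989218  # mass of Na+ (Da), i.e. Na minus one electron

def ppm_error(measured_mz: float, theoretical_mz: float) -> float:
    """Mass accuracy in parts per million, as reported in Table 1."""
    return (measured_mz - theoretical_mz) / theoretical_mz * 1e6

def adduct_mz(neutral_mass: float, adduct_mass: float) -> float:
    """m/z of a singly charged adduct ion, e.g. [M+H]+ or [M+Na]+."""
    return neutral_mass + adduct_mass

# Viridissimaol A (6): measured [M+H]+ and [M+Na]+ from the text.
measured_mh, measured_mna = 729.2905, 751.2716
neutral_mass = measured_mh - PROTON          # back-calculated neutral mass
predicted_mna = adduct_mz(neutral_mass, SODIUM)
print(f"neutral M ~ {neutral_mass:.4f} Da")
print(f"predicted [M+Na]+ ~ {predicted_mna:.4f}")
print(f"Na-adduct consistency: {ppm_error(measured_mna, predicted_mna):.1f} ppm")

Running this gives a predicted sodium adduct of m/z 751.2724, about 1 ppm from the measured 751.2716, which is the level of agreement expected from a QTOF instrument.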
To elucidate whether F. viridissima roots exert anti-inflammatory effects through the inhibition of proinflammatory responses, we performed a cell viability assay and an NO assay using single components from F. viridissima roots in LPS-unstimulated and LPS-stimulated RAW 264.7 macrophages, a murine macrophage cell line which secretes inflammatory mediators upon activation with TLR ligands such as LPS. Since the anti-inflammatory effects of compounds are of limited value when they are only significant at cytotoxic concentrations, we first validated the non-cytotoxic concentrations of the compounds in RAW 264.7 macrophages. As shown in Figure 4, only compounds 3 (at 50 and 100 µM) and 9 (at 100 µM) showed significant cytotoxicity, whereas the other compounds did not show cytotoxicity at the highest dose of each compound. Based on the cell viability, the alleviation of NO production by the compounds in LPS-stimulated macrophages was estimated at non-cytotoxic concentrations. Of all the compounds, compounds 3, 6, 7, 8, and 10 notably inhibited the LPS-mediated production of NO in a dose-dependent manner. Among these dibenzylbutyrolactone lignans, we selected two compounds that showed slightly stronger inhibitory activity and differ in part of their structures: compound 8, which has a 7,8-unsaturated double bond, and compound 10, which has a C-O-C linkage in its dimeric structure, represent the anti-inflammatory potential of this lignan subclass from the methylene chloride fraction of F. viridissima roots. To verify that the reduced production of NO by compounds 8 and 10 was due to the transcriptional regulation of iNOS, an enzyme responsible for the production of NO, the iNOS protein expression level was measured by immunoblotting. Corresponding to the NO inhibition by
These data prompted us to estimate the effects of the methylene chloride fraction on NO production in LPS-stimulated RAW 264.7 macrophages since compounds 8 and 10 were prepared in the methylene chloride fraction. As expected, the methylene chloride fraction notably inhibited the LPS-stimulated NO production without showing any cytotoxicity ( Figure 6). LPS-stimulated NO production without showing any cytotoxicity ( Figure 6). To date, to the best of our knowledge, there are few reports aimed at assessing the in vitro biological effects or identifying the complex bioactive lignans from the root parts of F. viridissima. In our present study, we narrow it down to study chemical analysis including dibenzylbutyrolactone dimer lignans by using UHPLC-ESI-QTOF-MS method, since the methylene chloride fraction of F. viridissima roots showed potent NO inhibitory effects in our experimental systems. Taken together, combined with the bioactive results and the component analysis by UHPLC-ESI-QTOF-MS, we suggest that the roots of F. viridissima-mediated anti-inflammatory effects are related to major lignans including dimeric dibenzylbutyrolactone lignans. 6, 7, 8, 9, and 10). The cell viability data were expressed as relative values to the untreated control group. NO levels were calculated according to a standard curve plotted using nitrite standard solution. Data represent the mean ± SD. * p < 0.05 relative to the control group (untreated group for cell viability and lipopolysaccharide (LPS)-treated group for nitrite level). Each number above graph is the compound number used for experiments. White, black, and gray bar represent Untreated, LPS-treated, and compound-treated groups, respectively. Experiments performed three times independently in three replicates for each experiment. 6, 7, 8, 9, and 10). The cell viability data were expressed as relative values to the untreated control group. NO levels were calculated according to a standard curve plotted using nitrite standard solution. Data represent the mean ± SD. * p < 0.05 relative to the control group (untreated group for cell viability and lipopolysaccharide (LPS)-treated group for nitrite level). Each number above graph is the compound number used for experiments. White, black, and gray bar represent Untreated, LPS-treated, and compound-treated groups, respectively. Experiments performed three times independently in three replicates for each experiment. Plant Materials Forsythia viridissima roots were collected in June 2015 from the Medical Herb Garden, College of Pharmacy, Seoul National University, Goyang-si, Gyeonggi-do, Korea. F. viridissima was identified by S. I. Han (Medical Herb Garden, College of Pharmacy, Seoul National University). Chemicals and Reagents All tested compounds were isolated from CH2Cl2 fraction of F. viridissima as described previously [16]. All isolates were lyophilized to remove any solvent that may be present. After dissolution in dimethyl sulfoxide (DMSO) for use in cell culture, it was diluted to a suitable concentration. LPS and DMSO were purchased from Sigma-Aldrich. LPS was dissolved in phosphate-buffered saline (PBS). Mouse anti-glyceraldehyde 3-phosphate dehydrogenase (GAPDH) was purchased from Santa Cruz Biotechnology (Santa Cruz, CA, USA). Mouse anti-iNOS was purchased from BD Biosciences (Franklin Lakes, NJ, USA). To date, to the best of our knowledge, there are few reports aimed at assessing the in vitro biological effects or identifying the complex bioactive lignans from the root parts of F. viridissima. 
In our present study, we narrow it down to study chemical analysis including dibenzylbutyrolactone dimer lignans by using UHPLC-ESI-QTOF-MS method, since the methylene chloride fraction of F. viridissima roots showed potent NO inhibitory effects in our experimental systems. Taken together, combined with the bioactive results and the component analysis by UHPLC-ESI-QTOF-MS, we suggest that the roots of F. viridissima-mediated anti-inflammatory effects are related to major lignans including dimeric dibenzylbutyrolactone lignans. Plant Materials Forsythia viridissima roots were collected in June 2015 from the Medical Herb Garden, College of Pharmacy, Seoul National University, Goyang-si, Gyeonggi-do, Korea. F. viridissima was identified by S. I. Han (Medical Herb Garden, College of Pharmacy, Seoul National University). Chemicals and Reagents All tested compounds were isolated from CH 2 Cl 2 fraction of F. viridissima as described previously [16]. All isolates were lyophilized to remove any solvent that may be present. After dissolution in dimethyl sulfoxide (DMSO) for use in cell culture, it was diluted to a suitable concentration. LPS and DMSO were purchased from Sigma-Aldrich. LPS was dissolved in phosphate-buffered saline (PBS). Mouse anti-glyceraldehyde 3-phosphate dehydrogenase (GAPDH) was purchased from Santa Cruz Biotechnology (Santa Cruz, CA, USA). Mouse anti-iNOS was purchased from BD Biosciences (Franklin Lakes, NJ, USA). Chromatographic Profiling of the Compounds Present in F. viridissima Subfractions The analysis of the dichloromethane subfractions was performed on a Waters Xevo G2 Q-TOF mass spectrometer (Waters MS Technologies, Manchester, UK), which was equipped with an electrospray ionization interface with Waters ACQUITY UHPLC system (Waters, Co., Milford, MA, USA). The UHPLC-MS data were obtained by MassLynx 4.1 software (Waters, UK). The separation of the compounds was carried out on ACQUITY UHPLC ® BEH C18 column (100 × 2.1 mm, 1.7 µm, Waters Co.). The mobile phase was composed of 0.1% (w/v) formic acid in water (solvent A) and acetonitrile Cell Viability Assay RAW 264.7 cells (6.0 × 10 4 cells/well) were seeded into a 96-well plate. After overnight incubation, cell culture medium was removed, various concentrations of compound diluents were applied to the cells, and then incubated for 24 h. Same volume of DMSO was treated to compound-untreated group to exclude the effect of DMSO on cell viability. After incubation, the cells were treated with EZ-Cytox solution (DAEIL lab, Seoul, Korea; 1/10 of the culture medium) for additional 1 h at 37 • C. Cell viability was determined by measuring the absorbance of the resulting formazan product at 450 nm using a VersaMax Microplate Reader (Molecular Devices, LLC, Silicon Valley, CA, USA). Nitrite Assay Nitrite assay is based on the fact that measurement of nitrite (NO 2 − ), which is one of two primary, stable and nonvolatile breakdown products of NO, reflects the quantity of NO in the supernatant [19]. RAW 264.7 cells (6.0 × 10 4 cells/well) were seeded into a 96-well plate and incubated overnight for cell adhesion. After adhesion, compounds were applied to the cells with indicated concentrations, LPS (1 µg/mL) was subsequently treated to the cells, and cells were then incubated for 24 h. After 100 µL cultured media was transferred to a new 96-well plate, 100 µL Griess reagent (a mixture of 1% sulfanilamide, 2.5% phosphoric acid (H 3 PO 4 ), and 0.1% N-(1-naphthyl) ethylenediamine in distilled water) was added to each well [19]. 
Sodium nitrite was serially diluted from 64 µM to 1 µM, and a standard curve was generated by measuring the absorbance after application of the Griess reagent. The absorbance at 540 nm was determined with a VersaMax Microplate Reader.

Statistical Analysis and Experimental Replicates
The cell viability assay, nitrite assay, and immunoblotting were each repeated three times. Each result is represented as mean ± standard deviation (SD). Differences between experimental conditions were assessed by Student's t-test, performed with Prism 3.0 (GraphPad Software, San Diego, CA, USA); * p < 0.05 was considered statistically significant.
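To illustrate the quantification just described, the following is a minimal sketch (all numeric readings below are hypothetical, not data from this study) that fits a linear nitrite standard curve and applies Student's t-test:

```python
import numpy as np
from scipy import stats

# Hypothetical standard-curve data: absorbance (540 nm) of sodium nitrite
# standards serially diluted from 64 uM down to 1 uM, as in the assay above.
std_conc = np.array([64, 32, 16, 8, 4, 2, 1], dtype=float)        # uM
std_abs  = np.array([1.10, 0.57, 0.30, 0.16, 0.09, 0.05, 0.03])   # example values

# Fit a linear standard curve: absorbance = slope * concentration + intercept
slope, intercept = np.polyfit(std_conc, std_abs, deg=1)

def nitrite_um(absorbance):
    """Convert a 540-nm absorbance reading to nitrite concentration (uM)."""
    return (absorbance - intercept) / slope

# Hypothetical triplicate readings: LPS-treated vs. LPS + compound groups
lps_only = nitrite_um(np.array([0.52, 0.49, 0.55]))
lps_cmpd = nitrite_um(np.array([0.21, 0.25, 0.19]))

# Two-sample Student's t-test, significance at p < 0.05 as in the paper
t_stat, p_value = stats.ttest_ind(lps_only, lps_cmpd)
print(f"nitrite (LPS): {lps_only.mean():.1f} +/- {lps_only.std(ddof=1):.1f} uM")
print(f"nitrite (LPS+compound): {lps_cmpd.mean():.1f} +/- {lps_cmpd.std(ddof=1):.1f} uM")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```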
Cumulative sum learning curves guiding multicenter multidisciplinary quality improvement of EUS-guided tissue acquisition of solid pancreatic lesions

Background and study aims In this study, we evaluated the performance of community hospitals involved in the Dutch quality in endosonography team regarding yield of endoscopic ultrasound (EUS)-guided tissue acquisition (TA) of solid pancreatic lesions using cumulative sum (CUSUM) learning curves. The aims were to assess trends in quality over time and explore potential benefits of CUSUM as a feedback tool. Patients and methods All consecutive EUS-guided TA procedures for solid pancreatic lesions were registered in five community hospitals between 2015 and 2018. CUSUM learning curves were plotted for overall performance and for performance per center. The American Society of Gastrointestinal Endoscopy-defined key performance indicators rate of adequate sample (RAS) and diagnostic yield of malignancy (DYM) were used for this purpose. Feedback regarding performance was provided on multiple occasions at regional interest group meetings during the study period. Results A total of 431 EUS-guided TA procedures in 403 patients were included in this study. The overall and per-center CUSUM curves for RAS improved over time. CUSUM curves for DYM revealed gradual improvement, reaching the predefined performance target (70 %) overall, and in three of five contributing centers, in 2018. Analysis of a sudden downslope development in the CUSUM curve of DYM in one center revealed that the temporary absence of a senior cytopathologist had a temporary negative impact on performance. Conclusions CUSUM-derived learning curves allow for assessment of best practices by comparison among peers in a multidisciplinary multicenter quality improvement initiative and proved to be a valuable and easy-to-interpret means to evaluate EUS performance over time.

Introduction
Endoscopic ultrasound (EUS)-guided tissue acquisition (TA) is first choice for establishing a tissue diagnosis in suspected pancreatic cancer [1]. The increasing use of neoadjuvant chemotherapy for pancreatic carcinoma, and the fact that neoadjuvant treatments require pathological confirmation of the diagnosis, have rendered quality of EUS-guided TA of solid pancreatic lesions ever more important [2,3]. Proficiency in EUS-guided TA can only be reached in centers in which all its aspects, including TA, tissue handling, microscopic assessment, and reporting, are safeguarded. Feedback on performance is key to improving quality [4]. In 2015, the American Society of Gastrointestinal Endoscopy (ASGE) defined the following key performance indicators (KPIs) for EUS-guided TA in solid pancreatic lesions: rate of adequate sample (RAS) with a performance target of 85 %, diagnostic yield of malignancy (DYM) with a performance target of 70 %, and sensitivity for malignancy (SFM) with a performance target of 85 % [5]. RAS mainly reflects the quality of the process within the endoscopy suite (TA and preparation of smears, including transport to the cytopathology lab), whereas DYM and SFM reflect the quality of the entire process, including patient selection, specimen preparation, microscopic assessment, and reporting. Currently, quality control for the yield of EUS-guided TA is not customary or required for centers performing EUS-guided TA. Quality measurements for EUS-guided TA procedures were previously described as a monitoring tool during the development of academic or regional EUS programs [6][7][8]. Wani et al.
used CUSUM curves to describe the development of competence in advanced endoscopy trainees performing both EUS and ERCP [9][10][11][12][13]. CUSUM curves reflect the development of the quality delivered over time relative to predefined performance targets. In 2015 the Dutch Quality in Endosonography Team (QUEST) was founded. This is a regional EUS interest group, consisting of endosonographers and pathologists from five community hospitals in the Netherlands. QUEST aims to improve performance of EUS-guided TA by providing feedback on KPIs of individual centers based on a prospective registration of consecutive EUS-guided TA procedures of solid pancreatic lesions. This has led to improvements in RAS (80 % to 95 %), DYM (28 % to 64 %), and SFM (63 % to 84 %) comparing the results of an initial retrospective analysis of yield of EUS-guided TA to the first 21 months of prospective registration [14]. This study evaluated the use of CUSUM curves to monitor performance of contributing centers regarding the yield of EUS-guided TA of solid pancreatic lesions. Using this tool, we aimed to assess trends in KPIs over time, and to explore potential benefits of CUSUM curves as a feedback tool.

Patients and methods
This was a prospective, multicenter, quality improvement study of consecutive EUS-guided TA procedures on solid pancreatic lesions conducted in five community hospitals in the Netherlands. The local medical ethics committee (METC Zuidwest Holland 17-038) approved the study protocol. Informed consent was obtained from all patients. The study is registered in the Dutch trial registry (NTR) with trial number NL9470.

Study population and data collection
All patients aged 18 and older with a solid pancreatic lesion with high suspicion of malignancy who underwent an EUS-guided TA procedure were eligible for this study. Primary outcome parameters were CUSUM-derived learning curves with RAS and DYM as input parameters. RAS was defined as the proportion of procedures yielding specimens sufficient for cytopathological and/or histopathological analysis. DYM was defined as the proportion of procedures yielding a "suspicious for malignancy" or a "malignant" diagnosis. The secondary outcome parameter was SFM. SFM was defined as the total of true positives ("suspected malignancy" or "malignancy" based on EUS-guided TA with a malignancy as final diagnosis) divided by all patients with a final diagnosis of malignancy. Collected data on EUS-guided TA procedures included: patient demographics, localization of the pancreatic mass, hospital, endosonographer, pathologist, needle diameter (<22-gauge or 22-gauge), type of needle (fine-needle aspiration [FNA]/fine-needle biopsy [FNB]), number of passes, use of suction (slow withdrawal of stylet or vacuum suction), availability of rapid on-site specimen evaluation (ROSE), and the result of the cytopathological and/or histopathological evaluation of the EUS-guided TA specimen. Based on current practice guidelines and previous experience of our group, endosonographers were advised to perform at least three passes with FNA needles or at least two passes with FNB needles (unless ROSE detected sufficient material for diagnosis earlier), and to use vacuum suction [14,15]. All other techniques and materials used were at the discretion of the local clinicians and according to local availability of equipment and hospital standards.
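To make the three KPI definitions above concrete, here is a minimal sketch (illustrative only; the record layout is our assumption, not the study database) of how RAS, DYM, and SFM can be computed from per-procedure records:

```python
# Illustrative record format: one dict per EUS-guided TA procedure.
# 'adequate': specimen sufficient for cyto-/histopathological analysis
# 'result':   pathology category; 'suspicious' and 'malignant' count as malignant
# 'final_malignant': reference standard (surgery or >=12 months of follow-up)
procedures = [
    {"adequate": True,  "result": "malignant",      "final_malignant": True},
    {"adequate": True,  "result": "atypical",       "final_malignant": True},
    {"adequate": False, "result": "non-diagnostic", "final_malignant": False},
]

def kpis(procs):
    n = len(procs)
    # RAS: proportion of procedures with an adequate sample
    ras = sum(p["adequate"] for p in procs) / n
    # DYM: proportion of procedures called suspicious or malignant
    malignant_call = [p["result"] in ("suspicious", "malignant") for p in procs]
    dym = sum(malignant_call) / n
    # SFM: true positives divided by all patients with a final malignant diagnosis
    ref_pos = [p for p in procs if p["final_malignant"]]
    true_pos = [p for p in ref_pos if p["result"] in ("suspicious", "malignant")]
    sfm = len(true_pos) / len(ref_pos) if ref_pos else float("nan")
    return ras, dym, sfm

print(kpis(procedures))  # -> approximately (0.67, 0.33, 0.5)
```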
The results of cytopathological and/or histopathological evaluation were classified as follows: non-diagnostic, benign, atypical, suspicious for malignancy, and malignant. Neuroendocrine tumors were classified as malignant. For the purpose of this study, "suspicious for malignancy" and "malignant" were both considered malignant. All types of pancreatic and periampullary malignancies were considered a malignant reference standard. The gold standard for a malignant diagnosis was based on either histopathological diagnosis after surgical resection or progression of disease compatible with malignancy during a minimum of 12 months of follow-up.

Feedback on performance
Regional interest group meetings were organized three times a year. Prior to meetings, all contributors received data regarding the performance of their individual center accompanied by anonymized benchmark data from the other centers. At the regional interest group meetings, the results of prospective registration, best practices, guidelines, and difficult cases were discussed. Until 2017, feedback on performance overall and per center was provided as RAS, DYM, and SFM (proportions). From 2018 onward, visual feedback by means of CUSUM curves of RAS and DYM was also provided. At meetings all data (numbers and CUSUM curves) were presented (in an anonymized fashion) and subsequently discussed. Participating endosonographers and pathologists were invited to reflect on changes in the directions of the curves provided. Significant changes in the direction of a curve were subjected to further analysis, the results of which were discussed separately with the practitioners from the centers involved, prior to the next general meeting. At a subsequent meeting, the results of these analyses were presented and discussed, with emphasis on potential learning opportunities for all participants. All gastroenterologists and pathologists involved had completed their training at least 3 years before the start of this study [14].

Cumulative sum analysis (CUSUM)
Each EUS procedure was scored as a success (adequate sample/malignant outcome) or failure (inadequate sample/non-malignant outcome). Each success is rewarded by adding a score s; each failure results in the subtraction of (1 − s). Each procedure is a dot in the learning curve, which is created by plotting the cumulative sum of all cases in chronological order. The acceptable rates (P0) and unacceptable rates (P1) were defined based on the ASGE KPIs and a previous publication by Eltoum et al. [16]. For inadequate samples, we designated 10 % as the acceptable (P0) and 15 % as the unacceptable (P1) rate. For a non-malignant outcome of the EUS, the P0 was defined as 25 % and the P1 as 30 %.

Decision limits
Two decision limits (h1 and h0) were calculated. The decision limits are calculated based on type I (α) and type II (β) errors. A type I error is the risk of rejection of a true null hypothesis, and a type II error is the risk of non-rejection of a false null hypothesis. The formulas that are used to calculate h0 and h1 were previously described [16].
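The paper defers the exact formulas to Eltoum et al. [16]. As a minimal sketch, assuming the standard sequential probability ratio test (SPRT)-based parameterization commonly used for such charts (the formulas for s, h0, and h1 below are our assumption; the scoring rule follows the description above), the curve could be computed as:

```python
import math

def cusum_params(p0, p1, alpha=0.1, beta=0.1):
    # p0: acceptable failure rate, p1: unacceptable failure rate (assumed SPRT form)
    P = math.log(p1 / p0)
    Q = math.log((1 - p0) / (1 - p1))
    s = Q / (P + Q)                      # per-case score
    a = math.log((1 - beta) / alpha)
    b = math.log((1 - alpha) / beta)
    h0 = b / (P + Q)                     # upper decision limit
    h1 = a / (P + Q)                     # lower decision limit
    return s, h0, h1

def cusum_curve(outcomes, s):
    # outcomes: 1 = success (adequate sample / malignant dx), 0 = failure
    curve, total = [], 0.0
    for ok in outcomes:
        total += s if ok else -(1 - s)   # success: +s, failure: -(1 - s)
        curve.append(total)
    return curve

# Thresholds from the study: 10 %/15 % for inadequate samples (RAS chart)
# and 25 %/30 % for non-malignant outcomes (DYM chart).
s_ras, h0_ras, h1_ras = cusum_params(p0=0.10, p1=0.15)
s_dym, h0_dym, h1_dym = cusum_params(p0=0.25, p1=0.30)
print(cusum_curve([1, 1, 0, 1, 1, 1], s_ras))
```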
The meaning of the decision limits in relation to the curve can be explained as follows [17,18]: 1. If the learning curve crosses the upper decision limit, the failure rate is within the preset acceptable range and it reflects high quality. 2. If the learning curve crosses the lower decision limit, the failure rate is above the preset unacceptable rate and an intervention is needed. 3. If the learning curve remains between the two decision limits, the performance is within the preset acceptable range.

CUSUM charts
CUSUM charts were constructed using Excel. Each success (adequate sample/malignant outcome) contributes to an upward slope of the CUSUM curve. Each inadequate sample contributes to a downward slope of the CUSUM curve. A downslope curve means that the key performance indicator is not met. A horizontal curve indicates that quality is up to standards. An upslope curve signifies quality above the predefined key performance indicator threshold.

Multivariable analysis
To investigate the association of RAS and DYM with procedure characteristics, we fitted logistic mixed models. Given the limited number of inadequate samples, only two parameters (suction: yes/no and ROSE: yes/no) could be included in the RAS model. The model for DYM included the variables suction type (no, slow withdrawal of stylet, or vacuum), ROSE, number of passes (continuous), needle size (<22-gauge, 22-gauge), and needle type (FNA or FNB). In both models we used endoscopist-specific (random) intercepts to take into account that samples obtained by the same endoscopist may not be independent. The model for DYM also included a pathologist-specific (random) intercept. Both models were fitted in the Bayesian framework, which allowed us to include observations for which some of the covariates were missing. We used normal priors with mean 0 and standard deviation 100 for all regression coefficients. The Bayesian models were fitted using Markov chain Monte Carlo, with the help of the freely available and widely used "JAGS" software [19], which uses Gibbs sampling and provides a wide range of samplers to sample from full-conditional distributions that do not have a closed form. Results are presented as posterior mean and 95 % confidence interval (CI). Calculations were performed in R version 4.0.2 (2020-06-22) (R Core Team 2020) and the package JointAI 1.0.0.9000 [20]. Missing observations were imputed during the analysis.

Results
From January 2015 until December 2018, 431 EUS-guided TA procedures on solid pancreatic lesions in 403 individual patients were included. The median age of the patients was 68 years (range 27-88), and 51 % were men. During follow-up, a pancreatic or periampullary malignancy (reference standard) was diagnosed in 87 % of all cases. Per hospital, two to four endosonographers were involved in these procedures. A wide range of eight to sixteen pathologists per hospital was involved (▶ Table 1).

Rate of adequate sample overall and per hospital
A total of 399 of 431 procedures yielded an adequate sample. Hence, RAS was 93 % for the complete cohort (range 86 %-99 % among individual hospitals). The ASGE-defined KPI of RAS ≥ 85 % was met overall and in each of the individual hospitals (▶ Table 2). This can also be appreciated from the upslope direction of the overall learning curve drawn for this parameter (Supplementary Fig. 1).
The RAS learning curves of the individual hospitals indicate adequate and stable quality (curves between the decision limits) in Hospitals A, B, and E, and adequate and improving quality in Hospitals C and D (Supplementary Fig. 2, Supplementary Fig. 3, Supplementary Fig. 4, Supplementary Fig. 5, Supplementary Fig. 6).

Diagnostic yield of malignancy overall and per hospital
A total of 285 of 431 procedures yielded a malignant diagnosis. Therefore, the overall DYM was 66 % (ranging from 61 % to 75 % in the individual hospitals). This is below the KPI of DYM ≥ 70 % (▶ Table 2). The overall learning curve of this parameter has a downslope direction (crossing the lower decision limit) until January 2018 (▶ Fig. 1a). From this point onward, the curve has a more horizontal direction between the newly constructed decision limits, indicating an adequate and stable quality throughout 2018 (▶ Fig. 1a and ▶ Fig. 1b). In only one of the contributing hospitals (Hospital D) was the KPI of DYM ≥ 70 % met overall (▶ Table 2). However, the learning curves of the individual hospitals for this parameter developed from an initial downslope (Hospitals B and E) or horizontal direction (Hospitals C and D) into a horizontal (Hospitals B, C, and E) or an upslope direction (Hospital D) (▶ Fig. 2a, ▶ Fig. 3a, Supplementary Fig. 7a, Supplementary Fig. 8a, Supplementary Fig. 9a). This indicates a gradual improvement in these centers up to an adequate quality level in 2018. The CUSUM curve for Hospital B started with a downward slope, and in January 2018 the curve suddenly improved to a horizontal slope (▶ Fig. 2a and ▶ Fig. 2b). The curve of Hospital C initially showed a stable and adequate quality until May 2017. From this point onward there was a remarkably short and sharp downslope development of the curve, which again developed in a more horizontal direction from September 2017 onward (▶ Fig. 3a and ▶ Fig. 3b). This indicates a 4-month episode during which a significantly lower number of malignant diagnoses was made. During these 4 months, a high proportion of specimens (40 %) was graded as atypia, in comparison to the episodes prior to May 2017 (4 %) and from September 2017 onward (11 %) (Supplementary Table 1). The 4-month episode coincided with the temporary absence of the most experienced cytopathologist in this center, who had been involved in all cytopathological evaluations of pancreatic lesions in the previous years in this hospital.

Sensitivity for malignancy overall and per hospital
The overall SFM for the contributing hospitals throughout the 4 years of this study was 76 %, ranging from 68 % to 87 % among different hospitals. The KPI of SFM ≥ 85 % was not met in four of five contributing hospitals. The developments in the learning curves regarding DYM suggest improvement in quality in the majority of these centers. In 2018, the final year of this study, the overall SFM was 85 %, ranging from 69 % to 96 % among the centers. In this year, the KPI of SFM ≥ 85 % was met in three of five centers (Supplementary Table 2). In the multivariable analysis, the use of suction and ROSE appeared beneficial to RAS, whereas there was no clear evidence that any of the covariates considered was associated with DYM (▶ Table 3).

Feedback and interpretation of curve deflections
During the 4 years of prospective registration, the following changes were reported by contributing practitioners. Hospitals A, D, and E requested ROSE on a regular basis, which they did not do before.
Hospital A started with ROSE halfway into 2016, Hospital D from January 2018 onward, and Hospital E at the beginning of 2016. In Hospitals B and C, there were changes in the number of pathologists involved in EUS-guided TA procedures of the pancreas. In Hospital B, the group of pathologists that reviewed pancreatic samples collected with EUS was downsized from eight to three in January 2018. The most experienced cytopathologist from Hospital C was temporarily absent during a 4-month period in 2017. The times at which the events described above took place are marked with arrows in ▶ Fig. 2a, ▶ Fig. 3a, Supplementary Fig. 6, Supplementary Fig. 7a, Supplementary Fig. 8, Supplementary Fig. 8a, and Supplementary Fig. 9a.

Discussion
This study evaluated the performance of five community hospitals regarding the yield of EUS-guided TA of solid pancreatic lesions using CUSUM curves to assess trends in quality over time, and explored potential benefits of CUSUM curves as a feedback tool. Throughout the 4 years of this study, all three ASGE-defined KPIs improved. The KPI of RAS ≥ 85 % was met consistently in most of the centers and overall (93 %). The KPI of DYM ≥ 70 % was not met overall throughout the study between 2015 and 2018, but eventually reached 75 % overall in 2018. Similarly, the KPI of SFM ≥ 85 % was not met overall from 2015 to 2018, but improved to 85 % in 2018. Because not all ASGE-defined KPIs are consistently met in each center, feedback on performance and analyses for potential improvements are indicated and ongoing. The diagnostic yield of EUS-guided TA for solid pancreatic lesions is considered a benchmark for quality measurements in EUS [1]. However, the majority of the studies on which the ASGE-defined KPIs are based were performed in tertiary care facilities [21]. Moreover, the majority of publications on EUS-guided TA in solid pancreatic lesions were controlled trials focusing on discrete factors influencing the yield, i.e., different types and diameters of needles, use of suction, the use of ROSE, or the optimal number of passes to perform [22][23][24][25][26][27][28][29][30][31][32][33][34]. Therefore, when comparing the current study to these previous publications, it cannot be ruled out that differences regarding patient selection may have influenced the yield of EUS-guided TA. Nevertheless, questioning the generalizability of the benchmark data may never be an excuse to stop monitoring and improving your performance.

▶ Table 3 Odds ratios and corresponding 95 % CIs for the logistic mixed models for RAS and DYM.

To improve quality of EUS-guided TA, it is necessary to provide feedback on performance. For providing feedback, CUSUM-derived learning curves have several advantages over tables with numbers. First, their interpretation is easy and does not require any knowledge about specific KPI values (a downward trend is not good, a horizontal line is good, and an upward trend is better). Second, they allow determination of best practices and comparison among peers. Third, they provide a more detailed picture of development over time, allowing for focused analysis of performance within specific timeframes [35]. The analysis of the sudden downslope deflection in the DYM curve of Hospital C, coinciding with the 4-month absence of a senior cytopathologist, is an excellent example of this. Analysis of this specific example teaches us how vulnerable the multistep process of EUS-guided TA is, being dependent on each factor and operator involved.
Therefore, the discriminating advantage of learning curves for feedback over tables with numbers is that they provide additional learning opportunities. RAS and DYM are obviously related. However, because CUSUM curves of these variables reflect quality relative to a predefined quality target, they do not necessarily develop in the same direction. An upward RAS curve, therefore, does not mean the DYM curve has to be upward as well. In other words: having a sample that contains at least a couple of cells from the target organ (adequate sample) does not automatically mean that a pathologist will be confident about the malignant origin of the lesion. This can lead to a RAS above the performance target and a DYM and SFM below the performance target. Supported by feedback provided by CUSUM analyses, several changes regarding protocols and/or staff involved were made in individual hospitals. In Hospital C today, a pathology report regarding pancreatic cytology or histopathology can only be finalized after consent of a dedicated cytopathologist. Several hospitals implemented routine use of ROSE, and the number of pathologists involved was reduced in one of the centers. As the multivariable analysis supports the use of suction and ROSE as beneficial to RAS, an overall positive effect of these changes can be assumed. After all, with a RAS of 85 %, the lowest acceptable level according to ASGE definitions, the SFM can never exceed 85 %, which makes a DYM ≥ 70 % in patients with solid pancreatic lesions difficult to achieve. To our knowledge, this is the largest prospective multicenter study of EUS-guided TA of solid pancreatic lesions from community hospitals and the first to implement CUSUM-derived learning curves as a tool for monitoring and improving KPIs of these procedures. Previous publications on the use of CUSUM curves in EUS-guided TA investigated the performance of either cytopathologists or endoscopy trainees [9][10][11][12][13][16]. In contrast to these studies, we used CUSUM curves to evaluate the entire process defining quality and yield of these procedures, including the work of both endosonographers and cytopathologists. Some of the data presented in this study (133 procedures, performed from January 2015 to September 2016) were previously described in the initial publication about this community hospital quality initiative [14]. The current study shows ongoing and persistent improvement in performance and introduces learning curves as a feedback and monitoring tool. The main limitation of this study is the fact that feedback, either in tables with numbers or as learning curves, was not provided in real time. Ideally, CUSUM curves would have been drawn three times a year, enabling contributing centers to respond more quickly to changes in curve directions. Because of logistic challenges and the time-consuming nature of data collection, this could not be realized in the current study. Another limitation is the fact that in the current study, no subtypes of FNB needles were recorded. Recent publications indicate improved outcomes with a subtype of FNB needles over FNA needles [36]. The fact that no difference between FNA and FNB was detected in our study may be related to the unclear mix of subtypes of FNB needles used. However, other confounders, such as the endosonographer learning curve for a new type of needle or the pathologist learning curve for evaluating tissue cores, may have been involved.
Future directions
Performing EUS-guided TA comes with the responsibility to measure KPIs regarding these procedures. To facilitate this, an automated system is needed that allows EUS procedural parameters and concomitant pathology reports to be added on a regular basis. Subsequently, CUSUM curves can be constructed based on KPI data at any point in time, allowing for constant trend analysis and thereby providing the foundation for quality improvement. We believe that feedback on KPIs is an essential first step for quality improvement. If KPIs are not up to par, this should be followed by a cycle of protocol changes and continued KPI measurements and evaluations (plan-do-check-act cycle), aiming for continuous improvement of quality and lifelong learning opportunities for all collaborators. Changes in protocol are to be tailored and center-specific, depending on KPI measurements and available resources. A measure aiming to increase a low adequate sample rate in a center using 22-gauge FNA needles, three passes, and suction, for example, could be: 1. the introduction of ROSE; or 2. the introduction of an FNB needle. If the hospital involved does not have its own cytopathology lab, implementation of FNB needles could solve their problem. A measure aiming to increase DYM, with a currently adequate RAS and high proportions of atypia diagnoses, for example, might be: 1. reorganization of the workflow in the pathology lab to have all samples evaluated by two cytopathologists instead of seven; 2. introducing liquid-based cytology instead of smears only; or 3. introducing the use of FNB needles. There is evidence to support that changes made "bottom-up" are more likely to be sustained in comparison to changes implemented "top-down" [37].

Conclusions
In conclusion, this prospective multicenter study using CUSUM-derived learning curves for both quality monitoring and feedback demonstrates consistent improvement of the KPIs RAS, DYM, and SFM over time. It illustrates the benefits of using learning curves with easy-to-interpret feedback regarding performance of a whole process or its individual components, while also allowing comparison with peers. Use of CUSUM curves is an excellent way for responsible staff to monitor and scrutinize their performance and improve the outcome of KPIs up to the desired level.
GOParGenPy: a high throughput method to generate Gene Ontology data matrices

Background Gene Ontology (GO) is a popular standard in the annotation of gene products and provides information related to genes across all species. The structure of GO is dynamic and is updated on a daily basis. However, the popular existing methods use outdated versions of GO. Moreover, these tools are slow to process large datasets consisting of more than 20,000 genes. Results We have developed GOParGenPy, a platform-independent software tool to generate the binary data matrix showing the GO class membership, including parental classes, of a set of GO-annotated genes. GOParGenPy is at least an order of magnitude faster than popular tools for Gene Ontology analysis and it can handle larger datasets than the existing tools. It can use any available version of the GO structure and allows the user to select the source of GO annotation. GO structure selection is critical for analysis, as we show that GO classes have rapid turnover between different GO structure releases. Conclusions GOParGenPy is an easy-to-use software tool which can generate sparse or full binary matrices from GO-annotated gene sets. The obtained binary matrix can then be used with any analysis environment and with any analysis methods.

Background
Gene Ontology (GO) is a popular standard in the annotation of gene products, providing information related to genes across all species. It presents a shared, controlled, structured vocabulary of terms that describe the gene products [1]. GO is structured as a Directed Acyclic Graph (DAG) that holds the terms that describe the molecular function, biological process, and cellular component of a gene product. GO has a hierarchical structure that represents the terms from more specific to more general. GO is currently being used for various analysis tasks like a) over-representation of the GO classes in a selected group of genes [2], b) semantic similarity between two genes [3], c) threshold-free gene set analysis [4,5], d) machine learning to classify unknown genes into various GO categories [6,7], and e) explorative analysis of large-scale datasets [8]. The linking of the reported GO categories to the GO DAG structure and their parent nodes is critical for all these tasks. In tasks a, c, d, and e, the parental nodes of the GO structure provide different levels of detail, allowing simultaneous monitoring of very detailed and very broad functional classes. In the case of task b, the link to the GO hierarchy is crucial for finding a path between the two genes across the GO graph. There are many existing methods [9][10][11][12][13] freely available for processing (i.e., linking gene products to the GO hierarchy) and analyzing Gene Ontology terms. Most of these tools perform well enough to handle small data sets, but on a larger scale, such as in the case of microarray data, the execution time for these tools becomes prohibitive. Moreover, most of these methods use quite old GO structures, causing the methods to miss a large proportion of the currently used GO classes (see Results). The annotationDbi [10] and GO.db [11] packages in Bioconductor are the most widely used tools for Gene Ontology analysis in the R environment. GO.db stores links from GO classes to their parent GO classes, storing all the GO classes, their parents, child terms, and ancestor terms in a database for easy retrieval and processing.
Despite the Gene Ontology consortium updating GO class annotations and linkages on a daily basis, these GO-related R packages are updated only biannually. Indeed, the best source of GO information is the annotation files themselves, which are available from the GO consortium web pages. Even the GO consortium cannot help with research carried out on novel species. This is critical, as we can expect a growing number of newly sequenced species with next-generation sequencing methods. These will require in-house GO annotation of sequences [7]. Also, with the analysis of more exotic organisms, there might be alternative sources for GO annotations, like species-specific databases. Current GO processing tools use only a pre-fixed annotation source for analysis. We present a fast Python program named GOParGenPy (GO Parent Generation Python) that can process large annotation files, incorporate any version of the OBO structure, and generate GO data matrices. Users of GOParGenPy will mainly be biologists and bioinformaticians who do analysis using languages such as R, Matlab, or Python. It is freely available from the project web page (see Availability and requirements).

Implementation
GOParGenPy has been implemented in Python (version requirement 2.5/2.6) and it is freely available as a standalone tool suitable for any downstream analysis related to GO data across various computing platforms. GOParGenPy generates the binary data matrix from a set of genes with GO annotation. It allows the user to select the GO annotation and the OBO structure file. The obtained GO binary matrix can then be used with any available analysis environment and with any available analysis methods. The main features of GOParGenPy are: 1. reading in the 'gene_ontology_edit.obo' file in standard format, parsing it, and storing all the GO classes and their attributes; 2. reading in the GO annotations of the analyzed genes (various input formats are supported); 3. linking GO annotations to their parent GO classes, where the linking also looks for alternative ids for those GO classes which have become obsolete; 4. outputting a list of genes with added parent GO classes; 5. outputting a sparse or full matrix with genes as rows and GO classes as columns. The default format is the sparse matrix. Figure 1 shows the workflow of GOParGenPy. It takes in a tab-separated input annotation file that contains a list of GO-annotated genes, the selected OBO file, and a set of parameters. These parameters denote the column number of the gene name and the column number(s) of the linked GO classes. Depending on the input annotation file type, an intermediate tab-delimited annotation file is then parsed from the annotation file, where one row represents the gene name and all the collected GO annotations of this gene. The OBO flat file format stores GO classes and attributes such as id, name, namespaces, definition, etc. The GO classes from the OBO file and their respective attribute values are stored in a hash table, using the numeric part of the GO id as the key. Hence, the parent or ancestor class(es) for any given GO class can be retrieved recursively by looking through the attribute values of GO classes, namely the 'is_a', 'part_of' and 'consider' links. Next, the intermediate file obtained in the first step is iterated over so that for each gene and its respective GO classes, all shared parent or ancestor GO classes are retrieved recursively using the above hash table.
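As a minimal sketch of this retrieval step (illustrative only, not the actual GOParGenPy source; the toy graph below uses the real GO root "biological_process" and two of its descendants purely as an example), memoized recursive ancestor lookup could look like this:

```python
# Toy ontology graph: maps the numeric part of a GO id to the ids listed in
# its is_a/part_of/consider links. Assumes the ontology is acyclic (a DAG).
go_graph = {
    8150: [],            # GO:0008150 biological_process (root)
    9987: [8150],        # GO:0009987 cellular process, is_a GO:0008150
    7049: [9987],        # GO:0007049 cell cycle, is_a GO:0009987
}

ancestor_cache = {}      # dynamically built hash table: class -> all ancestors

def ancestors(go_id):
    """Return the set of all parent/ancestor GO ids of go_id, memoized so
    each class is resolved against the OBO graph only once."""
    if go_id in ancestor_cache:
        return ancestor_cache[go_id]
    result = set()
    for parent in go_graph.get(go_id, []):
        result.add(parent)
        result |= ancestors(parent)
    ancestor_cache[go_id] = result
    return result

print(sorted(ancestors(7049)))   # -> [8150, 9987]
```

Because the cache is keyed by GO class, its size is bounded by the number of classes in the OBO file, which is exactly why the processing cost becomes independent of the number of annotated genes, as the text explains next.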
Redundant steps are avoided by adding another hash table that is dynamically built as the iteration progresses through the entire file. The main purpose of this hash table is to store each GO class together with all its parent or ancestor classes, so that when the same GO class is encountered in further iterations the retrieval does not have to refer back to the earlier GO hash table. Thus, at any instance the maximum size of this data structure is the total number of GO classes present in the given OBO file. Hence, after a certain stage the overall processing of the input annotation file becomes independent of the number of genes and the associated GO annotations. Moreover, the program also does a lookup in the OBO file for alternate ids of any GO class which has become obsolete, in order to retrieve parent/ancestor classes in these cases as well. This functionality is optional. Finally, the user can specify whether a sparse or full binary matrix is generated, with genes as row names and GO classes as column names. Reported GO classes are those occurring in the input annotation file and their parent nodes. Selection of the sparse matrix option is highly recommended, as the package is intended for large datasets (>20,000 GO-annotated genes). Sparse matrices are memory-efficient representations for matrices where most of the values are zero. This is the case with GO data matrices, as a large part of the GO classes have less than one percent of the genes as members, and the non-members are given the value zero. We use the sparse matrix representation with three columns. These columns represent the row number and the column number of each non-zero value, and the value in that cell. Figure 2 demonstrates this process. The obtained sparse matrix can be further processed with standard analysis pipelines. The sparse matrix format is supported by many analysis environments, like R and Matlab.

Instability of OBO files
OBO files are central to all GO analysis. However, they vary significantly between GO analysis tools, with DAVID using version 6.7, agriGO using version 1.2, and GO.db/AnnotationDBI from R/Bioconductor using a biannually updated version. Therefore, we highlight the benefits of GOParGenPy's ability to allow selection of any OBO structure by showing the information loss when an older OBO structure is used instead of the latest structure. Here the aim is to find what percentage of current GO classes is missing in these older OBO packages. Hence, the OBO versions corresponding to the last updates of these packages were downloaded from the GO website and parsed for GO classes using GOParGenPy. Next we calculated 1) the number of actual GO classes with unaltered definitions, 2) the number of GO classes which became obsolete, and 3) the number of GO classes that have an altered definition with respect to the reference OBO file. Finally, we present a Venn diagram to show the percentage of missing GO classes and actual classes present (Figures 3, 4, 5 in Results).

Relative execution time
The execution time was compared only between the most widely used standalone packages. These are the GeneOntology package from the Bioperl Toolkit, and GO.db and AnnotationDBI from R/Bioconductor. The aim is to compare the performance of GOParGenPy with these packages in processing large datasets. Parent GO classes were generated by GOParGenPy using the current version of the GO structure (01.03.2013). First, the methods were tested with a randomly chosen set of 80,975 genes from UNIPROT-GOA [15].
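To make the three-column sparse representation concrete, here is a minimal SciPy sketch (the gene-to-GO mapping is a made-up example; this is not GOParGenPy code) that builds a gene-by-GO-class binary matrix in coordinate form:

```python
import numpy as np
from scipy.sparse import coo_matrix

# Hypothetical input: each gene mapped to its GO classes plus all ancestors.
gene_to_go = {
    "geneA": {7049, 9987, 8150},
    "geneB": {8150},
}

genes = sorted(gene_to_go)                               # row names
go_classes = sorted(set().union(*gene_to_go.values()))   # column names
go_index = {go: j for j, go in enumerate(go_classes)}

# Three-column sparse form: row index, column index, value (always 1 here).
rows, cols = [], []
for i, gene in enumerate(genes):
    for go in gene_to_go[gene]:
        rows.append(i)
        cols.append(go_index[go])

matrix = coo_matrix((np.ones(len(rows)), (rows, cols)),
                    shape=(len(genes), len(go_classes)))
print(matrix.toarray())   # full binary matrix, genes x GO classes
```

The coordinate triples are exactly what environments like R and Matlab can read back into their own sparse-matrix types, which is what makes this output format convenient for downstream pipelines.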
This is 2-3 times the size of the largest genomes in gene expression analysis. Next, in order to measure the performance on an extremely large file, the tools were tested with all the GO-annotated sequences (>21 million sequences) available from UNIPROT-GOA.

Comparative analysis of GO packages
The comparative analysis details for AnnotationDBI/GO.db, agriGO, and DAVID are shown in Figures 3, 4, and 5, respectively. It is evident from the above figures that the OBO structures in the evaluated GO tools GO.db, agriGO, and DAVID miss a significant number of currently used GO classes. In Figure 3A, the total number of non-obsolete distinct GO classes from the OBO file is 2030, with 171 distinct non-obsolete GO classes in GO.db. Subsequently, in Figure 3B, 107 of these non-obsolete GO classes have their definition as obsolete with respect to the reference version (01.02.2012) of the OBO file. Thus, in total 3641 (2030+1611) GO classes have been added or had their definition altered with respect to the non-obsolete GO classes of the GO.db package (01.03.2011). This corresponds to 11.30 % of GO classes being altered. Similarly, from Figure 4A,B, it can be seen that the total number of non-obsolete distinct GO classes from the OBO file is 4252, with 349 distinct non-obsolete GO classes in agriGO. Correspondingly, 157 of these GO classes in agriGO have their definition altered with respect to the current version of the OBO file. Together, it can be seen that 19.40 % of GO classes are altered. Finally, from Figure 5A,B it can be observed that in DAVID a total of 25.2 % of GO classes have been altered. From Table 1, it can be found that on average 2572 GO classes are altered each year. This shows the level of change in the number of GO classes and clearly indicates the importance of using the current version of the OBO structure.

Table 1: A comparative view of the change in the total number of GO classes per year throughout 9 years of OBO files.

Table 2 compares the running time between GOParGenPy, GO.db/AnnotationDbi from Bioconductor, and the GeneOntology package from the Bioperl toolkit. With the data set of 80,975 GO-annotated genes, GOParGenPy took approximately 40 seconds to generate the data matrices, making it almost 10 times faster than the competing methods. The GeneOntology package from Bioperl was relatively close to GOParGenPy's execution time, but BioPerl only performed the mapping of annotated genes to parent nodes and did not print any output file, whereas GOParGenPy also generated the output files. With the large data set consisting of over 21 million sequences from UNIPROT-KB, the competing methods were unable to finish in a reasonable time. Although this dataset size is outside the standard analysis requirements, it gives a good extreme performance test.

Discussion
We present a new standalone software tool, GOParGenPy, for generating high-throughput GO data matrices for any selected input annotation file and any version of the OBO file. We have shown the importance of the OBO structure and presented an effective way of storing and retrieving GO classes and their attributes for any downstream analysis involving GO data. All the existing methods, be they web-based applications or standalone offline tools, utilize an outdated OBO structure from the GO consortium. As shown in Figures 3-5, at maximum 25 % of GO classes (for the DAVID tool) are outdated with respect to the current version of the OBO file. Hence, any downstream analysis methods that incorporate GO data obtained from these tools may lead to erroneous results.
GOParGenPy outperforms all these existing tools in terms of incorporating the user's choice of OBO structure and the speed of generating GO data matrices. It is also able to process extremely large datasets. It incorporates a dynamic hash table that stores all GO classes from the input file with their parent GO classes retrieved from the OBO structure. This unique feature enables the generation of data matrices independent of the size of the input data, as the maximum size of this hash table is the total number of GO classes present in the OBO structure file used. Hence, this makes GOParGenPy faster in the generation of GO data matrices for large gene sets. Also, GOParGenPy looks for alternative ids of those GO classes which have become obsolete or have had their definition altered. Although GOParGenPy does not perform any actual data analysis or visualization steps itself, the output files can be easily imported into environments like Matlab, R, or Python. The output GO data can be used as input for various analysis tasks, like prediction of new GO annotations with classifiers [6], for visualization tasks [8], or for correlation analysis between GO data and large-scale data [3]. Thus, GOParGenPy encourages modular thinking in bioinformatics. GOParGenPy allows the user to select the GO annotation file and the GO structure file that are used. This allows the usage of the latest annotation data files and the latest GO structure. However, it can also be used with older annotation files. This is useful when an older work needs to be replicated, or when comparing methods with one that uses an old GO structure. Additionally, GOParGenPy's features and its application can be extended to other ontology resources, and it has already been tested with the Plant Ontology (PO). GOParGenPy's optional features can incorporate any PO-annotated gene list and the corresponding OBO file to generate a sparse binary matrix representation (see Project homepage).

Conclusions
GOParGenPy is a fast Python program for generating GO binary data matrices from an annotated set of genes. GOParGenPy outperforms existing tools by allowing any available version of the OBO structure and by handling large-scale input annotation datasets with over 21 million annotated sequences. The output files can be easily incorporated into various platforms, such as MATLAB, R, or Python, for further GO-related downstream analysis.
Guiding Users to Where to Give Color Hints for Efficient Interactive Sketch Colorization via Unsupervised Region Prioritization

Abstract
Existing deep interactive colorization models have focused on ways to utilize various types of interactions, such as point-wise color hints, scribbles, or natural-language texts, as methods to reflect a user's intent at runtime. However, another approach, which actively informs the user of the most effective regions to give hints for sketch image colorization, has been under-explored. This paper proposes a novel model-guided deep interactive colorization framework that reduces the required amount of user interactions, by prioritizing the regions in a colorization model. Our method, called GuidingPainter, prioritizes those regions where the model most needs a color hint, rather than just relying on the user's manual decision on where to give a color hint. In our extensive experiments, we show that our approach outperforms existing interactive colorization methods in terms of conventional metrics, such as PSNR and FID, and reduces the required amount of interactions.

Introduction
The colorization task in computer vision has received considerable attention recently, since it can be widely applied in content creation. Most content creation starts with drawn or sketch images, and these can be accomplished within a reasonable amount of time, but fully colorizing them is a labor-intensive task. For this reason, the ability to automatically colorize sketch images has significant potential value. However, automatic sketch image colorization is still challenging for the following reasons. (i) The information provided by an input sketch image is extremely limited compared to colored images or even gray-scale ones, and (ii) there can be multiple possible outcomes for a given sketch image without any conditional input, which tends to degrade the model performance and introduce bias toward the dominant colors in the dataset. To alleviate these issues, conditional image colorization methods take partial hints in addition to the input image, and attempt to generate a realistic output image that reflects the context of the given hints. Several studies have leveraged user-guided interactions as a form of user-given conditions to the model, assuming that the users would provide a desired color value for a region as a type of point-wise color hint [40] or a scribble [28,3]. Although these approaches have made remarkable progress, there still exist nontrivial limitations.
First, existing approaches do not address the issue of estimating the semantic regions which indicate how far the user-given color hints should be spread, and thus the colorization model tends to require many user hints to produce a desirable output. Second, for every interaction at test time, the users are still expected to provide the local position of each color hint by pointing out the region of interest (RoI), which increases the user's effort and time commitment. Lastly, since existing approaches typically obtain the color hints at randomized locations at training time, the discrepancy between the intervention mechanisms of the training and test phases needs to be addressed. In this work, we propose a novel model-guided framework for the interactive colorization of a sketch image, called GuidingPainter. A key idea behind our work is to make a model actively seek the regions where color hints would be provided, which can significantly improve the efficiency of the interactive colorization process. To this end, GuidingPainter consists of two modules: an active-guidance module and a colorization module. Although the colorization module works similarly to previous methods, our main contribution is the hint generation mechanism in the active-guidance module. The active-guidance module (Section 3.2-3.3) (i) divides the input image into multiple semantic regions and (ii) ranks them in decreasing order of the estimated model gain when the region is colorized (Fig. 1(a)). Since it is extremely expensive to obtain groundtruth for segmentation labels or even their prioritization, we explore a simple yet effective approach that identifies the meaningful regions in order of their priority without any manually annotated labels. In our active guidance mechanism (Section 3.3), GuidingPainter can learn such regions by intentionally differentiating the frequency of usage of each channel obtained from the segmentation network. Also, we conduct a toy experiment (Section 4.5) to understand the mechanism, and to verify the validity of our approach. We propose several loss terms, e.g., a smoothness loss and a total variance loss, to improve colorization quality in our framework (Section 3.5), and analyze their effectiveness both quantitatively and qualitatively (Section 4.6). Note that the only action required of users in our framework is to select one representative color for each region the model provides based on the estimated priorities (Fig. 1(b)). Afterwards, the colorization network (Section 3.4) generates a high-quality colorized output by taking the given sketch image and the color hints (Fig. 1(c)). In summary, our contributions are threefold: • We propose a novel model-guided deep image colorization framework, which prioritizes regions of a sketch image in the order of the interest of the colorization model. • GuidingPainter can learn to discover meaningful regions for colorization and arrange them by priority just by using the groundtruth colorized image, without additional manual supervision. • We demonstrate that our framework can be applied to a variety of datasets by comparing it against previous interactive colorization approaches in terms of various metrics, including our proposed evaluation protocol.

Deep Image Colorization
Existing deep image colorization methods, which utilize deep neural networks for colorization, can be divided into automatic and conditional approaches, depending on whether conditions are involved or not.
Automatic image colorization models [39,29,36,1] take a gray-scale or sketch image as an input and generate a colorized image. CIC [39] proposed a fully automatic colorization model using convolutional neural networks (CNNs), and Su et al. [29] further improved the model by extracting the features of objects in the input image. Despite the substantial performance of automatic colorization models, a nontrivial amount of user intervention is still required in practice. Conditional image colorization models attempt to resolve these limitations by taking reference images [16] or user interactions [40,3,38,34,37] as additional input. For example, Zhang et al. [40] allowed the users to input point-wise color hints in real time, and AlacGAN [3] utilized stroke-based user hints by extracting semantic feature maps. Although these studies showed that the results are improved by user hints, they generally require a large amount of user interaction.

Interactive Image Generation
Beyond the colorization task, user interaction is utilized in numerous computer vision tasks, such as image generation and image segmentation. In image generation, research has been actively conducted to utilize various user interactions as additional input to GANs. A variety of GAN models employ image-related features from users to generate user-driven images [7,17] and face images [26,12,31,15,30]. Several models generate and edit images via natural-language text [35,23,42,2]. In image segmentation, to improve the details of segmentation results, recent models have utilized dots [27,20] and texts [9] from users. Although we surveyed a wide scope of interactive deep learning models beyond sketch image colorization, there is, to the best of our knowledge, no work directly related to our approach. Therefore, the use of a deep learning-based guidance system for the interactive process can be viewed as a promising but under-explored approach.

Figure 2: Hint generation process of our proposed GuidingPainter model. The segmentation network and the hint generation function render colored hints (C) and condition masks (M). Based on the guidance results, our colorization network colorizes the sketch image. The example illustrates the hint generation process in the training phase where N_h = 3 and N_c = 4. First, the groundtruth image is copied N_c times to consider each color segment at each interaction step. After element-wise multiplication with the guided regions, (a) averages the colors to decide representative colors for each guided region. To restrict the number of hints, we mask out the segments whose iteration step is larger than N_h; the masked results are (b). Based on (a) and (b), our module generates the colored condition for each segment as (c). In (d), we combine them into one partially-colorized image C. (e) operates in the same manner as (d) and generates the condition mask M.

Problem Setting
The goal of the interactive colorization task is to train networks to generate a colored image Ŷ ∈ R^{3×H×W} by taking as input a sketch image X ∈ R^{1×H×W} along with user-provided partial hints U, where H and W indicate the height and width of the target image, respectively. The user-provided partial hints are defined as a pair U = (C, M), where C ∈ R^{3×H×W} is a sparse tensor with RGB values, and M ∈ {0, 1}^{1×H×W} is a binary mask indicating the region in which the color hints are provided.
Our training framework consists of two networks and one function: a segmentation network f (Section 3.2), a colorization network g (Section 3.4), and a hint generation function h (Section 3.3), which are trained in an end-to-end manner.

Segmentation Network

The purpose of the segmentation network f(·) is to divide the sketch input X into several semantic regions, each of which is expected to be painted in a single color, i.e.,

S = f(X),  (1)

where S = (S_1, S_2, ..., S_{N_c}) ∈ {0, 1}^{N_c×H×W}, S_i is the i-th guided region, and N_c denotes the maximum number of hints. Specifically, f contains an encoder-decoder network with skip connections, based on the U-Net [10] architecture, to preserve the spatial details of given objects. Since each guided region will be painted with a single color, we have to segment the output of the U-Net in a discrete form while retaining the advantages of end-to-end learning. To this end, after obtaining an output tensor S_logit ∈ R^{N_c×H×W} from the U-Net, we discretize S_logit by applying the straight-through (ST) Gumbel estimator [11,19] across channel dimensions to obtain S as a differentiable approximation. The result S satisfies Σ_{i=1}^{N_c} S_i(j) = 1 for every position j, where S_i(j) indicates the i-th scalar value of the j-th position vector, i.e., every pixel is contained in exactly one guided region. Here, S_i(j) = 1 indicates that the j-th pixel is contained in the i-th guided region, while S_i(j) = 0 indicates that it is not.

Hint Generation

The hint generation function h(·) is a non-parametric function that plays the role of simulating U based on S, a colored image Y, and the number of hints N_h, i.e.,

U = h(S, Y, N_h).  (2)

To this end, we first randomly sample N_h from a bounded distribution G similar to a geometric distribution, formulated as

Pr(N_h = k) ∝ p(1 − p)^{k−1}, 1 ≤ k ≤ N_c,  (3)

where p < 1 is a hyperparameter indicating the probability that the user stops adding a hint on each trial. We set N_c = 30 and p = 0.125 for the following experiments.

Step 1: building masked segments S̃. Given N_h, we construct a mask vector m ∈ {0, 1}^{N_c} having each element obey the following rule:

m_i = 1 if i ≤ N_h, and m_i = 0 otherwise,  (4)

where m_i indicates the i-th scalar value of the vector m. Afterwards, we obtain a masked segment S̃ ∈ R^{N_c×H×W} by element-wise multiplying the i-th element of m with the i-th channel of S as

S̃_i = m_i S_i,  (5)

where S_i, S̃_i ∈ R^{1×H×W} denote the i-th channel of S and S̃, respectively.

Step 2: building hint maps C. The goal of this step is to find the representative color value of the activated region in each segment S̃_i, and then to fill the corresponding region with this color. To this end, we calculate a mean RGB color c̃_i ∈ R^3 as

c̃_i = (1 / N_p) Σ_j S̃_i(j) ⊙ Y(j),  (6)

where N_p = Σ_j S̃_i(j) indicates the number of activated pixels of the i-th segment, ⊙ denotes element-wise multiplication, i.e., the Hadamard product, after each element of S̃_i is broadcast to the RGB channels of Y, and both S̃_i(j) and Y(j) indicate the j-th position vector of each map. Finally, we obtain hint maps C ∈ R^{3×H×W} as

C = Σ_{i=1}^{N_c} c̃_i ⊙ S̃_i,  (7)

where c̃_i is repeated along the spatial axes to the form of S̃_i ∈ R^{1×H×W} similar to Eq. (5), and S̃_i is broadcast along the channel axis to the form of c̃_i ∈ R^3 as in Eq. (6). In order to indicate the region of given hints, we simply obtain a condition mask M ∈ R^{1×H×W} as

M = Σ_{i=1}^{N_c} S̃_i.  (8)

Eventually, the output of this module is U = C ⊕ M ∈ R^{4×H×W}, where ⊕ indicates channel-wise concatenation. Fig. 2 illustrates the overall scheme of the hint generation process. At inference time, we can create U similarly to the hint generation process, but without an explicit groundtruth image. Note that a sketch image is all we need to produce S̃ at inference time.
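The following is a minimal PyTorch sketch of the hint generation function h, following Eqs. (3)-(8) as reconstructed above; the function name and the loop-based geometric sampling are our own assumptions, and a real implementation would vectorize this over the batch.

```python
import torch

def generate_hints(S, Y, p=0.125):
    """Sketch of the hint generation function h (names are ours).

    S: (Nc, H, W) one-hot segment maps from the segmentation network.
    Y: (3, H, W) groundtruth color image (training time only).
    Returns hint maps C (3, H, W) and condition mask M (1, H, W).
    """
    Nc, H, W = S.shape
    # Sample N_h from a bounded geometric-like distribution (Eq. 3):
    # continue adding hints with probability (1 - p), capped at Nc.
    N_h = 1
    while N_h < Nc and torch.rand(1).item() > p:
        N_h += 1

    m = (torch.arange(Nc) < N_h).float()          # mask vector, Eq. (4)
    S_masked = S * m.view(Nc, 1, 1)               # masked segments, Eq. (5)

    C = torch.zeros(3, H, W)
    for i in range(Nc):
        Np = S_masked[i].sum()
        if Np > 0:
            # Mean RGB of the activated region (Eq. 6), then fill it (Eq. 7).
            c_i = (S_masked[i] * Y).sum(dim=(1, 2)) / Np
            C += c_i.view(3, 1, 1) * S_masked[i]
    M = S_masked.sum(dim=0, keepdim=True)         # condition mask, Eq. (8)
    return C, M
```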
We can obtain C and M by assigning a color to each S̃_i for i = 1, 2, ..., N_h.

To understand how the hint generation module works, recall that N_h is randomly sampled from the bounded geometric distribution G (Eq. (3)) per mini-batch at training time. Since the probability that i ≤ N_h is higher than the probability that j ≤ N_h for i < j, S_i is more frequently activated than S_j during training. Hence, we can expect the following effects from this module: (i) N_h determines how many segments, starting from the first channel of S, receive hints, as computed in Eqs. (4)-(5); this mechanism therefore encourages the segmentation network f(·) to locate relatively important and uncertain regions at the forward indexes of S. Section 4.5 shows that this module behaves as expected. (ii) We can provide richer information to the subsequent colorization network g(·) than previous approaches, without requiring additional labels at training time or extra interactions at test time, helping to generate better results even with fewer hints than the baselines (Section 4.3).

Colorization Network

The colorization network g(·) aims to generate a colored image Ŷ by taking all the information obtained from the previous steps, i.e., a sketch image X, guided regions S, and partial hints U, as

Ŷ = g(X, S, U).  (9)

The reason for using the segments as input is to provide information about the color relationships that the segmentation network infers. In order to capture the context of the input and to preserve the spatial information of the sketch image, our colorization network also adopts the U-Net architecture, the same as the segmentation network. We then apply a hyperbolic tangent activation function to normalize the output tensor of the U-Net.

Objective Functions

As shown in Fig. 2, our networks are trained using a combination of the following objective functions. For simplicity, G denotes the generator of our approach, which contains all the procedures mentioned above, i.e., f, h, and g, while D denotes the training dataset.

Smoothness loss. Although adjacent pixels in an image tend to have similar RGB values, our segment guidance network has no explicit mechanism to generate segments containing such locally continuous pixels. To improve the users' ability to interpret the segments, we introduce a smoothness loss,

L_smth = E_{X∼D} [ Σ_i Σ_{j∈N_i} ||S_logit(i) − S_logit(j)||² ],  (10)

where N_i denotes the set of eight nearest neighbor pixels adjacent to the i-th pixel, and S_logit(i) indicates the i-th position vector of S_logit.

Total variance loss. In our framework, the quality of the segments from f is important because the hints U are built based on the guided regions S = f(X). Although f can be trained indirectly by the colorization signal, we introduce a total variance loss to facilitate this objective directly, i.e.,

L_tv = E_{(X,Y)∼D} [ Σ_{i=1}^{N_c} ||S_i ⊙ (Y − c̃_i)||_F ],  (11)

where ||·||_F denotes the Frobenius norm. That is, L_tv attempts to minimize the color variance across pixels in each segment, which helps pixels of similar color form into the same segment.

Reconstruction loss. Since both a sketch image X and its corresponding partial hint U are built from a groundtruth image Y in the training phase, we can directly supervise our networks G so that they generate an output image close to the groundtruth Y. Following previous work, we select the L_1 distance function as our reconstruction loss, i.e.,

L_rec = E_{(X,Y)∼D} [ ||Ŷ − Y||_1 ].  (12)
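As a rough illustration of the smoothness and total variance terms as reconstructed above, consider the following PyTorch sketch; the function names are ours, and we use 4-neighbour differences in place of the full 8-neighbour sum for brevity.

```python
import torch

def smoothness_loss(S_logit):
    """Penalize logit differences between vertically and horizontally
    adjacent pixels (a simplified stand-in for the 8-neighbour version).

    S_logit: (B, Nc, H, W) segmentation logits.
    """
    dh = (S_logit[:, :, 1:, :] - S_logit[:, :, :-1, :]).pow(2).mean()
    dw = (S_logit[:, :, :, 1:] - S_logit[:, :, :, :-1]).pow(2).mean()
    return dh + dw

def total_variance_loss(S, Y):
    """Penalize color variance within each segment.

    S: (B, Nc, H, W) one-hot segments; Y: (B, 3, H, W) groundtruth colors.
    """
    loss = 0.0
    B, Nc, H, W = S.shape
    for i in range(Nc):
        Si = S[:, i:i+1]                                  # (B, 1, H, W)
        Np = Si.sum(dim=(2, 3), keepdim=True).clamp(min=1.0)
        ci = (Si * Y).sum(dim=(2, 3), keepdim=True) / Np  # mean color
        # Frobenius norm of the masked deviation from the mean color.
        loss = loss + (Si * (Y - ci)).pow(2).sum(dim=(1, 2, 3)).sqrt().mean()
    return loss
```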
Adversarial loss. As shown in prior image generation work, we adopt an adversarial training [5] strategy, in which our generator G produces output images natural enough to fool a discriminator D, while D attempts to classify whether an image is real or fake. During the image colorization task, the original content of a sketch input should be preserved as much as possible. Therefore, we leverage the conditional adversarial [22] loss, written as

L_adv = E_{(X,Y)∼D} [log D(X, Y)] + E_{X∼D} [log(1 − D(X, G(X)))].  (13)

Finally, our objective function is defined as

L = L_rec + λ_adv L_adv + λ_tv L_tv + λ_smth L_smth,  (14)

where each λ indicates the weighting factor of the corresponding loss term. We describe the implementation details in the supplementary material.

Sketch Image Datasets

Yumi's Cells [24] is composed of 10K images from 509 episodes of a web cartoon, named Yumi's Cells, in which a small number of characters appear repeatedly. Because it was published commercially, this dataset includes not only character objects but also non-character objects, e.g., text bubbles, letters, and background gradation. We therefore chose this dataset to evaluate the practical effectiveness of our model. Tag2pix [13] consists of over 60K filtered large-scale anime illustrations from the Danbooru dataset [6]. While this dataset consists of images of a single character and a simply colored background, the diversity of each character in terms of pose and scale makes it challenging to generate plausible colored outputs. We chose this dataset to verify that our model reflects various user hints well. CelebA [18] is a representative dataset which contains 203K human face images from diverse races. We chose it to evaluate our model on real-world images rather than artificial ones. We randomly divided each dataset into a training, a validation, and a test set with the ratio of 81:9:10 and resized all images to 256 × 256. Following the recipe of Lee et al. (2020) [16], the sketch images were extracted using the XDoG [33] algorithm.

Evaluation Metrics

Peak signal-to-noise ratio (PSNR) has been broadly used as a pixel-level evaluation metric for measuring the degree of distortion of a generated image in colorization tasks [39,10]. The metric is computed as the logarithmic quantity of the maximum possible pixel value of the image divided by the root mean squared error between a generated image and its groundtruth.

Fréchet inception distance (FID). We used FID [8] as an evaluation metric for measuring model performance by calculating the Wasserstein-2 distance between the feature space representations of the generated outputs and the real images. A low FID score means that the generated images are close to the real image distribution.

Number of required interactions (NRI). We propose a new evaluation metric to measure how many user interactions are required for the model to produce an image of a certain quality. To this end, we count the number of hints needed by the model to reach a PSNR benchmark. If the model cannot reach this level of accuracy even with the maximum number of hints, we record the count as the maximum number of hints plus one. The benchmark can be set according to the user's tolerance or the purpose of a framework. We set 20.5, 17.5, and 19.5 as the benchmarks for the Yumi's Cells, Tag2pix, and CelebA datasets, respectively.
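The NRI metric itself is straightforward to compute; a minimal Python sketch (with hypothetical inputs and names of our own choosing) follows.

```python
def number_of_required_interactions(psnr_per_hints, benchmark, max_hints):
    """NRI: smallest hint count whose PSNR reaches the benchmark.

    psnr_per_hints: list where psnr_per_hints[k] is the PSNR with k hints.
    Returns max_hints + 1 if the benchmark is never reached.
    """
    for k, psnr in enumerate(psnr_per_hints):
        if psnr >= benchmark:
            return k
    return max_hints + 1

# e.g., with the 20.5 dB benchmark used for the Yumi's Cells dataset:
print(number_of_required_interactions([18.0, 19.7, 20.9],
                                      benchmark=20.5, max_hints=30))  # -> 2
```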
Comparisons against Colorization Baselines

We compare our GuidingPainter against diverse baseline models for deep colorization tasks, including an image translation model, Pix2Pix [10]; an automatic colorization model, CIC [39]; a point-based interactive colorization model, RTUG [40]; and a scribble-based interactive colorization model, AlacGAN [3]. Since our main focus is interactive colorization, we primarily analyze the performance of GuidingPainter in conditional cases. In order to analyze the colorization efficiency of conditional models, we compute NRI and the expected values of PSNR and FID when the number of color hints follows the distribution G. The color hints are synthesized by each model's own method used for training, i.e., RTUG and AlacGAN receive hints at random locations, while our model receives color hints for the regions obtained by the active-guidance module, in order from the front channel (S_1, S_2, ...).

Table 1 presents the quantitative results of our model and the other baselines on each dataset. Our model outperforms all conditional baselines on all three metrics. This reveals that our model can generate various realistic images, reflecting the given conditions while reducing the interactions. Although our framework is mainly designed to colorize sketch images when conditions are given, our model shows comparable performance in the automatic colorization setting. We also analyze the effectiveness of our guidance mechanism in situations where real users give hints in Section 4.4.

As shown in Fig. 3, our model paints each segment by successfully reflecting both the location and the color of the hints. The results show that ours are better than those of the other conditional baselines. For a fair qualitative comparison, we equalize the number of hints given to each method and make the locations of the color hints for AlacGAN and RTUG similar to ours, by sampling points in the regions that our segmentation network produces. The marks in the sketch image in Fig. 3 indicate where the hints are provided for RTUG. Compared with the conditional baselines on the animation dataset, our model reduces the color bleeding artifact, e.g., the second row in Fig. 3, and generates continuous colors within each segment, e.g., the hair in the first row, and the sky and the ground in the third row in Fig. 3. This reveals that our model can distinguish the semantic regions of character and background and reflect the color hints in the corresponding regions. In particular, for the last two rows of Fig. 3, our model is superior at colorizing the background region, while the other baselines colorize the background across the edges or only part of the object. Technically, our approach can be applied to colorize not only a sketch image but also a gray-scale image.

User Study on Interactive Colorization Process

To validate the practical interactive process of our active-guidance mechanism, we develop a straightforward user interface (UI) that controls peripheral variables other than our main algorithm. We conduct an in-depth user evaluation, in which users directly participate in the process of our framework. We then record various metrics to assess the practical usefulness of our method. We choose RTUG as our baseline interactive method since its interactive process is directly comparable to ours. As shown in Table 2, our model shows better time-per-interaction (TPI) scores with less qualitative degradation than the RTUG model, confirming the superior time efficiency of our model.
The total colorization time is decreased by 14.2% on average compared to RTUG. Furthermore, the improvement in the convenience score (CS) reveals that our approach clearly reduces the users' workload. For more details, e.g., our UI design, see the supplementary material.

Effectiveness of Active-Guidance Mechanism

To understand the effects of our active-guidance mechanism described in Sections 3.2-3.3, we design two sub-experiments as follows.

Dark Snail. The first is a simulation showing that the proposed mechanism works as expected, using a toy example named Dark Snail. As shown in the first row of Fig. 4, squares and rectangles are sequentially placed in a clockwise direction, and a groundtruth is generated at every mini-batch by randomly sampling colors of red, green, and blue. In this setting, it is impossible for a model to estimate the exact color of each object unless the corresponding color hint is provided. Because the size of each rectangle is halved compared to the previous one, querying the largest region first is the optimal choice in terms of information gain. In other words, this toy experiment is designed to confirm whether our model can (i) divide the semantic regions with the same color and (ii) ask for the color hints of objects in descending order of their size. Fig. 4(a) shows the guided regions obtained from a model trained with our original mechanism, GuidingPainter. Remarkably, the original model tends to build semantic segments that are (i) bounded by only one object and (ii) placed in decreasing order of segment size, except for the fourth case. Alternatively, Fig. 4(b) is retrieved from a modified version of our model trained by fixing N_h = N_c during training, i.e., we simply turn off the most critical role of the hint generation function. Fig. 4(b) demonstrates that the modified model totally loses its guiding function, implying that the active-guidance mechanism plays a critical role in our framework.

Importance of highly ranked segments. For every dataset described in Section 4.1, we test how each segment provided by the active-guidance module affects the performance of colorization. To assess the importance of the i-th segment, we put the map of the i-th channel in front of the remaining channels of S and then give a hint only at the first segment. Fig. 5 shows that the PSNR score tends to decrease as the hint is given from a rear-ranked segment, which shows that the active-guidance module encourages the network to locate the important regions in the front channels of S. While following the colorization order suggested by the model is an efficient way to reduce loss at training time, it is also possible to change the colorization order with additional learning. Detailed discussions of our approach, including the learning method for changing the order and limitations, are provided in the supplementary material.

Table 3: Quantitative results of the ablation study for the losses (a) L_rec, (b) L_rec + L_adv, (c) L_rec + L_adv + L_tv, and (d) L_rec + L_adv + L_tv + L_smth.

Effectiveness of Loss Functions

This section analyzes the effects of each loss function using both quantitative measurements and qualitative results. In this ablation study, we found a trade-off between the pixel-distance-based metric, i.e., PSNR, and the feature-distribution-based metric, i.e., FID, according to the combination of loss functions.
Since L_rec directly corresponds to PSNR, Table 3(a) shows the best scores on the PSNR-related measurement. However, it does not perform well in terms of FID, especially on the Tag2pix and CelebA datasets. This phenomenon can also be observed in Fig. 6(a): the character in the first colorization result tends to be painted with grayish colors, and the colorization results overall lose sharpness. After L_adv is added, the FID scores in Table 3(b) improve dramatically, along with the qualitative results in Fig. 6(b), but the PSNR-based scores slightly decrease. As discussed in previous work [32], we conjecture that the PSNR score is not sufficient to measure how naturally a model can generate images when only partial conditions are given. Although Fig. 6(b) shows plausible images, the hair in all the output images is slightly stained. By adding L_tv, these stains are removed and the colors become clear, as shown in Fig. 6(c). After adding L_smth, the guided regions become significantly less sparse than before, and the strange colors on the sleeve of the character in Fig. 6(c) disappear, as shown in Fig. 6(d). Table 3 shows that the FID score improves as L_adv, L_tv, and L_smth are added one by one to L_rec on all datasets. Despite the trade-off, we select (d) as our total loss function, considering the qualitative improvements and the balance between the PSNR-based and FID metrics.

Conclusions

This work presents a novel interactive deep colorization framework, which enables the model to learn the priority regions of a sketch image that are most in need of color hints. Experimental results show that our framework improves the image quality of interactive colorization models, successfully reflecting the color hints with our active guidance mechanism. Importantly, our work demonstrates that GuidingPainter, without any manual supervision at all, can learn the ability to divide the semantic regions and rank them in decreasing order of priority by utilizing the colorization signal in an end-to-end manner. We expect that our approach can be used to synthesize hints for training other interactive colorization models. Developing a sophisticated UI which integrates our region prioritization algorithm with diverse techniques, such as region refinement, remains future work.

A. Overview

This document provides additional information that we could not cover in our main paper due to the page limit. Section B shows the applicability of our approach to gray-scale images on diverse datasets. Section C describes the implementation details for reproducibility, including the network architectures, the hyperparameters of the optimizers we used, and the training details. Section D provides discussions of our approach, including our hinting mechanism, a method to change the colorization order, and limitations of our method. Section E gives detailed descriptions of the user studies we conducted in Section 4.4 of our main paper. In the rest of this document (Section F), we present qualitative results of our proposed method. Note that the attached video material may help in understanding the behavior of GuidingPainter visually, and source code is provided for code-level details.

B. Application to Gray-scale Colorization

We test our approach using gray-scale input on the CelebA [18], ImageNet Car [4], and Summer2Winter Yosemite [41] datasets. To do this, we modify the colorization model to take the L channel of an image as input and output the AB channels of the image (a minimal sketch of this channel split is given below).
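The channel split itself can be done with standard color-space conversions; the following sketch assumes scikit-image's rgb2lab/lab2rgb functions, and the helper names are ours.

```python
import numpy as np
from skimage.color import rgb2lab, lab2rgb

def split_lab(rgb):
    """Split an RGB image (H, W, 3) in [0, 1] into the L input channel
    and the AB target channels for gray-scale colorization."""
    lab = rgb2lab(rgb)
    L = lab[:, :, :1]    # (H, W, 1) model input
    AB = lab[:, :, 1:]   # (H, W, 2) model target
    return L, AB

def merge_lab(L, AB):
    """Recombine a predicted AB map with the input L channel."""
    return lab2rgb(np.concatenate([L, AB], axis=2))
```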
Table 4 presents the quantitative results of RTUG and our model on each dataset. Our model surpasses the baseline in terms of both PSNR and FID. This demonstrates that our model can produce realistic images while reflecting color hints in gray-scale colorization. Qualitative examples are shown in Fig. 7.

Table 4: Quality comparison of gray-scale colorization results in terms of PSNR and FID. We provide RTUG and our model with the same amount of color hints, which follows G.

C. Implementation Details

U-Net Architecture. We adopt the U-Net architecture in both the segmentation network and the colorization network, except for the sizes of the channels in the input layer and output layer. The layer specification is shown in Table 5.

Table 5: The layer specification of the U-Net. DoubleConv denotes two consecutive Conv-BatchNorm-ReLU blocks, and Conv denotes a convolution layer. I and O denote the sizes of the input and output channels, respectively.

Features from each encoder block E_i are passed, via skip connections, through the corresponding decoder block D_i. Every convolution in E_1-D_4 is applied with a 3 × 3 kernel, whereas the convolution in the output layer is applied with a 1 × 1 kernel.

Discriminator. The discriminator D is implemented with PatchGAN [10], which outputs a 30 × 30 tensor. We use the LSGAN [21] objective to train the GAN architecture.

Training Details. We initialize the weights of the networks from the normal distribution with a mean of 0 and a standard deviation of 0.02. The Adam optimizer [14] with β_1 = 0.5, β_2 = 0.999 is used to train our networks on all datasets. The learning rate is fixed at 0.0002 for the first half of the epochs and linearly decays to zero over the remaining half. We schedule the temperature τ of the ST Gumbel estimator with the exponential policy τ = 0.1^{current epoch / total epochs}, adopted from RelGAN [25]. The total numbers of epochs for the Yumi's Cells, Tag2pix, and CelebA datasets are 500, 30, and 20, respectively. The optimization typically takes about 1-2 days on 4 TITAN RTX GPUs.

D.1. RoI-based Hinting Mechanism

In this study, we mainly compare our RoI-based hinting mechanism with point- and scribble-based hinting mechanisms. Note that there are trade-offs between these methods. Point- or scribble-based approaches have their own advantages in that they can utilize the location of the color hint. However, our goal is to colorize images with a few hints, not to colorize perfectly given plenty of time. We therefore focus on validating the effectiveness of our region-based guidance system in terms of interaction efficiency. As shown in the user study (Section 4.4 of our main paper), our region-based guidance system can reduce the average time per interaction, resulting in an improved convenience score. Since artists spend a lot of time adding base colors to sketch images in real-world applications, it is worthwhile to find efficient methods to mitigate such a labour-intensive process. In this context, our work can make this labour-intensive process significantly more efficient. Although accurately and efficiently colorizing more complex images is still difficult, we expect that combining our RoI hinting mechanism with point or scribble hinting mechanisms would be a promising way to address this problem.

D.2. Changing the Colorization Order

While GuidingPainter automatically guides color hints to regions in an efficient order, users may want to paint the regions regardless of this fixed order. We found that it is possible to change the colorization order of GuidingPainter through two-stage learning.
After training GuidingPainter with the ordinary learning process, the segmentation network gains the ability to estimate regions from a given sketch image. To make the colorization order changeable, we then train only the colorization network by fixing N_h = N_c and randomly dropping out some of the hints produced by the hint generation module, i.e., letting each m_i be a Bernoulli random variable with success probability p = 0.125, in a second learning phase. As a result, the colorization network can colorize the sketch image even if only some random regions are given color hints in a random order. As shown in Fig. 8, our modified GuidingPainter can colorize a sketch image in different colorization orders.

D.3. Effectiveness of the Number of Hints

We investigate how the performance of our model and the baseline models changes as the number of hints increases. Fig. 12 shows the change in PSNR and FID scores when 2, 4, 6, 8, 10, or 12 hints are provided to each model. We found that GuidingPainter mostly surpasses the baselines when the same number of hints is given. When two or more color hints are given, our model surpasses the other baseline models in both PSNR and FID scores. The results show that our hint guidance method enables the colorization module to reflect hints effectively.

In some cases, the quality of the resulting image is degraded due to wrong predictions by the segmentation network. In Fig. 9, the region marked in red shows a misaligned segment of the 'sand' region inside the 'sea' one. We found that these minor errors can be slightly refined by the subsequent colorization network. Despite this self-correction, a stain still remains on the result, marked in blue. Mitigating this problem through segmentation correction techniques would be a promising direction for future work.

E. User Study

This section describes the details of the user study in Section 4.4 of our main paper. In addition, we conduct a user-perception study to evaluate whether our model can reflect unusual color hints.

E.1. Efficiency of Interactive Process

Since our framework aims to increase the interaction efficiency of the colorization process, we conduct a user study to estimate how our model can enhance the overall process when users intervene in it. As a competitive approach to our framework for estimating interaction efficiency, we choose RTUG, since the interaction process presented in the original paper [40] is most similar to ours and we can easily quantify the amount of interaction in the process. As shown in Fig. 10, we develop a straightforward user interface (UI) to rule out peripheral variables other than the main algorithms as much as possible. The UI consists of a color palette, screens for checking hints and colorization results, and a few buttons to apply hints or to select the next hint. The users test our method and RTUG on three datasets: Yumi's Cells, Tag2pix, and CelebA. When testing our method, the user sees a region guided by our mechanism in the left image (Fig. 10(1)) and chooses a color from the palette at the bottom of the screen (Fig. 10(2)). After selecting the color, the user sees an inference image on the right of the screen (Fig. 10(3)). The overall process is the same when the user tests RTUG, except that the location of the hint is displayed in the form of a point on the left image and the user can move the point to a desired location by clicking the left image.
The user can click the next button to add another color hint or click the finish button to end the colorization process. Before the evaluation, we let the users freely use the UI for about 5 minutes so that they can become familiar with it. A total of 13 participants, comprising researchers and engineers in computer science and AI, attended our user study. Each user is asked to colorize the given sketch image as naturally as possible without a reference image. For each dataset and method, the user completes three images using the UI. We guide the users to finish each colorization task in roughly one minute, preventing them from spending too much time on a single task. The evaluation results of the user study are shown in Table 2 of our main paper.

Table 6: User-perception study results evaluating how faithfully models reflect user-provided conditions. 'Top-k' indicates how often the generated images are ranked within the top-k among models over three datasets. All numbers are percentages.

E.2. Reflecting Unusual Color Hints

We also conduct a user study to evaluate how faithfully our model and the baselines reflect user interaction, even when the hints contain unusual colors. To be specific, we randomly select 500 images from each test dataset and prepare images generated by each model with strongly perturbed color hints, as shown in Fig. 11(a). The perturbed color is created by adding random values between −64 and 64 to each RGB value of the groundtruth color. For a fair comparison, we unify the locations of the given hints across models by randomly sampling them within the guided regions produced by the segmentation network. In other words, the number of provided hints and their positions are similar across the baselines and our model. The hint images have up to seven hints, and each model generates images based on the same number of hints. Given the generated image and the hint map, the user is asked to rank the generated images in order of how properly the hints are reflected. Table 6 shows the percentage of generated images within the top-k of the rank over all datasets. Our model not only achieves the highest top-1 ratio over all datasets, but also successfully reflects diverse color conditions, as shown in Fig. 11. This implies that our model can work robustly under color variations.

F. Qualitative Results

This section provides additional qualitative results at a size of 256 × 256 over the three datasets. Fig. 13 compares qualitative results for both automatic and conditional colorization models. Figs. 14, 15, and 16 show how the output images approach the groundtruth as the model and a user interact, on the CelebA [18], Tag2pix [13], and Yumi's Cells [24] datasets, respectively. Fig. 17 presents diverse output images according to the colors of the given hints on each dataset.

Figure 12: Performance changes according to the number of user hints. Columns correspond to the results on Yumi's Cells [24], Tag2pix [13], and CelebA [18] for the baselines and GuidingPainter. The first row shows the PSNR scores and the second row presents the corresponding FID [8] scores.

Figure 13: Comparison to baselines on diverse datasets. We compare our model in two settings: automatic and conditional colorization. To assess the performance of our model in the automatic setting, we choose CIC, Pix2Pix, AlacGAN, and RTUG as baselines. For the conditional case, we equalize the number of hints given to all baselines and our model.
Our model successfully colorizes each segment without color-bleeding artifacts, e.g., the third and fourth rows, and generates continuous colors within each segment, e.g., the hair in the first two rows, and the sky and the ground in the fifth row.

Figure 14: Qualitative results on the CelebA dataset. At each interaction, denoted by the number of hints, our model estimates the hint region it most wants to know about first. Then, we select a representative color for the region, and the color is spread over the region, as visualized in the accumulated hints. Finally, a colorization result is generated by our colorization network, taking the guided regions, the accumulated hints, and the sketch image. In these examples, we do not remove the noise used in the Gumbel-Softmax operation, in order to directly represent the guided regions provided to the colorization network. We summarize the interaction process in three rows for each of the four images. The results show how the output images change along with the color hints at each interaction step. In particular, the sixth column shows that the model captures the shadow on the human face and colorizes it appropriately.

Figure 15: Qualitative results on the Tag2pix dataset. Each interaction reveals that our model recognizes semantically related segments in each image, e.g., the background, clothes, hair, and face of a character. In the first iteration, the model concentrates on the background and adapts the color when a background color is given. In particular, for the hair segment in the third iteration, our model successfully reflects the color changes without bleeding the color outside of the hair region.

Figure 16: Qualitative results on the Yumi's Cells dataset. Each intermediate iteration shows that semantically meaningful segments are recommended and colorized, such as parts of the background, speech balloon, clothes, face, and hair of an image. As shown in the rows of colorization results, the automatically colorized images become similar to the groundtruth image by adding each color hint in only eight iterations. This demonstrates that our model not only reflects the color conditions at adequate locations, but also improves the quality of the resulting images. Especially in the last two images, our model fixes the key color errors, such as the purple night sky, the green clothes, and the yellow speech bubble.
2022-10-27T01:16:27.038Z
2022-10-25T00:00:00.000
{ "year": 2022, "sha1": "2355699c4bb732a927e6c5cc11c4ca8b53f4fb09", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "2355699c4bb732a927e6c5cc11c4ca8b53f4fb09", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
10511015
pes2o/s2orc
v3-fos-license
Accelerating Reinforcement Learning through Implicit Imitation

Imitation can be viewed as a means of enhancing learning in multiagent environments. It augments an agent's ability to learn useful behaviors by making intelligent use of the knowledge implicit in behaviors demonstrated by cooperative teachers or other more experienced agents. We propose and study a formal model of implicit imitation that can accelerate reinforcement learning dramatically in certain cases. Roughly, by observing a mentor, a reinforcement-learning agent can extract information about its own capabilities in, and the relative value of, unvisited parts of the state space. We study two specific instantiations of this model, one in which the learning agent and the mentor have identical abilities, and one designed to deal with agents and mentors with different action sets. We illustrate the benefits of implicit imitation by integrating it with prioritized sweeping, and demonstrating improved performance and convergence through observation of single and multiple mentors. Though we make some stringent assumptions regarding observability and possible interactions, we briefly comment on extensions of the model that relax these restrictions.

Introduction

The application of reinforcement learning to multiagent systems offers unique opportunities and challenges. When agents are viewed as independently trying to achieve their own ends, interesting issues in the interaction of agent policies (Littman, 1994) must be resolved (e.g., by appeal to equilibrium concepts). However, the fact that agents may share information for mutual gain (Tan, 1993) or distribute their search for optimal policies and communicate reinforcement signals to one another (Mataric, 1998) offers intriguing possibilities for accelerating reinforcement learning and enhancing agent performance. Another way in which individual agent performance can be improved is by having a novice agent learn reasonable behavior from an expert mentor. This type of learning can be brought about through explicit teaching or demonstration (Lin, 1992; Whitehead, 1991a), by sharing of privileged information (Mataric, 1998), or through an explicit cognitive representation of imitation (Bakker & Kuniyoshi, 1996). In imitation, the agent's own exploration is used to ground its observations of other agents' behaviors in its own capabilities and to resolve ambiguities in observations arising from partial observability and noise. A common thread in all of this work is the use of a mentor to guide the exploration of the observer. Typically, guidance is achieved through some form of explicit communication between mentor and observer. A less direct form of teaching involves an observer extracting information from a mentor without the mentor making an explicit attempt to demonstrate a specific behavior of interest (Mitchell, Mahadevan, & Steinberg, 1985).

In this paper we develop an imitation model we call implicit imitation that allows an agent to accelerate the reinforcement learning process through the observation of an expert mentor (or mentors). The agent observes the state transitions induced by the mentor's actions and uses the information gleaned from these observations to update the estimated value of its own states and actions. We distinguish two settings in which implicit imitation can occur: homogeneous settings, in which the learning agent and the mentor have identical actions; and heterogeneous settings, in which their capabilities may differ.
In the homogeneous setting, the learner can use the observed mentor transitions directly to update its own estimated model of its actions, or to update its value function. In addition, a mentor can provide hints to the observer about the parts of the state space on which it may be worth focusing attention. The observer's attention to an area might take the form of additional exploration of the area or additional computation brought to bear on the agent's prior beliefs about the area. In the heterogeneous setting, similar benefits accrue, but with the potential for an agent to be misled by a mentor that possesses abilities different from its own. In this case, the learner needs some mechanism to detect such situations and to temper the influence of these observations.

We derive several new techniques to support implicit imitation that are largely independent of any specific reinforcement learning algorithm, though they are best suited for use with model-based methods. These include model extraction, augmented backups, feasibility testing, and k-step repair. We first describe implicit imitation in homogeneous domains, then we describe the extension to heterogeneous settings. We illustrate its effectiveness empirically by incorporating it into Moore and Atkeson's (1993) prioritized sweeping algorithm.

The implicit imitation model has several advantages over more direct forms of imitation and teaching. It does not require any agent to explicitly play the role of mentor or teacher. Observers learn simply by watching the behavior of other agents; if an observed "mentor" shares certain subtasks with the observer, the observed behavior can be incorporated (indirectly) by the observer to improve its estimate of its own value function. This is important because there are many situations in which an observer can learn from a mentor that is unwilling or unable to alter its behavior to accommodate the observer, or even to communicate information to it. For example, common communication protocols may be unavailable to agents designed by different developers (e.g., Internet agents); agents may find themselves in a competitive situation in which there is a disincentive to share information or skills; or there may simply be no incentive for one agent to provide information to another.

Another key advantage of our approach-which arises from formalizing imitation in the reinforcement learning context-is the fact that the observer is not constrained to directly imitate (i.e., duplicate the actions of) the mentor. The learner can decide whether such "explicit imitation" is worthwhile. Implicit imitation can thus be seen as blending the advantages of explicit teaching or explicit knowledge transfer with those of independent learning. In addition, because an agent learns by observation, it can exploit the existence of multiple mentors, essentially distributing its search. Finally, we do not assume that the observer knows the actual actions taken by the mentor, or that the mentor shares a reward function (or goals) with the observer. Again, this stands in sharp contrast with many existing models of teaching, imitation, and behavior learning by observation. While we make some strict assumptions in this paper with respect to observability, complete knowledge of reward functions, and the existence of mappings between agent state spaces, the model can be generalized in interesting ways. We elaborate on some of these generalizations near the end of the paper.
The remainder of the paper is structured as follows. We provide the necessary background on Markov decision processes and reinforcement learning for the development of our implicit imitation model in Section 2. In Section 3, we describe a general formal framework for the study of implicit imitation in reinforcement learning. Two specific instantiations of this framework are then developed. In Section 4, a model for homogeneous agents is developed. The model extraction technique is explained, and the augmented Bellman backup is proposed as a mechanism for incorporating observations into model-based reinforcement learning algorithms. Model confidence testing is then introduced to ensure that misleading information does not have undue influence on a learner's exploration policy. The use of mentor observations to focus attention on interesting parts of the state space is also introduced. Section 5 develops a model for heterogeneous agents. The model extends the homogeneous model through feasibility testing, a device by which a learner can detect whether the mentor's abilities are similar to its own, and k-step repair, whereby a learner can attempt to "mimic" the trajectory of a mentor that cannot be duplicated exactly. Both of these techniques prove crucial in heterogeneous settings. The effectiveness of these models is demonstrated on a number of carefully chosen navigation problems. Section 6 examines conditions under which implicit imitation will and will not work well. Section 7 describes several promising extensions to the model. Section 8 examines the implicit imitation model in the context of related work, and Section 9 considers future work before drawing some general conclusions about implicit imitation and the field of computational imitation more broadly.

Reinforcement Learning

Our aim is to provide a formal model of implicit imitation, whereby an agent can learn how to act optimally by combining its own experience with its observations of the behavior of an expert mentor. Before doing so, we describe in this section the standard model of reinforcement learning used in artificial intelligence. Our model will build on this single-agent view of learning how to act. We begin by reviewing Markov decision processes, which provide a model for sequential decision making under uncertainty, and then move on to describe reinforcement learning, with an emphasis on model-based methods.

Markov Decision Processes

Markov decision processes (MDPs) have proven very useful in modeling stochastic sequential decision problems, and have been widely used in decision-theoretic planning to model domains in which an agent's actions have uncertain effects, an agent's knowledge of the environment is uncertain, and the agent can have multiple, possibly conflicting objectives. In this section, we describe the basic MDP model and consider one classical solution procedure. We do not consider action costs in our formulation of MDPs, though these pose no special complications. Finally, we make the assumption of full observability. Partially observable MDPs (POMDPs) (Cassandra, Kaelbling, & Littman, 1994; Lovejoy, 1991; Smallwood & Sondik, 1973) are much more computationally demanding than fully observable MDPs. Our imitation model will be based on a fully observable model, though some of the generalizations of our model mentioned in the concluding section build on POMDPs. We refer the reader to Bertsekas (1987); Boutilier, Dean, and Hanks (1999); and Puterman (1994) for further material on MDPs.
An MDP can be viewed as a stochastic automaton in which actions induce transitions between states, and rewards are obtained depending on the states visited by an agent. Formally, an MDP can be defined as a tuple ⟨S, A, T, R⟩, where S is a finite set of states or possible worlds, A is a finite set of actions, T is a state transition function, and R is a reward function. The agent can control the state of the system to some extent by performing actions a ∈ A that cause state transitions, movement from the current state to some new state. Actions are stochastic in that the actual transition caused cannot generally be predicted with certainty. The transition function T : S × A → ∆(S) describes the effects of each action at each state. T(s_i, a) is a probability distribution over S; specifically, T(s_i, a)(s_j) is the probability of ending up in state s_j ∈ S when action a is performed at state s_i. We denote this quantity by Pr(s_i, a, s_j). We require that 0 ≤ Pr(s_i, a, s_j) ≤ 1 for all s_i, s_j, and that Σ_{s_j ∈ S} Pr(s_i, a, s_j) = 1 for all s_i. The components S, A, and T determine the dynamics of the system being controlled. The assumption that the system is fully observable means that the agent knows the true state at each time t (once that stage is reached), and its decisions can be based solely on this knowledge. Thus, uncertainty lies only in the prediction of an action's effects, not in determining its actual effect after its execution.

A (deterministic, stationary, Markovian) policy π : S → A describes a course of action to be adopted by an agent controlling the system. An agent adopting such a policy performs action π(s) whenever it finds itself in state s. Policies of this form are Markovian since the action choice at any state does not depend on the system history, and are stationary since action choice does not depend on the stage of the decision problem. For the problems we consider, optimal stationary Markovian policies always exist.

We assume a bounded, real-valued reward function R : S → ℜ. R(s) is the instantaneous reward an agent receives for occupying state s. A number of optimality criteria can be adopted to measure the value of a policy π, all measuring in some way the reward accumulated by an agent as it traverses the state space through the execution of π. In this work, we focus on discounted infinite-horizon problems: the current value of a reward received t stages in the future is discounted by some factor γ^t (0 ≤ γ < 1). This allows simpler computational methods to be used, as discounted total reward will be finite. Discounting can be justified on other (e.g., economic) grounds in many situations as well. The value function V^π : S → ℜ reflects the value of a policy π at any state s; this is simply the expected sum of discounted future rewards obtained by executing π beginning at s. A policy π* is optimal if, for all s ∈ S and all policies π, we have V^{π*}(s) ≥ V^π(s). We are guaranteed that such optimal (stationary) policies exist in our setting (Puterman, 1994). The (optimal) value of a state V*(s) is its value V^{π*}(s) under any optimal policy π*.

By solving an MDP, we refer to the problem of constructing an optimal policy. Value iteration (Bellman, 1957) is a simple iterative approximation algorithm for optimal policy construction. Given some arbitrary estimate V^0 of the true value function V*, we iteratively improve this estimate as follows:

V^n(s) = R(s) + γ max_{a∈A} Σ_{t∈S} Pr(s, a, t) V^{n−1}(t).  (1)

The computation of V^n(s) given V^{n−1} is known as a Bellman backup.
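For concreteness, a minimal Python sketch of value iteration via repeated Bellman backups (Equation 1) is shown below; the array layout and the stopping test (discussed next) are our own choices.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, eps=1e-4):
    """Value iteration for an MDP (a minimal sketch; assumes 0 < gamma < 1).

    P: (|A|, |S|, |S|) transition tensor, P[a, s, t] = Pr(s, a, t).
    R: (|S|,) reward vector.  Returns V approximating V* and a greedy policy.
    """
    nA, nS, _ = P.shape
    V = np.zeros(nS)
    while True:
        Q = R[None, :] + gamma * P @ V    # Q[a, s]: Bellman backup at s
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) <= eps * (1 - gamma) / (2 * gamma):
            return V_new, Q.argmax(axis=0)
        V = V_new
```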
The sequence of value functions V^n produced by value iteration converges linearly to V*. Each iteration of value iteration requires O(|S|²|A|) computation time, and the number of iterations is polynomial in |S|. For some finite n, the actions a that maximize the right-hand side of Equation 1 form an optimal policy, and V^n approximates its value. Various termination criteria can be applied; for example, one might terminate the algorithm when

||V^{i+1} − V^i|| ≤ ε(1 − γ) / 2γ

(where ||X|| = max{|x| : x ∈ X} denotes the supremum norm). This ensures that the resulting value function V^{i+1} is within ε/2 of the optimal function V* at any state, and that the induced policy is ε-optimal (i.e., its value is within ε of V*) (Puterman, 1994).

A concept that will be useful later is that of a Q-function. Given an arbitrary value function V, we define Q^V_a(s_i) as

Q^V_a(s_i) = R(s_i) + γ Σ_{s_j ∈ S} Pr(s_i, a, s_j) V(s_j).  (2)

Intuitively, Q^V_a(s) denotes the value of performing action a at state s and then acting in a manner that has value V (Watkins & Dayan, 1992). In particular, we define Q^*_a to be the Q-function defined with respect to V*, and Q^n_a to be the Q-function defined with respect to V^{n−1}. In this manner, we can rewrite Equation 1 as:

V^n(s) = max_{a∈A} Q^n_a(s).

We define an ergodic MDP as an MDP in which every state is reachable from any other state in a finite number of steps with non-zero probability.

Model-based Reinforcement Learning

One difficulty with the use of MDPs is that the construction of an optimal policy requires that the agent know the exact transition probabilities Pr and reward model R. In the specification of a decision problem, these requirements, especially the detailed specification of the domain's dynamics, can impose an undue burden on the agent's designer. Reinforcement learning can be viewed as solving an MDP in which the full details of the model, in particular Pr and R, are not known to the agent. Instead, the agent learns how to act optimally through experience with its environment. We provide a brief overview of reinforcement learning in this section (with an emphasis on model-based approaches). For further details, please refer to the texts of Sutton and Barto (1998) and Bertsekas and Tsitsiklis (1996), and the survey of Kaelbling, Littman, and Moore (1996).

In the general model, we assume that an agent is controlling an MDP ⟨S, A, T, R⟩ and initially knows its state and action spaces, S and A, but not the transition model T or reward function R. The agent acts in its environment, and at each stage of the process makes a "transition" ⟨s, a, r, t⟩; that is, it takes action a at state s, receives reward r, and moves to state t. Based on repeated experiences of this type, it can determine an optimal policy in one of two ways: (a) in model-based reinforcement learning, these experiences can be used to learn the true nature of T and R, and the MDP can be solved using standard methods (e.g., value iteration); or (b) in model-free reinforcement learning, these experiences can be used to directly update an estimate of the optimal value function or Q-function.

Probably the simplest model-based reinforcement learning scheme is the certainty equivalence approach. Intuitively, a learning agent is assumed to have some current estimated transition model T of its environment, consisting of estimated probabilities Pr(s, a, t), and an estimated reward model R(s). With each experience ⟨s, a, r, t⟩ the agent updates its estimated models, solves the estimated MDP M to obtain a policy π that would be optimal if its estimated models were correct, and acts according to that policy.
To make the certainty equivalence approach precise, a specific form of estimated model and update procedure must be adopted. A common approach is to use the empirical distribution of observed state transitions and rewards as the estimated model. For instance, if action a has been attempted C(s, a) times at state s, and on C(s, a, t) of those occasions state t has been reached, then the estimate is Pr(s, a, t) = C(s, a, t)/C(s, a). If C(s, a) = 0, some prior estimate is used (e.g., one might assume all state transitions are equiprobable).

A Bayesian approach (Dearden, Friedman, & Andre, 1999) uses an explicit prior distribution over the parameters of the transition distribution Pr(s, a, ·), and then updates these with each experienced transition. For instance, we might assume a Dirichlet (generalized Beta) distribution (DeGroot, 1975) with parameters n(s, a, t) associated with each possible successor state t. The Dirichlet parameters are equal to the experience-based counts C(s, a, t) plus a "prior count" P(s, a, t) representing the agent's prior beliefs about the distribution (i.e., n(s, a, t) = C(s, a, t) + P(s, a, t)). The expected transition probability Pr(s, a, t) is then n(s, a, t) / Σ_{t′} n(s, a, t′). Assuming parameter independence, the MDP M can be solved using these expected values. Furthermore, the model can be updated with ease, simply increasing n(s, a, t) by one with each observation ⟨s, a, r, t⟩. This model has the advantage over a counter-based approach of allowing a flexible prior model, and generally does not assign probability zero to unobserved transitions. We adopt this Bayesian perspective in our imitation model.

One difficulty with the certainty equivalence approach is the computational burden of re-solving an MDP M with each update of the models T and R (i.e., with each experience). One could circumvent this to some extent by batching experiences and updating (and re-solving) the model only periodically. Alternatively, one could use computational effort judiciously, applying Bellman backups only at those states whose values (or Q-values) are likely to change the most given a change in the model. Moore and Atkeson's (1993) prioritized sweeping algorithm does just this. When T is updated by changing Pr(s, a, t), a Bellman backup is applied at s to update its estimated value V(s), as well as the Q-value Q(s, a). Suppose the magnitude of the change in V(s) is given by ∆V(s). For any predecessor w, the Q-values Q(w, a′)-hence the values V(w)-can change if Pr(w, a′, s) > 0. The magnitude of the change is bounded by Pr(w, a′, s)∆V(s). All such predecessors w of s are placed in a priority queue, with Pr(w, a′, s)∆V(s) serving as the priority. A fixed number of Bellman backups are applied to states in the order in which they appear in the queue. With each backup, any change in value can cause new predecessors to be inserted into the queue. In this way, computational effort is focused on those states where a Bellman backup has the greatest impact due to the model change. Furthermore, the backups are applied only to a subset of states, and are generally applied only a fixed number of times. By way of contrast, in the certainty equivalence approach, backups are applied until convergence. Prioritized sweeping can thus be viewed as a specific form of asynchronous value iteration, and has appealing computational properties (Moore & Atkeson, 1993).
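A minimal Python sketch of the prioritized-sweeping update loop is given below; the data structures and names are our own simplifications (e.g., the heap may hold stale duplicate entries, which a production implementation would handle).

```python
import heapq

def sweep(pq, V, Q, P, R, gamma, predecessors, n_backups):
    """Apply up to n_backups prioritized Bellman backups (illustrative).

    pq: heap of (-priority, state).  P[a][s]: dict successor -> probability.
    Q[s]: dict action -> Q-value.  predecessors[s]: list of (w, a) pairs
    with P[a][w].get(s, 0) > 0.
    """
    for _ in range(n_backups):
        if not pq:
            break
        _, s = heapq.heappop(pq)
        old_v = V[s]
        # Bellman backup at s under the current estimated model.
        for a in Q[s]:
            Q[s][a] = R[s] + gamma * sum(p * V[t] for t, p in P[a][s].items())
        V[s] = max(Q[s].values())
        delta = abs(V[s] - old_v)
        # Queue predecessors, prioritized by their bound on the value change.
        for w, a in predecessors[s]:
            pri = P[a][w].get(s, 0.0) * delta
            if pri > 1e-6:
                heapq.heappush(pq, (-pri, w))
```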
Under certainty equivalence, the agent acts as if the current approximation of the model is correct, even though the model is likely to be inaccurate early in the learning process. If the optimal policy for this inaccurate model prevents the agent from exploring the transitions that form part of the optimal policy for the true model, then the agent will fail to find the optimal policy. For this reason, explicit exploration policies are invariably used to ensure that each action is tried at each state sufficiently often. By acting randomly (assuming an ergodic MDP), an agent is assured of sampling each action at each state infinitely often in the limit. Unfortunately, the actions of such an agent will fail to exploit (in fact, will be completely uninfluenced by) its knowledge of the optimal policy. This exploration-exploitation tradeoff refers to the tension between trying new actions in order to find out more about the environment and executing actions believed to be optimal on the basis of the current estimated model.

The most common method for exploration is the ε-greedy method, in which the agent chooses a random action a fraction ε of the time, where 0 < ε < 1. Typically, ε is decayed over time to increase the agent's exploitation of its knowledge. In the Boltzmann approach, each action is selected with a probability proportional to its value:

Pr(a) = e^{Q(s,a)/τ} / Σ_{a′∈A} e^{Q(s,a′)/τ}.

The proportionality can be adjusted nonlinearly with the temperature parameter τ. As τ → 0, the probability of selecting the action with the highest value tends to 1. Typically, τ is started high so that actions are explored randomly during the early stages of learning. As the agent gains knowledge about the effects of its actions and the value of these effects, the parameter τ is decayed so that the agent spends more time exploiting actions known to be valuable and less time randomly exploring actions. More sophisticated methods attempt to use information about model confidence and value magnitudes to plan a utility-maximizing exploration plan. An early approximation of this scheme can be found in the interval estimation method (Kaelbling, 1993). Bayesian methods have also been used to calculate the expected value of information to be gained from exploration (Meuleau & Bourgine, 1999; Dearden et al., 1999).

We concentrate in this paper on model-based approaches to reinforcement learning. However, we should point out that model-free methods-those in which an estimate of the optimal value function or Q-function is learned directly, without recourse to a domain model-have attracted much attention. For example, TD-methods (Sutton, 1988) and Q-learning (Watkins & Dayan, 1992) have both proven to be among the more popular methods for reinforcement learning. Our methods can be modified to deal with model-free approaches, as we discuss in the concluding section. We also focus on so-called table-based (or explicit) representations of models and value functions. When state and action spaces are large, table-based approaches become unwieldy, and the associated algorithms are generally intractable. In these situations, approximators are often used to estimate the values of states. We will discuss ways in which our techniques can be extended to allow for function approximation in the concluding section.
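The two exploration rules just described can be sketched in a few lines of Python; the function names are ours.

```python
import numpy as np

def boltzmann_action(q_values, tau):
    """Sample an action with probability proportional to exp(Q/tau)."""
    z = q_values / tau
    z = z - z.max()                   # subtract max for numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return int(np.random.choice(len(q_values), p=p))

def epsilon_greedy_action(q_values, eps):
    """With probability eps act randomly, otherwise act greedily."""
    if np.random.rand() < eps:
        return int(np.random.randint(len(q_values)))
    return int(np.argmax(q_values))
```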
A Formal Framework for Implicit Imitation To model the influence that a mentor agent can have on the decision process or the learning behavior of an observer, we must extend the single-agent decision model of MDPs to account for the actions and objectives of multiple agents. In this section, we introduce a formal framework for studying implicit imitation. We begin by introducing a general model for stochastic games (Shapley, 1953; Myerson, 1991), and then impose various assumptions and restrictions on this general model that allow us to focus on the key aspects of implicit imitation. We note that the framework proposed here is useful for the study of other forms of knowledge transfer in multiagent systems, and we briefly point out various extensions of the framework that would permit implicit imitation, and other forms of knowledge transfer, in more general settings. Non-Interacting Stochastic Games Stochastic games can be viewed as a multiagent extension of Markov decision processes. Though Shapley's (1953) original formulation of stochastic games involved a zero-sum (fully competitive) assumption, various generalizations of the model have been proposed allowing for arbitrary relationships between agents' utility functions (Myerson, 1991). Formally, an n-agent stochastic game ⟨S, {A_i : i ≤ n}, T, {R_i : i ≤ n}⟩ comprises a set of n agents (1 ≤ i ≤ n), a set of states S, a set of actions A_i for each agent i, a state transition function T, and a reward function R_i for each agent i. Unlike an MDP, individual agent actions do not determine state transitions; rather it is the joint action taken by the collection of agents that determines how the system evolves at any point in time. Let A = A_1 × · · · × A_n be the set of joint actions; then T : S × A → Δ(S), with T(s_i, a)(s_j) = Pr(s_i, a, s_j) denoting the probability of ending up in state s_j ∈ S when joint action a is performed at state s_i. For convenience, we introduce the notation A_{-i} to denote the set of joint actions A_1 × · · · × A_{i-1} × A_{i+1} × · · · × A_n involving all agents except i. We use a_i · a_{-i} to denote the (full) joint action obtained by conjoining a_i ∈ A_i with a_{-i} ∈ A_{-i}. Because the interests of the individual agents may be at odds, strategic reasoning and notions of equilibrium are generally involved in the solution of stochastic games. Because our aim is to study how a reinforcement learning agent might learn by observing the behavior of an expert mentor, we wish to restrict the model in such a way that strategic interactions need not be considered: we want to focus on settings in which the actions of the observer and the mentor do not interact. Furthermore, we want to assume that the reward functions of the agents do not conflict in a way that requires strategic reasoning. We define noninteracting stochastic games by appealing to the notion of an agent projection function, which is used to extract an agent's local state from the underlying game. In these games, an agent's local state determines all aspects of the global state that are relevant to its decision making process, while the projection function determines which global states are identical from an agent's local perspective. Formally, for each agent i, we assume a local state space S_i and a projection function L_i : S → S_i. For any s, t ∈ S, we write s ∼_i t iff L_i(s) = L_i(t).
This equivalence relation partitions S into a set of equivalence classes such that the elements within a specific class (i.e., L_i^{-1}(s) for some s ∈ S_i) need not be distinguished by agent i for the purposes of individual decision making. We say a stochastic game is noninteracting if there exists a local state space S_i and projection function L_i for each agent i such that: (1) for all s ∈ S, t_i ∈ S_i, a_i ∈ A_i, and all a_{-i}, b_{-i} ∈ A_{-i}, Σ_{t∈L_i^{-1}(t_i)} Pr(s, a_i · a_{-i}, t) = Σ_{t∈L_i^{-1}(t_i)} Pr(s', a_i · b_{-i}, t) for every s' ∼_i s; and (2) R_i(s) = R_i(t) whenever s ∼_i t. Intuitively, condition 1 above imposes two distinct requirements on the game from the perspective of agent i. First, if we ignore the existence of other agents, it provides a notion of state space abstraction suitable for agent i. Specifically, L_i clusters together states s ∈ S only if each state in an equivalence class has identical dynamics with respect to the abstraction induced by L_i. This type of abstraction is a form of bisimulation of the type studied in automaton minimization (Hartmanis & Stearns, 1966; Lee & Yannakakis, 1992) and automatic abstraction methods developed for MDPs (Dearden & Boutilier, 1997; Dean & Givan, 1997). It is not hard to show-ignoring the presence of other agents-that the underlying system is Markovian with respect to the abstraction (or equivalently, w.r.t. S_i) if condition 1 is met. The quantification over all a_{-i} imposes a strong noninteraction requirement, namely, that the dynamics of the game from the perspective of agent i is independent of the strategies of the other agents. Condition 2 simply requires that all states within a given equivalence class for agent i have the same reward for agent i. This means that no states within a class need to be distinguished-each local state can be viewed as atomic. A noninteracting game induces an MDP M_i for each agent i, where Pr_i is given by condition (1) above. Specifically, for each s_i, t_i ∈ S_i: Pr_i(s_i, a_i, t_i) = Σ_{t∈L_i^{-1}(t_i)} Pr(s, a_i · a_{-i}, t), where s is any state in L_i^{-1}(s_i) and a_{-i} is any element of A_{-i}. Let π_i : S_i → A_i be an optimal policy for M_i. We can extend this to a strategy π_i^G : S → A_i for the underlying stochastic game by simply applying π_i(s_i) to every state s ∈ S such that L_i(s) = s_i. The following proposition shows that the term "noninteracting" indeed provides an appropriate description of such a game. Proposition 1 Let G be a noninteracting stochastic game, M_i the induced MDP for agent i, and π_i some optimal policy for M_i. The strategy π_i^G extending π_i to G is dominant for agent i. Thus each agent can solve the noninteracting game by abstracting away irrelevant aspects of the state space, ignoring other agent actions, and solving its "personal" MDP M_i. Given an arbitrary stochastic game, it can generally be quite difficult to discover whether it is noninteracting, requiring the construction of appropriate projection functions. In what follows, we will simply assume that the underlying multiagent system is a noninteracting game. Rather than specifying the game and projection functions, we will specify the individual MDPs M_i themselves. The noninteracting game induced by the set of individual MDPs is simply the "cross product" of the individual MDPs. Such a view is often quite natural. Consider the example of three robots moving in some two-dimensional office domain. If we are able to neglect the possibility of interaction-for example, if the robots can occupy the same 2-D position (at a suitable level of granularity) and do not require the same resources to achieve their tasks-then we might specify an individual MDP for each robot. The local state might be determined by the robot's x, y-position, orientation, and the status of its own tasks.
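As a purely illustrative reading of the induced MDP, the sketch below builds Pr_i for one agent from a joint transition function and a projection function. The function name and interfaces (joint_pr, L) are our own assumptions; the point is only that, in a noninteracting game, the choice of representative state and of the other agents' joint action does not matter.

```python
from collections import defaultdict

def induced_mdp(states, my_actions, other_joint_actions, joint_pr, L):
    """Build the induced local model Pr_i(s_i, a_i, t_i) for one agent.

    joint_pr(s, a_i, a_other, t): joint transition probability of the game;
    L(s): the agent's projection onto its local state space. In a
    noninteracting game the sum below is independent of the representative
    chosen from L^{-1}(s_i) and of a_other, so arbitrary picks suffice.
    """
    members = defaultdict(list)
    for s in states:
        members[L(s)].append(s)

    a_other = other_joint_actions[0]       # any fixed joint action of the others
    pr_i = defaultdict(float)
    for s_i, cls in members.items():
        s = cls[0]                         # any representative of L^{-1}(s_i)
        for a_i in my_actions:
            for t in states:
                pr_i[(s_i, a_i, L(t))] += joint_pr(s, a_i, a_other, t)
    return pr_i
```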
The global state space would be the cross product S_1 × S_2 × S_3 of the local spaces. The individual components of any joint action would affect only the local state, and each agent would care (through its reward function R_i) only about its local state. We note that the projection function L_i should not be viewed as equivalent to an observation function. We do not assume that agent i can only distinguish elements of S_i-in fact, observations of other agents' states will be crucial for imitation. Rather the existence of L_i simply means that, from the point of view of decision making with a known model, the agent need not worry about distinctions other than those made by L_i. Assuming no computational limitations, an agent i need only solve M_i, but may use observations of other agents in order to improve its knowledge about M_i's dynamics. Implicit Imitation Despite the very independent nature of the agent subprocesses in a noninteracting multiagent system, there are circumstances in which the behavior of one agent may be relevant to another. To keep the discussion simple, we assume the existence of an expert mentor agent m, which is implementing some stationary (and presumably optimal) policy π_m over its local MDP M_m = ⟨S_m, A_m, Pr_m, R_m⟩. We also assume a second agent o, the observer, with its own local MDP M_o. While nothing about the mentor's behavior is relevant to the observer if it knows its own MDP (and can solve it without computational difficulty), the situation can be quite different if o is a reinforcement learner without complete knowledge of the model M_o. It may well be that the observed behavior of the mentor provides valuable information to the observer in its quest to learn how to act optimally within M_o. To take an extreme case, if the mentor's MDP is identical to the observer's, and the mentor is an expert (in the sense of acting optimally), then the behavior of the mentor indicates exactly what the observer should do. Even if the mentor is not acting optimally, or if the mentor and observer have different reward functions, mentor state transitions observed by the learner can provide valuable information about the dynamics of the domain. Thus we see that when one agent is learning how to act, the behavior of another can potentially be relevant to the learner, even if the underlying multiagent system is noninteracting. Similar remarks, of course, apply to the case where the observer knows the MDP M_o, but computational restrictions make solving this difficult-observed mentor transitions might provide valuable information about where to focus computational effort. The main motivation underlying our model of implicit imitation is that the behavior of an expert mentor can provide hints as to appropriate courses of action for a reinforcement learning agent. Intuitively, implicit imitation is a mechanism by which a learning agent attempts to incorporate the observed experience of an expert mentor agent into its learning process. Like more classical forms of learning by imitation, the learner considers the effects of the mentor's action (or action sequence) in its own context. Unlike direct imitation, however, we do not assume that the learner must "physically" attempt to duplicate the mentor's behavior, nor do we assume that the mentor's behavior is necessarily appropriate for the observer. Instead, the influence of the mentor is on the agent's transition model and its estimate of the value of various states and actions. We elaborate on these points below.
In what follows, we assume a mentor m and associated MDP M_m, and a learner or observer o and associated MDP M_o, as described above. These MDPs are fully observable. We focus on the reinforcement learning problem faced by agent o. The extension to multiple mentors is straightforward and will be discussed below, but for clarity we assume only one mentor in our description of the abstract framework. It is clear that certain conditions must be met for the observer to extract useful information from the mentor. We list a number of assumptions that we make at different points in the development of our model. Observability: We must assume that the learner can observe certain aspects of the mentor's behavior. In this work, we assume that the state of the mentor's MDP is fully observable to the learner. Equivalently, we interpret this as full observability of the underlying noninteracting game, together with knowledge of the mentor's projection function L_m. A more general partially observable model would require the specification of an observation or signal set Z and an observation function O : S_o × S_m → Δ(Z), where O(s_o, s_m)(z) denotes the probability with which the observer obtains signal z when the local states of the observer and mentor are s_o and s_m, respectively. We do not pursue such a model here. It is important to note that we do not assume that the observer has access to the action taken by m at any point in time. Since actions are stochastic, the state (even if fully observable) that results from the mentor invoking a specific control signal is generally insufficient to determine that signal. Thus it seems much more reasonable to assume that states (and transitions) are observable than the actions that gave rise to them. Analogy: If the observer and the mentor are acting in different local state spaces, it is clear that observations made of the mentor's state transitions can offer no useful information to the observer unless there is some relationship between the two state spaces. There are several ways in which this relationship can be specified. Dautenhahn and Nehaniv (1998) use a homomorphism to define the relationship between mentor and observer for a specific family of trajectories (see Section 8 for further discussion). A slightly different notion might involve the use of some analogical mapping h : S_m → S_o such that an observed state transition s → t provides some information to the observer about the dynamics or value of state h(s) ∈ S_o. In certain circumstances, we might require the mapping h to be homomorphic with respect to Pr(·, a, ·) (for some, or all, a), and perhaps even with respect to R. We discuss these issues in further detail below. In order to simplify our model and avoid undue attention to the (admittedly important) topic of constructing suitable analogical mappings, we will simply assume that the mentor and the observer have "identical" state spaces; that is, S_m and S_o are in some sense isomorphic. The precise sense in which the spaces are isomorphic-or in some cases, presumed to be isomorphic until proven otherwise-is elaborated below when we discuss the relationship between agent abilities. Thus from this point we simply refer to the state space S without distinguishing the mentor's local space S_m from the observer's S_o. Abilities: Even with a mapping between states, observations of a mentor's state transitions only tell the observer something about the mentor's abilities, not its own.
We must assume that the observer can in some way "duplicate" the actions taken by the mentor to induce analogous transitions in its own local state space. In other words, there must be some presumption that the mentor and the observer have similar abilities. It is in this sense that the analogical mapping between state spaces can be taken to be a homomorphism. Specifically, we might assume that the mentor and the observer have the same actions available to them (i.e., A_m = A_o = A) and that h : S_m → S_o is homomorphic with respect to Pr(·, a, ·) for all a ∈ A. This requirement can be weakened substantially, without diminishing its utility, by requiring only that the observer be able to implement the actions actually taken by the mentor at a given state s. Finally, we might have an observer that assumes that it can duplicate the actions taken by the mentor until it finds evidence to the contrary. In this case, there is a presumed homomorphism between the state spaces. In what follows, we will distinguish between implicit imitation in homogeneous action settings-domains in which the analogical mapping is indeed homomorphic-and heterogeneous action settings-where the mapping may not be a homomorphism. There are more general ways of defining similarity of ability, for example, by assuming that the observer may be able to move through state space in a similar fashion to the mentor without following the same trajectories (Nehaniv & Dautenhahn, 1998). For instance, the mentor may have a way of moving directly between key locations in state space, while the observer may be able to move between analogous locations in a less direct fashion. In such a case, the analogy between states may not be determined by single actions, but rather by sequences of actions or local policies. We will suggest ways for dealing with restricted forms of analogy of this type in Section 5. Objectives: Even when the observer and mentor have similar or identical abilities, the value to the observer of the information gleaned from the mentor may depend on the actual policy being implemented by the mentor. We might suppose that the more closely related a mentor's policy is to the optimal policy of the observer, the more useful the information will be. Thus, to some extent, we expect that the more closely aligned the objectives of the mentor and the observer are, the more valuable the guidance provided by the mentor. Unlike in existing teaching models, we do not suppose that the mentor is making any explicit efforts to instruct the observer. And because their objectives may not be identical, we do not force the observer to (attempt to) explicitly imitate the behavior of the mentor. In general, we will make no explicit assumptions about the relationship between the objectives of the mentor and the observer. However, we will see that, to some extent, the "closer" they are, the more utility can be derived from implicit imitation. Finally, we remark on an important assumption we make throughout the remainder of this paper: the observer knows its reward function R_o; that is, for each state s, the observer can evaluate R_o(s) without having visited state s. This view is consistent with the view of reinforcement learning as "automatic programming." A user may easily specify a reward function (e.g., in the form of a set of predicates that can be evaluated at any state) prior to learning. It may be more difficult to specify a domain model or optimal policy.
In such a setting, the only unknown component of the MDP M_o is the transition function Pr_o. We believe this approach to reinforcement learning is, in fact, more common in practice than the approach in which the reward function must be sampled. To reiterate, our aim is to describe a mechanism by which the observer can accelerate its learning; but we emphasize our position that implicit imitation-in contrast to explicit imitation-is not merely replicating the behaviors (or state trajectories) observed in another agent, nor even attempting to reach "similar states". We believe the agent must learn about its own capabilities and adapt the information contained in observed behavior to these. Agents must also explore the appropriate application (if any) of observed behaviors, integrating these with their own, as appropriate, to achieve their own ends. We therefore see imitation as an interactive process in which the behavior of one agent is used to guide the learning of another. Given this setting, we can list possible ways in which an observer and a mentor can (and cannot) interact, contrasting along the way our perspective and assumptions with those of existing models in the literature. (We describe other models in more detail in Section 8.) First, the observer could attempt to directly infer a policy from its observations of mentor state-action pairs. This model has a conceptual simplicity and intuitive appeal, and forms the basis of the behavioral cloning paradigm (Sammut, Hurst, Kedzier, & Michie, 1992; Urbancic & Bratko, 1994). However, it assumes that the observer and mentor share the same reward function and action capabilities. It also assumes that complete and unambiguous trajectories (including action choices) can be observed. A related approach attempts to deduce constraints on the value function from the inferred action preferences of the mentor agent (Utgoff & Clouse, 1991; Šuc & Bratko, 1997). Again, however, this approach assumes congruity of objectives. Our model is also distinct from models of explicit teaching (Lin, 1992; Whitehead, 1991b): we do not assume that the mentor has any incentive to move through its environment in a way that explicitly guides the learner to explore its own environment and action space more effectively. Instead of trying to directly learn a policy, an observer could attempt to use observed state transitions of other agents to improve its own environment model Pr_o(s, a, t). With a more accurate model and its own reward function, the observer could calculate more accurate values for states. The state values could then be used to guide the agent towards distant rewards and reduce the need for random exploration. This insight forms the core of our implicit imitation model. This approach has not been developed in the literature, and is appropriate under the conditions listed above, specifically, under conditions where the mentor's actions are unobservable, and the mentor and observer have different reward functions or objectives. Thus, this approach is applicable under more general conditions than many existing models of imitation learning and teaching. In addition to model information, mentors may also communicate information about the relevance or irrelevance of regions of the state space for certain classes of reward functions. An observer can use the set of states visited by the mentor as heuristic guidance about where to perform backup computations in the state space.
In the next two sections, we develop specific algorithms from our insights about how agents can use observations of others to both improve their own models and assess the relevance of regions within their state spaces. We first focus on the homogeneous action case, then extend the model to deal with heterogeneous actions. Implicit Imitation in Homogeneous Settings We begin by describing implicit imitation in homogeneous action settings-the extension to heterogeneous settings will build on the insights developed in this section. We develop a technique called implicit imitation through which observations of a mentor can be used to accelerate reinforcement learning. First, we define the homogeneous setting. Then we develop the implicit imitation algorithm. Finally, we demonstrate how implicit imitation works on a number of simple problems designed to illustrate the role of the various mechanisms we describe. Homogeneous Actions The homogeneous action setting is defined as follows. We assume a single mentor m and a single observer o, with MDPs M_m = ⟨S, A_m, Pr_m, R_m⟩ and M_o = ⟨S, A_o, Pr_o, R_o⟩, respectively. Note that the agents share the same state space (more precisely, we assume a trivial isomorphic mapping that allows us to identify their local states). We also assume that the mentor is executing some stationary policy π_m. We will often treat this policy as deterministic, but most of our remarks apply to stochastic policies as well. Let the support set Supp(π_m, s) for π_m at state s be the set of actions a ∈ A_m accorded nonzero probability by π_m at state s. We assume that the observer has the same abilities as the mentor in the following sense: ∀s, t ∈ S, a_m ∈ Supp(π_m, s), there exists an action a_o ∈ A_o such that Pr_o(s, a_o, t) = Pr_m(s, a_m, t). In other words, the observer is able to duplicate (in the sense of inducing the same distribution over successor states) the actual behavior of the mentor; or equivalently, the agents' local state spaces are isomorphic with respect to the actions actually taken by the mentor at the subset of states where those actions might be taken. This is much weaker than requiring a full homomorphism from S_m to S_o. Of course, the existence of a full homomorphism is sufficient from our perspective; but our results do not require this. The Implicit Imitation Algorithm The implicit imitation algorithm can be understood in terms of its component processes. First, we extract action models from a mentor. Then we integrate this information into the observer's own value estimates by augmenting the usual Bellman backup with mentor action models. A confidence testing procedure ensures that we only use this augmented model when the observer's model of the mentor is more reliable than the observer's model of its own behavior. We also extract occupancy information from the observations of mentor trajectories in order to focus the observer's computational effort (to some extent) in specific parts of the state space. Finally, we augment our action selection process to choose actions that will explore high-value regions revealed by the mentor. The remainder of this section expands upon each of these processes and how they fit together. Model Extraction The information available to the observer in its quest to learn how to act optimally can be divided into two categories. First, with each action it takes, it receives an experience tuple ⟨s, a, r, t⟩; in fact, we will often ignore the sampled reward r, since we assume the reward function R is known in advance.
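To make model extraction concrete, here is a minimal sketch (ours, with invented names) of how an observer might maintain the empirical estimate of the mentor's Markov chain from observed transitions, with an optional Dirichlet prior count.

```python
from collections import defaultdict

class MentorChainEstimator:
    """Estimate Pr_m(s, t) from observed mentor transitions s -> t."""
    def __init__(self, n_states, prior_count=0.0):
        self.n_states = n_states
        self.prior = prior_count             # Dirichlet prior per successor
        self.n = defaultdict(float)          # (s, t) -> observed count
        self.total = defaultdict(float)      # s -> total transitions from s

    def observe(self, s, t):
        self.n[(s, t)] += 1.0
        self.total[s] += 1.0

    def prob(self, s, t):
        den = self.total[s] + self.prior * self.n_states
        if den == 0.0:
            return 1.0 / self.n_states       # no data, no prior: uniform
        return (self.n[(s, t)] + self.prior) / den
```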
As in standard model-based learning, each such experience can be used to update its own transition model Pr_o(s, a, ·). Second, with each mentor transition, the observer obtains an experience tuple ⟨s, t⟩. Note again that the observer does not have direct access to the action taken by the mentor, only the induced state transition. Assume the mentor is implementing a deterministic, stationary policy π_m, with π_m(s) denoting the mentor's choice of action at state s. This policy induces a Markov chain Pr_m(·, ·) over S, with Pr_m(s, t) = Pr(s, π_m(s), t) denoting the probability of a transition from s to t. Since the learner observes the mentor's state transitions, it can construct an estimate Pr_m of this chain: Pr_m(s, t) is simply estimated by the relative observed frequency of mentor transitions s → t (w.r.t. all transitions taken from s). If the observer has some prior over the possible mentor transitions, standard Bayesian update techniques can be used instead. We use the term model extraction for this process of estimating the mentor's Markov chain. Augmented Bellman Backups Suppose the observer has constructed an estimate Pr_m of the mentor's Markov chain. By the homogeneity assumption, the action π_m(s) can be replicated exactly by the observer at state s. Thus, the policy π_m can, in principle, be duplicated by the observer (were it able to identify the actual actions used). As such, we can define the value of the mentor's policy from the observer's perspective: V_m(s) = R_o(s) + γ Σ_{t∈S} Pr_m(s, t)V_m(t). (6) Notice that Equation 6 uses the mentor's dynamics but the observer's reward function. Letting V denote the optimal (observer's) value function, clearly V(s) ≥ V_m(s), so V_m provides a lower bound on the observer's value function. More importantly, the terms making up V_m(s) can be integrated directly into the Bellman equation for the observer's MDP, forming the augmented Bellman equation: V(s) = R_o(s) + γ max{ max_{a∈A_o} Σ_{t∈S} Pr_o(s, a, t)V(t), Σ_{t∈S} Pr_m(s, t)V(t) }. (7) This is the usual Bellman equation with an extra term added, namely, the second summation Σ_{t∈S} Pr_m(s, t)V(t), denoting the expected value of duplicating the mentor's action a_m. Since this (unknown) action is identical to one of the observer's actions, the term is redundant and the augmented value equation is valid. Of course, the observer using the augmented backup operation must rely on estimates of these quantities. If the observer exploration policy ensures that each state is visited infinitely often, the estimates of the Pr_o terms will converge to their true values. If the mentor's policy is ergodic over state space S, then Pr_m will also converge to its true value. If the mentor's policy is restricted to a subset of states S' ⊆ S (those forming the basis of its Markov chain), then the estimates of Pr_m for the subset will converge correctly with respect to S' if the chain is ergodic. The states in S − S' will remain unvisited and the estimates will remain uninformed by data. Since the mentor's policy is not under the control of the observer, there is no way for the observer to influence the distribution of samples attained for Pr_m. An observer must therefore be able to reason about the accuracy of the estimated model Pr_m for any s and restrict the application of the augmented equation to those states where Pr_m is known with sufficient accuracy. While Pr_m cannot be used indiscriminately, we argue that it can be highly informative early in the learning process.
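A direct transcription of the augmented backup (Equation 7) might look like the following sketch; the function name and the callable interfaces pr_o(s, a, t) and pr_m(s, t) are our own assumptions, not prescribed by the paper.

```python
def augmented_backup(s, states, actions, pr_o, pr_m, R_o, V, gamma):
    """Augmented Bellman backup: max over own actions and the mentor's chain.

    pr_o(s, a, t) and pr_m(s, t) are estimated transition probabilities;
    R_o is the observer's (known) reward function; V is the value table.
    """
    own_best = max(
        sum(pr_o(s, a, t) * V[t] for t in states) for a in actions)
    mentor = sum(pr_m(s, t) * V[t] for t in states)
    return R_o(s) + gamma * max(own_best, mentor)
```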
Assuming that the mentor is pursuing an optimal policy (or at least is behaving in some way so that it tends to visit certain states more frequently), there will be many states for which the observer has much more accurate estimates of Pr_m(s, t) than it does for Pr_o(s, a, t) for any specific a. Since the observer is learning, it must explore both its state space-causing less frequent visits to s-and its action space-thus spreading its experience at s over all actions a. This generally ensures that the sample size upon which Pr_m is based is greater than that for Pr_o for any action that forms part of the mentor's policy. Apart from being more accurate, the use of Pr_m(s, t) can often give more informed value estimates at state s, since prior action models are often "flat" or uniform, and only become distinguishable at a given state when the observer has sufficient experience at state s. We note that the reasoning above holds even if the mentor is implementing a (stationary) stochastic policy (since the expected value of a stochastic policy for a fully-observable MDP cannot be greater than that of an optimal deterministic policy). While the "direction" offered by a mentor implementing a deterministic policy tends to be more focused, empirically we have found that mentors offer broader guidance in moderately stochastic environments or when they implement stochastic policies, since they tend to visit more of the state space. We note that the extension to multiple mentors is straightforward-each mentor model can be incorporated into the augmented Bellman equation without difficulty. Model Confidence When the mentor's Markov chain is not ergodic, or if the mixing rate (i.e., how quickly the chain approaches its stationary distribution) is sufficiently low, the mentor may visit a certain state s relatively infrequently. The estimated mentor transition model corresponding to a state that is rarely (or never) visited by the mentor may provide a very misleading estimate-based on the small sample or the prior for the mentor's chain-of the value of the mentor's (unknown) action at s; and since the mentor's policy is not under the control of the observer, this misleading value may persist for an extended period. Since the augmented Bellman equation does not consider the relative reliability of the mentor and observer models, the value of such a state s may be overestimated (underestimates are not problematic, since the augmented Bellman equation then simply reduces to the usual Bellman equation); that is, the observer can be tricked into overvaluing the mentor's (unknown) action, and consequently overestimating the value of state s. To overcome this, we incorporate an estimate of model confidence into our augmented backups. For both the mentor's Markov chain and the observer's action transitions, we assume a Dirichlet prior over the parameters of each of these multinomial distributions (DeGroot, 1975). These reflect the observer's initial uncertainty about the possible transition probabilities. From sample counts of mentor and observer transitions, we update these distributions. With this information, we could attempt to perform optimal Bayesian estimation of the value function; but when the sample counts are small (and normal approximations are not appropriate), there is no simple, closed form expression for the resultant distributions over values. We could attempt to employ sampling methods, but in the
interest of simplicity we have employed an approximate method for combining information sources inspired by Kaelbling's (1993) interval estimation method. Let V denote the current estimated augmented value function, and Pr_o and Pr_m denote the estimated observer and mentor transition models. We let σ²_o and σ²_m denote the variance in these model parameters. An augmented Bellman backup with respect to V using confidence testing proceeds as follows. We first compute the observer's optimal action a*_o based on the estimated augmented values for each of the observer's actions. Let Q(a*_o, s) = V_o(s) denote its value. For the best action, we use the model uncertainty encoded by the Dirichlet distribution to construct a lower bound V⁻_o(s) on the value of the state to the observer using the model (at state s) derived from its own behavior (i.e., ignoring its observations of the mentor). We employ transition counts n_o(s, a, t) and n_m(s, t) to denote the number of times the observer has made the transition from state s to state t when action a was performed, and the number of times the mentor was observed making the transition from state s to t, respectively. From these counts, we estimate the uncertainty in the model using the variance of a Dirichlet distribution. Let α = n_o(s, a, t) and β = Σ_{t'∈S−{t}} n_o(s, a, t'). Then the model variance is: σ²(s, a, t) = αβ / ((α + β)²(α + β + 1)). The variance in the Q-value of an action due to the uncertainty in the local model can be found by simple application of the rule for combining linear combinations of variances, Var(cX + dY) = c²Var(X) + d²Var(Y), to the expression for the Bellman backup, Var(R(s) + γ Σ_t Pr(t|s, a)V(t)). The result is: σ²_Q(s, a) = γ² Σ_{t∈S} V(t)² σ²(s, a, t). Using Chebychev's inequality (which states that 1 − 1/k² of the probability mass of an arbitrary distribution lies within k standard deviations of the mean), we can obtain a confidence level even though the Dirichlet distributions for small sample counts are highly non-normal. The lower bound is then V⁻_o(s) = V_o(s) − c·σ_Q(s, a*_o) for some suitable constant c. One may interpret this as penalizing the value of a state by subtracting its "uncertainty" from it (see Figure 1). The value V_m(s) of the mentor's action π_m(s) is estimated similarly, and an analogous lower bound V⁻_m(s) is computed. If V⁻_m(s) < V⁻_o(s), then either the mentor-inspired model has, in fact, a lower expected value (within a specified degree of confidence) and uses a nonoptimal action (from the observer's perspective), or the mentor-inspired model has lower confidence. In either case, we reject the information provided by the mentor and use a standard Bellman backup using the action model derived solely from the observer's experience (thus suppressing the augmented backup)-the backed up value is V_o(s) in this case. An algorithm for computing an augmented backup using this confidence test is shown in Table 1. The algorithm parameters include the current estimate of the augmented value function V, the current estimated model Pr_o and its associated local variance σ²_o, and the model of the mentor's Markov chain Pr_m and its associated variance σ²_m. It calculates lower bounds and returns the mean value, V_o or V_m, with the greatest lower bound. The parameter c determines the width of the confidence interval used in the mentor rejection test. Focusing The augmented Bellman backup improves the accuracy of the observer's model.
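The confidence test might be realized as in the following sketch, which is our own rendering rather than the paper's Table 1: Dirichlet-based variances give a Chebychev-style lower bound for both the observer's best action and the mentor's chain, and the mean value with the greater lower bound is backed up. The model accessors prob(...), count(...) and total(...) are assumed interfaces, and the best action is chosen here from the observer's own model only, a simplification of the text.

```python
import math

def dirichlet_var(alpha, beta):
    """Variance of a Dirichlet marginal (Beta) with counts alpha, beta."""
    n = alpha + beta
    return 0.25 if n == 0 else (alpha * beta) / (n * n * (n + 1))

def chebychev_lower_bound(mean, variances, V, states, gamma, c):
    """mean - c std. devs., with sigma_Q^2 = gamma^2 * sum_t V(t)^2 sigma^2(t)."""
    sigma_q = math.sqrt(gamma ** 2 * sum(V[t] ** 2 * variances[t] for t in states))
    return mean - c * sigma_q

def confidence_backup(s, states, actions, model_o, model_m, R_o, V, gamma, c=5.0):
    # Observer's best action and its value under its own estimated model.
    q = {a: R_o(s) + gamma * sum(model_o.prob(s, a, t) * V[t] for t in states)
         for a in actions}
    a_star = max(q, key=q.get)
    vars_o = {t: dirichlet_var(model_o.count(s, a_star, t),
                               model_o.total(s, a_star) - model_o.count(s, a_star, t))
              for t in states}
    lo_o = chebychev_lower_bound(q[a_star], vars_o, V, states, gamma, c)

    # Value of (unknowingly) duplicating the mentor's action via its chain.
    v_m = R_o(s) + gamma * sum(model_m.prob(s, t) * V[t] for t in states)
    vars_m = {t: dirichlet_var(model_m.count(s, t),
                               model_m.total(s) - model_m.count(s, t))
              for t in states}
    lo_m = chebychev_lower_bound(v_m, vars_m, V, states, gamma, c)

    # Reject the mentor's information unless its lower bound dominates.
    return v_m if lo_m > lo_o else q[a_star]
```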
A second way in which an observer can exploit its observations of the mentor is to focus attention on the states visited by the mentor. In a model-based approach, the specific focusing mechanism we adopt is to require the observer to perform a (possibly augmented) Bellman backup at state s whenever the mentor makes a transition from s. This has three effects. First, if the mentor tends to visit interesting regions of space (e.g., if it shares a certain reward structure with the observer), then the significant values backed up from mentor-visited states will bias the observer's exploration towards these regions. Second, computational effort will be concentrated toward parts of state space where the estimated model Pr_m(s, t) changes, and hence where the estimated value of one of the observer's actions may change. Third, computation is focused where the model is likely to be more accurate (as discussed above). Action Selection The integration of exploration techniques in the action selection policy is important for any reinforcement learning algorithm to guarantee convergence. In implicit imitation, it plays a second, crucial role in helping the agent exploit the information extracted from the mentor. Our improved convergence results rely on the greedy quality of the exploration strategy to bias an observer towards the higher-valued trajectories revealed by the mentor. For expediency, we have adopted the ε-greedy action selection method, using an exploration rate ε that decays over time. We could easily have employed other semi-greedy methods such as Boltzmann exploration. In the presence of a mentor, greedy action selection becomes more complex. The observer examines its own actions at state s in the usual way and obtains a best action a*_o which has a corresponding value V_o(s). A value is also calculated for the mentor's action, V_m(s). If V_o(s) > V_m(s), then the observer's own action model is used and the greedy action is defined exactly as if the mentor were not present. If, however, V_m(s) > V_o(s), then we would like to define the greedy action to be the action dictated by the mentor's policy at state s. Unfortunately, the observer does not know which action this is, so we define the greedy action to be the observer's action "closest" to the mentor's action according to the observer's current model estimates at s. More precisely, the action most similar to the mentor's at state s, denoted κ_m(s), is that whose outcome distribution has minimum Kullback-Leibler divergence from the mentor's action outcome distribution: κ_m(s) = argmin_{a∈A_o} Σ_{t∈S} Pr_m(s, t) log( Pr_m(s, t) / Pr_o(s, a, t) ). The observer's own experience-based action models will be poor early in training, so there is a chance that the closest-action computation will select the wrong action. We rely on the exploration policy to ensure that each of the observer's actions is sampled appropriately in the long run. (If the mentor is executing a stochastic policy, this KL-divergence test can mislead the learner.) In our present work we have assumed that the state space is large and that the agent will therefore not be able to completely update the Q-function over the whole space. (The intractability of updating the entire state space is one of the motivations for using imitation techniques.) In the absence of information about the states' true values, we would like to bias the value of the states along the mentor's trajectories so that they look worthwhile to explore. We do this by assuming bounds on the reward function and setting the initial Q-values over the entire space below this bound. In our simple examples, rewards are strictly
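The closest-action computation is a one-line minimization once the two estimated distributions are in hand. The sketch below is our illustration; the callable interfaces pr_o(s, a, t) and pr_m(s, t) and the epsilon floor guarding against zero-probability estimates are assumptions of this example.

```python
import math

def closest_action(s, states, actions, pr_o, pr_m, eps=1e-12):
    """kappa_m(s): the observer action whose outcome distribution has minimum
    KL divergence from the mentor's observed outcome distribution at s."""
    def kl(a):
        div = 0.0
        for t in states:
            p = pr_m(s, t)
            if p > 0.0:
                div += p * math.log(p / max(pr_o(s, a, t), eps))
        return div
    return min(actions, key=kl)
```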
positive, so we set the bounds to zero. If mentor trajectories intersect any states valued by the observing agent, backups will cause the states on these trajectories to have a higher value than the surrounding states. This causes the greedy step in the exploration method to prefer actions that lead to mentor-visited states over actions for which the agent has no information. Model Extraction in Specific Reinforcement Learning Algorithms Model extraction, augmented backups, the focusing mechanism, and our extended notion of greedy action selection can be integrated into model-based reinforcement learning algorithms with relative ease. Generically, our implicit imitation algorithm requires that: (a) the observer maintain an estimate Pr_m(s, t) of the Markov chain induced by the mentor's policy-this estimate is updated with every observed transition; and (b) all backups performed to estimate its value function use the augmented backup (Equation 7) with confidence testing. Of course, these backups are implemented using the estimated models Pr_o(s, a, t) and Pr_m(s, t). In addition, the focusing mechanism requires that an augmented backup be performed at any state visited by the mentor. We demonstrate the generality of these mechanisms by combining them with the well-known and efficient prioritized sweeping algorithm (Moore & Atkeson, 1993). As outlined in Section 2.2, prioritized sweeping works by maintaining an estimated transition model Pr and reward model R. Whenever an experience tuple ⟨s, a, r, t⟩ is sampled, the estimated model at state s can change; a Bellman backup is performed at s to incorporate the revised model, and some (usually fixed) number of additional backups are performed at selected states. States are selected using a priority that estimates the potential change in their values based on the changes precipitated by earlier backups. Effectively, computational resources (backups) are focused on those states that can most "benefit" from those backups. Incorporating our ideas into prioritized sweeping simply requires the following changes (sketched in code below): • With each transition ⟨s, a, t⟩ the observer takes, the estimated model Pr_o(s, a, t) is updated and an augmented backup is performed at state s. Augmented backups are then performed at a fixed number of states using the usual priority queue implementation. • With each observed mentor transition ⟨s, t⟩, the estimated model Pr_m(s, t) is updated and an augmented backup is performed at s. Augmented backups are then performed at a fixed number of states using the usual priority queue implementation. Keeping samples of mentor behavior implements model extraction. Augmented backups integrate this information into the observer's value function, and performing augmented backups at observed transitions (in addition to experienced transitions) incorporates our focusing mechanism. The observer is not forced to "follow" or otherwise mimic the actions of the mentor directly. But it does back up value information along the mentor's trajectory as if it had. Ultimately, the observer must move to those states to discover which actions are to be used; in the meantime, important value information is being propagated that can guide its exploration. Implicit imitation does not alter the long run theoretical convergence properties of the underlying reinforcement learning algorithm.
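The two hooks in the bullets above might be wired together as in the following sketch. This is our illustrative composition of the pieces sketched earlier, not the authors' code; in particular, backup_and_sweep is an assumed method standing in for "augmented backup at s, then a fixed number of queue-driven backups".

```python
class ImitatingSweeper:
    """Prioritized sweeping with model extraction and focusing hooks."""
    def __init__(self, sweeper, mentor_model):
        self.sweeper = sweeper          # performs augmented backups + queue sweeps
        self.mentor = mentor_model      # estimate of the mentor chain Pr_m(s, t)

    def on_own_experience(self, s, a, t):
        # Hook (a): update own model, back up at s, then sweep the queue.
        self.sweeper.model.update(s, a, t)
        self.sweeper.backup_and_sweep(s)

    def on_mentor_transition(self, s, t):
        # Hook (b): update the mentor chain, back up at s, then sweep (focusing).
        self.mentor.observe(s, t)
        self.sweeper.backup_and_sweep(s)
```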
The implicit imitation framework is orthogonal to ε-greedy exploration, as it alters only the definition of the "greedy" action, not when the greedy action is taken. Given a theoretically appropriate decay factor, the ε-greedy strategy will thus ensure that the distributions for the action models at each state are sampled infinitely often in the limit and converge to their true values. Since the extracted model from the mentor corresponds to one of the observer's own actions, its effect on the value function calculations is no different from the effect of the observer's own sampled action models. The confidence mechanism ensures that the model with more samples will eventually come to dominate if it is, in fact, better. We can therefore be sure that the convergence properties of reinforcement learning with implicit imitation are identical to those of the underlying reinforcement learning algorithm. The benefit of implicit imitation lies in the way in which the models extracted from the mentor allow the observer to calculate a lower bound on the value function and use this lower bound to choose its greedy actions to move the agent towards higher-valued regions of state space. The result is quicker convergence to optimal policies and better short-term practical performance with respect to accumulated discounted reward while learning. Extensions The implicit imitation model can easily be extended to extract model information from multiple mentors, mixing and matching pieces extracted from each mentor to achieve good results. It does this by searching, at each state, the set of mentors it knows about to find the mentor with the highest value estimate. The value estimate of the "best" mentor is then compared, using the confidence test described above, with the observer's own value estimate. The formal expression of the algorithm is given by the multi-augmented Bellman equation: V(s) = R_o(s) + γ max{ max_{a∈A_o} Σ_{t∈S} Pr_o(s, a, t)V(t), max_{m∈M} Σ_{t∈S} Pr_m(s, t)V(t) }, where M is the set of candidate mentors. Ideally, confidence estimates should be taken into account when comparing mentor estimates with each other, as we may get a mentor with a high mean value estimate but large variance. If the observer has any experience with the state at all, this mentor will likely be rejected as having poorer quality information than the observer already has from its own experience. The observer might have been better off picking a mentor with a lower mean but more confident estimate that would have succeeded in the test against the observer's own model. In the interests of simplicity, however, we investigate multiple mentor combination without confidence testing. Up to now, we have assumed no action costs (i.e., the agent's rewards depend only on the state and not on the action selected in the state); however, we can use more general reward functions (e.g., where reward has the form R(s, a)). The difficulty lies in backing up action costs when the mentor's chosen action is unknown. In Section 4.2.5 we defined the closest action function κ. The κ function can be used to choose the appropriate reward. The augmented Bellman equation with generalized rewards takes the following form: V(s) = max{ max_{a∈A_o} [ R(s, a) + γ Σ_{t∈S} Pr_o(s, a, t)V(t) ], R(s, κ_m(s)) + γ Σ_{t∈S} Pr_m(s, t)V(t) }. We note that Bayesian methods could be used to estimate action costs in the mentor's chain as well. In any case, the generalized reward augmented equation can readily be amended to use confidence estimates in a similar fashion to the transition model. Empirical Demonstrations The following empirical tests incorporate model extraction and our focusing mechanism into prioritized sweeping.
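A multi-mentor version of the earlier backup sketch just adds an inner maximization over mentor chains. Again this is our own illustrative code; mentors is assumed to be a collection of callables pr_m(s, t).

```python
def multi_mentor_backup(s, states, actions, pr_o, mentors, R_o, V, gamma):
    """Augmented backup over own actions and the chains of all mentors in M."""
    own_best = max(
        sum(pr_o(s, a, t) * V[t] for t in states) for a in actions)
    mentor_best = max(
        sum(pr_m(s, t) * V[t] for t in states) for pr_m in mentors)
    return R_o(s) + gamma * max(own_best, mentor_best)
```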
The results illustrate the types of problems and scenarios in which implicit imitation can provide advantages to a reinforcement learning agent. In each of the experiments, an expert mentor is introduced into the experiment to serve as a model for the observer. In each case, the mentor is following an ε-greedy policy with a very small ε (on the order of 0.01). This tends to cause the mentor's trajectories to lie within a "cluster" surrounding optimal trajectories (and reflect good if not optimal policies). Even with a small amount of exploration and some environment stochasticity, mentors generally do not "cover" the entire state space, so confidence testing is important. In all of these experiments, prioritized sweeping is used with a fixed number of backups per observed or experienced sample. ε-greedy exploration is used with decaying ε. Observer agents are given uniform Dirichlet priors and Q-values are initialized to zero. Observer agents are compared to control agents that do not benefit from a mentor's experience, but are otherwise identical (implementing prioritized sweeping with similar parameters and exploration policies). The tests are all performed on stochastic grid world domains, since these make it clear to what extent the observer's and mentor's optimal policies overlap (or fail to). In Figure 2, a simple 10 × 10 example shows a start and end state on a grid. A typical optimal mentor trajectory is illustrated by the solid line between the start and end states. The dotted line shows that a typical mentor-influenced trajectory will be quite similar to the observed mentor trajectory. We assume eight-connectivity between cells so that any state in the grid has nine neighbors including itself, but agents have only four possible actions. In most experiments, the four actions move the agent in the compass directions (North, South, East and West), although the agent will not initially know which action does which. We focus primarily on whether imitation improves performance during learning, since the learner will converge to an optimal policy whether it uses imitation or not. Experiment 1: The Imitation Effect In our first experiment we compare the performance of an observer using model extraction and an expert mentor with the performance of a control agent using independent reinforcement learning. Given the uniform nature of this grid world and the lack of intermediate rewards, confidence testing is not required. Both agents attempt to learn a policy that maximizes discounted return in a 10 × 10 grid world. They start in the upper-left corner and seek a goal with value 1.0 in the lower-right corner. Upon reaching the goal, the agents are returned to the start state. Generally, the mentor will follow a similar if not identical trajectory each run, as the mentors were trained using a greedy strategy that leaves one path slightly more highly valued than the rest. Action dynamics are noisy, with the "intended" direction being realized 90% of the time, and one of the other directions taken otherwise (uniformly). The discount factor is 0.9. In Figure 3, we plot the cumulative number of goals obtained over the previous 1000 time steps for the observer "Obs" and control "Ctrl" agents (results are averaged over ten runs). The observer is able to quickly incorporate a policy learned from the mentor into its value estimates. This results in a steeper learning curve. In contrast, the control agent slowly explores the space to build a model first. The "Delta" curve shows the difference in performance between the agents.
Both agents converge to the same optimal value function. The next experiment illustrates the sensitivity of imitation to the size of the state space and action noise level. Again, the observer uses model extraction but not confidence testing. In Figure 4, we plot the Delta curves (i.e., difference in performance between observer and control agents) for the "Basic" scenario just described, the "Scale" scenario in which the state space size is increased 69 percent (to a 13 × 13 grid), and the "Stoch" scenario in which the noise level is increased to 40 percent (results are averaged over ten runs). The total gain represented by the area under the curves for the observer and the non-imitating prioritized sweeping agent increases with the state space size. This reflects Whitehead's (1991a) observation that for grid worlds, exploration requirements can increase quickly with state space size, but that the optimal path length increases only linearly. Here we see that the guidance of the mentor can help more in larger state spaces. Increasing the noise level reduces the observer's ability to act upon the information received from the mentor and therefore erodes its advantage over the control agent. We note, however, that the benefit of imitation degrades gracefully with increased noise and is present even at this relatively extreme noise level. Experiment 3: Confidence Testing Sometimes the observer's prior beliefs about the transition probabilities of the mentor can mislead the observer and cause it to generate inappropriate values. The confidence mechanism proposed in the previous section can prevent the observer from being fooled by misleading priors over the mentor's transition probabilities. To demonstrate the role of the confidence mechanism in implicit imitation, we designed an experiment based on the scenario illustrated in Figure 5. Again, the agent's task is to navigate from the top-left corner to the bottom-right corner of a 10 × 10 grid in order to attain a reward of +1. We have created four "islands" in the center of the grid, each with a high-valued cell in the middle that can be approached only diagonally. Since the observer's priors reflect eight-connectivity and are uniform, the high-valued cells in the middle of each island are believed to be reachable from the states diagonally adjacent with some small prior probability. In reality, however, the agent's action set precludes this and the agent will therefore never be able to realize this value. The four islands in this scenario thus create a fairly large region in the center of the space with a high estimated value, which could potentially trap an observer if it persisted in its prior beliefs. Notice that a standard reinforcement learner will "quickly" learn that none of its actions take it to the rewarding islands; in contrast, an implicit imitator using augmented backups could be fooled by its prior mentor model. If the mentor does not visit the states neighboring the island, the observer will not have any evidence upon which to change its prior belief that the mentor actions are equally likely to take one in any of the eight possible directions. The imitator may falsely conclude on the basis of the mentor action model that an action does exist which would allow it to access the islands of value. The observer therefore needs a confidence mechanism to detect when the mentor model is less reliable than its own model.
To test the confidence mechanism, we have the mentor follow a path around the outside of the obstacles so that its path cannot lead the observer out of the trap (i.e., it provides no evidence to the observer that the diagonal moves into the islands are not feasible). The combination of a high initial exploration rate and the ability of prioritized sweeping to spread value across large distances then virtually guarantees that the observer will be led to the trap. Given this scenario, we ran two observer agents and a control. The first observer used a confidence interval with width given by 5σ, which, according to the Chebychev rule, should cover approximately 96 percent of an arbitrary distribution. The second observer was given a 0σ interval, which effectively disables confidence testing. The observer with no confidence testing consistently became stuck. Examination of the value function revealed consistent peaks within the trap region, and inspection of the agent state trajectories showed that it was stuck in the trap. The observer with confidence testing consistently escaped the trap. Observation of its value function over time shows that the trap formed, but faded away as the observer gained enough experience with its own actions to allow it to ignore the misleading mentor model. In Figure 6, the performance of the observer with confidence testing is shown with the performance of the control agent (results are averaged over 10 runs). We see that the observer's performance is only slightly degraded from that of the unaugmented control agent even in this pathological case. Experiment 4: Qualitative Difficulty The next experiment demonstrates how the potential gains of imitation can increase with the (qualitative) difficulty of the problem. The observer employs both model extraction and confidence testing, though confidence testing will not play a significant role here. (The mentor does not provide evidence about some path choices in this problem, but there are no intermediate rewards which would cause the observer to make use of the misleading mentor priors at these states.) In the "maze" scenario, we introduce obstacles in order to increase the difficulty of the learning problem. The maze is set on a 25 × 25 grid (Figure 7) with 286 obstacles complicating the agent's journey from the top-left to the bottom-right corner. The optimal solution takes the form of a snaking 133-step path, with distracting paths (up to length 22) branching off from the solution path necessitating frequent backtracking. The discount factor is 0.98. With 10 percent noise, the optimal goal-attainment rate is about six goals per 1000 steps. From the graph in Figure 8 (with results averaged over ten runs), we see that the control agent takes on the order of 200,000 steps to build a decent value function that reliably leads to the goal. At this point, it is only achieving four goals per 1000 steps on average, as its exploration rate is still reasonably high (unfortunately, decreasing exploration more quickly leads to slower value function formation). The imitation agent is able to take advantage of the mentor's expertise to build a reliable value function in about 20,000 steps. Since the control agent has been unable to reach the goal at all in the first 20,000 steps, the Delta between the control and the imitator is simply equal to the imitator's performance. The imitator can quickly achieve the optimal goal attainment rate of six goals per 1000 steps, as its exploration rate decays much more quickly.
Experiment 5: Improving Suboptimal Policies by Imitation The augmented backup rule does not require that the reward structure of the mentor and observer be identical. There are many useful scenarios where rewards are dissimilar but the value functions and policies induced share some structure. In this experiment, we demonstrate one interesting scenario in which it is relatively easy to find a suboptimal solution, but difficult to find the optimal solution. Once the observer finds this suboptimal path, however, it is able to exploit its observations of the mentor to see that there is a shortcut that significantly shortens the path to the goal. The structure of the scenario is shown in Figure 9. The suboptimal solution lies on the path from location 1 around the "scenic route" to location 2 and on to the goal at location 3. The mentor takes the vertical path from location 4 to location 5 through the shortcut. To discourage the use of the shortcut by novice agents, it is lined with cells (marked "*") from which the agent immediately jumps back to the start state. It is therefore difficult for a novice agent executing random exploratory moves to make it all the way to the end of the shortcut and obtain the value which would reinforce its future use. Both the observer and control therefore generally find the scenic route first. In Figure 10, the performance (measured using goals reached over the previous 1000 steps) of the control and observer are compared (averaged over ten runs), indicating the value of these observations. We see that the observer and control agent both find the longer scenic route, though the control agent takes longer to find it. The observer goes on to find the shortcut and increases its return to almost double the goal rate. This experiment shows that mentors can improve observer policies even when the observer's goals are not on the mentor's path. Experiment 6: Multiple Mentors The final experiment illustrates how model extraction can be readily extended so that the observer can extract models from multiple mentors and exploit the most valuable parts of each. Again, the observer employs model extraction and confidence testing. In Figure 11, the learner must move from start location 1 to goal location 4. Two expert agents with different start and goal states serve as potential mentors. One mentor repeatedly moves from location 3 to location 5 along the dotted line, while a second mentor departs from location 2 and ends at location 4 along the dashed line. In this experiment, the observer must combine information gleaned from both mentors to perform well. In Figure 12, we see that the observer successfully pulls together these information sources in order to learn much more quickly than the control agent (results are averaged over 10 runs). We see that the use of a value-based technique allows the observer to choose which mentor's influence to use on a state-by-state basis in order to get the best solution to the problem. Implicit Imitation in Heterogeneous Settings When the homogeneity assumption is violated, the implicit imitation framework described above can cause the learner's convergence rate to slow dramatically and, in some cases, cause the learner to become stuck in a small neighborhood of state space. In particular, if the learner is unable to make the same state transition (or a transition with the same probability) as the mentor at a given state, it may drastically overestimate the value of that state.
The inflated value estimate causes the learner to return repeatedly to this state even though its exploration will never produce a feasible action that attains the inflated estimated value. There is no mechanism for removing the influence of the mentor's Markov chain on value estimates: the observer can be extremely (and correctly) confident in its observations about the mentor's model. The problem lies in the fact that the augmented Bellman backup is justified by the assumption that the observer can duplicate every mentor action. That is, at each state s, there is some a ∈ A such that Pr_o(s, a, t) = Pr_m(s, t) for all t. When an equivalent action a does not exist, there is no guarantee that the value calculated using the mentor action model can, in fact, be achieved. Feasibility Testing In such heterogeneous settings, we can prevent "lock-up" and poor convergence through the use of an explicit action feasibility test: before an augmented backup is performed at s, the observer tests whether the mentor's action a_m "differs" from each of its actions at s, given its current estimated models. If so, the augmented backup is suppressed and a standard Bellman backup is used to update the value function. 15 By default, mentor actions are assumed to be feasible for the observer; however, once the observer is reasonably confident that a_m is infeasible at state s, augmented backups are suppressed at s. Recall that uncertainty about the agent's true transition probabilities is captured by a Dirichlet distribution derived from sampled transitions. Comparing a_m with a_o is effected by a difference of means test with respect to the corresponding Dirichlets. This is complicated by the fact that Dirichlets are highly non-normal for small parameter values and transition distributions are multinomial. We deal with the non-normality by requiring a minimum number of samples and using robust Chebychev bounds on the pooled variance of the distributions to be compared. Conceptually, we will evaluate Equation 12, a difference of means test of the form |E[Pr_m(s, t)] − E[Pr_o(s, a_o, t)]| > Z_{α/2} σ_pooled. Here Z_{α/2} is the critical value of the test. The parameter α is the significance of the test, or the probability that we will falsely judge two actions to be different when they are actually the same. Given our highly non-normal distributions early in the training process, the appropriate Z value for a given α can be computed from Chebychev's bound by solving 2α = 1/Z² for Z_{α/2} (so that a Z_{α/2}σ interval covers at least 1 − 2α of the distribution). When we have too few samples to do an accurate test, we persist with augmented backups (embodying our default assumption of homogeneity). If the value estimate is inflated by these backups, the agent will be biased to obtain additional samples, which will then allow the agent to perform the required feasibility test. Our assumption is therefore self-correcting. We deal with the multivariate complications by performing the Bonferroni test (Seber, 1984), which has been shown to give good results in practice (Mi & Sampson, 1993), is efficient to compute, and is known to be robust to dependence between variables. A Bonferroni hypothesis test is obtained by conjoining several single variable tests. Suppose the actions a_o and a_m result in r possible successor states, s_1, ..., s_r (i.e., r transition probabilities to compare). 15. The decision is binary; but we could envision a smoother decision criterion that measures the extent to which the mentor's action can be duplicated.
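To make the single-successor test concrete, the following sketch (our own illustrative rendering; the function names and the pooled-variance form are assumptions, not the paper's code) computes Dirichlet marginal moments from transition counts and applies the Chebychev-derived critical value:

    import math

    def dirichlet_moments(counts):
        # Mean and variance of each marginal of a Dirichlet posterior
        # parameterized by (pseudo-)counts over successor states.
        n = float(sum(counts))
        means = [c / n for c in counts]
        variances = [m * (1.0 - m) / (n + 1.0) for m in means]
        return means, variances

    def z_crit(alpha):
        # Distribution-free critical value from Chebychev's bound: a
        # Z-sigma interval covers at least 1 - 1/Z^2 of any distribution,
        # so setting 2*alpha = 1/Z^2 gives Z = 1/sqrt(2*alpha).
        return 1.0 / math.sqrt(2.0 * alpha)

    def means_differ(mean_m, var_m, mean_o, var_o, alpha):
        # Single-successor difference-of-means test in the spirit of
        # Equation 12, using a pooled standard deviation.
        return abs(mean_m - mean_o) > z_crit(alpha) * math.sqrt(var_m + var_o)

    print(z_crit(0.02))  # 5.0: the 5-sigma interval of the earlier experiment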
For each s_i, the hypothesis E_i denotes that a_o and a_m have the same transition probability to successor state s_i; that is, Pr(s, a_m, s_i) = Pr(s, a_o, s_i). We let Ē_i denote the complementary hypothesis (i.e., that the transition probabilities differ). The Bonferroni inequality states: Pr(E_1 ∩ ... ∩ E_r) ≥ 1 − Σ_i Pr(Ē_i). Thus we can test the joint hypothesis E_1 ∩ ... ∩ E_r (the two action models are the same) by testing each of the r complementary hypotheses Ē_i at confidence level α/r. If we reject any of these hypotheses, we reject the notion that the two actions are equal, with overall significance at most α. The mentor action a_m is deemed infeasible if for every observer action a_o, the multivariate Bonferroni test just described rejects the hypothesis that the action is the same as the mentor's. Pseudo-code for the Bonferroni component of the feasibility test appears in Table 2. It assumes a sufficient number of samples. For efficiency reasons, we cache the results of the feasibility testing. When the duplication of the mentor's action at state s is first determined to be infeasible, we set a flag for state s to this effect. k-step Similarity and Repair Action feasibility testing essentially makes a strict decision as to whether the agent can duplicate the mentor's action at a specific state: once it is decided that the mentor's action is infeasible, augmented backups are suppressed and all potential guidance offered is eliminated at that state. Unfortunately, the strictness of the test results in a somewhat impoverished notion of similarity between mentor and observer. This, in turn, unnecessarily limits the transfer between mentor and observer. We propose a mechanism whereby the mentor's influence may persist even if the specific action it chooses is not feasible for the observer; we instead rely on the possibility that the observer may approximately duplicate the mentor's trajectory instead of exactly duplicating it. Suppose an observer has previously constructed an estimated value function using augmented backups. Using the mentor action model (i.e., the mentor's chain Pr_m(s, t)), a high value has been calculated for state s. Subsequently, suppose the mentor's action at state s is judged to be infeasible. This is illustrated in Figure 13, where the estimated value at state s is originally due to the mentor's action π_m(s), which for the sake of illustration moves with high probability to state t, which itself can lead to some highly-rewarding region of state space. After some number of experiences at state s, however, the learner concludes that the action π_m(s), and the associated high-probability transition to t, is not feasible. At this point, one of two things must occur: either (a) the value calculated for state s and its predecessors will "collapse" and all exploration towards highly-valued regions beyond state s ceases; or (b) the estimated value drops slightly but exploration continues towards the highly-valued regions. The latter case may arise as follows. If the observer has previously explored in the vicinity of state s, the observer's own action models may be sufficiently developed that they still connect the higher-value regions beyond state s to state s through Bellman backups. For example, if the learner has sufficient experience to have learned that the highly-valued region can be reached through the alternative trajectory s − u − v − w, the newly discovered infeasibility of the mentor's transition s − t will not have a deleterious effect on the value estimate at s.
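The following sketch assembles the pieces described above into the full Bonferroni feasibility test (again an illustration of ours under the stated assumptions, with the Chebychev-based critical value and pooled variances; the counts in the example are invented):

    import math

    def z_crit(alpha):
        # Chebychev-based critical value: Z = 1/sqrt(2*alpha).
        return 1.0 / math.sqrt(2.0 * alpha)

    def moments(counts):
        n = float(sum(counts))
        means = [c / n for c in counts]
        variances = [m * (1.0 - m) / (n + 1.0) for m in means]
        return means, variances

    def same_action(mentor_counts, observer_counts, alpha=0.05):
        # Bonferroni test: conjoin r single-successor tests, each run at
        # significance alpha/r; reject equality if any component rejects.
        r = len(mentor_counts)
        mm, mv = moments(mentor_counts)
        om, ov = moments(observer_counts)
        for i in range(r):
            pooled_sd = math.sqrt(mv[i] + ov[i])
            if abs(mm[i] - om[i]) > z_crit(alpha / r) * pooled_sd:
                return False  # component hypothesis E_i rejected
        return True

    def mentor_action_feasible(mentor_counts, observer_counts_by_action, alpha=0.05):
        # The mentor's action is deemed infeasible only if it differs
        # from *every* observer action at this state.
        return any(same_action(mentor_counts, oc, alpha)
                   for oc in observer_counts_by_action)

    # Illustrative counts over r = 3 successor states:
    mentor = [40, 5, 5]
    observer_actions = [[10, 30, 10], [38, 6, 6]]
    print(mentor_action_feasible(mentor, observer_actions))  # True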
If s is highly-valued, it is likely that states close to the mentor's trajectory will be explored to some degree. In this case, state s will not be as highly-valued as it was when using the mentor's action model, but it will still be valued highly enough that it will likely guide further exploration toward the area (Figure 13 illustrates how an alternative path can bridge value backups around infeasible paths). We call this alternative to the mentor's action (in this case s − u − v − w) a bridge, because it allows value from higher-value regions to "flow over" an infeasible mentor transition. Because the bridge was formed without the intention of the agent, we call this process spontaneous bridging. Where a spontaneous bridge does not exist, the observer's own action models are generally undeveloped (e.g., they are close to their uniform prior distributions). Typically, these undeveloped models assign a small probability to every possible outcome and therefore diffuse value from higher-valued regions and lead to a very poor value estimate for state s. The result is often a dramatic drop in the value of state s and all of its predecessors; exploration towards the highly-valued region through the neighborhood of state s then ceases. In our example, this could occur if the observer's transition models at state s assign low probability (e.g., close to prior probability) of moving to state u due to lack of experience (or similarly if the surrounding states, such as u or v, have been insufficiently explored). The spontaneous bridging effect motivates a broader notion of similarity. When the observer can find a "short" sequence of actions that bridges an infeasible action on the mentor's trajectory, the mentor's example can still provide extremely useful guidance. For the moment, we assume a short path is any path of length no greater than some given integer k. We say an observer is k-step similar to a mentor at state s if the observer can duplicate in k or fewer steps the mentor's nominal transition at state s with "sufficiently high" probability. Given this notion of similarity, an observer can now test whether a spontaneous bridge exists and determine whether the observer is in danger of value function collapse and the concomitant loss of guidance if it decides to suppress an augmented backup at state s. To do this, the observer initiates a reachability analysis starting from state s using its own action model Pr_o(s, a, t) to determine if there is a sequence of actions that leads with sufficiently high probability from state s to some state t on the mentor's trajectory downstream of the infeasible action. 16 If a k-step bridge already exists, augmented backups can be safely suppressed at state s. For efficiency, we maintain a flag at each state to mark it as "bridged." Once a state is known to be bridged, the k-step reachability analysis need not be repeated. If a spontaneous bridge cannot be found, it might still be possible to intentionally set out to build one. To build a bridge, the observer must explore from state s up to k steps away, hoping to make contact with the mentor's trajectory downstream of the infeasible mentor action. We implement a single search attempt as a k²-step random walk, which will result in a trajectory on average k steps away from s as long as ergodicity and local connectivity assumptions are satisfied. In order for the search to occur, we must motivate the observer to return to the state s and engage in repeated exploration.
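A simple way to realize the reachability analysis is a depth-bounded search over the observer's estimated model. The sketch below is ours; it approximates "sufficiently high probability" by a per-step probability threshold p_min (an assumption for illustration, not the paper's criterion):

    from collections import deque

    def has_k_step_bridge(s, downstream, model, k, p_min=0.1):
        # Can any mentor-trajectory state downstream of the infeasible
        # action be reached from s in at most k steps, using only
        # observer transitions with estimated probability >= p_min?
        # `model` maps a state to {action: {successor: probability}}.
        frontier = deque([(s, 0)])
        seen = {s}
        while frontier:
            state, depth = frontier.popleft()
            if state in downstream:
                return True
            if depth == k:
                continue
            for outcomes in model.get(state, {}).values():
                for succ, p in outcomes.items():
                    if p >= p_min and succ not in seen:
                        seen.add(succ)
                        frontier.append((succ, depth + 1))
        return False

    # Toy chain s -> u -> v -> w, where w lies on the mentor's path:
    model = {"s": {"a": {"u": 0.9, "s": 0.1}},
             "u": {"a": {"v": 0.9, "u": 0.1}},
             "v": {"a": {"w": 0.9, "v": 0.1}}}
    print(has_k_step_bridge("s", {"w"}, model, k=3))  # True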
We provide this motivation by having the observer assume that the infeasible action will be repairable. The observer will therefore continue the augmented backups which support high-value estimates at the state s, and the observer will repeatedly engage in exploration from this point. The danger, of course, is that there may not in fact be a bridge, in which case the observer will repeat this search for a bridge indefinitely. We therefore need a mechanism to terminate the repair process when a k-step repair is infeasible. We could attempt to explicitly keep track of all of the possible paths open to the observer and all of the paths explicitly tried by the observer, and thereby determine when the repair possibilities had been exhausted. Instead, we elect to follow a probabilistic search that eliminates the need for bookkeeping: if a bridge cannot be constructed within n attempts of the k²-step random walk, the "repairability assumption" is judged falsified, the augmented backup at state s is suppressed, and the observer's bias to explore the vicinity of state s is eliminated. If no bridge is found for state s, a flag is used to mark the state as "irreparable." This approach is, of course, a very naïve heuristic strategy; but it illustrates the basic import of bridging. More systematic strategies could be used, involving explicit "planning" to find a bridge using, say, local search (Alissandrakis, Nehaniv, & Dautenhahn, 2000). Another aspect of this problem that we do not address is the persistence of the search for bridges. In a specific domain, after some number of unsuccessful attempts to find bridges, a learner may conclude that it is unable to reconstruct a mentor's behavior, in which case the search for bridges may be abandoned. This involves simple, higher-level inference, and some notion of (or prior beliefs about) the "similarity" of capabilities. These notions could also be used to automatically determine parameter settings (discussed below). The parameters k and n must be tuned empirically, but can be estimated given knowledge of the connectivity of the domain and prior beliefs about how similar (in terms of the length of the average repair) the trajectories of the mentor and observer will be. For instance, n > 8k − 4 seems suitable in an 8-connected grid world with low noise, based on the number of trajectories required to cover the perimeter states of a k-step rectangle around a state. We note that very large values of n can reduce performance below that of non-imitating agents, as they result in temporary "lock up." Feasibility and k-step repair are easily integrated into the homogeneous implicit imitation framework. Essentially, we simply elaborate the conditions under which the augmented backup will be employed. Of course, some additional representation must be introduced to keep track of whether a state is feasible, bridged, or repairable, and how many repair attempts have been made. The action selection mechanism will also be overridden by the bridge-building algorithm when required in order to search for a bridge. Bridge building always terminates after n attempts, however, so it cannot affect long-run convergence. All other aspects of the algorithm, such as the exploration policy, are unchanged. The complete elaborated decision procedure used to determine when augmented backups will be employed at state s with respect to mentor m appears in Table 3. It uses some internal state to make its decisions.
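A minimal sketch of the probabilistic search follows. It is our own rendering under the assumptions above: each attempt is a k²-step random walk, and success means reaching any mentor-trajectory state downstream of the infeasible action; the `step` hook and the toy world are hypothetical:

    import random

    def attempt_repair(s, downstream, step, k, n, rng=None):
        # Up to n random walks of k*k steps from s; succeed if any walk
        # reaches a downstream mentor-trajectory state. `step(state, rng)`
        # samples one exploratory transition (an assumed environment hook).
        rng = rng or random.Random(0)
        for _ in range(n):
            state = s
            for _ in range(k * k):
                state = step(state, rng)
                if state in downstream:
                    return True   # bridge found: mark the state "bridged"
        return False              # repairability falsified: mark "irreparable"

    # Toy 1-D world with states 0..5; a walk from state 1 must reach state 4.
    def step(state, rng):
        return max(0, min(5, state + rng.choice((-1, 1))))

    print(attempt_repair(1, {4}, step, k=3, n=20))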
As in the original model, we first check to see if the observer's experience-based calculation of the value of the state supersedes the mentor-based one; if so, then the observer uses its own experience-based calculation. If the mentor's action is feasible, then we accept the value calculated using the observation-based value function. If the action is infeasible, we check to see if the state is bridged. The first time the test is requested, a reachability analysis is performed, but the results will be drawn from a cache for subsequent requests. If the state has been bridged, we suppress augmented backups, confident that this will not cause value function collapse. If the state is not bridged, we ask if it is repairable. For the first n requests, the agent will attempt a k-step repair. If the repair succeeds, the state is marked as bridged. If we cannot repair the infeasible transition, we mark it irreparable and suppress augmented backups. We may wish to employ implicit imitation with feasibility testing in a multiple-mentor scenario. The key change from implicit imitation without feasibility testing is that the observer will only imitate feasible actions. When the observer searches through the set of mentors for the one with the action that results in the highest value estimate, the observer must consider only those mentors whose actions are still considered feasible (or assumed to be repairable). Empirical Demonstrations In this section, we empirically demonstrate the utility of feasibility testing and k-step repair and show how the techniques can be used to surmount both differences in actions between agents and small local differences in state-space topology. The problems here have been chosen specifically to demonstrate the necessity and utility of both feasibility testing and k-step repair. Experiment 1: Necessity of Feasibility Testing Our first experiment shows the importance of feasibility testing in implicit imitation when agents have heterogeneous actions. In this scenario, all agents must navigate across an obstacle-free, 10 × 10 grid world from the upper-left corner to a goal location in the lower-right. The agent is then reset to the upper-left corner. The first agent is a mentor with the "NEWS" action set (North, South, East, and West movement actions). The mentor is given an optimal stationary policy for this problem. We study the performance of three learners, each with the "Skew" action set (N, S, NE, SW) and unable to duplicate the mentor exactly (e.g., duplicating a mentor's E-move requires the learner to move NE followed by S, while a W-move requires SW followed by N). Due to the nature of the grid world, the control and imitation agents will actually have to execute more actions to get to the goal than the mentor, and the optimal goal rate for both the control and imitator is therefore lower than that of the mentor. The first learner employs implicit imitation with feasibility testing, the second uses imitation without feasibility testing, and the third control agent uses no imitation (i.e., is a standard reinforcement learning agent). All agents experience limited stochasticity in the form of a 5% chance that their action will be randomly perturbed. As in the last section, the agents use model-based reinforcement learning with prioritized sweeping. We set k = 3 and n = 20. The effectiveness of feasibility testing in implicit imitation can be seen in Figure 14.
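The decision procedure just described can be summarized in a few lines of code. The sketch below mirrors the prose (and the role of Table 3) but is our own rendering; the flag names, the `info` dictionary, and the `attempt_repair` hook are assumptions for illustration:

    def use_augmented_backup(s, info, attempt_repair, n):
        # Decide whether to apply an augmented backup at state s. `info`
        # carries per-state flags and a repair-attempt counter.
        if info.get("own_estimate_supersedes", False):
            return False          # observer's own value estimate wins
        if info.get("feasible", True):
            return True           # default assumption of homogeneity
        if info.get("bridged", False):
            return False          # suppression is safe: no value collapse
        if info.get("irreparable", False):
            return False
        if info.get("attempts", 0) < n:
            info["attempts"] = info.get("attempts", 0) + 1
            if attempt_repair(s):
                info["bridged"] = True
                return False
            return True           # keep augmenting while repair is pending
        info["irreparable"] = True
        return False

    state_info = {"feasible": False, "attempts": 0}
    print(use_augmented_backup("s", state_info, lambda s: True, n=20))  # False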
The horizontal axis represents time in simulation steps and the vertical axis represents the average number of goals achieved per 1000 time steps (averaged over 10 runs). We see that the imitation agent with feasibility testing converges much more quickly to the optimal goal-attainment rate than the other agents. The agent without feasibility testing achieves sporadic success early on, but frequently "locks up" due to repeated attempts to duplicate infeasible mentor actions. The agent still manages to reach the goal from time to time, as the stochastic actions do not permit the agent to become permanently stuck in this obstacle-free scenario. The control agent without any form of imitation demonstrates a significant delay in convergence relative to the imitation agents due to the lack of any form of guidance, but easily surpasses the agent without feasibility testing in the long run. The more gradual slope of the control agent is due to the higher variance in the control agent's discovery time for the optimal path, but both the feasibility-testing imitator and the control agent converge to optimal solutions. As shown by the comparison of the two imitation agents, feasibility testing is necessary to adapt implicit imitation to contexts involving heterogeneous actions. Experiment 2: Changes to State Space We developed feasibility testing and bridging primarily to deal with the problem of adapting to agents with heterogeneous actions. The same techniques, however, can be applied to agents with differences in their state-space connectivity (ultimately, these are equivalent notions). To test this, we constructed a domain where all agents have the same NEWS action set, but we alter the environment of the learners by introducing obstacles that are not present for the mentor. In Figure 15, the learners find that the mentor's path is obstructed: an action that carries the mentor forward leaves the learner blocked by an obstacle. In this sense, its action has a different effect than the mentor's. In Figure 16, we see that the results are qualitatively similar to the previous experiment. In contrast to the previous experiment, both imitator and control use the "NEWS" action set and therefore have a shortest path with the same length as that of the mentor. Consequently, the optimal goal rate of the imitators and control is higher than in the previous experiment. The observer without feasibility testing has difficulty with the maze, as the value function augmented by mentor observations consistently leads the observer to states whose path to the goal is directly blocked. The agent with feasibility testing quickly discovers that the mentor's influence is inappropriate at such states. We conclude that local differences in state space are well handled by feasibility testing. Next, we demonstrate how feasibility testing can completely generalize the mentor's trajectory. Here, the mentor follows a path which is completely infeasible for the imitating agent. We fix the mentor's path for all runs and give the imitating agent the maze shown in Figure 17, in which all but two of the states the mentor visits are blocked by an obstacle. The imitating agent is able to use the mentor's trajectory for guidance and builds its own parallel trajectory which is completely disjoint from the mentor's. The results in Figure 18 show that the gain of the imitator with feasibility testing over the control agent diminishes, but still exists marginally, when the imitator is forced to generalize a completely infeasible mentor trajectory.
The agent without feasibility testing does very poorly, even when compared to the control agent. This is because it gets stuck around the doorway. The high value gradient backed up along the mentor's path becomes accessible to the agents at the doorway. The imitation agent with feasibility testing will conclude that it cannot proceed south from the doorway (into the wall) and it will then try a different strategy. The imitator without feasibility testing never explores far enough away from the doorway to set up an independent value gradient that will guide it to the goal. With a slower decay schedule for exploration, the imitator without feasibility testing would find the goal, but this would still reduce its performance below that of the imitator with feasibility testing. The imitator with feasibility testing makes use of its prior beliefs that it can follow the mentor to back up value perpendicular to the mentor's path. A value gradient will therefore form parallel to the infeasible mentor path, and the imitator can follow alongside the infeasible path towards the doorway, where it makes the necessary feasibility test and then proceeds to the goal. As explained earlier, in simple problems there is a good chance that the informal effects of prior value leakage and stochastic exploration may form bridges before feasibility testing cuts off the value propagation that guides exploration. In more difficult problems where the agent spends a lot more time exploring, it will accumulate sufficient samples to conclude that the mentor's actions are infeasible long before the agent has constructed its own bridge. The imitator's performance would then drop down to that of an unaugmented reinforcement learner. To demonstrate bridging, we devised a domain in which agents must navigate from the upper-left corner to the bottom-right corner, across a "river" which is three steps wide and exacts a penalty of −0.2 per step (see Figure 19). The goal state is worth 1.0. In the figure, the path of the mentor is shown starting from the top corner, proceeding along the edge of the river and then crossing the river to the goal. The mentor employs the "NEWS" action set. The observer uses the "Skew" action set (N, NE, S, SW) and attempts to reproduce the mentor trajectory. It will fail to reproduce the critical transition at the border of the river (because the "East" action is infeasible for a "Skew" agent). The mentor action can no longer be used to back up value from the rewarding state, and there will be no alternative paths because the river blocks greedy exploration in this region. Without bridging or an optimistic and lengthy exploration phase, observer agents quickly discover the negative states of the river and curtail exploration in this direction before actually making it across.

Figure 19: River scenario

If we examine the value function estimate (after 1000 steps) of an imitator with feasibility testing but no repair capabilities, we see that, due to suppression by feasibility testing, the darkly shaded high-value states in Figure 19 (backed up from the goal) terminate abruptly at an infeasible transition without making it across the river. In fact, they are dominated by the lighter grey circles showing negative values. In this experiment, we show that bridging can prolong the exploration phase in just the right way. We employ the k-step repair procedure with k = 3.
Examining the graph in Figure 20, we see that both imitation agents experience an early negative dip as they are guided deep into the river by the mentor's influence. The agent without repair eventually decides the mentor's action is infeasible, and thereafter avoids the river (and the possibility of finding the goal). The imitator with repair also discovers the mentor's action to be infeasible, but does not immediately dispense with the mentor's guidance. It keeps exploring in the area of the mentor's trajectory using a random walk, all the while accumulating a negative reward, until it suddenly finds a bridge and rapidly converges on the optimal solution. 17 The control agent discovers the goal only once in the ten runs. Applicability The simple experiments presented above demonstrate the major qualitative issues confronting an implicit imitation agent and how the specific mechanisms of implicit imitation address these issues. In this section, we examine how the assumptions and the mechanisms we presented in the previous sections determine the types of problems suitable for implicit imitation. We then present several dimensions that prove useful for predicting the performance of implicit imitation in these types of problems. 17. While repair steps take place in an area of negative reward in this scenario, this need not be the case. Repair doesn't imply short-term negative return. We have already identified a number of assumptions under which implicit imitation is applicable: some under which other models of imitation or teaching cannot be applied, and some that restrict the applicability of our model. These include: lack of explicit communication between mentors and observer; independent objectives for mentors and observer; full observability of mentors by the observer; unobservability of mentors' actions; and (bounded) heterogeneity. Assumptions such as full observability are necessary for our model, as formulated, to work (though we discuss extension to the partially observable case in Section 7). The assumptions of lack of communication and unobservable actions extend the applicability of implicit imitation beyond other models in the literature; if these conditions do not hold, a simpler form of explicit communication may be preferable. Finally, the assumptions of bounded heterogeneity and independent objectives also ensure implicit imitation can be applied widely. However, the degree to which rewards are the same and actions are homogeneous can have an impact on the utility (i.e., the acceleration of learning) offered by implicit imitation. We turn our attention to predicting the performance of implicit imitation as a function of certain domain characteristics. Predicting Performance In this section we examine two questions: first, given that implicit imitation is applicable, when can implicit imitation bias an agent to a suboptimal solution; and second, how will the performance of implicit imitation vary with structural characteristics of the domains one might want to apply it to? We show how analysis of the internal structure of state space can be used to motivate a metric that (roughly) predicts implicit imitation performance. We conclude with an analysis of how the problem space can be understood in terms of distinct regions playing different roles within an imitation context.
In the implicit imitation model, we use observations of other agents to improve the observer's knowledge about its environment and then rely on a sensible exploration policy to exploit this additional knowledge. A clear understanding of how knowledge of the environment affects exploration is therefore central to understanding how implicit imitation will perform in a domain. Within the implicit imitation framework, agents know their reward functions, so knowledge of the environment consists solely of knowledge about the agent's action models. In general, these models can take any form. For simplicity, we have restricted ourselves to models that can be decomposed into local models for each possible combination of a system state and agent action. The local models for state-action pairs allow the prediction of a j-step successor state distribution given any initial state and sequence of actions or local policy. The quality of the j-step state predictions will be a function of every action model encountered between the initial state and the states at time j − 1. Unfortunately, the quality of the j-step estimate can be drastically altered by the quality of even a single intermediate state-action model. This suggests that connected regions of state space, the states of which all have fairly accurate models, will allow reasonably accurate future state predictions. Since the estimated value of a state s is based on both the immediate reward and the reward expected to be received in subsequent states, the quality of this value estimate will also depend on the quality of the action models in those states connected to s. Now, since greedy exploration methods bias their exploration according to the estimated value of actions, the exploratory choices of an agent at state s will also be dependent on the connectivity of reliable action models at those states reachable from s. Our analysis of implicit imitation performance with respect to domain characteristics is therefore organized around the idea of state-space connectivity and the regions such connectivity defines. The Imitation Regions Framework Since connected regions play an important role in implicit imitation, we introduce a classification of different regions within the state space, shown graphically in Figure 21. In what follows, we describe how these regions affect imitation performance in our model. We first observe that many tasks can be carried out by an agent in a small subset of states within the state space defined for the problem. More precisely, in many MDPs, the optimal policy will ensure that an agent remains in a small subspace of state space. This leads us to the definition of our first regional distinction: relevant vs. irrelevant regions. The relevant region is the set of states with non-zero probability of occupancy under the optimal policy. 18 An ε-relevant region is a natural generalization in which the optimal policy keeps the system within the region a fraction 1 − ε of the time. Within the relevant region, we distinguish three additional subregions. The explored region contains those states where the observer has formulated reliable action models on the basis of its own experience. The augmented region contains those states where the observer lacks reliable action models but has improved value estimates due to mentor observations. Note that both the explored and augmented regions are created as the result of observations made by the learner (of either its own transitions or those of a mentor).
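The dependence of j-step predictions on every intermediate local model, noted above, can be made concrete. The following sketch (ours, with hypothetical state and action names) composes one-step models under a fixed policy; an error in any intermediate entry propagates into every later step of the prediction:

    def j_step_distribution(start, policy, model, j):
        # Compose local one-step models into a j-step successor
        # distribution. `model[s][a]` is a dict of successor probabilities.
        dist = {start: 1.0}
        for _ in range(j):
            nxt = {}
            for s, p in dist.items():
                for t, q in model[s][policy[s]].items():
                    nxt[t] = nxt.get(t, 0.0) + p * q
            dist = nxt
        return dist

    model = {"A": {"go": {"B": 0.9, "A": 0.1}},
             "B": {"go": {"C": 0.9, "B": 0.1}},
             "C": {"go": {"C": 1.0}}}
    policy = {"A": "go", "B": "go", "C": "go"}
    print(j_step_distribution("A", policy, model, 2))
    # approximately {'C': 0.81, 'B': 0.18, 'A': 0.01}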
These regions will therefore have significant "connected components"; that is, contiguous regions of state space where reliable action or mentor models are available. Finally, the blind region designates those states where the observer has neither (significant) personal experience nor the benefit of mentor observations. Any information about states within the blind region will come (largely) from the agent's prior beliefs. 19 We can now ask how these regions interact with an imitation agent. First we consider the impact of relevance. Implicit imitation makes the assumption that more accurate dynamics models allow an observer to make better decisions, which will, in turn, result in higher returns sooner in the learning process. However, not all model information is equally helpful: the imitator needs only enough information about the irrelevant region to be able to avoid it. Since action choices are influenced by the relative value of actions, the irrelevant region will be avoided when it looks worse than the relevant region. Given diffuse priors on action models, none of the actions open to an agent will initially appear particularly attractive. However, a mentor that provides observations within the relevant region can quickly make the relevant region look much more promising as a means of achieving higher returns and can therefore constrain exploration significantly. Therefore, considering problems just from the point of view of relevance, a problem with a small relevant region relative to the entire space, combined with a mentor that operates within the relevant region, will result in maximum advantage for an imitation agent over a non-imitating agent. In the explored region, the observer has sufficiently accurate models to compute a good policy with respect to rewards within the explored region. Additional observations on the states within the explored region provided by the mentor can still improve performance somewhat if significant evidence is required to accurately discriminate between the expected value of two actions. Hence, mentor observations in the explored region can help, but will not result in dramatic speedups in convergence. Now, we consider the augmented region, in which the observer's Q-values have been augmented with observations of a mentor. In experiments in previous sections, we have seen that an observer entering an augmented region can experience significant speedups in convergence due to the information inherent in the augmented value function about the location of rewards in the region. Characteristics of the augmented zone, however, can affect the degree to which augmentation improves convergence speed. Since the observer receives observations of only the mentor's state, and not its actions, the observer has improved value estimates for states in the augmented region, but no policy. The observer must therefore infer which actions should be taken to duplicate the mentor's behavior. 19. Our partitioning of states into explored, blind and augmented regions bears some resemblance to Kearns and Singh's (1998) partitioning of state space into known and unknown regions. Unlike Kearns and Singh, however, we use the partitions only for analysis. The implicit imitation algorithm does not explicitly maintain these partitions or use them in any way to compute its policy.
Where the observer has prior beliefs about the effects of its actions, it may be able to perform immediate inference about the mentor's actual choice of action (perhaps using KL-divergence or maximum likelihood). Where the observer's prior model is uninformative, the observer will have to explore the local action space. In exploring a local action space, however, the agent must take an action, and this action will have an effect. Since there is no guarantee that the agent took the action that duplicates the mentor's action, it may end up somewhere different from the mentor. If the action causes the observer to fall outside of the augmented region, the observer will lose the guidance that the augmented value function provides and fall back to the performance level of a non-imitating agent. An important consideration, then, is the probability that the observer will remain in augmented regions and continue to receive guidance. One quality of the augmented region that affects the observer's probability of staying within its boundaries is its relative coverage of the state space. The policy of the mentor may be sparse or complete. In a relatively deterministic domain with defined begin and end states, a sparse policy covering few states may be adequate. In a highly stochastic domain with many start and end states, an agent may need a complete policy (i.e., one covering every state). Implicit imitation will provide more guidance to the agent in domains that are more stochastic and require more complete policies, since the policy will cover a larger part of the state space. The completeness of a policy is important in predicting its guidance, but we must also take into account the probability of transitions into and out of the augmented region. Where the actions in a domain are largely invertible (directly, or effectively so), the agent has a chance of re-entering the augmented region. Where ergodicity is lacking, however, the agent may have to wait until the process undergoes some form of "reset" before it has the opportunity to gather additional evidence regarding the identity of the mentor's actions in the augmented region. The reset places the agent back into the explored region, from which it can make its way to the frontier where it last explored. The lack of ergodicity will reduce the agent's ability to make progress towards high-value regions before resets, but the agent is still guided on each attempt by the augmented region. Effectively, the agent will concentrate its exploration on the boundary between the explored region and the mentor-augmented region. The utility of mentor observations will depend on the probability of the augmented and explored regions overlapping in the course of the agent's exploration. In the explored regions, accurate action models allow the agent to move as quickly as possible to high-value regions. In augmented regions, augmented Q-values inform agents about which states lead to highly-valued outcomes. When an augmented region abuts an explored region, the improved value estimates from the augmented region are rapidly communicated across the explored region by accurate action models. The observer can use the resultant improved value estimates in the explored region, together with the accurate action models in the explored region, to rapidly move towards the most promising states on the frontier of the explored region. From these states, the observer can explore outward and thereby eventually expand the explored region to encompass the augmented region.
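One way to realize the KL-divergence variant of this inference is sketched below (our own illustration; the macro-action names and the crude epsilon smoothing are assumptions made for the example):

    import math

    def kl(p, q, eps=1e-9):
        # KL(p || q) over the union of the two successor supports, with
        # epsilon substitution for missing entries (a crude smoothing).
        states = set(p) | set(q)
        return sum(p.get(s, eps) * math.log(p.get(s, eps) / q.get(s, eps))
                   for s in states)

    def infer_mentor_action(mentor_chain, observer_models):
        # Pick the observer action whose predicted successor distribution
        # is closest, in KL divergence, to the mentor's observed
        # state-transition distribution at this state.
        return min(observer_models,
                   key=lambda a: kl(mentor_chain, observer_models[a]))

    mentor = {"east": 0.9, "stay": 0.1}
    models = {"NE_then_S": {"east": 0.8, "stay": 0.2},
              "N": {"north": 0.9, "stay": 0.1}}
    print(infer_mentor_action(mentor, models))  # NE_then_S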
In the case where the explored region and the augmented region do not overlap, we have a blind region. Since the observer has no information beyond its priors for the blind region, the observer is reduced to random exploration. In a non-imitation context, any states that are not explored are blind. However, in an imitation context, the blind area is reduced in effective size by the augmented area. Hence, implicit imitation effectively shrinks the size of the search space of the problem even when there is no overlap between explored and augmented spaces. The most challenging case for implicit imitation transfer occurs when the region augmented by mentor observations fails to connect to both the observer-explored region and the regions with significant reward values. In this case, the augmented region will initially provide no guidance. Once the observer has independently located rewarding states, the augmented regions can be used to highlight "shortcuts". These shortcuts represent improvements on the agent's policy. In domains where a feasible solution is easy to find, but optimal solutions are difficult, implicit imitation can be used to convert a feasible solution into an increasingly optimal one. Cross-Regional Textures We have seen how distinctive regions can be used to provide a certain level of insight into how imitation will perform in various domains. We can also analyze imitation performance in terms of properties that cut across the state space. In our analysis of how model information impacts imitation performance, we saw that regions connected by accurate action models allowed an observer to use mentor observations to learn about the most promising direction for exploration. We see, then, that any set of mentor observations will be more useful if it is concentrated on a connected region and less useful if dispersed about the state space in unconnected components. We are fortunate that, in completely observable environments, observations of mentors tend to capture continuous trajectories, thereby providing continuous regions of augmented states. In partially observable environments, occlusion and noise could lessen the value of mentor observations in the absence of a model to predict the mentor's state. The effects of heterogeneity, whether due to differences in the action capabilities of the mentor and observer or due to differences in the environments of the two agents, can also be understood in terms of the connectivity of action models. Value can propagate along chains of action models until we hit a state in which the mentor and observer have different action capabilities. At this state, it may not be possible to achieve the mentor's value, and value propagation is therefore blocked. Again, the sequential decision-making aspect of reinforcement learning leads to the conclusion that many scattered differences between mentor and observer will create discontinuity throughout the problem space, whereas a contiguous region of differences between mentor and observer will cause discontinuity in a region, but leave other large regions fully connected. Hence, the distribution pattern of differences between mentor and observer capabilities is as important as the prevalence of difference. We will explore this pattern in the next section. The Fracture Metric We now try to characterize connectivity in the form of a metric.
Since differences in reward structure, environment dynamics, and action models that affect connectivity would all manifest themselves as differences in policies between mentor and observer, we designed a metric based on differences in the agents' optimal policies. We call this metric fracture. Essentially, it computes the average minimum distance from a state in which the mentor and observer disagree on a policy to a state in which the mentor and observer agree on the policy. This measure roughly captures the difficulty the observer faces in profitably exploiting mentor observations to reduce its exploration demands. More formally, let π_m be the mentor's optimal policy and π_o be the observer's. Let S be the state space and S_{π_m≠π_o} be the set of disputed states where the mentor and observer have different optimal actions. A set of neighboring disputed states constitutes a disputed region. The set S − S_{π_m≠π_o} will be called the undisputed states. Let M be a distance metric on the space S. This metric corresponds to the number of transitions along the "minimal length" path between states (i.e., the shortest path using nonzero-probability observer transitions). 20 In a standard grid world, it will correspond to the Manhattan distance. We define the fracture Φ(S) of state space S to be the average minimal distance between a disputed state and the closest undisputed state: Φ(S) = (1 / |S_{π_m≠π_o}|) Σ_{s ∈ S_{π_m≠π_o}} min_{t ∈ S − S_{π_m≠π_o}} M(s, t). Other things being equal, a lower fracture value will tend to increase the propagation of value information across the state space, potentially resulting in less exploration being required. To test our metric, we applied it to a number of scenarios with varying fracture coefficients. It is difficult to construct scenarios which vary in their fracture coefficient yet have the same expected value. The scenarios in Figure 22 have been constructed so that the lengths of all possible paths from the start state s to the goal state x are the same in each scenario. In each scenario, however, there is an upper path and a lower path. The mentor is trained in a scenario that penalizes the lower path, and so the mentor learns to take the upper path. The imitator is trained in a scenario in which the upper path is penalized and should therefore take the lower path. We equalized the difficulty of these problems as follows: using a generic ε-greedy learning agent with a fixed exploration schedule (i.e., a fixed initial rate and decay) in one scenario, we tuned the magnitude of penalties and their exact placement along loops in the other scenarios so that a learner using the same exploration policy would converge to the optimal policy in roughly the same number of steps in each. Figure 23: Percentage of runs (of ten) converging to the optimal policy given fracture Φ and initial exploration rate δ_I. The columns range over δ_I ∈ {5 × 10⁻², 1 × 10⁻², 5 × 10⁻³, 1 × 10⁻³, 5 × 10⁻⁴, 1 × 10⁻⁴, 5 × 10⁻⁵, 1 × 10⁻⁵}, and only entries near the diagonal were computed: for Φ = 0.5 the computed entries are 60%, 70% and 90%; for Φ = 1.7 they are 0%, 80%, 90% and 90%; for Φ = 3.5 they are 30% and 100%; and for Φ = 6.0 they are 30%, 70%, 100% and 100%. In Figure 22(a), the mentor takes the top of each loop, and in an optimal run, the imitator would take the bottom of each loop. Since the loops are short and the length of the common path is long, the average fracture is low. When we compare this to Figure 22(d), we see that the loops are very long: the majority of states in the scenario are on loops. Each state on a loop lies at some distance from the nearest state where the observer and mentor policies agree, namely, a state not on the loop. This scenario therefore has a high average fracture coefficient.
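The fracture metric is straightforward to compute by breadth-first search over observer transitions. The sketch below is our own illustration (it assumes every disputed state can reach some undisputed state; the toy chain is invented):

    from collections import deque

    def fracture(neighbors, disputed):
        # Average, over disputed states, of the shortest-path distance
        # (in observer transitions) to the closest undisputed state.
        # `neighbors(s)` yields the states reachable from s in one step.
        total = 0
        for s in disputed:
            frontier, seen = deque([(s, 0)]), {s}
            while frontier:
                t, d = frontier.popleft()
                if t not in disputed:
                    total += d
                    break
                for u in neighbors(t):
                    if u not in seen:
                        seen.add(u)
                        frontier.append((u, d + 1))
        return total / len(disputed)

    # 1-D chain 0..6 where the agents disagree on states 2, 3, 4:
    neighbors = lambda s: [t for t in (s - 1, s + 1) if 0 <= t <= 6]
    print(fracture(neighbors, {2, 3, 4}))  # (1 + 2 + 1) / 3 = 1.33...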
Since the loops in the various scenarios differ in length, penalties inserted in the loops vary with respect to their distance from the goal state and therefore affect the total discounted expected reward in different ways. If the exploration rate is too low, the penalties may also cause the agent to become stuck in a local minimum while avoiding them. In this set of experiments, we therefore compare observer agents on the basis of how likely they are to converge to the optimal solution given the mentor example. Figure 23 presents the percentage of runs (out of ten) in which the imitator converged to the optimal solution (i.e., taking only the lower loops) as a function of exploration rate and scenario fracture. 21 We can see a distinct diagonal trend in the table, illustrating that increasing fracture requires the imitator to increase its level of exploration in order to find the optimal policy. This suggests that fracture reflects a feature of RL domains that may be important in predicting the efficacy of implicit imitation. 21. For reasons of computational expediency, only the entries near the diagonal have been computed. Sampling of other entries confirms the trend. Suboptimality and Bias Implicit imitation is fundamentally about biasing the exploration of the observer. As such, it is worthwhile to ask when this has a positive effect on observer performance. The short answer is that a mentor following an optimal policy for an observer will cause an observer to explore in the neighborhood of the optimal policy, and this will generally bias the observer towards finding the optimal policy. A more detailed answer requires looking explicitly at exploration in reinforcement learning. In theory, an ε-greedy exploration policy with a suitable rate of decay will cause implicit imitators to eventually converge to the same optimal solution as their unassisted counterparts. However, in practice, the exploration rate is typically decayed more quickly in order to improve early exploitation of mentor input. Given practical, but theoretically unsound, exploration rates, an observer may settle for a mentor strategy that is feasible but non-optimal. We can easily imagine examples: consider a situation in which an agent is observing a mentor following some policy. Early in the learning process, the value of the policy followed by the mentor may look better than the estimated value of the alternative policies available to the observer. It could be the case that the mentor's policy actually is the optimal policy. On the other hand, it may be the case that one of the alternative policies, with which the observer has neither personal experience nor observations from a mentor, is actually superior. Given the lack of information, an aggressive exploitation policy might lead the observer to falsely conclude that the mentor's policy is optimal. While implicit imitation can bias the agent to a suboptimal policy, we have no reason to expect that an agent learning in a domain sufficiently challenging to warrant the use of imitation would have discovered a better alternative. We emphasize that even if the mentor's policy is suboptimal, it still provides a feasible solution, which will be preferable to no solution for many practical problems. In this regard, we see that the classic exploration/exploitation tradeoff has an additional interpretation in the implicit imitation setting. A component of the exploration rate will correspond to the observer's belief about the sufficiency of the mentor's policy.
In this paradigm, then, it seems somewhat misleading to think in terms of a decision about whether to "follow" a specific mentor or not. It is more a question of how much exploration to perform in addition to that required to reconstruct the mentor's policy. Specific Applications We see applications for implicit imitation in a variety of contexts. The emerging electronic commerce and information infrastructure is driving the development of vast networks of multi-agent systems. In networks used for competitive purposes such as trade, implicit imitation can be used by an RL agent to learn about the buying strategies or information-filtering policies of other agents in order to improve its own behavior. In control, implicit imitation could be used to transfer knowledge from an existing learned controller, which has already adapted to its clients, to a new learning controller with a completely different architecture. Many modern products such as elevator controllers (Crites & Barto, 1998), cell traffic routers (Singh & Bertsekas, 1997) and automotive fuel injection systems use adaptive controllers to optimize the performance of a system for specific user profiles. When upgrading the technology of the underlying system, it is quite possible that the sensors, actuators and internal representation of the new system will be incompatible with the old system. Implicit imitation provides a method of transferring valuable user information between systems without any explicit communication. A traditional application for imitation-like technologies lies in the area of bootstrapping intelligent artifacts using traces of human behavior. Research within the behavioral cloning paradigm has investigated transfer in applications such as piloting aircraft (Sammut et al., 1992) and controlling loading cranes (Šuc & Bratko, 1997). Other researchers have investigated the use of imitation to simplify the programming of robots (Kuniyoshi, Inaba, & Inoue, 1994). The ability of imitation to transfer complex, nonlinear and dynamic behaviors from existing human agents makes it particularly attractive for control problems. Extensions The model of implicit imitation presented above makes certain restrictive assumptions regarding the structure of the decision problem being solved (e.g., full observability, knowledge of the reward function, discrete state and action spaces). While these simplifying assumptions aided the detailed development of the model, we believe the basic intuitions and much of the technical development can be extended to richer problem classes. We suggest several possible extensions in this section, each of which provides a very interesting avenue for future research. Unknown Reward Functions Our current paradigm assumes that the observer knows its own reward function. This assumption is consistent with the view of RL as a form of automatic programming. We can, however, relax this constraint, assuming some ability to generalize observed rewards. Suppose that the expected reward can be expressed in terms of a probability distribution over features of the observer's state, Pr(r | f(s_o)). In model-based RL, this distribution can be learned by the agent through its own experience. If the same features can be applied to the mentor's state s_m, then the observer can use what it has learned about the reward distribution to estimate the expected reward for mentor states as well.
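A minimal sketch of this reward-generalization idea follows (ours, not the paper's; the class name, the tabular feature model, and the toy feature map are assumptions made for illustration):

    from collections import defaultdict

    class FeatureRewardModel:
        # Estimate E[r | f(s)] from the observer's own experience and
        # reuse the estimate to score mentor states with shared features.
        def __init__(self, feature_fn):
            self.feature_fn = feature_fn
            self.totals = defaultdict(float)
            self.counts = defaultdict(int)

        def observe(self, state, reward):       # the observer's own samples
            f = self.feature_fn(state)
            self.totals[f] += reward
            self.counts[f] += 1

        def expected_reward(self, state):       # works for mentor states too
            f = self.feature_fn(state)
            if self.counts[f] == 0:
                return 0.0                      # uninformative prior
            return self.totals[f] / self.counts[f]

    # Toy feature: whether a state is a "goal" cell.
    model = FeatureRewardModel(lambda s: s == "goal")
    model.observe("goal", 1.0)
    model.observe("corridor", 0.0)
    print(model.expected_reward("goal"))  # 1.0, applied to a mentor's state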
This extends the paradigm to domains in which rewards are unknown, but preserves the ability of the observer to evaluate mentor experiences on its "own terms." Imitation techniques designed around the assumption that the observer and the mentor share identical rewards, such as Utgoff's (1991), would of course work in the absence of a reward function. The notion of inverse reinforcement learning (Ng & Russell, 2000) could be adapted to this case as well. A challenge for future research would be to explore a synthesis between implicit imitation and reward-inversion approaches to handle an observer's prior beliefs about some intermediate level of correlation between the reward functions of observer and mentor. Interaction of Agents While we cast the general imitation model in the framework of stochastic games, the restriction of the model presented thus far to noninteracting games essentially means that the standard issues associated with multiagent interaction do not arise. There are, of course, many tasks that require interactions between agents; in such cases, implicit imitation offers the potential to accelerate learning. A general solution requires the integration of imitation into more general models for multiagent RL based on stochastic or Markov games (Littman, 1994; Hu & Wellman, 1998; Bowling & Veloso, 2001). This would no doubt be a rather challenging, yet rewarding, endeavor. To take a simple example, in simple coordination problems (e.g., two mobile agents trying to avoid each other while carrying out related tasks) we might imagine an imitator learning from a mentor by reversing their roles when considering how the observed state transition is influenced by their joint action. In this and more general settings, learning typically requires great care, since agents learning in a nonstationary environment may not converge (say, to equilibrium). Again, imitation techniques offer certain advantages: for instance, mentor expertise can suggest means of coordinating with other agents (e.g., by providing a focal point for equilibrium selection, or by making clear a specific convention such as always "passing to the right" to avoid collision). Other challenges and opportunities present themselves when imitation is used in multiagent settings. For example, in competitive or educational domains, agents not only have to choose actions that maximize information from exploration and returns from exploitation; they must also reason about how their actions communicate information to other agents. In a competitive setting, one agent may wish to disguise its intentions, while in the context of teaching, a mentor may wish to choose actions whose purpose is abundantly clear. These considerations must become part of any action selection process. Partially Observable Domains The extension of this model to partially observable domains is critical, since it is unrealistic in many settings to suppose that a learner can constantly monitor the activities of a mentor. The central idea of implicit imitation is to extract model information from observations of the mentor, rather than duplicating mentor behavior. This means that the mentor's internal belief state and policy are not (directly) relevant to the learner. We take a somewhat behaviorist stance and concern ourselves only with what the mentor's observed behaviors tell us about the possibilities inherent in the environment.
The observer does have to keep a belief state about the mentor's current state, but this can be done using the same estimated world model the observer uses to update its own belief state. Preliminary investigation of such a model suggests that dealing with partial observability is viable. We have derived update rules for augmented partially observable updates. These updates are based on a Bayesian formulation of implicit imitation which is, in turn, based on Bayesian RL (Dearden et al., 1999). We have seen that more effective exploration using mentor observations is possible in fully observable domains when this Bayesian model of imitation is used (Price & Boutilier, 2003). The extension of this model to cases where the mentor's state is partially observable is reasonably straightforward. We anticipate that updates performed using a belief state about the mentor's state and action will help to alleviate the fracture that could be caused by incomplete observation of behavior. More interesting is dealing with an additional factor in the usual exploration-exploitation tradeoff: determining whether it is worthwhile to take actions that render the mentor "more visible" (e.g., ensuring the mentor remains in view so that this source of information remains available while learning). Continuous and Model-Free Learning In many realistic domains, continuous attributes and large state and action spaces prohibit the use of explicit table-based representations. Reinforcement learning in these domains is typically modified to make use of function approximators to estimate the Q-function at points where no direct evidence has been received. Two important approaches are parameter-based models (e.g., neural networks) (Bertsekas & Tsitsiklis, 1996) and memory-based approaches (Atkeson, Moore, & Schaal, 1997). In both of these approaches, model-free learning is generally employed. That is, the agent keeps a value function but uses the environment as an implicit model to perform backups using the sampling distribution provided by environment observations. One straightforward approach to casting implicit imitation in a continuous setting would employ a model-free learning paradigm (Watkins & Dayan, 1992). First, recall the augmented Bellman backup used in implicit imitation: V(s) = max { max_{a ∈ A_o} [ R(s) + γ Σ_t Pr_o(s, a, t) V(t) ], R(s) + γ Σ_t Pr_m(s, t) V(t) }. When we examine the augmented backup equation, we see that it can be converted to a model-free form in much the same way as the ordinary Bellman backup. We use a standard Q-function with observer actions, but we will add one additional action which corresponds to the action a_m taken by the mentor. 22 Now imagine that the observer was in state s_o, took action a_o and ended up in state s'_o. At the same time, the mentor made the transition from state s_m to s'_m. We can then write the sample backups Q(s_o, a_o) ← (1 − α) Q(s_o, a_o) + α [ R(s_o) + γ max_a Q(s'_o, a) ] and Q(s_m, a_m) ← (1 − α) Q(s_m, a_m) + α [ R(s_m) + γ max_a Q(s'_m, a) ], where the maximization ranges over the observer's actions together with the distinguished mentor action a_m. As discussed earlier, the relative quality of mentor and observer estimates of the Q-function at specific states may vary. Again, in order to avoid having inaccurate prior beliefs about the mentor's action models bias exploration, we need to employ a confidence measure to decide when to apply these augmented equations. We feel the most natural setting for this kind of test is in the memory-based approaches to function approximation. Memory-based approaches, such as locally-weighted regression (Atkeson et al., 1997), not only provide estimates for functions at points previously unvisited, they also maintain the evidence set used to generate these estimates.
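The model-free form is compact enough to sketch in full. The code below is our own tabular illustration of the idea (the action names, constants, and reward values are invented; in the scheme above, the reward credited on a mentor transition is the observer's own reward function evaluated at the mentor's state):

    from collections import defaultdict

    GAMMA, ALPHA = 0.98, 0.1
    MENTOR = "a_m"                     # the distinguished mentor action slot
    ACTIONS = ["N", "S", "NE", "SW", MENTOR]

    Q = defaultdict(float)             # Q[(state, action)]

    def best_q(s):
        # Maximization ranges over observer actions plus the mentor slot.
        return max(Q[(s, a)] for a in ACTIONS)

    def backup(s, a, r, s_next):
        # Standard Watkins-style sample backup.
        Q[(s, a)] += ALPHA * (r + GAMMA * best_q(s_next) - Q[(s, a)])

    def observer_step(s_o, a_o, r, s_o_next):
        backup(s_o, a_o, r, s_o_next)

    def mentor_step(s_m, r, s_m_next):
        # The mentor's unobserved action is credited to the extra slot.
        backup(s_m, MENTOR, r, s_m_next)

    observer_step("A", "N", 0.0, "B")
    mentor_step("A", 1.0, "C")
    print(Q[("A", "N")], Q[("A", MENTOR)])   # 0.0 0.1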
We feel the most natural setting for these kinds of tests is in the memory-based approaches to function approximation. Memory-based approaches, such as locally weighted regression, not only provide estimates for functions at points previously unvisited, they also maintain the evidence set used to generate these estimates. We note that the implicit bias of memory-based approaches assumes smoothness between points unless additional data proves otherwise. On the basis of this bias, we propose to compare the average squared distance of the query from the exemplars used in the estimate of the mentor's Q-value to the average squared distance from the query to the exemplars used in the observer-based estimate to heuristically decide which agent has the more reliable Q-value. The approach suggested here does not benefit from prioritized sweeping. Prioritized sweeping has, however, been adapted to continuous settings (Forbes & Andre, 2000). We feel a reasonably efficient technique could be made to work.

Related Work

Research into imitation spans a broad range of dimensions, from ethological studies, to abstract algebraic formulations, to industrial control algorithms. As these fields have cross-fertilized and informed each other, we have come to stronger conceptual definitions and a better understanding of the limits and capabilities of imitation. Many computational models have been proposed to exploit specialized niches in a variety of control paradigms, and imitation techniques have been applied to a variety of real-world control problems. The conceptual foundations of imitation have been clarified by work on natural imitation. From work on apes (Russon & Galdikas, 1993), octopi (Fiorito & Scotto, 1992), and other animals, we know that socially facilitated learning is widespread throughout the animal kingdom. A number of researchers have pointed out, however, that social facilitation can take many forms (Conte, 2000; Noble & Todd, 1999). For instance, a mentor's attention to an object can draw an observer's attention to it and thereby lead the observer to manipulate the object independently of the model provided by the mentor. "True imitation" is therefore typically defined in a more restrictive fashion. Visalberghi and Fragazy (1990) cite Mitchell's definition, under which imitation occurs when:

1. something C (the copy of the behavior) is produced by an organism;
2. C is similar to something else M (the model behavior);
3. observation of M is necessary for the production of C (above baseline levels of C occurring spontaneously);
4. C is designed to be similar to M;
5. the behavior C must be a novel behavior not already organized in that precise way in the organism's repertoire.

This definition perhaps presupposes a cognitive stance towards imitation in which an agent explicitly reasons about the behaviors of other agents and how these behaviors relate to its own action capabilities and goals. Imitation can be further analyzed in terms of the type of correspondence demonstrated by the mentor's behavior and the observer's acquired behavior (Nehaniv & Dautenhahn, 1998; Byrne & Russon, 1998). Correspondence types are distinguished by level. At the action level, there is a correspondence between actions. At the program level, the actions may be completely different but correspondence may be found between subgoals. At the effect level, the agent plans a set of actions that achieve the same effect as the demonstrated behavior, but there is no direct correspondence between subcomponents of the observer's actions and the mentor's actions. The term abstract imitation has been proposed for the case where agents imitate behaviors by imitating the mental state of other agents (Demiris & Hayes, 1997).
The study of specific computational models of imitation has yielded insights into the nature of the observer-mentor relationship and how it affects the acquisition of behaviors by observers. For instance, in the related field of behavioral cloning, it has been observed that mentors that implement conservative policies generally yield more reliable clones (Urbancic & Bratko, 1994). Highly-trained mentors following an optimal policy with small coverage of the state space yield less reliable clones than those that make more mistakes (Sammut et al., 1992). For partially observable problems, learning from perfect oracles can be disastrous, as they may choose policies based on perceptions not available to the observer. The observer is therefore incorrectly biased away from less risky policies that do not require the additional perceptual capabilities (Scheffer, Greiner, & Darken, 1997). Finally, it has been observed that successful clones would often outperform the original mentor due to the "cleanup effect" (Sammut et al., 1992). One of the original goals of behavioral cloning (Michie, 1993) was to extract knowledge from humans to speed up the design of controllers. For the extracted knowledge to be useful, it has been argued that rule-based systems offer the best chance of intelligibility (van Lent & Laird, 1999). It has become clear, however, that symbolic representations are not a complete answer. Representational capacity is also an issue. Humans often organize control tasks by time, which is typically lacking in state and perception-based approaches to control. Humans also naturally break tasks down into independent components and subgoals (Urbancic & Bratko, 1994). Studies have also demonstrated that humans will give verbal descriptions of their control policies which do not match their actual actions (Urbancic & Bratko, 1994). The potential for saving time in acquisition has been borne out by one study which explicitly compared the time to extract rules with the time required to program a controller (van Lent & Laird, 1999). In addition to what has traditionally been considered imitation, an agent may also face the problem of "learning to imitate" or finding a correspondence between the actions and states of the observer and mentor (Nehaniv & Dautenhahn, 1998). A fully credible approach to learning by observation in the absence of communication protocols will have to deal with this issue. The theoretical developments in imitation research have been accompanied by a number of practical implementations. These implementations take advantage of properties of different control paradigms to demonstrate various aspects of imitation. Early behavioral cloning research took advantage of supervised learning techniques such as decision trees (Sammut et al., 1992). The decision tree was used to learn how a human operator mapped perceptions to actions. Perceptions were encoded as discrete values. A time delay was inserted in order to synchronize perceptions with the actions they trigger. Learning apprentice systems (Mitchell et al., 1985) also attempted to extract useful knowledge by watching users, but the goal of apprentices is not to independently solve problems. Learning apprentices are closely related to programming by demonstration systems (Lieberman, 1993). Later efforts used more sophisticated techniques to extract actions from visual perceptions and abstract these actions for future use (Kuniyoshi et al., 1994). 
Work on associative and recurrent learning models has allowed techniques in the area to be extended to the learning of temporal sequences. Associative learning has been used together with innate following behaviors to acquire navigation expertise from other agents (Billard & Hayes, 1997). A related but slightly different form of imitation has been studied in the multi-agent reinforcement learning community. An early precursor to imitation can be found in work on sharing of perceptions between agents (Tan, 1993). Closer to imitation is the idea of replaying the perceptions and actions of one agent for a second agent (Lin, 1991; Whitehead, 1991a). Here, the transfer is from one agent to another, in contrast to behavioral cloning's transfer from human to agent. The representation is also different. Reinforcement learning provides agents with the ability to reason about the effects of current actions on expected future utility, so agents can integrate their own knowledge with knowledge extracted from other agents by comparing the relative utility of the actions suggested by each knowledge source. The "seeding approaches" are closely related. Trajectories recorded from human subjects are used to initialize a planner which subsequently optimizes the plan in order to account for differences between the human effector and the robotic effector. This technique has been extended to handle the notion of subgoals within a task. Subgoals are also addressed by others (Šuc & Bratko, 1997). Our own work is based on the idea of an agent extracting a model from a mentor and using this model information to place bounds on the value of actions using its own reward function. Agents can therefore learn from mentors with reward functions different from their own. Another approach in this family is based on the assumption that the mentor is rational (i.e., follows an optimal policy), has the same reward function as the observer, and chooses from the same set of actions. Given these assumptions, we can conclude that the action chosen by a mentor in a particular state must have higher value to the mentor than the alternatives open to the mentor (Utgoff & Clouse, 1991) and therefore higher value to the observer than any alternative. The system of Utgoff and Clouse therefore iteratively adjusts the values of the actions until this constraint is satisfied in its model. A related approach uses the methodology of linear-quadratic control (Šuc & Bratko, 1997). First a model of the system is constructed. Then the inverse control problem is solved to find a cost matrix that would result in the observed controller behavior given an environment model. Recent work on inverse reinforcement learning takes a related approach to reconstructing reward functions from observed behavior (Ng & Russell, 2000). It is similar to the inversion of the quadratic control approach, but is formulated for discrete domains. Several researchers have picked up on the idea of common representations for perceptual functions and action planning. One approach to using the same representation for perception and control is based on the PID controller model. The PID controller represents the behavior. Its output is compared with observed behaviors in order to select the action which is closest to the observed behavior (Demiris & Hayes, 1999). Explicit motor action schema have also been investigated in the dual role of perceptual and motor representations (Matarić, Williamson, Demiris, & Mohan, 1998).
Concluding Remarks

We have described a formal and principled approach to imitation called implicit imitation. For stochastic problems in which explicit forms of communication are not possible, the underlying model-based framework combined with model extraction provides an alternative to other imitation and learning-by-observation systems. Our new approach makes use of a model to compute the actions an imitator should take without requiring that the observer duplicate the mentor's actions exactly. We have shown implicit imitation to offer significant transfer capability on several test problems, where it proves to be robust in the face of noise, capable of integrating subskills from multiple mentors, and able to provide benefits that increase with the difficulty of the problem. We have seen that feasibility testing extends implicit imitation in a principled manner to deal with the situations where the homogeneous action assumption is invalid. Adding bridging capabilities preserves and extends the mentor's guidance in the presence of infeasible actions, whether due to differences in action capabilities or local differences in state spaces. Our approach also relates to the idea of "following" in the sense that the imitator uses local search in its model to repair discontinuities in its augmented value function before acting in the world. In the process of applying imitation to various domains, we have learned more about its properties. In particular, we have developed the fracture metric to characterize the effectiveness of a mentor for a given observer in a specific domain. We have also made considerable progress in extending imitation to new problem classes. The model we have developed is rather flexible and can be extended in several ways: for example, a Bayesian approach to imitation building on this work shows great potential (Price & Boutilier, 2003); and we have initial formulations of promising approaches to extending implicit imitation to multiagent problems, partially observable domains, and domains in which the reward function is not specified a priori. A number of challenges remain in the field of imitation. Bakker and Kuniyoshi (1996) describe a number of these. Among the more intriguing problems unique to imitation are: the evaluation of the expected payoff for observing a mentor; inferring useful state and reward mappings between the domains of mentors and those of observers; and repairing or locally searching in order to fit observed behaviors to an observer's own capabilities and goals. We have also raised the possibility of agents attempting to reason about the information revealed by their actions in addition to whatever concrete value the actions have for the agent. Model-based reinforcement learning has been applied to numerous problems. Since implicit imitation can be added to model-based reinforcement learning with relatively little effort, we expect that it can be applied to many of the same problems. Its basis in the simple but elegant theory of Markov decision processes makes it easy to describe and analyze. Though we have focused on some simple examples designed to illustrate the different mechanisms required for implicit imitation, we expect that variations on our approach will provide interesting directions for future research.
A new test for the Galactic formation and evolution -- prediction for the orbital eccentricity distribution of the halo stars

We present theoretical calculations for the differential distribution of stellar orbital eccentricity in a galaxy halo, assuming that the stars constitute a spherical, collisionless system in dynamical equilibrium with a dark matter halo. In order to define the eccentricity e of a halo star for given energy E and angular momentum L, we adopt two types of gravitational potential, such as an isochrone potential and a Navarro-Frenk-White potential, that could form two ends covering in-between any realistic potential of dark matter halo. Based on a distribution function of the form f(E, L) that allows constant anisotropy in velocity dispersions characterized by a parameter β, we find that the eccentricity distribution is a monotonically increasing function of e for the case of highly radially anisotropic velocity dispersions (β > 0.6), while showing a hump-like shape for the cases from radial through tangential velocity anisotropy (β < 0.6). We also find that when the velocity anisotropy agrees with that observed for the Milky Way halo stars (β = 0.5-0.7), a nearly linear eccentricity distribution of N(e) ∝ e results at e < 0.7, largely independent of the potential adopted. Our theoretical eccentricity distribution would be a vital tool of examining how far out in the halo the dynamical equilibrium has been achieved, through comparison with kinematics of halo stars sampled at greater distances. Given that large surveys of the SEGUE and Gaia projects would be in progress, we discuss how our results would serve as a new guide in exploring the formation and evolution of the Milky Way halo.

INTRODUCTION

Studies of large-scale structures in the universe and fluctuations in the cosmic microwave background strongly favor a Λ-cold dark matter (ΛCDM) cosmology (e.g., Cole et al. 2005; Dunkley et al. 2009). The formation of structures in this cosmology is a process of hierarchical clustering, in the sense that numerous CDM lumps cluster gravitationally and merge together to form larger structures (White & Rees 1978; Blumenthal et al. 1984). Dark halos of galaxy systems are similarly formed via clustering of subhalos as a result of CDM agglomerations that reach the maximum expansion then turn around to collapse in the background expanding medium, but a detailed process leading to the halo formation from primordial density fluctuations is highly nonlinear and is not as simple as the formation of larger structures in the universe (e.g., for review see Ostriker 1993 and Bertschinger 1998). High-resolution ΛCDM simulations for the halo formation generically show that mergers and collisions of subhalos induce the overall collapse and virialize the inner region of host halo, while surviving subhalos orbit as separate entities within the inner virialized region of halo (e.g., Moore et al. 1999; Ghigna et al. 2000; Helmi, White & Springel 2003; Valluri et al. 2007). A majority of stars formed through this build-up of halo are expected to have also experienced the redistribution of energy and momentum that drives the phase mixing or violent relaxation towards the dynamical equilibrium (Lynden-Bell 1967). This leads to an idea that a stellar halo, which can be regarded as a collisionless system, holds the dynamical information just after the last violent relaxation in forming the halo.
We then take an approach to find out the relics of the formation of the Milky Way halo from the kinematics of halo stars. Among many of their kinematic properties available at present and in the near future, the differential distribution of stellar orbital eccentricity N(e) seems to be of special importance. The orbital eccentricity of a star is a quasi-adiabatic invariant (Eggen, Lynden-Bell & Sandage 1962; Lynden-Bell 1963) and is unaffected by the small and slow variation of the gravitational potential that might have occurred after the major formation of halo stars. It is therefore most likely that the shape of N(e) has been conserved until present. With this consideration, comparing the observed shape of N(e) for halo stars with the theoretical one for the halo in dynamical equilibrium, we could explore how far out in the halo the dynamical equilibrium was achieved. Consequently, N(e) serves as a new test of halo formation scenario in a ΛCDM cosmology. As a useful way to derive N(e) theoretically, we consider the orbit of halo stars in assumed gravitational potentials of the halo. In section 2, we present our formulation to calculate N(e) under some plausible assumptions for the halo, and apply it to two extreme gravitational potentials of academic interest. The results for realistic cases are shown for the isochrone potential and for the Navarro-Frenk-White (NFW) potential in section 3. We summarize the results and discuss the prospects of investigating the formation and evolution of the Milky Way halo in section 4.

FORMULATION

We assume that the halo stars constitute a spherical, collisionless system in dynamical equilibrium with a dark halo. Since the dark matter is known to dominate the total mass of the galaxy system, the motion of halo stars is governed by the gravitational potential of dark halo.

Stellar orbital eccentricity in a model halo

When a spherical halo potential V(r) is given with respect to the galaxy center, the energy E and the angular momentum L of a star at the position r with the velocity v are written respectively as

E = |v|²/2 + V(r),  L = |r × v|,   (1)

where r = |r|. The orbital eccentricity of a star is practically defined as

e = (r_apo − r_peri) / (r_apo + r_peri),   (2)

where r_apo and r_peri are the apo- and peri-centric distances, respectively, and are given by two real solutions (r_apo > r_peri) of the following equation:

2[E − V(r)] − L²/r² = 0.   (3)

It is evident from equations (2) and (3) that a pair of (E, L) has a one-to-one correspondence to (E, e), but there is a region of (E, L) in which two real solutions are not allowed and thus, except for the case of circular orbits, the eccentricity cannot be defined. Since such unbound orbits do not form a steady population of stellar halo, we neglect them and exclusively consider stars with bound orbits. Constraints on E and L that allow bound orbits are presented in Appendix A.
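As a numerical illustration of equations (2) and (3), the short sketch below finds r_peri and r_apo by bracketing the two roots of 2[E − V(r)] − L²/r² around its maximum, here for the isochrone potential used in section 3. The choice of units with G = M = b = 1 is an assumption of this sketch, not of the paper.

```python
# Sketch: orbital eccentricity e(E, L) from equations (2) and (3).
import numpy as np
from scipy.optimize import brentq

def V(r):
    """Isochrone potential V(r) = -GM / (b + sqrt(b^2 + r^2)), G = M = b = 1."""
    return -1.0 / (1.0 + np.sqrt(1.0 + r * r))

def eccentricity(E, L, r_max=1e6):
    """Solve 2[E - V(r)] - L^2/r^2 = 0 for r_peri < r_apo, then apply eq. (2)."""
    f = lambda r: 2.0 * (E - V(r)) - (L / r) ** 2
    r_grid = np.logspace(-6, np.log10(r_max), 4000)
    values = f(r_grid)
    i_max = int(np.argmax(values))
    if values[i_max] <= 0.0:
        raise ValueError("no bound, non-circular orbit for this (E, L)")
    r_peri = brentq(f, r_grid[0], r_grid[i_max])   # f < 0 at small r, > 0 at the max
    r_apo = brentq(f, r_grid[i_max], r_grid[-1])   # f < 0 again at large r when E < 0
    return (r_apo - r_peri) / (r_apo + r_peri)

# e -> 1 as L -> 0 at fixed E; e shrinks toward the circular-orbit limit:
print(eccentricity(E=-0.3, L=0.1), eccentricity(E=-0.3, L=0.4))
```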
Differential distribution of stellar orbital eccentricity

Let f(r, v) be the distribution function of halo stars, then the number of halo stars in a phase space volume d³r d³v centered at (r, v) is given by f(r, v) d³r d³v. According to the strong Jeans theorem, the distribution function should be expressed in terms of isolating integrals only (Lynden-Bell 1960, 1962). For a spherical system that is invariant under rotation, it takes a form of either f(E) or f(E, L), depending on whether the stellar velocity dispersion is isotropic or anisotropic, respectively. The velocity dispersion observed for halo stars is radially anisotropic (e.g., Yoshii & Saio 1979; Gilmore, Wyse, & Kuijken 1989). Furthermore, recent observations for halo stars within the distance of 10 kpc away from us show that the shape of velocity ellipsoid is constant and its principal axes are well aligned with the spherical coordinates (Carollo et al. 2007; Bond et al. 2009). If we extrapolate this fact to a whole system, one simple form of the distribution function is

f(E, L) = L^{−2β} g(E),   (4)

where g(E) is a function of E (e.g., Binney & Tremaine 2008). Here, β is a constant value of velocity anisotropy parameter defined as

β = 1 − σ_t² / (2σ_r²),   (5)

where σ_r is the radial velocity dispersion and σ_t is the tangential velocity dispersion projected onto the spherical θ−φ surface. Although β is about 0.5-0.7 observationally (e.g., Bond et al. 2009; Smith et al. 2009; Carollo et al. 2010), we will use it as a constant parameter below. By changing the variables and integrating over the spherical coordinates, the number of stars in d³r d³v reduces to

dN = 8π² L f(E, L) T_r(E, L) dE dL,   (6)

with the radial period of stellar orbit given by

T_r(E, L) = 2 ∫_{r_peri}^{r_apo} dr / √(2[E − V(r)] − L²/r²).   (7)

Since L² is a function of E and e, we here introduce the E-dependent differential eccentricity distribution as

n_β(E, e) ∝ L^{1−2β} T_r(E, L) |∂L/∂e|_E.   (8)

We then express the differential eccentricity distribution as

N_β(e) = ∫ g(E) n_β(E, e) dE.   (9)

It is apparent from this equation that N_β(e) is a weighted sum of n_β(E, e) with a weight function of g(E). Thus, once the gravitational potential V(r) and the velocity anisotropy parameter β are specified, we can formally obtain n_β(E, e), and also N_β(e) after integrating n_β(E, e) over E with its appropriate weight.

Extreme cases of mass distribution

In this subsection, mostly for pedagogical purpose, we consider two extreme cases of mass distribution such as the point mass at the center and the homogeneous distribution in the truncated sphere. These cases allow analytic expression of n_β(E, e), and because it is separable in E and e, N_β(e) can also be obtained except for its normalization. Therefore, these cases are helpful to understand the results for any more realistic cases.

Central point mass

The gravitational potential arising from the central point mass is Keplerian and is given by

V(r) = −GM/r,

where M is the total mass of dark halo and G is the gravitational constant. For bound orbits with E < 0, there are two real and positive solutions for equation (3),

r_{peri, apo} = [GM / (−2E)] (1 ∓ e),

or equivalently, 0 < L < L_cir(E) ≡ GM/√(−2E). The orbital eccentricity is expressed in terms of (E, L) as

e = √(1 + 2EL²/(GM)²),

and the other relevant quantities are neatly expressed as

T_r(E) = 2πGM (−2E)^{−3/2},  L² = (GM)² (1 − e²)/(−2E).

Substitution of these quantities in equations (8) and (9) gives the E-dependent differential eccentricity distribution n_β(E, e) ∝ e (1 − e²)^{−β} and the differential eccentricity distribution N_β(e) ∝ e (1 − e²)^{−β}. Since N_β(e) ∝ n_β(E, e), we normalize N_β(e) such that ∫_0^1 N_β(e) de = 1, and write the normalized form

N_β(e) = 2(1 − β) e (1 − e²)^{−β},   (β < 1).

The results of N_β(e) for several values of β are shown on the left panel of Figure 1. For the case of β = 0 (isotropic velocity dispersion), N_β(e) is exactly proportional to e (Binney & Tremaine 2008) and we call it the linear eccentricity distribution. For 0 < β < 1 (radially anisotropic velocity dispersion), N_β(e) is a rapidly increasing function of e with a peak always at e = 1. On the other hand, for β < 0 (tangentially anisotropic velocity dispersion), N_β(e) shows a hump-like e-distribution around a single peak at e = (1 − 2β)^{−1/2}. We should notice that the linear trend of N_β ∝ e prevails in a range of 0 < e < 0.3 regardless of β, while the behavior of N_β(e) is very sensitive to β in a range of 0.6 < e < 1 and the difference there clearly shows up.
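The point-mass result can be checked numerically. The sketch below (assuming only NumPy; a small trapezoid helper is used for version independence) verifies the unit normalization of N_β(e) for a few values of β and, for a tangentially anisotropic case, the peak location e = (1 − 2β)^{−1/2}.

```python
# Numerical check of the normalized point-mass (Kepler) distribution
# N_beta(e) = 2(1 - beta) e (1 - e^2)^(-beta), valid for beta < 1.
import numpy as np

def N_kepler(e, beta):
    e = np.asarray(e, dtype=float)
    return 2.0 * (1.0 - beta) * e * (1.0 - e * e) ** (-beta)

def trapezoid(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# truncate just below e = 1 to avoid the integrable endpoint singularity
e = np.linspace(0.0, 1.0 - 1e-6, 200001)
for beta in (-1.0, 0.0, 0.5):
    total = trapezoid(N_kepler(e, beta), e)
    # each ~ 1; the beta = 0.5 case converges slowly near e = 1
    print(f"beta = {beta:+.1f}: integral ~ {total:.4f}")

beta = -1.0  # tangential anisotropy: hump with a peak below e = 1
e_peak = e[np.argmax(N_kepler(e, beta))]
print(e_peak, "vs analytic", (1.0 - 2.0 * beta) ** -0.5)   # both ~ 0.577
```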
Truncated homogeneous sphere

A homogeneous density distribution within the truncated sphere is expressed as

ρ(r) = 3M / (4πr_t³)  (r ≤ r_t),  ρ(r) = 0  (r > r_t),

where M is the total mass of dark halo and r_t is the truncation radius. The gravitational potential arising from this density distribution is given by

V(r) = −(GM / 2r_t)(3 − r²/r_t²)  (r ≤ r_t).

We consider only stars with E < E_t ≡ −GM/r_t, which guarantees the stars to be confined inside the truncated radius r_t. Thus, bound orbits within the truncated sphere are allowed if E_min < E < E_t, where we note E_min ≡ (3/2)E_t. In this limited range of E, there are two real and positive solutions for equation (3) if and only if a condition on an auxiliary quantity D, a function of (E, L), is satisfied. The orbital eccentricity is expressed in terms of D, and the other relevant quantities are expressed in terms of (E, L). Consequently, we obtain n_β(E, e) and N_β(e), the latter involving the factor (1 + e²)^{3−2β} (equation 25). As in the point mass model, since N_β(e) ∝ n_β(E, e), we normalize N_β(e) such that ∫_0^1 N_β(e) de = 1. The results of N_β(e) for several values of β are shown on the right panel of Figure 1. For β < 0.5, N_β(e) shows a hump-like e-distribution with a single peak at e = e_peak. For 0.5 < β < 1 − √3/4, however, N_β(e) has two local maxima, such as a broad peak at e = e_peak and a sharp peak at e = 1. The overall behavior monotonically increases with e in a range of 0 < e < e_peak, and is kept more or less flat in the range of e_peak < e < 1. For 1 − √3/4 < β < 1, N_β(e) is a rapidly increasing function of e. For a given value of β, N_β(e) is more weighted at smaller e in the homogeneous model, when compared with the point mass model. In particular, for β = 0 (isotropic velocity dispersion), N_β(e) shows a broad hump-like e-distribution around a peak at e_peak = 0.36 in the homogeneous model, while showing an exactly linear e-distribution in the point mass model. This sensitivity, though between two extreme cases, could be used to discriminate the likely mass distribution in more realistic cases to be considered in section 3.

Effect of central mass concentration

In the cases of central point mass and truncated homogeneous sphere, the shape of n_β(E, e) is the same as N_β(e), because n_β(E, e) is separable in E and e and thus the shape of N_β(e) is unaffected by g(E) in equation (4). This property generally holds when the density distribution in the truncated sphere is given by ρ(r) ∝ 1/r^γ (see Appendix B). The homogeneous model in section 2.3.2 corresponds to γ = 0. Using the cases of γ = 1 (linear potential model) and γ = 2 (singular isothermal model) that are intermediate between two extreme cases considered above, we can examine how N_β(e) depends on the central mass concentration. As shown in Figure 2 for β = 0, there is a clear trend that the e-distribution is peaked at larger e as the halo mass is more centrally concentrated. This trend is also true regardless of the value of β and is helpful in interpreting the results of more realistic models in the next section.

Figure 2. There is a clear trend that the e-distribution is peaked at larger e as the halo mass is more centrally concentrated. Note that the distribution is normalized such that ∫_0^1 N_β(e) de = 1.

ECCENTRICITY DISTRIBUTION OF HALO STARS

Our formulation in the previous section can apply to more general cases of mass distribution, including the isochrone model and the NFW model that could form two ends covering in-between any realistic cases of mass distribution of dark halo.
3.1 Energy-dependent eccentricity distribution n β (ε, e) for the isochrone model The gravitational potential of the isochrone model (Hénon 1959) is given by where M is the total mass and b is the scale length parameter. Obviously, the asymptotic form in the limit of r ≫ b or r ≪ b approaches the point mass model or the homogeneous model, respectively. Thus, this model, though not explaining the flat rotation curve of the galaxy disk at greater distances from the galaxy center, is important to study the intermediate case of mass distribution by adjusting the scale size of the central core. Furthermore, the isochrone model is particularly valuable, because fully analytic expression of n β (E, e) can be obtained. Provided b = 0, we define useful dimensionless variables and effective potential as follows: and Equation (3) then reads This equation has two real and positive solutions if and only if −1 < ε < 0 and 0 < λ < λcir ≡ 2(1 + ε) 2 −ε . We denote the two solutions xperi and xapo, and use of them gives the relevant quantities in terms of ε and e: and Consequently, after tedious algebra, we succeed for the first time to obtain analytic expression of n β (ε, e) as follows: We see that n β (ε, e) is not separable in ε and e. Therefore, unlike the point mass and homogeneous models, the shape of n β (ε, e) depends on ε as well as β. Accordingly, derivation of N β (e) needs full numerical integration of n β (ε, e) over ε with the weight function g(ε) specified. When β = 0, by taking a limit of ε, we obtain and These shapes of e-distribution exactly coincide with those in equations (37) and (38), respectively. As understood from the definition of ε [≡ 2bE/(GM )], the limit of ε → 0 corresponds to b → 0 with E and M fixed, which is equivalent to taking a limit to the point mass model. Likewise, the limit of ε → −1 corresponds to b → ∞, otherwise such limit of ε is not attained with E and M fixed, which is equivalent to taking a limit to the homogeneous model. The shapes of n β (ε, e) for several values of ε and β are shown in Figure 3. For any value of β, there is a general trend such that eccentric orbits become more and more dominant as ε increases. However, a marked β-dependence shows up in the shape of n β (ε, e). When β 0.6, n β (ε, e) has a hump-like e-distribution with a peak at e = e peak . On the other hand, when 0.6 β 1, n β (ε, e) has a monotonically increasing e-distribution with a peak at e = 1. In particular, for β ≈ 0.6 and ε ≃ −1, n β (ε, e) shows something like a trapezoidal shape, similar to the case of β ≈ 0.6 for the homogeneous model (left panel of Figure 1). Furthermore, for β > 0.8, highly eccentric orbits prominently dominate in the e-distribution. In order to understand the situation differently, the plots of e peak at which the e-distribution is peaked for several values of ε and β are shown on the left panel of Figure 5. Here, by taking a limit of ε, we can easily confirm, through comparison of this figure with Figure 1, that lim ε→−0 e peak (β, ε) = e pm peak (β), and lim ε→−1 e peak (β, ε) = e hom peak (β), where superscripts 'pm' and 'hom' correspond to the point mass model and the homogeneous model, respectively. More generally, when β > 0.6, we see that e peak = 1 for any value of ε. On the other hand, when β 0.5, we see that e peak is an increasing function of both ε and β. . Energy-dependent differential distribution of stellar orbital eccentricity n β (ε, e) for the isochrone model. 
In different panels for different values of velocity anisotropy parameter β, shown by lines are the results for dimensionless energy ε = −0.9, −0.7, · · · , −0.1, in steps of 0.2. Note that n β (ε, e) is normalized such that 1 0 n β (ε, e)de = 1. By this normalization, the inclination of n β (ε, e) at e = 0, which is lower for smaller |ε|, helps identify each line. Energy-dependent eccentricity distribution n β (ε, e) for the NFW model Cosmological simulations have been run to reconstruct galaxies from the primordial density fluctuations in the universe. These numerical results have shown that the dark halo has a universal shape of so-called NFW density profile that has little dependence on the cosmology (Navarro, Frenk & White 1997), such as where a is the scale length parameter. This density profile behaves as ρ ∝ 1/r for r ≪ a, while ρ ∝ 1/r 3 for r ≫ a. The associated gravitational potential is of the form Orbital eccentricity distribution of the halo stars 7 Provided a = 0, we define dimensionless variables and effective potential as follows: and Equation (3) then reads This equation indicates a one-to-one correspondence between (ε, λ) and (ε, e), and allows two real and positive solutions if and only if −1 < ε < 0 and 0 < λ < λcir ≡ xc ln(1 + xc) − x 2 c 1 + xc , where xc is the solution for We denote the two solutions xperi and xapo (xapo > xperi), and use of them gives L 2 = 8πGρ0a 4 εx 2 i + xi ln(1 + xi) (xi = xperi or xapo), and We see that n β (ε, e) does not allow analytic expression in terms of ε and β. Accordingly, derivation of n β (ε, e), as well as N β (e) with the weight function g(ε), needs full numerical integration for the NFW model. The results of n β (ε, e) for several values of ε and β are shown in Figure 4. The plots of e peak at which the e-distribution is peaked for several values of ε and β are shown on the right panel of Figure 5. Here, similarly to the isochrone model, by taking a limit of ε, we can easily confirm that lim ε→−0 e peak (β, ε) = e pm peak (β), and lim ε→−1 where superscripts 'pm' and 'lp' correspond to the point mass model and the linear potential model described in Appendix B.1, respectively. Except for slight shift of the e-distribution to have more weight at higher e, overall behavior of n β (ε, e) for the NFW model is very similar to the isochrone model. Such slight shift occurs, because the mass is little more centrally concentrated in the NFW model compared with the isochrone model. The insensitivity to the choice of gravitational potential, as far as it remains realistic, is encouraging, especially when our theoretical e-distribution is to be compared with that observed for stars in the Milky Way halo. Results of N β (e) In the previous subsections, we have derived the Edependent form of n β (E, e) for the respective models of isochrone and NFW. In order to obtain their eccentricity distribution N β (e) in equation (9), we have to specify the weight function of g(E) which can in principle be derived in a self-consistent way (Lynden-Bell 1962, 1963. Here, instead of entering into robustness, however, we take a simple approximation of g(E) as having the form: where A is a constant and σ stands for the radial velocity dispersion σr ∼ 150 km s −1 for the Milky Way halo stars (e.g. Yoshii & Saio 1979;, 2001. We can imagine that halo stars traveling far distantly from the galaxy center with near-zero energy would be captured by adjacent dark halo. Thus, it is reasonable to introduce a truncation energy Et above which g(E) should vanish. 
The NFW model provides a direct reason to include E_t in the analysis. The mass of dark halo within the radius r is naively given by

M(r) = 4πρ₀a³ [ln(1 + r/a) − (r/a)/(1 + r/a)]   (54)

and diverges in the limit of large r. In fact, numerical simulations indicate that the NFW density profile applies only inside a certain boundary radius but does not apply beyond it because of the existence of adjacent dark halos. Such a boundary usually used is the virial radius r_200 within which the averaged density is equal to 200 times the critical density of the universe and the effects by adjacent dark halos are negligible. Thus, it is reasonable to place E_t at V(r_200) and assume that while halo stars with E < E_t stay in the system, those with E > E_t could be unbound and leave the system. From all these considerations, we examine how N_β(e) would be modified with E_t taken into account in the analysis. Here, we set E_t equal to the potential energy V(r_200) and write it in the dimensionless form, where c is the concentration parameter defined as c ≡ r_200/a. Use of the kinematic data of the blue horizontal branch stars in the Milky Way halo and some CDM simulations of a halo of M(r_200) ∼ 10¹² M_⊙ as massive as the Milky Way halo gives c = 3.9-12.5 (Xue et al. 2008), which corresponds to ε_t = −0.4 to −0.2. Thus, a choice of this range of ε_t, together with β = 0.5-0.7 (cf. section 2.2), would be appropriate for our analysis of the Milky Way halo.

Figure 4. Energy-dependent differential distribution of stellar orbital eccentricity n_β(ε, e) for the NFW model. Others are the same as in Figure 3.

We have repeated the calculations of N_β(e) for several values of σ and ε_t in the integration of n_β(ε, e) over ε in equation (9), and find that N_β(e) is insensitive to σ but sensitive to ε_t. The shape of N_β(e) is almost the same as that of n_β(ε_t, e). This is because n_β(ε_t, e) significantly contributes to the integration of n_β(ε, e). For example, in a particular case of β = 0 for the isochrone model, we clearly see such a situation from the explicit expression for N_β(e). Using the typical combinations of (β, ε_t) = (0.5, −0.2), (0.5, −0.4), (0.7, −0.2), and (0.7, −0.4) that more or less agree with observations of the Milky Way halo, the results of N_β(e) for both the isochrone and NFW models are shown in Figure 6. We see from this figure that as far as reasonable values of β and ε_t are adopted, the resulting shape of N_β(e) should be almost linearly proportional to e, except for the deviation only at e > 0.7. This is largely regardless of adopting either the isochrone model or the NFW model. Thus, if the dominant component of the Milky Way halo is in dynamical equilibrium, the total eccentricity distribution of stellar halo is expected to have a linear trend at e < 0.7 similar to our results. On the other hand, the behavior of predicted N(e) at e > 0.7, which still shows little difference between the isochrone and NFW models, is sensitive to β and ε_t. Consequently, such sensitivity can be used for a consistency check of the assumed form of f(E, L). These predictions in the separate regions of e < 0.7 and e > 0.7 are testable, given that large kinematical data of halo stars are available at present from the SEGUE project or in the near future from the Gaia project.

SUMMARY AND DISCUSSION

Hierarchical clustering scenarios of galaxy formation suggest that the major merger of at least several subhalos with comparable masses would occur at the last stage of galaxy formation.
This last major merger would cause the violent relaxation of halo stars and make them in dynamical equilibrium with a dark halo. Based on the assumptions that approximate such a status just after the last violent relaxation (section 1), we have presented theoretical predictions of N (e) for halo stars. This predicted N (e) should be observed for the Milky Way halo if it is an isolated system and the subsequent variation of the potential is quiescent enough to conserve the eccentricity of each star. However, recent nearby observations suggest that at least some part of the Milky Way halo may have originated from accreted satellites, which possibly deviates the observed N (e) from our predictions. For example, if infalling satellites break up and spread their member stars into the field, these stars would show peculiar eccentricity distribution which necessarily imprints the initial condition of the progenitor satellites. In addition, if such satellites locally disturb the halo potential, some in-situ halo stars may have altered their orbits (e.g. Zolotov et al. 2009). With an invention of segregating in-situ halo stars from infalling stars, we might be able to well understand the nature of accretion and distortion of satellites. Numerous authors subdivided halo stars into some 'components' and examined the correlations between chemistry, age and kinematics of stars in each component. Carollo et al. (2010) obtained reliable eccentricities for ∼ 10, 000 halo stars within 4 kpc of the sun and decomposed them into the inner and outer halo components having distinct eccentricity distributions from each other. Since their sample is local and is inherently biased in favour of stars that stay longer in the surveyed region, our formalism, which is designed to predict N (e) of the whole stellar halo, has to be modified for the purpose of fair comparison with their data. Through proper incorporation of effects of such a bias, we can still predict N (e) for a local sample by fully taking into account a probability of finding each of halo stars in the surveyed region. This will be done in a separate paper in preparation. On the other hand, our formalism can directly apply to a global, and therefore less biased, sample of halo stars with reliable orbital eccentricities, such as those from next generation surveys including the Gaia mission. In either case, the analytical approach in the present paper certainly forms a basis that serves as a useful tool for analysing the kinematics of the stellar halo. Large, unbiased database of halo stars would enable us to test whether a given component is in dynamical equilibrium by comparing the observed and predicted shape of N (e). Such comparison would hopefully discover some relaxed components, and their adiabatically conserved shape of N (e) would carry some useful information of the physics of violent relaxation. Moreover, the spatial distribution of these relaxed components would enable us to see how far out in the halo the violent relaxation has exerted and how strongly it has affected the stellar halo. If the information of last violent relaxation, yet to be known observationally, is gained in this way, more precise assessment to the early evolution of the Milky Way would be possible, and our understanding of its formation would greatly be advanced. Our current calculations of N (e) are certainly very simple and can be improved by using more realistic assumptions. For example, we can modify our analysis to allow . 
Differential distribution of stellar orbital eccentricity N β (e) for the isochrone model (left panel) and the NFW model (right panel). Adopted combinations of (β, εt) more or less agree with observations of the Milky Way halo. Note that the normalization factor of N β (e) is arbitrarily chosen so that the nearly linear trend up to e ≈ 0.7 is clearly seen. axisymmetric potentials including a disk-like component as well as a bulge. Preliminary analysis has confirmed that inclusion of a disk-like component would cause no significant change in the linear trend of N (e) described in section 3.3, which will be discussed in a separate paper. Also, our choice of f (E, L) having the form in equation (4) has to be extended to allow the radial dependence of β(r). Further elaborate modeling of N (e) with these theoretical improvements, when applied to future large survey of halo stars, would then provide a promising way of unraveling mysteries of the galaxy formation and evolution in a paradigm of hierarchical clustering in the ΛCDM cosmology. A steady, bound orbit in a gravitational potential V (r) generated by a density distribution ρ(r) is only possible in a subset of energy E and angular momentum L that allows two real and positive solutions for equation (3). We discuss such an allowed region of (E, L) in this appendix. We begin with the effective potential V eff (L; r) = V (r) + L 2 2r 2 . Then, from the definition, we obtain where M (r) is the total mass inside the radius r. Since GM (r)r is a monotonically increasing function of r and it satisfies the allowed range of E with L fixed can be expressed as V eff (L; rc(L)) < E < 0. Here, we define the zero of V (r) so that limr→∞ V (r) = 0. Thus, for any given L, we obtain lim r→∞ V eff (L; r) = 0, which validates that the upper bound of inequality (A7) should be zero. As for the allowed region of L when E is fixed, we obtain 0 < L < Lcir(E), Figure B1. Differential distribution of stellar orbital eccentricity N β (e) in two cases of truncated mass distribution, such as the linear potential model (γ = 1) on the left panel and the singular isothermal model (γ = 2) on the right panel. The results are shown by lines for several values of velocity anisotropy parameter β. If N β (e) near e = 1 sensitively changes at some particular value of β, the results for β ± 0.05 are additionally shown by dotted lines for the purpose of illustrating its sensitivity. Note that N β (e) is normalized such that 1 0 N β (e) + de = 1. Since D, xperi, and xapo depend only on e, n β (E, e) is separable in E and e, so that Thus, the shape of N β (e) is not affected by g(E), like the point mass model and the truncated model with any γ. The results of N β (e) in the singular isothermal model are shown on the right panel of Figure B1. We see that N β (e) shows a monotonically increasing e-distribution for β > 0.45, while having a single peak for β < 0.45.
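The Appendix A construction lends itself to a short numerical sketch: for a given potential, the minimum of the effective potential V_eff(L; r) = V(r) + L²/(2r²) over r gives the lower energy bound for bound orbits at fixed L, and inverting that relation gives L_cir(E). The Kepler test potential and the units G = M = 1 below are assumptions of this sketch, chosen because L_cir(E) = GM/√(−2E) is then known in closed form.

```python
# Sketch of Appendix A: the allowed region of (E, L) via the effective
# potential V_eff(L; r) = V(r) + L^2 / (2 r^2).
import numpy as np
from scipy.optimize import minimize_scalar, brentq

def V(r):
    return -1.0 / r   # Kepler potential, G = M = 1 (assumed test case)

def veff_min(L):
    """V_eff(L; r_c(L)): the minimum of the effective potential over r."""
    res = minimize_scalar(lambda r: V(r) + L * L / (2.0 * r * r),
                          bounds=(1e-8, 1e8), method="bounded")
    return res.fun

def L_cir(E):
    """Largest angular momentum admitting a bound orbit at energy E < 0,
    obtained by solving V_eff(L; r_c(L)) = E for L (monotonic in L)."""
    return brentq(lambda L: veff_min(L) - E, 1e-8, 1e4)

E = -0.3
print(L_cir(E), "vs analytic", 1.0 / np.sqrt(-2.0 * E))  # both ~ 1.291
```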
Novel "restoration of function" mutagenesis strategy to identify amino acids of the delta-opioid receptor involved in ligand binding. A novel "restoration of function" mutagenesis strategy was developed to identify amino acid sequence combinations necessary to restore the ability to bind delta-selective ligands to an inactive delta/mu receptor chimera in which 10 amino acids of the third extracellular loop of the delta receptor were replaced by the corresponding amino acids from the mu receptor (delta/mu291-300). This chimera binds a nonselective opioid ligand but is devoid of affinity for delta-selective ligands. A library of mutants was generated in which some of the 10 amino acids of the mu sequence of delta/mu291-300 were randomly reverted to the corresponding delta amino acid. Using a ligand binding assay, we screened this library to select mutants with high affinity for delta-selective ligands. Sequence analysis of these revertants revealed that a leucine at position 300, a hydrophobic region (amino acids 295-300), and an arginine at position 291 of the human delta-opioid receptor were present in all revertants. Single and double point mutations were then introduced in delta/mu291-300 to evaluate the contribution of the leucine 300 and arginine 291 residues for the binding of delta-selective ligands. An increased affinity for delta-selective ligands was observed when the tryptophan 300 (mu residue) of delta/mu291-300 was reverted to a leucine (delta residue). Further site-directed mutagenesis experiments suggested that the presence of a tryptophan at position 300 may block the access of delta-selective ligands to their docking site. enkephalin ( agonists) was antagonized by ␤-funaltrexamine and naloxonazine ( antagonists) but not by ICI-174864 (␦ antagonists). Conversely, the antinociception produced by intracerebroventricular injection of DPDPE 1 (␦ agonist) was antagonized by ICI-174864 but not by ␤-funaltrexamine and naloxonazine. Moreover, studies have shown that an antisense oligodeoxynucleotide to the cloned ␦-opioid receptor given intrathecally lowers ␦ but not or spinal (20) and central (21) analgesia. These studies confirm, at the molecular level, traditional pharmacological studies implying distinct receptor mechanisms for ␦, , and analgesia. The development of selective and potent ␦-opioid agonists therefore presents the potential for the discovery of novel analgesic agents with reduced accompanying side effects. The recent cloning of the genes encoding the opioid receptors showed that they are members of the seven transmembrane G protein-coupled receptor family (22)(23)(24)(25)(26)(27)(28). There is about 60% amino acid identity among the sequences of the three subtypes , ␦, and . The highest sequence homology between the three opioid receptor subtypes resides in the transmembrane domains and the intracellular loops. Lower sequence homology is seen in the N and C termini, transmembrane domain 4, and extracellular loops 2 and 3. It is likely that some of these divergent regions contain elements responsible for the discrimination among these receptors by the subtypeselective opioid ligands. The construction of chimeric receptors is a powerful approach to investigate the structural basis for the subtype specificity of G protein-coupled receptor (29 -34). Previous studies from our group (35) and others (36 -40) have demonstrated, using chimeras, the importance of the third extracellular loop of the ␦-opioid receptor for ␦ ligand selectivity. 
Most mutagenesis experiments designed to analyze the structure and function of G protein-coupled receptor involve a strategy based on the loss of function. Thus, even in well controlled studies, the interpretation of these experiments is often difficult since a loss of function may result from various causes. A mutant receptor may lack affinity for a ligand because a critical residue of the binding pocket has been hit but also because the mutated receptor is unable to traffic efficiently to the cell surface or because the mutation induces protein misfolding, allosteric changes, or gross structural defect. Furthermore, the direct determination of G protein-coupled receptor three-dimensional structure is hampered by technical difficulties limiting their overexpression and purification in quantities that would permit crystallographic studies. Today, only relatively low resolution structural information has been obtained for the bacteriorhodopsin and bovine rhodopsin from two-dimensional cryo-electromicroscopic experiments (41,42). For these reasons, we have designed a mutagenesis strategy based on the restoration of a lost function to identify amino acid sequence combinations critical to confer ␦-selective ligand binding to an opioid receptor. EXPERIMENTAL PROCEDURES Construction of the Library of Mutants-pcDNA3-hDOR consists of a 1.2-kilobase cDNA EcoRI-XhoI fragment of the ␦-opioid receptor (23) sub-cloned at the EcoRI-XhoI site of pcDNA3 (Invitrogen). Using unique site elimination (USE) mutagenesis (43), pcDNA3-hDOR was mutated to pcDNA3-␦/291-300 by replacing 10 amino acids (positions 291-300) of the third extracellular loop of the ␦ receptor with the corresponding amino acids from the receptor. The selection primer was designed to mutate the unique PvuI site of pcDNA3-hDOR to a EcoRV site (5Ј-GCT CCT TCG GTC CTC GAT ATC TTG TCA GAA GTA AGT TGG C-3Ј) and primer ␦/291-300 (5Ј-GTC TGG ACG CTG GTG GAC ATC GAC CCA GAA ACT ACG TTC CAG ACT GTT TCT TGG CAC CTG TGC ATC GCG CTG GGT TAC-3Ј) was used to produce the pcDNA3-␦/291-300 chimera. The library of mutants of the ␦/291-300 chimera was produced by USE mutagenesis (43) using pcDNA3-␦/291-300 as the parental vector. A degenerated primer, ␦/291-300.degenerated (5Ј-GTC TGG ACG CTG GTG GAC ATC GAC C c/g A g/c a/g A a/g c/a T a/c CG TT c/g c/g a/t G a/g c/t T G t/c T t/g CT T g/t G CAC CTG TGC ATC GCG CTG GGT TAC-3Ј) was designed to randomly and independently revert amino acids of the sequence to the original ␦ sequence (16,384 possible combinations). Each degenerated position contained an equal ratio of the nucleotide from the or the ␦ sequence. This degenerated primer was used with a selection primer EcoRV to PvuI (5Ј-GCT CCT TCG GTC CTC CGA TCG TTG TCA GAA GTA AGT TGG C-3Ј) to perform a mutagenesis reaction on pcDNA3-␦/291-300. This synthesis mixture was transformed into Escherichia coli DH5␣ cells, and pools of clones were randomly selected. Plasmid DNA from each pool was isolated using QIAprep 8 plasmid kit (Qiagen, Chatsworth, CA) and used for transfection into HEK 293s cells. Cell Culture and Transfection Procedure-Human embryonic kidney 293s cells (obtained from Michael Matthew, Cold Spring Harbor) were grown in Dulbecco's modified Eagle's medium supplemented with 10% fetal calf serum. Cells were transiently transfected according to the procedure of Chen and Okayama (44). Transfections were performed using 15 g of expression vectors and 1 ϫ 10 6 cells per 25-cm 2 flasks. 
Binding of ␦-selective ligands (SNC-121 and DPDPE) to the transfected cells was monitored 48 h after transfection. Screening for Revertant Mutants That Bind ␦-Selective Ligand-Pools containing 50 clones each were screened for the presence of revertant mutants with affinity for ␦-selective ligands using a radioactive ligand binding assay. Cells transfected with pools of the library were assayed 48 h post-transfection for the binding of ␦-selective ligands [ 3 H]DPDPE (peptide agonist) and [ 3 H]SNC-121 (non-peptide agonist) (45). Glycerol stocks of E. coli transformants corresponding to positive pools were partitioned into smaller pools of 10 clones using a row/column strategy. One hundred colonies from each positive pool were inoculated on a Petri dish using 10 rows ϫ 10 columns pattern. After overnight incubation, the 10 colonies from each row and each column were pooled into 5 ml of Luria-Bertani (LB) broth and incubated overnight at 37°C. DNA from each pool was prepared as described and transfected into 293s cells for ligand binding analysis. Colonies at the intersection of a positive column and a positive row were selected for sequencing and further pharmacological characterization. The sequence of the revertants was determined by dideoxy nucleotide chain termination method using T7-DNA polymerase (Pharmacia Biotech Inc.) and ␣-35 S-dATP (DuPont NEN). Radioligand Binding Assay-For the receptor binding study, HEK 293s cells expressing pools of mutant receptors were harvested 48 h after transfection and resuspended in 1.5 ml of membrane buffer (50 mM Tris-HCl, pH 7.4, 320 mM sucrose). Cells were then frozen/thawed, and an aliquot was used for the radioligand binding assay. Cells (50 l) were incubated in a final volume of 150 l of binding buffer ( 1. Experimental strategy for the restoration of function analysis. ␦/291-300 chimera that has lost the ability to bind ␦-selective ligand was used as a template in a random mutagenesis reaction. We used a degenerated primer to randomly revert back to the ␦ sequence some of the residues located between positions 291 and 300. This mutagenesis generated a library of more than 16,000 mutants that were separated in pools of 50 clones. The plasmid DNA from each pool of clones was isolated and transfected into HEK 293s cells. The presence of a receptor mutant within a pool was detected if the transfected cells expressed a binding site for the selective ligand. The pool was then gradually split to isolate the clone responsible for the ligand binding activity. The DNA sequence of many revertant clones was determined and compared to identify structural features common to such revertant clones. as the difference between binding in the absence or presence of an excess of unlabeled naloxone (10 M). Curve fitting and analysis of the binding data were performed using the GraphPad Prism program version 1.03 (1994). Three-dimensional Modeling-The three-dimensional model of human ␦-opioid receptor was constructed following a general procedure to build G protein-coupled receptors. There are three steps in this procedure. First, we identified the transmembrane helical domains from sequence alignments of the opioid receptor subfamily. Using the identified sequences, we built the initial helices bundle and then searched for the maximum interactions among these seven helices using a mixed molecular dynamics and conformational search procedure with the restraints from the projection density maps of rhodopsin (41). 
Finally we added the extracellular loops obtained from the Protein Data base based on the sequence homology analysis. The sequence of human ␦-opioid receptor (47) (Genbank P41143) was first submitted to the TMAP procedure of EMBL (48) to search for the opioid receptor family and to identify the transmembrane helical regions. The assumption that the arginine and lysine residues are most likely at the end of helices (49) was further used to adjust the helical regions. The initial helix building, the sequence homology, and the final structure refinement were performed using Quanta/CHARMM (Biosym/MSI). The mixed molecular dynamics and conformational search procedure was developed in-house. RESULTS Experimental Strategy- Fig. 1 illustrates the mutagenesis strategy we have designed to identify residues critical for the binding of ␦-selective ligands. This strategy relied on the restoration of a function rather than the loss of a function. A chimeric receptor unable to bind the ␦-selective ligands was used as template in a mutagenesis reaction. The amino acid sequence over the region (Fig. 1) was mutated using a degenerated oligonucleotide. The resulting library of mutants was then separated in pools of 50 clones. The plasmid DNA from each pool of clones were isolated and transfected into HEK 293s cells. The presence of a receptor mutant within a pool was detected if the transfected cells expressed a binding site for the selective ligand. The pool was then gradually split to isolate the clone responsible for the ␦-selective ligand binding activity. The DNA sequences of revertant clones were determined and compared to identify common structural features. Construction of the ␦/291-300 Chimera and Random Mu-tagenesis-Evidence from different groups has suggested that the third extracellular loop of ␦-opioid receptors is involved in the binding of selective ligands (35)(36)(37)(38)(39)(40)50). A chimeric ␦-opioid receptor in which 10 amino acids of the third extracellular loop (amino acids 291 to 300) were replaced by the corresponding amino acids from the receptor was constructed. HEK 293s cells were transfected with the plasmid DNA coding for the chimera, and radioligand binding assays using the nonselective Table I. This chimera binds the nonselective opioid ligand bremazocine with the same affinity as the wild-type receptor ( Fig. 2A and Table I Fig. 2A and Table I). We then randomly and independently substituted residues of this chimera with the corresponding ␦ residues. To this end, we used a degenerated primer (␦/291-300.deg) to mutate this 10-amino acid region. The primer was designed to allow each residue of this stretch to be either of the or the ␦ sequence. Owing to the design of the primer, some positions could also code for non-and non-␦ residues (see Fig. 2B). Using the pcDNA3-␦/291-300 plasmid as a template and ␦/291-300.degenerated as the mutagenic primer, we performed a mutagenic synthesis theoretically generating all the possible combinations of , ␦, or non-, non-␦ residues over the 10 amino acids of the third extracellular loop of ␦/291-300 (Fig. 2B). This represents 16,384 possible combinations. To evaluate the frequency of amino acid substitution over the targeted 10-amino acid stretch, 50 clones of the mutant receptor library were randomly selected and subjected to DNA sequencing. 
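The row/column pooling screen described under "Experimental Procedures" lends itself to a compact illustration: each of the 100 clones on a plate is assayed twice, once in its row pool and once in its column pool, and candidate revertants are called at positive-row/positive-column intersections. The sketch below is ours, not the paper's; function names are illustrative.

```python
# Illustrative sketch of the 10 x 10 row/column pool deconvolution.
from itertools import product

def candidate_clones(positive_rows, positive_cols):
    """Grid positions to pick for sequencing from one 100-clone plate.
    With several positive rows AND columns, some intersections may be
    false candidates; these are resolved downstream by sequencing and
    binding assays on the individual clones."""
    return sorted(product(positive_rows, positive_cols))

# Example: if row 3 plus columns 1 and 7 show delta-selective binding,
# clones (3, 1) and (3, 7) go on to sequencing:
print(candidate_clones({3}, {1, 7}))   # -> [(3, 1), (3, 7)]
```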
Sequence analysis of the clones showed that 90% of these clones were mutated, with 48.3% amino acid substitution at positions where a combination of two residues was possible; the percentage of substitution increased to 76% at positions where a combination of four different residues was possible (Fig. 3). These results indicate that amino acid substitution occurred randomly and without any preference for one or the other sequence.

FIG. 2. Schematic representation of the hDOR, the δ/291-300 chimera, and mutant derivatives. The δ/291-300 chimera consists of the δ-opioid receptor in which 10 amino acids of the third extracellular loop (amino acids 291-300) were replaced by the corresponding amino acids of the µ receptor. Amino acid sequences derived from the µ or δ receptor or non-µ/non-δ amino acids are indicated for the δ receptor, the chimera, or the mutant receptors. The single-letter amino acid code is used. Numbers next to mutated amino acids refer to their position within the hDOR sequence.

Screening of the Library-Preliminary experiments were performed to determine the size of the pools in which a single revertant would be detected using radioligand binding assays. We transfected HEK 293s cells with different dilutions of the wild-type hDOR expression vector corresponding to pools of 1-10,000 clones that would contain a single colony encoding a wild-type hDOR receptor. In this experiment, we observed that statistically significant specific binding can be detected using pools of 500 clones (one of which being hDOR).

Sequence Analysis of the Revertant Mutants-The sequences of the revertant mutants were determined and are shown in Fig. 4. Analysis of the sequences of these revertants revealed that an arginine at position 291 and a leucine at position 300 (from the δ sequence) were present in all the revertants (Fig. 4). Amino acids from either the δ, µ, or non-µ/non-δ sequence were found at positions 292-294, suggesting that these residues are not critical for the binding of δ-selective ligands (Fig. 4). Moreover, all revertants had acquired a stretch of hydrophobic residues at positions 295-300 (valine, alanine, and leucine), a characteristic of the δ receptor in this region (Figs. 4 and 5). The hydrophobic residues found at positions 295-300 were either from the δ sequence, the µ sequence, or from non-δ/non-µ sequence.

Ligand Binding Properties of Single and Double Point Mutants of the δ/291-300 Chimera-To evaluate the independent or simultaneous contribution of the two amino acids leucine 300 and arginine 291 to the binding of δ-selective ligands, single and double point mutations were generated in the δ/291-300 chimera. Using this chimera as the template, tryptophan 300 was reverted to a leucine residue (δ/291-300(W300L)), proline 291 was reverted to an arginine residue (δ/291-300(P291R)), or both mutations were introduced simultaneously (δ/291-300(P291R/W300L)). As determined by saturation binding experiments, all these mutant receptors bind [³H]bremazocine with the same affinity as the wild-type receptor (Table II). Double reversion to the δ sequence of the residues located at positions 291 and 300 (δ/291-300(P291R/W300L)) did not produce a further increase in affinity toward δ-selective ligands.

Three-dimensional Modeling of hDOR and Position of the Critical Residues-Three-dimensional computer modeling was used to gain some insight into the orientation of the residues that are present in all of the revertant mutants (Fig. 6).
The three-dimensional model of the human δ-opioid receptor was constructed following a general procedure for G protein-coupled receptors that has been described under "Experimental Procedures." In this model, the seventh transmembrane domain starts at valine 296, which is 5 residues ahead of leucine 300, and these 2 residues are separated by a hydrophobic region. According to our model, the arginine localized at position 291 (Arg-291) (shown in yellow in Fig. 6) points toward the outside of the receptor, suggesting that arginine 291 does not interact directly with the ligand. The leucine localized at position 300 (Leu-300) (shown in yellow in Fig. 6) faces the inner side of the binding pocket and could directly interact with the δ-selective ligand SNC-121, which is represented in red in this figure. The hydrophobic region from amino acids 295-300, represented in green in Fig. 6, is localized at the top of the seventh transmembrane domain.

FIG. 5. Hydrophilicity analysis of the revertant mutants. Hydrophilicity analyses of the amino acid sequences of the revertants were performed using the MacVector program (version 4.1.4) from Kodak International Biotechnologies Inc. Values above the axis denote hydrophilic regions that may be exposed on the outside of the molecule; values below the axis indicate hydrophobic regions that tend to be buried inside the molecule or inside other hydrophobic environments such as membranes. We have used the Kyte and Doolittle scale with a window size of seven residues and an amphiphilicity window size of 11. Underlined amino acids are from the δ sequence. Lowercase amino acids are from non-δ/non-µ sequence.

DISCUSSION

In this paper we describe the design and use of a "restoration of function" mutagenesis strategy to identify residues of the human δ-opioid receptor involved in the binding of subtype-selective ligands. Leucine 300 has been identified as a critical residue, and we propose that residues at this particular position in other opioid receptor subtypes may play a role in the exclusion of δ-selective ligands. First, we generated a chimeric receptor (δ/291-300) in which 10 amino acids of the third extracellular loop of the human δ-opioid receptor were replaced by the corresponding amino acid sequence of the µ-opioid receptor. This protein binds nonselective opioid ligands but is devoid of affinity for δ-selective ligands. Our results are in agreement with results from previous studies using µ/δ or δ/µ chimeric receptors, which have shown that δ-selective ligands interact mainly with the region containing the sixth transmembrane domain and the third extracellular loop of the δ-opioid receptor (35-40, 50). In this study, we have delimited this region to 10 amino acids located between the arginine residue at position 291 and the leucine residue at position 300. Using this chimeric construct as the template, we generated a library theoretically containing 16,384 mutants in which combinations of amino acids of the third loop were reverted to the corresponding δ sequence. Next, we used radioactive δ-specific ligands to select from this receptor library mutants that had regained the ability to bind δ-selective ligands with high affinity. Using this novel strategy, we showed that a leucine at position 300, a hydrophobic region (amino acids 295-300), and an arginine at position 291 of the human δ-opioid receptor were present in all revertants, suggesting a possible role for these residues in the binding of δ-selective ligands.
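For reference, hydropathy profiles of the kind shown in Fig. 5 follow from a standard Kyte-Doolittle sliding-window calculation (window of seven residues, as in the caption). The sketch below uses the published Kyte-Doolittle scale; the example sequence is invented for illustration, and note that MacVector plots hydrophilicity, so its sign convention is inverted relative to this hydropathy score.

```python
# Kyte-Doolittle hydropathy scale (Kyte & Doolittle, 1982).
KD = {'I': 4.5, 'V': 4.2, 'L': 3.8, 'F': 2.8, 'C': 2.5, 'M': 1.9,
      'A': 1.8, 'G': -0.4, 'T': -0.7, 'S': -0.8, 'W': -0.9, 'Y': -1.3,
      'P': -1.6, 'H': -3.2, 'E': -3.5, 'Q': -3.5, 'D': -3.5, 'N': -3.5,
      'K': -3.9, 'R': -4.5}

def hydropathy(seq: str, window: int = 7) -> list[float]:
    """Mean Kyte-Doolittle score over a sliding window.
    Positive values are hydrophobic; hydrophilicity plots invert the sign."""
    scores = [KD[aa] for aa in seq.upper()]
    return [sum(scores[i:i + window]) / window
            for i in range(len(scores) - window + 1)]

# Invented ten-residue stretch with a hydrophobic C-terminal half,
# mimicking the 295-300 character of the revertants.
print(hydropathy("RRITRLVLVV"))
```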
The binding characteristics of the δ/291-300 chimera demonstrate that replacing amino acids 291-300 of the δ-opioid receptor by the corresponding amino acids of the µ receptor abolishes δ-selective binding while preserving nonselective opioid ligand binding. This suggests that the overall structure of the chimera is preserved and that δ amino acids 291-300 contribute to δ selectivity either by making specific contacts with the δ ligand or by inducing a conformational change in the receptor that would favor migration of the δ ligand to a binding pocket located more deeply in the receptor. Another hypothesis, developed by Metzger and Ferguson (51), could also apply to this model. They suggest that opioid ligands bind to their receptors in a pocket formed by the transmembrane helices and that this pocket is common to all opioid receptor subtypes. Selectivity would be conferred by the extracellular loops, which would act as a gate to allow the passage of certain ligands while excluding others. In our model, residues located at positions 291-300 of the chimera could inhibit the passage of δ-selective ligands to the transmembrane binding pocket.

Comparison of the amino acid sequences of the selected revertants allows a number of observations to be made. All revertants have substituted tryptophan 300 and proline 291 (µ sequence) of the δ/291-300 chimera with a leucine and an arginine (δ sequence), respectively, suggesting that these positions might play a role in determining δ specificity. To more precisely define the contribution of leucine 300 and/or arginine 291 to the restoration of δ-selective ligand binding, these residues were mutated singly or in combination in the δ/291-300 chimera. δ/291-300 is devoid of any detectable affinity for δ ligands. The reversion of tryptophan 300 (µ residue) to a leucine (δ residue) in the construct (δ/291-300(W300L)) partially restores the affinity for both δ-selective ligands DPDPE (Ki = 72 nM) and SNC-80 (Ki = 102 nM) (Table II). Nevertheless, these Ki values remain 15 times higher than those observed for the wild-type hDOR. The presence of a leucine at position 300 is not an absolute requirement for δ-selective ligand binding, since a mutant of the δ-opioid receptor in which this leucine residue is substituted for an alanine binds SNC-80 with wild-type affinity.² It appears that the absence of tryptophan at position 300 is more important than the presence of a leucine. It is conceivable that the presence of a bulky tryptophan residue at position 300 blocks the access of δ ligands to their docking site. Our three-dimensional model of the receptor (Fig. 6) suggests that the leucine at position 300 points toward the inside of the binding pocket. Therefore its replacement by a tryptophan would obstruct access to the central pore of the receptor, where the ligand docking site is likely located. These observations are in agreement with a recently proposed hypothesis (51) suggesting that selectivity within the opioid receptor family may be imparted through a mechanism of exclusion, rather than through specific pharmacophore recognition within the extracellular loops. Single reversion of proline 291 (µ residue) to arginine (δ residue) is not sufficient to restore the binding of δ-selective ligands (δ/291-300(P291R)) (Table II). The tryptophan residue at position 300 is still present in this construct and may inhibit binding of δ-selective ligands.
However, when mutations reverting tryptophan 300 to a leucine and proline 291 to an arginine are introduced simultaneously in the δ/291-300 chimera, there is no increase in binding affinity as compared with the single reversion of tryptophan 300 to leucine (δ/291-300(W300L)). This result indicates that, in this sequence context, arginine 291 does not improve δ-selective binding. Therefore, the possible involvement of arginine 291 in the binding of δ-selective ligands remains unclear and has yet to be elucidated. A mutant of hDOR with an alanine residue at position 291 instead of an arginine binds the δ-selective ligands DPDPE and SNC-80 with wild-type affinity, suggesting that arginine 291 (δ residue) is not critical for the binding of δ-selective ligands (35). However, the adjacent residue at position 292 is also an arginine, which may compensate for the substitution at position 291. This interpretation is supported by the observation of Wang and co-workers (50), who showed that a double mutation of arginines 291 and 292 abolishes the ability to bind DSLET (a δ-selective ligand) while retaining nonselective ligand binding properties.

We observed that residues 295-300 were hydrophobic in all the selected revertants. This result is supported by the work of Valiquette et al. (35), who showed that the valine 296 and valine 297 residues of the hDOR are involved in the binding of δ-selective ligands. The present study suggests that it is the overall hydrophobic character of this region, rather than its specific primary amino acid sequence, that is important for δ-selective binding, since the primary sequence of most revertants is divergent from the δ receptor sequence. Amino acids 292-294 do not seem critical for the binding of δ-selective ligands. Indeed, binding of [³H]DPDPE and [³H]SNC-121 is observed with mutants bearing δ, µ, or non-δ/non-µ amino acids at positions 292, 293, and 294, suggesting that strict residue identity is not required at these positions for δ-selective binding.

The restoration of function strategy we have used presents some advantages over the traditional "loss of function" strategy. In the traditional mutagenesis strategy, the residues identified as critical are those causing a loss of binding function when mutated. In interpreting the results from such experiments, one needs to explain the reasons for the loss of function, which could be due to the substitution of a residue essential for ligand binding, a low level of expression of the mutant, a decrease in the stability of the mutant receptor, or inefficient traffic to the cell surface. By using careful controls, one can eliminate possible explanations, but there often remain some possibilities for misinterpretation. Unlike the traditional mutagenesis strategy, which analyzes the contribution of a single amino acid, the restoration of function strategy permits the identification of multiple combinations of residues that allow a specific function to be restored. Moreover, this positive approach allows us to identify nonessential positions or positions that can tolerate various substitutions. We thus observed in this study that residues at positions 292-294 were not critical for the binding of δ-selective ligands, since µ, δ, or non-µ/non-δ residues with different physicochemical properties were found at these positions. Finally, this positive approach can identify specific physicochemical properties (like hydrophobic character) of a region required for the function.
This situation is observed when residues are not reverted to a specific sequence but to residues sharing similar physicochemical characteristics. In this study, we have developed a novel and efficient method for the structure-function analysis of the opioid receptors, and possibly of any G protein-coupled receptor. This method is based on a positive approach that allows identification of positions within the receptors that are essential, deleterious, or neutral for the interaction with different ligands.
Corbino-geometry Josephson weak links in thin superconducting films

I consider a Corbino-geometry SNS (superconducting-normal-superconducting) Josephson weak link in a thin superconducting film, in which current enters at the origin, flows outward, passes through an annular Josephson weak link, and leaves radially. In contrast to sandwich-type annular Josephson junctions, in which the gauge-invariant phase difference obeys the sine-Gordon equation, here the gauge-invariant phase difference obeys an integral equation. I present exact solutions for the gauge-invariant phase difference across the weak link when it contains an integral number N of Josephson vortices and the current is zero. I then study the dynamics when a current is applied, and I derive the effective resistance and the viscous drag coefficient; I compare these results with those in sandwich-type junctions. I also calculate the critical current when there is no Josephson vortex in the weak link but there is a Pearl vortex nearby.

I. INTRODUCTION

Thin-film annular Josephson weak links have been proposed 1-3 as a test bed for the observation of the influence of the Berry phase 4 on the dynamics 5 of a vortex trapped in the weak link. Recent experiments have been carried out by R. H. Hadfield et al. 6 in Corbino-geometry thin-film annular Josephson weak links, in which the weak links are in the same plane as the electrodes. The weak links were fabricated using a focused-ion-beam technique in a superconductor/normal-metal (Nb/Cu) bilayer to mill a 50 nm trench in the superconducting layer to form a weak-link SNS junction.

In the following, I theoretically examine the properties of a thin-film annular Josephson weak link in an idealized Corbino geometry, in which current enters at the origin, flows outward, passes through an annular Josephson weak link, and leaves radially. The topological differences between annular weak links and straight weak links of finite length produce striking differences in behavior. For example, since only integral numbers N of flux quanta can be present in annular weak links, their critical currents are zero when N ≠ 0, whereas arbitrary amounts of flux can enter finite-length weak links, such that their critical currents are usually continuous functions of the applied magnetic field. I consider here only thin films of thickness d less than the London penetration depth λ, in which the current density j is practically uniform across the thickness, and the characteristic length governing the spatial distribution of the magnetic field is the Pearl length, 7

Λ = 2λ²/d.   (1)

Figure 1 shows the Corbino-geometry SNS Josephson weak link considered. Current, supplied to the inner superconducting (S) film at the origin, flows radially outward, passes through the annular weak link (N) of inner and outer radii R− = R − dN/2 and R+ = R + dN/2, where dN ≪ R, and continues to flow radially outward through the outer superconducting (S) film. For simplicity, I consider only the case for which Λ ≫ R. When a Josephson vortex is trapped in the weak link or a Pearl vortex 7 is situated in the vicinity of R, the magnetic flux φ₀ = h/2e carried up through the film is spread out over an area of order πΛ², so that the corresponding magnetic flux density is very weak. Although we can neglect the magnetic field generated by the vortex, it is essential to take into account the spatial distribution of the current density j or the sheet-current density K = jd.
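To get a feel for the length and field scales involved, here is a short numeric sketch; λ, d, and R below are illustrative assumptions, not values from the text.

```python
import math

PHI0 = 2.067833848e-15  # flux quantum phi_0 = h/2e, in Wb

# Illustrative assumptions (not values from the text):
lam = 200e-9   # London penetration depth, 200 nm
d = 10e-9      # film thickness, 10 nm (d < lambda, as required)
R = 0.5e-6     # weak-link radius, 0.5 um

Lambda = 2 * lam**2 / d                   # Pearl length, Eq. (1)
B_scale = PHI0 / (math.pi * Lambda**2)    # flux density of one spread-out vortex

print(f"Lambda = {Lambda*1e6:.1f} um, so Lambda/R = {Lambda/R:.0f} >> 1")
print(f"B ~ phi0/(pi*Lambda^2) = {B_scale*1e6:.1f} uT (very weak)")
```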
In thin-film junctions or weak links 8-10 there is a second important length scale, ℓ, which characterizes the spatial variation of the gauge-invariant phase across the junction. 11-14 In SI units it is set by jc (assumed to be independent of θ in Fig. 1), the maximum Josephson current density that can flow radially as a supercurrent through the weak link, and Kc = jc·d, the maximum Josephson sheet-current density. The case ℓ ≫ 2πR corresponds to the small-junction limit in straight finite-length junctions, and ℓ ≪ 2πR to the large-junction limit. 15

The main goals of this paper are to (a) show that when there are N Josephson vortices trapped in the weak link, the critical current Ic is zero for all values of the ratio ℓ/2πR, (b) present exact static solutions for the θ dependence of the gauge-invariant phase difference for arbitrary N for all values of the ratio ℓ/2πR when the applied current is zero, (c) examine the dynamics when a current is applied, and (d) show how the critical current density is affected by the presence of a nearby Pearl vortex when there is no Josephson vortex trapped in the weak link.

In Sec. II, I derive the basic equation for the gauge-invariant phase difference φ(θ) across the weak link and note that there are three additive contributions to φ′(θ) = dφ(θ)/dθ to be considered. I examine in Sec. III the contribution due to N flux quanta in the weak link, in Sec. IV the contribution due to a Pearl vortex pinned nearby, and in Sec. V the contribution due to Josephson currents. In Sec. VI, I derive the integral equations connecting φ′ and sin φ. I present exact solutions for the gauge-invariant phase difference in a thin-film annular Josephson weak link containing a single Josephson vortex (N = 1) in Sec. VII and an arbitrary number N of Josephson vortices in Sec. VIII, and for all cases I work out some consequences for the vortex dynamics when a net current I is applied. I calculate in Sec. IX the critical current of the annular weak link when there is a Pearl vortex nearby, and I briefly summarize all results in Sec. X. Appendix A contains general expressions for the vector potential and sheet-current density generated by N flux quanta in a narrow circular slot of radius R, Appendix B contains details of the Josephson-current-generated sheet current, and Appendix C presents a brief comparison with the properties of sandwich-type annular junctions.

II. GAUGE-INVARIANT PHASE DIFFERENCE

In the context of the Ginzburg-Landau (GL) theory, 16,17 the superconducting order parameter can be expressed as Ψ = Ψ₀ f e^{iγ}, where Ψ₀ is the magnitude of the order parameter in a uniform sample, f = |Ψ|/Ψ₀ is the reduced order parameter, and γ is the phase. Let us assume that the induced or applied current densities are so weak that the suppression of the magnitude of the superconducting order parameter is negligible, such that f = 1. For a thin film in which d < λ, the second GL equation (in SI units) relates the sheet-current density K = jd to the phase gradient ∇γ and the vector potential A, where B = ∇ × A is the magnetic induction. With a sinusoidal current-phase relation, the Josephson sheet-current density in the radial (ρ) direction across the weak link is Kρ(θ) = Kc sin φ(θ), where Kc is the maximum Josephson sheet-current density and φ(θ) is the gauge-invariant phase difference between the inner (ρ < R−) and outer (ρ > R+) superconducting banks, where R± = R ± dN/2.
A simple relation between φ(θ) and the sheet-current densities at ρ = R− and ρ = R+ can be obtained by integrating the vector potential A around a loop of width a few coherence lengths larger than dN enclosing the weak link, with one end of the arc at θ′ = 0 and the other end at θ′ = θ, as shown by the dashed contour in Fig. 1; this yields the relation, Eq. (6), that the gauge-invariant phase difference obeys. When a current I enters at the origin and there is neither a Josephson vortex trapped in the weak link nor a Pearl vortex pinned nearby, the sheet current K has a radial component Kρ = I/2πρ but no azimuthal component (Kθ = 0), such that the gauge-invariant phase difference φ is independent of θ. Since dN ≪ R, the radial current of the weak link is I = 2πRKc sin φ to good approximation, and the maximum supercurrent that can flow without producing a voltage across the weak link is the critical current, Ic0 = 2πRKc. On the other hand, when flux quanta are trapped in the weak link or a Pearl vortex is pinned nearby, azimuthal symmetry is destroyed, the radial component of the sheet-current density Kρ varies as a function of θ, and the azimuthal component of the sheet-current density Kθ has the property that [R+Kθ(R+, θ) − R−Kθ(R−, θ)] ≠ 0, such that φ varies with θ according to Eq. (6). The net supercurrent carried through the weak link is I = Ic0⟨sin φ⟩, where the angle brackets denote the average over θ, and the critical current is given by its maximum value, Ic = Ic0|⟨sin φ⟩|max.

Although there are nonlinearities associated with the properties of Josephson weak links, it is important to note that Eq. (6) is a linear equation. Just as the net sheet-current density K can be written as a linear sum of contributions, so also can dφ/dθ be written as a linear sum of contributions. Since we are interested in the behavior when flux quanta are in the annulus, pinned vortices are nearby, and radial Josephson currents flow, we need to calculate the effects of the linear superposition of all three of the corresponding contributions to the sheet-current density K and the θ derivative of the gauge-invariant phase difference, each of which can be calculated from the corresponding discontinuity in ρKθ(ρ, θ) across the annulus at ρ = R [Eq. (6)]. We next examine each of these three contributions in turn.

III. N FLUX QUANTA IN THE ANNULAR WEAK LINK

Suppose that N flux quanta are trapped in the annular weak link in the absence of any nearby Pearl vortices or any Josephson currents across the junction. Since we are considering only the case that Λ ≫ R, we can neglect the vector-potential term in Eq. (3). For ρ < R−, the phase γ = const and K = 0. However, for ρ > R+, the phase winds by multiples of 2π. When there are N flux quanta in the junction, γ = −Nθ, which generates the azimuthal sheet-current contribution Kθ(ρ) = Nφ₀/πµ₀Λρ in the region ρ > R+. From Eq. (6) we then obtain the corresponding contribution to dφ/dθ [Eq. (8)]. The above current, phase, and field distributions are equivalent to those produced by N Pearl vortices whose cores are distributed uniformly around the circle of radius R, 9 such that the total magnetic flux carried up through the superconducting film is Nφ₀. See Appendix A for details. Choosing an integration contour in the shape of a circular sector of radius ρ > R and central angle θ instead of the contour shown in Fig. 1, one can show that Eq. (8) is valid for any value of Λ/R.
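The azimuthal sheet current of the N-flux-quanta state, Kθ(ρ) = Nφ₀/πµ₀Λρ for ρ > R+, is easy to evaluate numerically; the sketch below reuses the illustrative geometry assumed earlier.

```python
import math

PHI0 = 2.067833848e-15  # Wb
MU0 = 4e-7 * math.pi    # H/m

lam, d, R = 200e-9, 10e-9, 0.5e-6   # illustrative values, as above
Lambda = 2 * lam**2 / d             # Pearl length

def K_theta(rho: float, N: int = 1) -> float:
    """Azimuthal sheet current (A/m) of N flux quanta in the slot, rho > R+."""
    return N * PHI0 / (math.pi * MU0 * Lambda * rho)

for rho_factor in (1.1, 2.0, 10.0):
    rho = rho_factor * R
    print(f"rho = {rho*1e6:4.1f} um: K_theta = {K_theta(rho):8.3f} A/m")
```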
IV. PEARL VORTEX PINNED NEARBY

Suppose that a Pearl vortex is pinned at (x, y) = (ρv, 0), either inside the annular weak link (ρv < R−) or outside (ρv > R+), but no flux quanta are trapped in the annular slot, nor are there any radial Josephson currents across the junction. Since ∇·K = 0 and ∇×K = 0, with the latter equation holding to good approximation because the magnetic field can be neglected when R ≪ Λ, the method of complex potentials and fields can be used to calculate the sheet-current density generated in response to the Pearl vortex. In general, the complex potential G(ζ) is an analytic function of the complex variable ζ = x + iy, and the corresponding complex sheet current is K(ζ) = dG(ζ)/dζ. The radial and azimuthal components of K along the unit vectors ρ̂ = x̂ cos θ + ŷ sin θ and θ̂ = ŷ cos θ − x̂ sin θ follow directly, where x = ρ cos θ, y = ρ sin θ, ρ = √(x² + y²), and θ = tan⁻¹(y/x).

When ρv < R−, the complex potential is G_in(ζ), where ζ = x + iy, x = ρ cos θ, y = ρ sin θ, and ρi = R²/ρv corresponds to the radial coordinate of an image vortex. Figure 2 shows a contour plot of the real part of G_in; the contours correspond to streamlines of K_in. When ρv > R+, the complex potential is G_out(ζ); Figure 3 shows a contour plot of the real part of G_out, whose contours correspond to streamlines of K_out. Evaluating K_in(ζ) = dG_in(ζ)/dζ and K_out(ζ) = dG_out(ζ)/dζ at ρ = R− and ρ = R+ and using Eq. (6), we obtain the Pearl-vortex contribution to dφ/dθ [Eqs. (13) and (14)].

V. JOSEPHSON CURRENTS

Let us next focus on the contribution to the sheet current K generated by Josephson currents through the junction, ignoring the contributions due to flux quanta in the annular weak link or a nearby Pearl vortex. To obtain the equation determining how φ(θ) varies when the radial Josephson current Kρ(R, θ) = Kc sin φ(θ) varies as a function of θ, we start by deriving the Green's function for this problem, assuming that the current I entering at the origin flows through the weak link with a delta-function distribution K0ρ(R, θ) = (I/R)δ(θ − θ′). As in Sec. IV, we can use the method of complex potentials. The required complex potential [Eq. (15)] is written in terms of ζ = x + iy = ρe^{iθ} and ζ′ = Re^{iθ′}, where the upper (lower) sign holds for ρ > R (ρ < R); the corresponding sheet current follows by differentiation, with ρ̂ = x̂ cos θ + ŷ sin θ and θ̂ = ŷ cos θ − x̂ sin θ.

The complex potential for a general distribution of radial Josephson sheet current Kρ(θ) = Kc sin φ(θ) can be obtained from Eq. (15) by replacing I by Kc sin φ(θ′)R dθ′ and integrating over θ′. From this expression we find the radial and azimuthal components of the sheet current associated with the Josephson currents, where ρ̃ = ρ/R and the upper (lower) sign holds when ρ > R+ (ρ < R−). The terms involving the azimuthal components of the sheet current needed in Eq. (6) are given by a principal-value integral, which yields the Josephson-current contribution to Eq. (6) [Eq. (22)].

VI. GENERAL EQUATIONS

Combining the contributions from Eqs. (8), (13), and (22), we find the general equation, Eq. (23), determining the angular dependence of the gauge-invariant phase. This integral equation can be inverted; the result is Eq. (25). The term involving N drops out of Eq. (25).
VII. EXACT SOLUTION FOR N = 1

When a Josephson vortex is trapped in the annular weak link (N = 1) with no Pearl vortex nearby and the current I is zero, the Josephson vortex is stationary, and the gauge-invariant phase obeys Eq. (28). This equation has an exact solution, Eq. (29), corresponding to a Josephson vortex centered at θ = 0, in which the parameter θ₁ is fixed by the ratio ℓ/R. Note that φ(−π) = 0 and φ(π) = 2π; also tan(θ₁/2) → 1 when ℓ → ∞, and θ₁ → ℓ/R → 0 when ℓ → 0. Figure 4 shows φ(θ) vs θ for a variety of values of ℓ/R.

In sandwich-type annular Josephson junctions [see Appendix C], the phase obeys a sine-Gordon equation, which involves the sine and the second derivative of the phase with respect to the coordinate along the junction. 15 In the thin-film annular junctions discussed here, however, the sine of the phase obeys an integral equation, Eq. (32), obtained by partial integration of Eq. (25) with ⟨sin φ⟩ = 0 and P = 0.

To calculate the critical current [see Sec. II] of a small Josephson weak link (which corresponds to the case R/ℓ = 0), one usually can start with a non-current-carrying static solution φ for which ⟨sin φ⟩ = 0, add a bias phase β, and then compute ⟨sin(φ + β)⟩ = ⟨cos φ⟩ sin β to conclude that the critical current is proportional to the average |⟨cos φ⟩|. This procedure remains valid here in the limit R/ℓ = 0, and the result is |⟨cos φ⟩| = |⟨cos θ⟩| = 0, which tells us that the critical current is zero in this case. However, this procedure fails for finite values of R/ℓ because φ(θ) + β is not a solution of Eq. (28). No static current-carrying state can be generated from the exact solution given in Eq. (29); there is no solution corresponding to a stationary Josephson vortex in the presence of a current I. In other words, the critical current Ic of a thin-film annular Josephson weak link is zero for all ratios of R/ℓ.

As soon as a current I is applied, the gauge-invariant phase distribution becomes time-dependent and the weak link becomes resistive. The behavior is simplest in the limit R/ℓ = 0, for which the voltage measured directly across the weak link between ρ = R− and R+ is, by the Josephson relation, V = (h/2e)dφ/dt = IRn, where Rn is the normal-state resistance of the annulus. The phase φ slips by 2π with a frequency ν; this occurs because the straight-line phase distribution, given by φ(θ) = θ + π at time t = 0 (similar to the dotted line for ℓ/R = 10 in Fig. 4), slides rigidly toward negative values of θ with an angular velocity ω = 2πν, giving rise to a voltage V = hν/2e = φ₀ν, where ν = (Rn/φ₀)I.

For increasing values of the ratio R/ℓ, the time-dependent behavior is more conveniently described in terms of Josephson-vortex motion using a quasistatic approach. The applied current entering at the origin produces a uniform sheet-current density K_I = ρ̂K_Iρ = ρ̂I/2πR at the annulus. The resulting Lorentz force 19 F_L = −θ̂F_L = −θ̂K_Iρφ₀ induces the Josephson vortex to rotate in a clockwise sense around the annulus. When R/ℓ ≫ 1 (ℓ/R ≪ 1), the Josephson core becomes very compact and the dissipation there becomes quite large. As a consequence, for the same current I, the vortex speed v = 2πRV/φ₀ and the phase-slip frequency ν = V/φ₀ become smaller than in the opposite limit ℓ/R ≫ 1, which has the effect of reducing the effective resistance of the weak link, R_eff = V/I. This behavior is similar to that in sandwich-type annular junctions, as discussed in Appendix C.
To show this quantitatively, we first note that the time dependence of all quantities calculated from the exact solution φ(θ) in Eq. (29) can be obtained to good approximation by replacing θ by θ + ωt. The voltage measured directly across the weak link between (ρ, θ) = (R−, θ) and (R+, θ) is V(θ, t) = (h/2e)dφ/dt = φ₀νφ′(θ + ωt). The power delivered to the weak link by the external current source is therefore P_in = I·V̄, where V̄, the angular average of the voltage, is equal to the time-averaged voltage, ⟨V⟩ = hν/2e = φ₀ν. The power P_out dissipated by the ohmic currents across the weak link 22 involves the angular average of φ′², which is obtained from Eq. (32). Equating the input power P_in to the dissipated power P_out, we obtain the effective resistance of the annular weak link and the corresponding phase-slip frequency, ν = (Rn/φ₀)I/√(1 + (R/ℓ)²).

When R/ℓ ≫ 1, such that the angular average of φ′² is R/ℓ to good approximation, the Josephson core size (∼ℓ) becomes much smaller than the circumference of the weak link (2πR), and it is then appropriate to think of the Josephson vortex speed v as being determined by a balance between the Lorentz force 19 F_L and a viscous drag force 22 ηv. Equating the input power P_in = F_L·v to the dissipated power P_out = ηv², we obtain the viscous drag coefficient η (units N·s/m) of Eq. (41). Note that η is inversely proportional to the Josephson core size. As discussed in Appendix C, this behavior of η is similar to that in sandwich-type annular junctions, in which η is inversely proportional to the Josephson penetration depth λJ.

The above calculations assume that the maximum value of the displacement current density across the weak link, (εrε0/dN)(dV/dt), is much smaller than the maximum Josephson current density jc. This approximation is equivalent to the requirement that the vortex speed v be much smaller than a characteristic velocity c̄, for which we find an explicit expression in the limit ℓ/R ≪ 1; here εr is the relative dielectric constant in the weak link and c is the speed of light in vacuum. Note that c̄ is the analog of the Swihart velocity 23 in long sandwich-type Josephson junctions.

VIII. EXACT SOLUTIONS FOR ARBITRARY N

When N equally spaced Josephson vortices are trapped in the annular weak link (N = 1, 2, 3, ...) with no Pearl vortex nearby and the current I is zero, the Josephson vortices are stationary, and the gauge-invariant phase obeys Eq. (43). An exact solution of this equation, Eq. (44), corresponds to one Josephson vortex centered at θ = 0 and the others arranged around the annulus with equal angular spacing ∆θ = 2π/N; it holds for −π/N ≤ θ ≤ π/N. For θ outside this interval in the positive (negative) θ direction, multiples of 2π must be added to (subtracted from) Eq. (44) to make φ(θ) continuous, with the property that φ(π) − φ(−π) = 2πN. The parameter θ_N is again fixed by ℓ/R; note that tan(Nθ_N/2) → 1 when ℓ → ∞, and θ_N → ℓ/R → 0 when ℓ → 0. Equation (44) yields sin φ(θ), which has a maximum (+1) and a minimum (−1) at θ = −θ_N and θ = +θ_N. Defining the angular width θ_core of one of the Josephson cores as the range of θ values for which π/2 ≤ φ(θ) ≤ 3π/2 (modulo 2π), we obtain a closed-form result for arbitrary N [Eq. (48)]: when ℓ/R → ∞, θ_core = π/N, and when ℓ/R ≪ 1, θ_core ≈ 2ℓ/R. See Fig. 9. As in the case N = 1, the critical current of the annular weak link is zero for all N.
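As a numerical illustration of the N = 1 dynamics: since R_eff = V̄/I = φ₀ν/I, the phase-slip frequency quoted above implies R_eff = Rn/√(1 + (R/ℓ)²), which interpolates between Rn for R/ℓ → 0 and Rnℓ/R for R/ℓ ≫ 1. The normal-state resistance below is an assumed illustrative value.

```python
import math

PHI0 = 2.067833848e-15  # Wb

def effective_resistance(Rn: float, R_over_l: float) -> float:
    """R_eff = Rn/sqrt(1+(R/l)^2), from nu = (Rn/phi0) I / sqrt(1+(R/l)^2)."""
    return Rn / math.sqrt(1.0 + R_over_l**2)

Rn = 1.0  # normal-state resistance of the annulus, in ohms (assumed)
for x in (0.0, 1.0, 10.0, 100.0):
    Reff = effective_resistance(Rn, x)
    print(f"R/l = {x:5.1f}: R_eff = {Reff:.4f} ohm, nu/I = {Reff/PHI0:.3e} Hz/A")
```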
When a current I is applied, the effective resistance can be calculated as in Eqs. (37)-(40), except that for arbitrary N we have V̄ = Nφ₀ν; this determines the effective resistance of the annular weak link. When R/Nℓ ≫ 1, such that the angular average of φ′² is NR/ℓ to good approximation, the Josephson core size (∼ℓ) becomes much smaller than the intervortex spacing (2πR/N). In this case the effective resistance of the annular weak link containing N Josephson vortices is R_eff = NR₁, where R₁ = Rnℓ/R is the effective resistance when N = 1 in this limit [see Eq. (40)]. It is also appropriate in this limit to think of the Josephson vortex speed v as being determined by a balance between the Lorentz force 19 F_L and a viscous drag force 22 ηv. Equating the input power per vortex, P_in/N = F_L·v, to the dissipated power per vortex, P_out/N = ηv², we obtain exactly the same viscous drag coefficient as in Eq. (41).

IX. CRITICAL CURRENT AFFECTED BY A NEARBY PEARL VORTEX

We next consider the behavior when there is no flux quantum in the annular weak link (N = 0) but there is a Pearl vortex at (x, y) = (ρv, 0), either inside the annulus (ρv < R−) or outside (ρv > R+). For simplicity, let us consider only the case for which ℓ/R is so large that we can ignore the effect of the Josephson currents on dφ(θ)/dθ. The equation determining the angular dependence of the gauge-invariant phase is then simply Eq. (52), where ρ̃v = ρv/R and P(ρ̃v, θ) is given in Eq. (14). Integration of Eq. (52) yields the gauge-invariant phase difference φv(ρ̃v, θ) [Eq. (53)], with the constant of integration chosen such that φv(ρ̃v, θ) = 0 at θ = 0, the point on the annulus that is closest to the Pearl vortex.

To calculate the critical current [see Sec. II], we note that the net supercurrent carried through the weak link is I = Ic0⟨sin φ⟩. Noting that φ(θ) = φv(ρ̃v, θ) + β, where β is a constant bias phase, also is a solution of Eq. (52), we obtain supercurrent-carrying solutions for which I = Ic0⟨cos φv⟩ sin β. The critical current is then given by the simple result Ic = Ic0|⟨cos φv⟩| [Eqs. (55)-(57)]. Note that Ic = 0 when ρ̃v = 1, which corresponds to the case that the Pearl vortex has moved into the annular junction; this is equivalent to the state N = 1 discussed in Sec. VII. Equations (55)-(57) are valid only in the limit R/ℓ = 0. To calculate Ic for finite values of R/ℓ would require solving Eq. (23) for N = 0 at all ρv. While this equation can be solved perturbatively for small R/ℓ, the corrections to Eqs. (55)-(57) are second order in R/ℓ, such that this procedure yields only very small increases in the values of Ic/Ic0 for 0 < ρ̃v < 1 and ρ̃v > 1. How Ic is affected for small values of ℓ/R (large R/ℓ) remains unknown.

X. SUMMARY

In this paper I have reported a detailed study of the properties of a Corbino-geometry annular weak link of radius R in a superconducting thin film for which the Pearl length 7 Λ = 2λ²/d is much larger than R. I have considered separately the contributions due to an integral number N of flux quanta trapped in the weak link, a Pearl vortex pinned nearby, and the Josephson current distribution across the weak link. I derived two equivalent integral equations describing the gauge-invariant phase distribution φ(θ) around the annulus, and I described how these integral equations can be transformed into each other. I considered the case of N = 1 with no nearby Pearl vortex, first presenting an exact solution for φ(θ) in the static case when I = 0, and then discussing the dynamic case for I > 0, when the Josephson vortex rotates around the annulus at constant angular velocity.
I then briefly discussed the case of an arbitrary number N of equally spaced flux quanta trapped in the weak link, again presenting an exact solution for the static case when I = 0 and discussing the dynamic case when I > 0. Finally, I calculated the critical current Ic of the weak link as a function of the position of a nearby Pearl vortex and showed that Ic = 0 when the Pearl vortex falls into the weak link.

I mentioned in the introduction that thin-film annular weak links containing trapped vortices have been proposed 1-3 as a place to test for the influence of the Berry phase on the vortex dynamics. However, in this paper I have assumed that the vortex motion is determined only by the principle of conservation of energy: the vortex speed was obtained by setting the power supplied to the weak link equal to the power dissipated via ohmic currents. I leave it to other authors to discover how this treatment may need to be modified to account for the influence of the Berry phase.

In the appendix expressions, Θ(x) = 1 when x > 1 and Θ(x) = 0 when x < 1. In a sandwich-type annular junction (Appendix C), the vortex moves around the annulus with a speed v = Rω = 2πRν. The effective resistance of the junction, calculated as in Sec. VII, is expressed in terms of the normal-state resistance Rn of the annular junction, the angular average of [dφ(θ)/dθ]², and E(k), the complete elliptic integral of the second kind. 24,25 In the limit λJ ≪ R, for which the Josephson core size (∼λJ) is much smaller than the circumference of the annulus (2πR), it is appropriate to think of the effective resistance as arising from a balance between the Lorentz force per unit length of vortex and a viscous drag force per unit length. In this limit, the viscous drag coefficient per unit length (units N·s/m²) 22 involves the width W of the annular junction and is inversely proportional to the Josephson core size.
Simulation Fidelity and Skill Learning during Helicopter Egress Training: The Role of Vision

This project aimed to evaluate the effects of ambient lighting during practice and performance of simulated helicopter escape sequences. Participants were randomized to one of the following groups to practice a standard helicopter underwater escape sequence: Light (with room lights on), Dark (with room lights off), or Graduated (in the light for the first half and then in the dark for the second half of the trials). Following practice, participants had a minimum 30-min break, followed by retention testing in the dark and then in the light. Dependent measures included accuracy and movement time. Results indicated that participants performed more accurately during the dark retention trial than during the light retention trial. This could be due to increased arousal elicited by performance in the dark or, alternatively, may suggest that performance of helicopter escape sequences is not visually mediated. Based on these findings, it appears that training in the light is suitable for potential performance in the dark.

Introduction

Safety training for high-risk industries and scenarios requires an approach that optimizes learning for enhanced skill learning and retention. An example is helicopter underwater escape training (HUET) for surviving a ditching over water. HUET is mandatory for offshore oil and gas employees and relevant military personnel. Currently, no universal training standard exists [1]. When a helicopter ditches in water, it typically inverts and sinks [2-4]. Crew and passengers often have less than 15 s of notice to make an underwater escape [5]. Not surprisingly, drowning has been identified as the leading cause of death following a ditching [6]. Disorientation and limited vision have been hypothesized as contributing to reduced survival [1,7]. These factors are influenced by darkness, which has been linked with higher mortality rates during egress [1,5].

Arguably, all egress occurs in low-light conditions. A nighttime helicopter ditching would obviously occur in dark conditions. However, regardless of the time of day, numerous factors degrade light availability and consequently may impact visibility. For example, the inversion of the helicopter directs windows away from daylight, and the transmissivity of light through water is much less than that of light through air. As the helicopter sinks, light penetrance degrades. Indeed, at 35 feet of sea water, approximately 20% of light penetrates clear ocean water [8]. Debris presence [9] and water turbidity [8,9] further impact light attenuation. Even at shallow depths with bright sunlight, very high turbidity can degrade visibility to less than 1 foot of distance [8]. Presumably, darkness would augment challenges that are exacerbated by poor visibility, such as finding exits and getting oriented to the water's surface, thereby impacting survival.

Research has shown that night flying is associated with reduced survival rates [1,5]. One study reported that survival rates for a nighttime and a daytime crash were 41 and 77%, respectively. Limited vision during egress was hypothesized as contributing to the reduced survival rate at night [1]. To mitigate this, emergency exit lighting has been incorporated in helicopter design, known as helicopter emergency escape lighting (HEEL). Some studies have demonstrated reduced escape times with HEEL in the laboratory setting [9-12].
However, the effectiveness of HEEL remains a concern, as there is some evidence to suggest that the lights may not be detectable when seated by the aisle, even with bright ambient lighting conditions [9]. To help prepare for emergency egress, many military organizations and industries have mandated that relevant personnel complete HUET. Since no universal training standard or assessment standard exists [13], whether trainees practice egress in low-light conditions will vary based on the best practices of individual training facilities. Limited research exists on optimal training curricula to improve performance and survivability. The principle of learning specificity states that practice is most effective when it closely matches actual performance conditions [14]. Skill learning is contingent upon the development of a sensorimotor plan that is sensitive to the sensory information available during practice [14-16]. According to these principles, helicopter egress practice should be conducted in low-light conditions to optimize learning.

The 2009 Cougar Flight 491 helicopter crash off the coast of Newfoundland, Canada, prompted an increased focus on identifying and mitigating safety threats to helicopter night flying. Following the accident, the Commissioner's report recommended the restriction of night flying until adequate safety improvements were made [17]. A ban on night flights in the province has remained in effect. Another recommendation was increased simulation fidelity of training. Simulation fidelity refers to "the degree of faithfulness between entities" [18]. The similarities between entities, or conditions, govern the degree of learning transfer [19,20]. A high degree of simulation fidelity may be particularly important for optimizing learning when training for high-stress scenarios [1,18,21]. Although it was required that pilots demonstrate successful ditching during night flights, no attention has been given to the ability of passengers to escape during low- or no-light conditions, or to the fidelity of HUET in preparing for these conditions. Limited nighttime ditching training was identified as a potential factor contributing to the reduced survival rate [1]. Given the challenge of limited visibility during escape, it is plausible that training in dark conditions may be beneficial to learning and performance. Since helicopter egress generally occurs in a low-light setting, the principle of learning specificity suggests that HUET would be most effective if also conducted in low- or no-light conditions.

According to the principle of learning specificity, the most efficient sensory information available during acquisition dominates over other feedback sources and is utilized to develop a sensorimotor plan. Once developed, the sensorimotor plan remains sensitive to the optimal sensory information available during practice [14-16]. This principle was first demonstrated when participants who had practiced a manual aiming task with vision performed more poorly on transfer tests when vision was withdrawn, suggesting that vision is the dominant and preferred sensory source [22-24]. Accordingly, a lack of visual feedback due to low ambient light levels during practice would result in performance decrements. Ambient vision is thought not to be affected by low levels of light [25]. However, decreased light levels could reduce the acuity of visual feedback. This may consequently affect aspects of sensory feedback such as eye and head movement patterns.
Changes in lighting can affect perception and object appearance, for example by shadow production [26]. It is plausible that low lighting may reduce the visibility range, which could affect sight of the end target or object recognition. For goal-directed movements where terminal visual feedback is imperative for movement calibration, performance would decline in low-light conditions [27]. It is possible that learning may be similarly affected.

Motor learning refers to the changes in internal processes that occur with practice or experience and that affect an individual's ability to execute a motor task. Motor learning depends on the integration and interpretation of sensory stimuli. Retention testing, which involves evaluation of a trained task after some time interval, is the preferred method to assess learning. Performance is the observable production of a motor skill, which is influenced by transient factors such as fatigue, motivation, and affective state [25,28-30]. Although related, it is important to note that performance and learning are distinct processes [25,30].

To examine the role of visual feedback in learning specificity, studies have typically examined the effects of manipulated visual feedback (e.g., by distortion or narrowing) or withdrawn vision during motor tasks. Proteau and colleagues had participants practice a manual aiming task, which required the movement of a stylus to an end target while mechanically perturbed and time constrained, in either a light or a dark room [14,22]. When participants trained in the dark and then performed a retention transfer test in the light, performance deteriorated. This demonstrated the impact of the training condition on retention and transfer. Importantly, the end target was always visible in the dark condition. Additionally, subjects performed over 1000 practice trials and were given knowledge of results following each trial. These conditions may not be generalizable to real-life contexts.

The present study aimed to evaluate the effects of lighting on practice and retention (i.e., learning) performance during helicopter egress sequences conducted in a simulator. Practice occurred either with all trials in the light (Light Group), all trials in the dark (Dark Group), or half of the trials in the light followed by half in the dark (Graduated Group). The Graduated Group was intended to evaluate the effects of progressive learning [31]. We hypothesized that the Dark Group would have superior retention performance in the dark compared to the Light Group, supporting the principle of learning specificity, and that the Graduated Group (which practiced first in the light and then in the dark) would have performance similar to both the Light and Dark Groups in the respective retention tests.

Participants

Thirty-eight participants (20 females, 18 males; average age (SD): 31 (11) years; range: 19-58) were recruited from the local community. All participants had self-reported normal or correctable-to-normal vision and gave written consent. Procedures complied with the Declaration of Helsinki, and ethics approval was granted by the Interdisciplinary Committee on Ethics in Human Research at Memorial University (protocol 20180377-HK).

Task and apparatus

Experimental procedures were conducted at the Marine Institute's Offshore Safety and Survival Centre (MI-OSSC), Conception Bay South, Newfoundland, Canada. Trials were conducted in the Help Quest Helicopter Ditching Simulator (Virtual Marine, St. John's, NL) without use of the motion platform or simulated helicopter noise.
The interior of the simulator replicates a Sikorsky S-92, which is commonly used for operational purposes internationally. For practical reasons, the simulator contains only four seats (two by a starboard side window and two by a port side window, forming two rows), compared to 19 seats in the actual S-92. Practice trials were conducted in the front and rear port window seats and the front starboard window seat, since these seats had push-out window exits. Retention trials were conducted in the front port window seat. The front port seat was always in a crash-attenuated position (stroked), which is low to the ground. A stroked seat collapses upon impact as part of an energy absorption system intended primarily to prevent spinal injuries after a crash. However, evidence suggests that egress from a stroked seat position is more challenging than from a normally positioned seat because the evacuee is situated lower relative to the window (escape route) and is in an orientation in which it is more difficult to generate sufficient force to push out the helicopter window for egress [13,32].

Participants performed a standardized escape sequence (Appendix) during a simulated submerged helicopter ditching. The sequence included the following: taking off a headset; putting on a hood; putting on a scuba-type mask; crossing arms and tucking the head to brace for "impact"; putting a scuba-type regulator (mouthpiece attached to a compressed-air-filled cylinder) in the mouth; preparing to exit by pushing the window; and unbuckling a four-point harness. Participants were prompted to execute sequence steps by the following verbal commands (given in the order listed): "ditching, ditching, ditching"; "brace, brace, brace"; and "impact, impact, impact". Cues were given at regular elapsed-time intervals: the brace call was given 30-45 s after the ditching call (time interval based on completion of the ditching steps), and the impact call was given 15 s after the brace call.

Procedures

Permuted block randomization was used to allocate participants into one of the following training groups: with room lights on for all trials (Light); with room lights off for all trials (Dark); or in the light for half of the trials and in the dark for the other half (Graduated); a sketch of this allocation scheme is given after this section. The experiment consisted of a didactic session followed by simulator-based trials. The didactic session consisted of a 20-min pre-recorded training video in which a qualified and experienced instructor presented adapted material from the existing HUET course offered by the MI-OSSC. Information relevant to helicopter egress using the Helicopter Underwater Escape Breathing Apparatus (HUEBA) was given, while other non-pertinent material was removed. Didactic sessions included up to four participants; HUET is regularly taught using a group instruction format. Participants performed simulator trials individually. Each participant was allotted one orientation trial with real-time feedback immediately preceding the practice trials. The orientation trial was conducted in the rear starboard position, which was not used for practice or retention trials. No feedback was given once practice trials commenced. Practice trials consisted of six total sequence executions, which is similar to the amount of practice performed during a HUET course. Participants rotated through each seat position (front and back port side; front starboard side) twice. Seat-position order was counterbalanced. Practice trials took approximately 30 min to complete.
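For concreteness, a permuted-block allocation of the kind described under Procedures can be sketched as follows; the block size of 6 (two participants per group per block) is an assumption, as the paper does not state it.

```python
import random

def permuted_block_randomization(n_participants: int,
                                 groups=("Light", "Dark", "Graduated"),
                                 per_group_per_block: int = 2,
                                 seed: int = 1) -> list[str]:
    """Allocate participants in shuffled blocks so group sizes stay balanced."""
    rng = random.Random(seed)
    allocation: list[str] = []
    while len(allocation) < n_participants:
        block = list(groups) * per_group_per_block  # e.g., a block of 6
        rng.shuffle(block)
        allocation.extend(block)
    return allocation[:n_participants]

print(permuted_block_randomization(38))  # 38 participants, as in the study
```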
Following practice trials, participants were given approximately a 30- to 60-min break prior to retention testing. During this time, participants remained onsite and were permitted to engage in leisure activities of their choice (e.g., reading, browsing the internet). For all participants, the retention tests consisted of one trial in the stroked seat in the dark, followed by one trial in the light. Retention tests took approximately 10 min to complete. All practice and retention trials were recorded with a FLIR T430sc series infrared video camera that was able to capture video in dark conditions.

Dependent variables

Measures of performance included accuracy and movement time. Movement time was defined as the time in seconds (s) from the first action taken after the ditching command to when movement ceased. Participants were instructed to pause in the final position when they felt that the sequence was completed. Accuracy was measured with a checklist (refer to Appendix) in which participants were awarded a point for every task in the sequence that was correctly performed. All subtasks had to be performed correctly and in the appropriate sequence for the point to be awarded. The maximum possible score was seven. This checklist was developed through consultation with experienced HUET instructors at the OSSC and according to the training requirements of the Canadian Association of Petroleum Producers.

Analysis

Dependent measures during practice were analyzed by separate 3 (Group: Dark, Light, Graduated) × 3 (Seat position: front starboard, back port, front stroked port) analyses of variance (ANOVAs) with repeated measures on the seat-position factor. Learning was evaluated by comparing practice trials conducted in the stroked seat, the dark retention test, and the light retention test. These data were analyzed in separate 3 (Group: Dark, Light, Graduated) × 3 (Phase: practice trials in the front stroked port seat, dark retention in the stroked port seat, light retention in the stroked port seat) ANOVAs with repeated measures on the phase factor.
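A mixed-design ANOVA of this form can be run with, for example, the pingouin package. The data below are randomly generated placeholders shaped like the study's long-format movement-time data (column names and the dark-retention mean are assumptions), and pingouin's sphericity correction is Greenhouse-Geisser rather than the Huynh-Feldt correction reported in the Results.

```python
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
groups = ["Light", "Light", "Dark", "Dark", "Graduated", "Graduated"]
phases = ["practice", "dark_retention", "light_retention"]
base = {"practice": 44.5, "dark_retention": 42.0, "light_retention": 39.2}

# Placeholder long-format data: one row per participant x phase.
rows = [{"subject": s, "group": g, "phase": p,
         "movement_time": base[p] + rng.normal(0, 1.5)}
        for s, g in enumerate(groups, start=1) for p in phases]
df = pd.DataFrame(rows)

# 3 (Group, between) x 3 (Phase, within) mixed ANOVA.
aov = pg.mixed_anova(data=df, dv="movement_time", within="phase",
                     subject="subject", between="group", correction="auto")
print(aov.round(3))
```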
Discussion

This is the first study aimed at evaluating performance of simulated helicopter escape sequences conducted in low-light conditions. We hypothesized that, in comparison to the Light Group, the Dark Group would demonstrate superior overall retention. We also hypothesized that the Graduated Group would perform equivalently to both the Light and Dark Groups in the respective retention tests. The results did not support our hypotheses. Performance during practice and retention did not differ significantly across groups, indicating that ambient lighting during practice did not affect performance. Based on our findings, training in the light appears to be appropriate for performance and learning of helicopter escape sequences that will eventually be performed in the dark.

These findings may inform training standards and be relevant to other extreme-environment domains, such as search and rescue and cave diving, where ambient light levels may vary and may impact performance. However, it is possible that the task was too easy and that under more ecologically valid conditions (e.g., performing in a mock-up helicopter that is being dropped into a pool), accompanied by increased anxiety, the results would have been different. Interestingly, all participants performed more accurately during the dark retention trial than during the light retention trial or during the practice trials conducted in the stroked seat. However, movement times were significantly shorter during the light retention trial. This is indicative of a speed-accuracy trade-off. It is possible that the dark retention trial conditions promoted more optimal arousal than the light retention trial conditions. The Yerkes-Dodson law states that increased arousal will improve performance until optimal performance is achieved, after which point performance will decline as arousal further increases [33]. Attentional resources may be directed towards the task as self-awareness increases with anxiety. This may be detrimental to performance by disrupting automatic processes [34]; however, it can also benefit learning by inducing the allocation of more cognitive resources for task completion, which may attenuate aversive threat effects [35]. The principle of learning specificity has primarily been demonstrated in studies where participants have extensive practice. Evidence suggests that specificity effects are positively correlated with experience level, and thus are predominantly seen after the sensorimotor plan for a skill has been engrained and automated [22,23,36,37]. Participants in this study had either limited or no HUET experience. It is possible that experts, while outperforming novices, would experience performance decrements if escape occurred in the dark but training had previously been conducted in the light. Another explanation may be that helicopter escape is not visually mediated. Lastly, it is possible that it is relatively easy to perform the set of required actions in a dry simulator with no motion, and hence the lighting conditions did not affect performance.

It is important to discuss the meaning of the accuracy values. The mean accuracy score during the dark retention test was 4.9 (out of a possible 7), but is this considered good performance? This is hard to answer directly: on one hand, failure to properly execute two steps may still allow for helicopter egress; on the other hand, it may prevent egress, depending on which steps are involved. For example, if one mistakenly releases the safety harness before pushing out the window, the latter may not be possible. This is because, once the safety harness is released, pushing the window while submerged will only lead to the evacuee being pushed away from the window. In other words, once the harness is released, the passenger may not have the necessary support or leverage to push out the window. If that happens, egress may not be possible. The passenger may need to egress through a different window that was opened by another passenger. Doing so would likely promote disorientation and, in an extremely high-stress scenario, may not be realistically possible. Hence, we suggest that instructors may need to decide whether some steps in the sequence of actions are more critical than others.
If so, in the limited time available for training, it may be prudent to emphasize critical steps and ensure accuracy and an appropriate sequence of execution. Two limitations of this study are noteworthy. First, as mentioned before, the conditions were relatively easy and did not fully mimic an actual ditching experience. It is anticipated that the inclusion of more naturalistic conditions, such as noise and motion from the helicopter, heat stress and discomfort from the flight suits, and, perhaps most importantly, escape while underwater, would affect the ability to learn and retain the required skills. Second, the retention period in this study was only 30 min. A longer retention period would have been more ecologically valid, as passengers are certified in this procedure every 3-4 years, depending on the jurisdiction. Hence, it would be important to examine the ability to retain the egress skills in a longitudinal study.

Conclusion

Our results suggest that practice of helicopter escape sequences in the light may be sufficient for performance during virtual reality simulation in the dark. It is interesting to note, however, that the average accuracy across groups for the dark and light retention tests was, in both cases, 5 points out of a maximum of 7. Arguably, any score less than 7 could have severe consequences in the real world. Higher-fidelity studies would help to better characterize optimal practice conditions to further inform training standards.
Application and Effectiveness of Big Data and Artificial Intelligence in the Construction of Nursing Sensitivity Quality Indicators

In order to explore the quality management efficiency of applying big data and artificial intelligence to nursing quality indicators, a method of building a nursing management platform integrating nursing indicators and nursing events is proposed. Based on an investigation of the application demands of the nursing information system, the method achieves timely data sharing and transmission through WLAN technology and realizes nursing management monitoring, nursing quality indicator enquiry, and automatic statistical analysis under a vertical nursing management mode. The results showed that 77 people (73%) thought the time decreased, 19 people (18%) thought the time was the same, and 9 people (7%) thought the time increased. In terms of intelligent application and big data in the nursing information management system, there is a significant difference in nursing management efficiency before and after using the nursing management information system (P < 0.001). The nursing management control platform was designed and applied, and the nursing quality control method and actual management process were improved, which is very beneficial for strengthening nursing quality management. The overall optimization of the quality control process is realized, which helps to mobilize the initiative and enthusiasm of nursing staff and continuously improve the effectiveness of nursing management and nursing efficiency.

Introduction

With the development and progress of society, people's demand for medical treatment has been upgraded from the treatment of diseases to the integration of medical care. Nursing quality evaluation is an objective indicator reflecting this demand and is the key link and an important basis of nursing quality management [1]. Scientific, reasonable, unified, and standardized nursing quality indicators are the main tools for evaluating nursing quality, with the help of which nursing services can be evaluated and supervised throughout the whole process [2]. Correct and effective use and analysis of nursing quality indicator data can promptly reveal the problems existing in nursing quality and safety management and provide a basis for managers to make decisions. In the context of the era of big data, the discipline of nursing informatics, which organically combines nursing science, computer science, and information science, has emerged [3]. Nursing informatics identifies and processes the collected data to provide the basis and direction for managers' decisions or behaviors. In a nursing quality management information system, nursing quality scores are entered into the computer, a database is established, the information is stored and statistically analyzed, and the nursing work quality of each department is output, so as to accurately evaluate the quality of nursing work, find defects, and promote the continuous improvement of nursing quality [4]. How to use big data and artificial intelligence to optimize the functions of the nursing quality management information system is a topic that needs to be actively explored and solved by nursing management personnel. Some hospitals in China have started to use a hospital information system (HIS) based on mobile network equipment and distributed software development to improve working efficiency and service quality, with the nursing department cooperating with software companies [5].
We designed and applied a nursing quality management system with the smartphone as the terminal and a big data analysis and nursing quality control platform as the core, adopted a pilot-before-rollout mode, and applied it in all nursing units of the hospital. This has promoted the informatization of nursing quality management and achieved good results in shortening the time of nursing quality control, optimizing the flow of nursing quality control, and strengthening quality control and tracking management in the process of nursing service [6]. Guleng et al. proposed updating the national nursing-sensitive quality indicator database and identified 9 indicators: 2 structural indicators (the ratio of specialized nurses to other nursing staff, and the number of nursing hours per patient per day), 2 process indicators (nursing staff satisfaction, and patient satisfaction with health education), and 5 outcome indicators (skin integrity care, falls, incidence of nosocomial urinary tract infection, patients' satisfaction with general nursing, and patients' satisfaction with pain management) [7]. Through the development, testing, and implementation of a nursing sensitivity indicator database, Cz et al. effectively collected nursing staffing, patient flow, adverse event, hand hygiene, and other management data, which can be used to measure nursing performance, evaluate patient prognosis, and determine the quality and safety of nursing practice. In order to improve the quality of nursing, a web version of the nursing quality evaluation and improvement system was developed, and the nursing quality database included 260 nursing indicators [8]. Dey et al. integrated the traditional strategies of Walker and Avante based on the concept matrix of Holzemer's model of health care research outcomes. Four structural indicators were finally determined, namely 24-h patient nursing hours and nurse staffing (personnel mix, skill mix, and personnel ratio), along with four outcome indicators: the incidence of pressure sores, the incidence of falls (injuries), hospital-acquired infections, and patient (family) nursing satisfaction [9]. By investigating six non-university teaching hospitals in different regions, Zhou established five nursing-sensitive quality indicators, including screening for mental disorders, observation of mental disorders, malnutrition, and standardized pain assessment for patients after surgery in the rehabilitation room and hospital units. Risk identification of patients in hospital can thereby be realized to effectively measure the quality of nursing [10]. On the basis of current research, a nursing management platform combining nursing indicators and nursing events is proposed. Based on a survey of the application needs of the nursing information system, the method achieves timely data sharing and transmission through WLAN technology and realizes nursing management monitoring, nursing quality indicator inquiry, and automatic statistical analysis under a vertical nursing management mode. The experimental results showed that there was a significant difference in nursing management efficiency before and after artificial intelligence and big data were applied to the nursing information management system (P < 0.001). The design and application of the nursing management and control platform can comprehensively improve nursing quality management and control methods and actual processes, which is very beneficial for strengthening nursing quality management.
The overall optimization of the quality control process will help to fully mobilize the initiative and enthusiasm of employees and continuously improve the effectiveness of nursing management and nursing efficiency.

Establishing a Medical Care Quality Management System

In order to meet the application requirements of the nursing information system, the system shares and transmits data in a timely manner with the HIS database, the big data platform, and the medical information docking system through WLAN technology, realizing the functions of nursing management monitoring, statistical analysis of nursing quality indicators, quality control, and automatic feedback under the vertical nursing management mode. The hospital has established a nursing informatization research and development team, composed of the director of nursing management, the director of the nursing department, some head nurses, nursing backbone staff, and engineers, which holds regular special meetings to jointly develop and improve system functions. The information flow of the nursing quality indicator information system is shown in Figure 1. Through data cleaning and processing, the relevant data of the nursing quality indicators are presented in a variety of visual charts. Diversified charts show problems with the quality of care from a variety of perspectives, so that weaknesses can be quickly identified and corrected to reduce the occurrence of quality-of-care problems. The "number of cases by event type and department" in adverse events is taken as an example. According to the selected year, quarter, month, and day, the number of cases by department and event type is presented as a data table and histogram. The types of reported events are presented as pie, line, and Pareto charts, and the numbers of cases by event type and department are compared in year-on-year trend charts under different conditions [11]. At the same time, the content of adverse events is extracted, a fishbone diagram is drawn, and the major, medium, and minor causes are analyzed. In addition, the presentation of some data links the nurses involved in the indicators with the patients, so that responsibility is assigned to individuals and buck-passing is avoided. Nursing managers can use PDCA or failure mode and effects analysis (FMEA) management modes to manage, from all angles and levels, nursing staff who present nursing quality problems and the problems related to them [12].
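As one illustration of the dashboard views described above, a Pareto chart of adverse-event counts by event type can be drawn with matplotlib. The event names and counts below are invented placeholders, not data from the study:

# Pareto view of adverse events by type; values are illustrative only.
import matplotlib.pyplot as plt

events = {"falls": 18, "medication error": 11, "unplanned extubation": 7,
          "pressure injury": 5, "other": 3}
labels = sorted(events, key=events.get, reverse=True)  # descending counts
counts = [events[k] for k in labels]
total = sum(counts)
cumulative = [sum(counts[: i + 1]) / total * 100 for i in range(len(counts))]

fig, ax1 = plt.subplots()
ax1.bar(labels, counts)
ax1.set_ylabel("Number of cases")
ax2 = ax1.twinx()                      # second axis for the cumulative line
ax2.plot(labels, cumulative, marker="o", color="tab:red")
ax2.set_ylabel("Cumulative %")
ax2.set_ylim(0, 110)
plt.title("Adverse events by type (Pareto view)")
plt.tight_layout()
plt.show()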
The Application of the Medical Care Quality Management System

(i) System Login. The nurse-station computers can be equipped with the medical and nursing quality management software, and all nurses can log in with their work number, which allows nurses to query feedback information and nursing quality inspection information; the steering group can also query nursing management quality at any time.

(ii) Set Inspection Standards. We improve the nursing quality information base and enter the indicator items and quality standards to be monitored into the system, forming more standardized and reasonable structured information that helps the inspection team to check. For example, in the nursing department, nursing data collection items have been added to the reporting system for adverse nursing events, such as the cause of a patient's fall, the treatment measures, and the degree of injury, and the system can automatically generate indicators such as the patient fall rate and the number of incidents.

(iii) Quality Control via Smartphones. Nursing quality monitoring personnel carry out quality monitoring at the patient's bedside through smartphones, using the inspection standards as control indicators to enter, store, and report existing nursing problems in a timely manner; if a submission is not completed in time, the system sends a reminder message. At present, the hospital has achieved bedside monitoring of, for example, nursing document management, ICU nursing quality, graded nursing, patient satisfaction, high-quality nursing quality control, and emergency nursing management.

The incidence (%) of falls in the nursing safety management application information system is compared in Table 1, and the incidence (%) of stress injury is compared in Table 2. The system is mainly composed of nursing-sensitive indicators and clinical basic quality indicators and is involved in the reporting, review, control, and tracking of nursing quality indicators. It can automatically extract the relevant data of the statistical query and decision analysis module, the nursing adverse event reporting system, the nursing electronic evaluation sheet, and other modules, realizing systematic query, automatic statistics, analysis, and feedback of nursing indicators. The nursing department and quality control staff can check the indicators and quality control progress of the whole hospital and of the various departments at any time through the nursing management platform and the nursing management dashboard of the smartphone app. The system is sensitive to the actual nurse-patient ratio, the incidence of falls, and the incidence of unplanned extubation; for example, the incidence rate of unplanned extubation in the whole hospital in each quarter can be queried and analyzed. As can be seen from Tables 1 and 2, a trend chi-square test showed significant differences in the incidence of falls and stress injuries across the four quarters (P < 0.05). Specifically, the incidence of falls and stress injuries decreased significantly from the first quarter to the fourth quarter.
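The quarterly trend chi-square test reported for Tables 1 and 2 is commonly computed as a Cochran-Armitage test for a linear trend in proportions. Here is a minimal sketch; the quarterly event counts and patient denominators are placeholders, since the tables themselves are not reproduced in this excerpt:

# Cochran-Armitage trend test; Q1..Q4 counts below are illustrative.
import numpy as np
from scipy.stats import norm

def cochran_armitage_trend(events, totals, scores=None):
    """Test for a linear trend in proportions across ordered groups."""
    events, totals = np.asarray(events, float), np.asarray(totals, float)
    t = np.arange(len(events)) if scores is None else np.asarray(scores, float)
    N, R = totals.sum(), events.sum()
    p_bar = R / N                                   # pooled proportion
    stat = (t * events).sum() - p_bar * (t * totals).sum()
    var = p_bar * (1 - p_bar) * ((totals * t**2).sum()
                                 - (totals * t).sum() ** 2 / N)
    z = stat / np.sqrt(var)
    return z, 2 * norm.sf(abs(z))                   # two-sided p-value

z, p = cochran_armitage_trend(events=[14, 10, 7, 4],
                              totals=[900, 920, 940, 910])
print(f"z = {z:.3f}, p = {p:.4f}")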
The Application Effect of Artificial Intelligence and Big Data in Medical Care Quality Management

Through the application of advanced technologies such as artificial intelligence and big data in medical care quality management, 79% of nursing staff said that the system greatly reduces the time spent on nursing quality control. Once nursing quality management problems are entered into the system, nursing quality management reports are generated automatically, and the automatic statistics on nursing problems save nursing managers' statistical time, which not only shortens the time of nursing quality control but also improves nursing efficiency [13]. This is shown in Table 3.

As can be seen from Table 3, the comparison of nursing management efficiency before and after the application of the nursing management information system shows a significant difference (P < 0.001), indicating that nursing management efficiency improved significantly after implementation. The accuracy, completeness, and objective authenticity of traditional manual statistics are difficult to guarantee, which affects the management effect. The big data platform integrates previously dispersed, isolated, and static information into complete, continuous, and shareable dynamic information, which improves the accuracy, objectivity, and continuity of nursing quality management. As long as the quality inspection data in the nursing quality management information system supported by the big data platform are entered in a timely manner, the background can automatically and accurately calculate the scores, analyze the proportions of various problems, and realize the objective quantification of the quality indicator data. The system accurately analyzes the daily work quality of each nurse, with traceable details, so that the behavior of nursing staff becomes more standardized, ensuring and improving the quality and safety of nursing.

Experimental Analysis

The nursing quality management system based on big data and intelligent mobile devices realizes timely data sharing and transmission with the HIS database, the big data platform, and the medical information docking system through WLAN technology, so as to realize nursing management monitoring under the vertical nursing management mode. It has the functions of statistical analysis of nursing quality indicators, bedside quality control, and automatic feedback. The hospital can set up a nursing information R&D team, in which the nursing director, the nursing department director, some head nurses, nursing backbone staff, and engineers regularly hold special meetings to jointly develop and improve the system functions. The whole hospital went online with the new mobile nursing quality management information system. The nursing quality management platform can be installed on the nurse-station computer on each floor and on the head nurses' and nurses' work mobile phones. Nurses can log in to the platform using their work number to view ward nursing quality and feedback from the nursing department. The nursing quality supervision and education group can also report the operation of the platform to the platform development company at any time and bring further quality standards and indicator items that need to be monitored into the system to improve the nursing quality standard information base. An analysis of hospital data after using the system is shown in Table 4; 73% of nurses believe that the use of the system shortens the time spent on nursing quality control. Nurses only need to enter the complete quality control problems into the system, and the system automatically classifies and counts the various reports and nursing problems in nursing quality management, which effectively reduces the time nursing managers spend on input and statistics. At the same time, nursing quality problems can be queried and supervised in real time, which greatly saves nursing quality control time and improves work efficiency.
After the medical staff enter the patient's basic information, cost information, and work performance into the system, the nurse can provide care for each condition through the task panel corresponding to the bed, providing personalized, high-quality nursing services. This meets patients' needs, brings nurses and patients closer, provides real-time early warnings and reminders, and reduces work omissions. The effective implementation of the nursing check system standardizes the behavior of nurse practitioners. Through synchronous medical and nursing ward rounds in the mobile medical system, the consistency of patient information obtained by medical and nursing staff is improved, effective communication between staff and patients is strengthened, the quality of medical care service is further improved, and patient satisfaction is enhanced. The nursing quality management platform can give full play to the automatic statistical analysis capabilities of big data and realize refined data management. The platform automatically summarizes the problems of each department according to the input results, analyzes the data according to the problem ratios, and feeds the problems back to each department by exporting Word documents, so that the corresponding department can formulate improvement measures to improve the quality of nursing. Through the nursing quality management platform, medical staff can see a patient's basic information, cost information, and work execution at a glance. Through the corresponding task panel, the responsible nurse can provide personalized, high-quality nursing services to meet patients' needs, bring nurses and patients closer, and receive real-time early warning information to reduce work omissions. Information technology can not only improve the reliability and timeliness of data but also assist nursing decision making to avoid potential nursing risks, reduce the total length and cost of patient hospitalization, and increase patient turnover and potential income, so as to achieve better quality nursing services. Through accurate, information-technology-assisted statistical analysis of the nursing quality monitoring content, complex data are organized and redundant systems are simplified, which supports the acquisition and storage of massive data and realizes data sharing across nursing units. It helps nursing managers to quickly grasp the overall situation of nursing quality, which is an important guarantee for the realization of modern medical construction.
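A minimal sketch of the per-department summary and feedback step described above, assuming the quality-control records arrive in a simple tabular form; the column names, and the CSV export standing in for the platform's Word export, are illustrative assumptions:

# Summarize quality-control problems per department and type.
import pandas as pd

records = pd.DataFrame({
    "department": ["ICU", "ICU", "Surgery", "Surgery", "Surgery", "Medicine"],
    "problem_type": ["documentation", "hand hygiene", "documentation",
                     "falls", "documentation", "hand hygiene"],
})

# Count problems per department and type, then express each type as a
# share of that department's total so weak points stand out.
summary = (records.groupby(["department", "problem_type"])
                  .size().rename("cases").reset_index())
summary["share"] = (summary["cases"]
                    / summary.groupby("department")["cases"].transform("sum"))
print(summary)

# Export the feedback report (the platform exports Word; CSV shown here)
summary.to_csv("department_feedback.csv", index=False)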
This paper designs a nursing quality control platform by connecting big data with an intelligent nursing information system, and puts into practice and improves the processes and methods of nursing quality control, so as to strengthen quality management, optimize the quality control process, and improve work efficiency and management efficiency. However, there is still much work to be done on closed-loop management of the whole informatization process for human, financial, and material resources, and it is necessary to further improve the system functions and the quality of nursing quality management.

Conclusions

In order to explore the quality management efficiency of applying big data and artificial intelligence to nursing quality indicators, a nursing management platform combining nursing indicators and nursing events was proposed. Based on a survey of the application needs of the nursing information system, the method achieves timely data sharing and transmission through WLAN technology and realizes nursing management monitoring, nursing quality indicator inquiry, and automatic statistical analysis under a vertical nursing management mode. The experimental results show that there is a significant difference in nursing management efficiency before and after the application of artificial intelligence and big data in the nursing management information system (P < 0.001), which is very beneficial for comprehensively improving nursing quality control methods and actual processes and for strengthening nursing quality management. It can realize the overall optimization of the quality control process, help to mobilize the initiative and enthusiasm of nursing staff, and continuously improve the effectiveness of nursing management and nursing efficiency. At present, the number of users of regional apps is gradually increasing, but relatively remote areas and underdeveloped rural areas do not know much about the convenience these apps offer. Therefore, these measures should be extended to rural areas.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Table 4: Comparison of analysis time for nursing quality problem input before and after nurses used the nursing quality management information system.

Project           Number of people    Percentage (%)
Time decreased    77                  73
Time the same     19                  18
Time increased    9                   7
Muscle strength, physical fitness and well-being in children and adolescents with juvenile idiopathic arthritis and the effect of an exercise programme: a randomized controlled trial

Background

Decreased muscle strength, fitness and well-being are common in children and adolescents with juvenile idiopathic arthritis (JIA) compared to healthy peers. Biological drugs have improved health in children with JIA, but despite this, pain is still a major symptom and bone health is reported as decreased in the group. The improvement brought by the biological drugs makes more demanding exercises possible. Jumping is an exercise that can improve bone health, fitness and muscle strength. The aim of the study was to see whether an exercise programme with jumps had an effect on muscle strength, physical fitness and well-being, and how it was tolerated.

Methods

Muscle strength and well-being were studied before and after a 12-week exercise programme in 54 children and adolescents with JIA, 9-21 years old. The participants were randomized into an exercise and a control group. Muscle strength, fitness and well-being were documented before and after the training period and at follow-up after 6 months. Physical activity in leisure time was documented in diaries. The fitness/exercise programme was performed at home three times a week and included rope skipping and muscle strength training exercises. Assessment included measurement of muscle strength with a handheld device and with the Grippit, a step-test for fitness with documentation of heart rate and pain perception, and two questionnaires (CHAQ, CHQ) on well-being.

Results

There were no differences between the exercise and control groups regarding muscle strength, grip strength, fitness or well-being at baseline. Muscle weakness was present in hip extensors, hip abductors and handgrip. For the exercise group, muscle strength in hip and knee extensors increased after the 12-week exercise programme and was maintained in knee extensors at follow-up. There was no change in fitness tested with the individually adapted step-test. The CHQ questionnaire showed that pain was common in both the exercise group and the control group. There were only small changes in the CHAQ and CHQ after the training period. The fitness/exercise programme was well tolerated, and pain did not increase during the study.

Conclusions

A weight-bearing exercise programme with muscle strength training with free weights and rope skipping was well tolerated, without negative consequences on pain. It also improved muscle strength in the legs and can be recommended for children and adolescents with JIA.

Background

Children and adolescents with juvenile idiopathic arthritis (JIA) in most parts of the world have decreased muscle strength, bone health and well-being compared to healthy peers [1-8]. The disease can affect school performance, physical training, family life, and activities in leisure time with peers [9-11]. Kimura et al. state in a study from 2008 that pain is one of the major symptoms and limits activities, disrupts school attendance and contributes to psychosocial distress [12]. The last decade has seen the introduction of biological drugs, e.g. anti-tumor necrosis factor alpha (anti-TNFα), also for paediatric rheumatic disorders [13,14]. The medical effect of anti-TNF is especially high in children with polyarticular onset of JIA [13,14].
In subjects with JIA, the effects are described as improvements in functional ability, health-related quality of life, pain, sleep quality and daily participation, and in terms of fewer flares or inflammatory active joints [15-17]. Anti-TNFα drugs are effective, safe and well tolerated in children with JIA [14-19]. Despite the use of biological agents, pain is reported as the major symptom of the disease, and joint pain is the leading cause of disability in this disease [12]. The authors describe pain perception as multifactorial, therefore requiring "a bio-psychosocial model that includes the individual's age, developmental status, coping ability, mood, stress levels, and environmental and family factors, in addition to disease status and severity" [12]. Physical activity is important from a health perspective, especially in the subgroups with the polyarticular and extended oligoarticular categories [1,2,12,17,20]. Different physical activities have been studied, such as jumping with a rope (rope skipping) and exercise programmes in water [4,6,7,21-24]. Jumping has an influence on bone health, and foot orthotics can significantly improve pain, speed of ambulation, and self-rated activity and functional ability [23,25]. Exercise programmes with weight-bearing exercises have been shown to improve both muscle strength and bone mass [6,22,23]. The exercise programmes in these studies were at different intensity levels and of different durations, and physical activity in leisure time was not fully documented [4,6,22]. Takken showed that cardiovascular fitness was also decreased in children with JIA compared to healthy peers and pointed out the importance of cardiovascular fitness and motor performance as a part of total well-being [24]. Muscle strength is an important part of a fitness programme; muscle weakness in children with JIA has been reported in many studies since the 1990s [2,8,9]. There is, however, a lack of knowledge about physical exercise levels and the impact on pain and well-being. Physical fitness is described as a state of well-being with energy to participate in a variety of physical activities [26]. Fragala-Pinkham et al. stress the importance of incorporating more strategies to increase fitness, physical activity, and participation into the rehabilitation programme to improve quality of life (QoL) [27]. Questions concerning well-being and the impact of social and psychological functioning are well covered in the Child Health Questionnaire (CHQ) [28-30]. A couple of studies have reported that children with JIA after exercise interventions have less physical impairment or discomfort but report low levels of psychosocial abilities, such as self-esteem and psychosocial functioning, and high levels of pain [21,22,25]. This seems to be a pattern for children with different chronic diseases or disabilities [31,32]. At our hospital, the children with JIA attend the hospital regularly for physical training, which is time-consuming and costly both for the families and for the health care system. An easy-to-handle home-based exercise programme making the patient less dependent on the physical therapist was needed, which was the impetus for this study. The aim of the study was to evaluate muscle strength, grip strength, physical fitness and well-being in a cohort of children and adolescents with JIA and the effects of a home-based exercise programme.
Subjects

The study is the second part of a randomized controlled trial of 54 children and adolescents with JIA that studied the effects of an exercise programme on bone health, muscle strength, fitness and well-being. Bone health and leisure time activities have been reported earlier, and the randomization process has been described in detail [23]. The inclusion criteria were polyarticular or extended oligoarticular arthritis, treatment with methotrexate, TNF-blockers and/or prednisone, and need of repeated corticosteroid injections in joints of the lower extremities. Medical records were obtained, and three participants were found to be diagnosed with enthesitis-related and psoriatic arthritis. After written consent by the parents and assent by the children, the subjects were randomized into an exercise or a control group. The person carrying out the group allocation was blinded. A flow chart of randomization and training is presented in Figure 1. There were 10 dropouts from the control group after the randomization, as they would have preferred to belong to the exercise group. There were another six dropouts after the first test occasion due to the families' lack of time. Muscle strength, range of motion, balance, fitness and well-being were studied before and after a 12-week exercise programme. The participants were evaluated three times: at baseline, after 3 months at the end of the training period, and at follow-up at 6 months. The same physiotherapist, who was blinded to the previous measurements, performed all measurements.

Range of motion and balance

Range of motion (ROM) was measured with a plastic goniometer in dorsal and plantar flexion of the ankle; flexion, abduction and rotation of the hip; and rotation, abduction and flexion of the shoulder. The Balance Reach Test for children [33] was performed before and after the exercise programme. It has good test-retest and inter-rater reliability, with intraclass correlation coefficients of 0.54-0.88 and 0.54-0.93, respectively [33].

Muscle strength

Muscle strength in arms and legs was tested with a handheld device (adapted Chatillon® dynamometer; Axel Ericson Medical AB, Gothenburg, Sweden) in eight muscle groups (shoulder abduction; elbow extension and flexion; hip extension, flexion and abduction; knee extension; ankle dorsal flexion), using the "make" technique and standardized positions. After instruction and familiarisation with the procedure, three attempts were made and the maximum recording was used for data analysis. The lever arm for each muscle group was measured with a tape measure and torque was calculated (Nm). For six muscle groups, the positions used in this study were similar to those in a normative study [34] presenting equations for a predicted value for every muscle group based on age, sex and body weight. Muscle strength was compared with the normative material and calculated as a percentage of the predicted value. Thus it was possible to evaluate the whole group despite their different ages. Grip strength was measured with the Grippit (Detektor AB, Gothenburg, Sweden) [35]. The instrument estimates peak strength over a 10 s period, and the test was performed three times for each hand. The maximum recording was used for statistical analysis. Measurements were compared to normative values obtained with the same device [35], with data presented as mean ±1 SD and grouped according to age and sex.
The data in this study were classified on a three-level ordinal scale: strong = above +1 SD from the mean, average = within ±1 SD of the mean, and weak = below −1 SD from the mean.

Physical fitness

Fitness was tested with a step-test. In the Harvard step test from 1956, the step board was 45 cm high and the speed 30 steps per minute for 5 minutes or until exhaustion [36]. In this study the test was adapted by using a lower step board (20 cm high) in order not to provoke pain. The participants stepped on and off the step board for six minutes, and a metronome was used to keep an individually chosen speed. Heart rate was documented once a minute during the test, and exertion was documented with the Borg Scale 6-20 [37]. The power in watts was calculated taking into account body weight, gravity, the height of the step board and speed (P = m × g × v, where v is the vertical speed given by the step height and stepping rate) and was normalised to body weight (W/kg). The test was considered an individual submaximal test.

Quality of life

The Childhood Health Assessment Questionnaire (CHAQ) is diagnosis-specific for JIA and has been translated into and validated in Swedish [29]. The instrument refers to the last 14 days and includes eight different categories of activities (dressing, eating, walking, getting up, reaching, gripping, hygiene and activity). Each question is scored from 0 to 3 (0 = no difficulty, 1 = some difficulty, 2 = much difficulty, 3 = unable to complete the task). The total score varies from 0 (no limitation) to 3 (extensive limitation). The instrument is recommended for children with JIA by the International League of Associations for Rheumatology (ILAR) and the Paediatric Rheumatology International Trials Organization (PRINTO) [29]. The Child Health Questionnaire (CHQ-C87) is a survey of the physical and psychosocial health of children 5 years of age and older. The questionnaire refers to well-being status over the last four weeks. It was developed for children in the general population (for which normative data are available) and for children with chronic conditions [30]. The instrument has been validated for Swedish children, 9-16 years old, with epilepsy, diabetes and JIA [28]. It has a multidimensional profile consisting of 87 questions in twelve different domains (see Table 1). Scoring algorithms are provided for the different domains [30]. The manual provides scale scoring for a clinical sample of children with JIA, epilepsy, asthma and psychiatric disorders.

Pain

The children were asked to report whether pain occurred during the test occasions. Presence of pain was documented with a 10-centimetre visual analogue scale (VAS). Pain in a health/well-being perspective was also reported within the questionnaires CHAQ and CHQ.

Fitness programme

The participants fulfilled a training programme three times a week for 12 weeks. The exercise programme consisted of rope skipping, muscle strength and core exercises, and exercises with free weights for the arms (Appendix). The programme has been described in detail earlier [23]. The number of repetitions performed was documented in an exercise diary. Physical activity in leisure time outside the programme was also documented in an activity diary.
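The step-test power computation described under Physical fitness above translates directly into code. A small sketch follows, reading P = m × g × v with v taken as the vertical speed (step height times stepping rate); the variable names are assumptions:

# Step-test power calculation (P = m * g * v), normalised to body weight.
G = 9.81            # gravitational acceleration, m/s^2
STEP_HEIGHT = 0.20  # step board height, m (20 cm as in the adapted test)

def step_test_power(body_mass_kg, steps_per_minute):
    """Mechanical power of stepping, in watts and normalised to W/kg."""
    v = STEP_HEIGHT * steps_per_minute / 60.0  # vertical speed, m/s
    p_watt = body_mass_kg * G * v              # P = m * g * v
    return p_watt, p_watt / body_mass_kg       # note: W/kg reduces to g*v

# Example: a 45 kg participant stepping at 20 steps per minute
print(step_test_power(45, 20))  # -> (~29.4 W, ~0.65 W/kg)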
Statistical methods

For comparison between groups, the Mann-Whitney U-test was used for grip strength and for the questionnaires, and the t-test was used for muscle strength measured with the myometer. The repeated measures ANOVA method was used for comparison of muscle strength and of the step-test at baseline, after training and at follow-up. Data were tested with Mauchly's test of sphericity, and if sphericity was not assumed, the Greenhouse-Geisser procedure was used for analysis. As the results from the questionnaires were not normally distributed, the Friedman test was used for repeated measures, followed by post hoc testing with the Wilcoxon signed-rank test. P-values of 0.05 or less were considered evidence of statistically significant findings. In the post hoc analysis, Bonferroni adjustment for multiple comparisons was used. The software packages Statview, SPSS (version 17.0) and SPSS for Mac (version 19.0) were used for statistical analysis.

Ethics

This study was carried out in compliance with the Helsinki Declaration and was approved by the Regional Ethical Review Board.

Results

Fifty-four children and adolescents were included in the study, with a mean age of 13.9 years (range 8.8-21.6). There were 41 girls and 13 boys, randomized into an exercise and a control group (Table 2). There was a difference in age between the exercise group and the control group that did not reach statistical significance (p=0.059), but there were statistically significant differences in height (p=0.007) and weight (p=0.026).

Range of motion and balance

There were no differences between groups at baseline for measurement of ROM or in the Balance Reach Test, and there were no significant changes during the study.

Muscle strength

Not all children fulfilled the whole protocol at all test occasions, and only muscle groups with complete measurements were analysed. Muscle strength measurements taken at baseline were compared with the normative material for six muscle groups (see Table 3 and Figure 2). Values for hip abductors (33-38%) and hip extensors (52-55%) were below the limits of the 95% prediction interval. There were no significant differences between control and exercise group (Table 3). Values were also compared in order to see whether age had an influence on muscle strength. No significant differences were found when younger children (8-12 years) were compared with older children (13-16 years) (Figure 2). Forty-five children had measurements of grip strength that could be compared to normative values [33], 17 in the control and 28 in the exercise group. The comparison showed weakness, with 28 children showing values below the normative mean −1 SD. Sixteen children were average (within 1 SD from the mean) and one was strong, with values above the mean +1 SD. There were no significant differences between groups when compared according to age (Table 4).

Changes after the fitness programme

There were no changes in grip strength during or after the training period (see Table 4). Measurements of muscle strength of the legs are presented in Table 4. Statistically significant changes were found in the exercise group after training, with an increase in hip extensors and knee extensors compared to baseline. Knee extensor strength was maintained at follow-up.

Physical fitness

There were nine dropouts in the step-test due to pain; five experienced pain in the knee, one in the hip and three in the foot. The power in W/kg and heart rate in the step-test are shown in Table 5. There were no differences between groups before training started regarding power and heart rate. There were no changes in heart rate or perceived exertion after training (Table 6).

QoL and well-being

Results of the CHAQ and CHQ are shown in Tables 1 and 7. Fifty-three children completed the CHAQ and CHQ at baseline. There were no differences between the exercise and the control group.
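For these questionnaire scores, the non-parametric pipeline described under Statistical methods (Friedman test across the three test occasions, then Wilcoxon signed-rank post hoc tests with Bonferroni adjustment) can be sketched as follows; the file and column names are hypothetical:

# Friedman test with Bonferroni-adjusted Wilcoxon post hoc comparisons.
from itertools import combinations
import pandas as pd
from scipy.stats import friedmanchisquare, wilcoxon

df = pd.read_csv("chaq_scores.csv")  # one column per test occasion
occasions = ["baseline", "post_training", "follow_up"]

stat, p = friedmanchisquare(*(df[c] for c in occasions))
print(f"Friedman chi-square = {stat:.3f}, p = {p:.4f}")

if p < 0.05:
    pairs = list(combinations(occasions, 2))
    alpha = 0.05 / len(pairs)  # Bonferroni-adjusted threshold
    for a, b in pairs:
        w, p_pair = wilcoxon(df[a], df[b])
        print(f"{a} vs {b}: W = {w:.1f}, p = {p_pair:.4f} "
              f"(significant at {alpha:.4f}: {p_pair < alpha})")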
The CHQ has previously been used only in scientific studies, such as the study of Norrby [38]. The CHQ was used for all the participants in this study. Participants older than 16 years, including one aged 20.6 years, were still considered adolescents, as they remained patients at the children's hospital. Our subjects showed low values in the domain "bodily pain" and also in the domains "general health" and "mental health" at baseline. Thirty-five children completed the CHAQ, and 39 the CHQ, at all test occasions. There was no increase in pain during the study. There were only small changes in both of the questionnaires. In the control group there was a statistically significant increase in the CHQ domain "role physical" at the end of the study period. There was a tendency towards improved "mental health" in the exercise group and towards deterioration in "general health" in the control group.

Exercise programme

The participants in the exercise group fulfilled about 70% of the expected number of exercises (Figure 3).

Discussion

The study revealed muscle weakness and the presence of pain in children and adolescents with JIA. Pain was not an obstacle to performing the physical fitness programme, but there were ten dropouts, mainly due to pain during the testing procedure.

Muscle strength in the legs

Muscle weakness in knee extensors, elbow flexors and ankle dorsal flexors has been reported in children with JIA compared to healthy peers since 1995 [2,3,7,8]. In our study, muscle weakness was found in hip extensors, hip abductors and handgrip strength. The other muscle groups were also below predicted values but within the normative range. In contrast to other studies, our group was within normal values for knee extensors [3,7,8]; Saarinen, Lindeman and Broström all found that muscle weakness was present in children and adolescents with JIA [3,7,8].

Grip strength

More than 50% of the children in this study had lower grip strength compared to peers, and many of the children in the group had received corticosteroid injections in hands and fingers. No increase was found in handgrip strength after the training, which was not to be expected, as there were no specific exercises for grip strength in the programme. Earlier studies of children and adolescents report lower grip strength and that impaired performance in writing and drawing in school has negative consequences compared to peers [39,40]. A pilot study shows that children with JIA suffer from handwriting difficulties and are limited mainly by pain and the inability to sustain handwriting for a longer period of time [40].

Exercise programme

The increased muscle strength in hip and knee extensors found after 12 weeks of exercise correlates well with the training programme, which included exercises for both hip and knee extensors. Rope skipping seemed to be an effective exercise for improving muscle strength in these muscle groups. Improvement in knee extensors is especially important, as the knee joint is the most affected joint in children and adolescents with JIA [10]. Earlier studies of groups of children and adolescents with JIA [1-4,20-25,38,41-43] show that this group is inferior regarding functional ability, physical fitness and cardiovascular capacity compared to peers. Earlier studies with rope skipping also reported significant improvements in bone health and muscle strength, both for children with JIA and for healthy children [6,23,44].
The physical fitness programme also covered items for muscle strength in the core muscles and the muscles around the shoulders. The failure to increase muscle strength in these muscle groups may have been because the weights used were not heavy enough. No changes in fitness, in terms of heart rate and exertion, were found after training. This may be because the tests were not sensitive enough or because the exercises were not sufficiently challenging regarding fitness. The step-test was performed on a 20 cm step board in order not to provoke knee pain; despite this, ten children did not fulfil the test, mainly due to pain in the knee, hip or foot. A higher step board would have been more demanding and could perhaps have shown a difference in the measurement. Measuring physical fitness with a cycle ergometer may be less painful for children with JIA. On the other hand, the step-test with weight-bearing exercise is a test close to functions that are important in daily life. Functional ability, physical fitness and cardiovascular capacity have been studied earlier in groups of children with JIA [20-24,40], and those studies found them to be inferior regarding physical fitness compared to peers. There was a lack of knowledge regarding the participants' ability to perform the programme and how well they adhered to it. The focus had been on daily activity and on participation in physical education at school, not on progressive cardiovascular fitness. With the improvement in medical treatment, it is important to keep addressing physical fitness, as it is a prerequisite for good health [45,46]. The exercise programme in our study was designed to meet the need for physical training, in accordance with the ILAR recommendation. The frequency of three times a week and the level of cardiovascular effort and weights were well tolerated. In this study we did not individualize the programme or increase the number of repetitions or the load during the training period, which might have given another outcome. Noteworthy, our protocol did not render any increase in pain during the training period. By introducing an exercise programme we also hoped to encourage the children to change from a sedentary to an active lifestyle. As reported earlier, physical activity increased in the group during the study and at follow-up [23]. Of the 54 participants, 48 completed the training, but there were dropouts in some of the measurements, explained by lack of time, pain and different psychosocial reasons.

Table 6: Step-test; heart rate and exertion at baseline and after 3 and 6 months, mean (SD).

Well-being

The questionnaires showed that the children in the study did not have any difficulties in carrying out daily activities. Compared to an earlier study in the same region and compared to the results from normative data [38], our group scored higher on the domain "general health" but was at about the same levels on the other domains. There were only a few changes during the 6-month period. The CHAQ has been debated because of a lack of sensitivity to changes in rehabilitation [29]. Dempster et al., in a study from 2001, showed that a minimal clinically important improvement is represented by a median change of −0.13 in the CHAQ [47]. They also found a minimal clinically important deterioration represented by a median change of 0.75. In our study, no such improvement or deterioration was found.
Our results confirm that it is not an optimal questionnaire for children with a moderate impairment who are in an inactive phase of the disease, as it did not capture the spontaneous positive comments from participants during the study. In our study the children reported low levels in the domain "bodily pain" and also in the domains "general health" and "mental health". The pathogenesis of pain in children with rheumatic diseases is multifactorial, and disease treatment alone is often not enough to alleviate it. Many researchers stress that chronic conditions can have "hidden" consequences for children's self-esteem and well-being, which is why pain treatment should include non-pharmacological interventions, for example exercise and cognitive-behavioural therapy, for a better outcome on their general health [12,17,45,46].

Conclusions

Muscle weakness was present in hip extensors, hip abductors and the handgrip. Muscle strength in hip and knee extensors increased after the 12-week exercise programme and was maintained in knee extensors at follow-up. Pain was common in the group. The exercise programme was well tolerated, there was 70% compliance with the programme, and pain did not increase during the study. The study shows that a weight-bearing fitness programme with muscle strength training including free weights and rope skipping can be recommended for children and adolescents with JIA.
Rapid effects of valproic acid on the fetal brain transcriptome: Implications for brain development and autism

There is an increased incidence of autism among the children of women who take the anti-epileptic, mood-stabilizing drug valproic acid (VPA) during pregnancy; moreover, exposure to VPA in utero causes autistic-like symptoms in rodents and non-human primates. Analysis of RNA-seq data obtained from E12.5 fetal mouse brains 3 hours after VPA administration to the pregnant dam revealed that VPA rapidly and significantly increased or decreased the expression of approximately 7,300 genes. No significant sex differences in VPA-induced gene expression were observed. Expression of 399 autism risk genes was significantly altered by VPA, as was expression of 255 genes that have been reported to play fundamental roles in fetal brain development but are not otherwise linked to autism. Expression of genes associated with intracellular signaling pathways, neurogenesis, and excitation-inhibition balance, as well as synaptogenesis, neuronal fate determination, axon and dendritic development, neuroinflammation, circadian rhythms, and epigenetic modulation of gene expression, was dysregulated by VPA. The goal of this study was to identify mouse genes that are (a) significantly up- or down-regulated by VPA in the fetal brain and (b) known to be associated with autism and/or to play a role in embryonic neurodevelopmental processes, perturbation of which has the potential to alter brain connectivity and, consequently, behavior in the adult. The set of genes meeting these criteria provides potential targets for future hypothesis-driven studies to elucidate the proximal causes of errors in brain connectivity underlying neurodevelopmental disorders such as autism.

INTRODUCTION:

Autism is a neurodevelopmental disorder (NDD) characterized by social interaction deficits, including language, and repetitive, stereotyped behavior with restricted interests [1]. Autistic individuals may also display an increased incidence of intellectual disability (ID), anxiety and seizures. The reported incidence of autism has increased dramatically over the last two decades [2], with some estimates now as high as 1:44 [3]. The severity of autism varies widely, ranging from high-functioning with minimal disability to severely afflicted, where aggressive, self-destructive repetitive behaviors pose a threat to the safety of the patient, caregivers, and family. The term "autism spectrum disorder" (ASD) reflects this symptomatic variability. The neurological mechanisms underlying ASDs are not understood, nor is it known whether autism is a single disorder or multiple disorders sharing common core features. The diagnosis of autism typically is made at around two years of age, when the child fails to meet normal milestones for social development and language. However, it has become increasingly evident that autism arises before birth: infants destined for an autism diagnosis fail to show normal attention to faces [4]. Epidemiological studies also indicate that the onset of autism occurs during fetal development; one such study found that folate supplementation for pregnant women reduced the incidence of autism, but only when administered between two weeks before and four weeks after conception [5].
Twin studies have revealed that 60 to 88% of autism cases are inherited 6,7 ; however, many cases have been linked to in utero exposure to environmental factors such as pharmaceuticals, air pollution, insecticides, and maternal infection 8,9 . For example, exposure to the anti-epileptic, mood-stabilizing drug, valproic acid (VPA; Depakote®) 10 or the organophosphate insecticide, chlorpyrifos 11 , increases the incidence of autism and other NDDs in the offspring of women who are exposed during pregnancy. Another environmental cause of autism is maternal immune activation (MIA), in which pregnant women who have systemic bacterial or viral infections with a hyper-immune response have an increased incidence of autism in their children [12][13][14] .

Studying the etiology of autism in humans is limited to epidemiological approaches implicating genetic variants (single nucleotide polymorphisms, copy number variants, and single-gene syndromic mutations). The Simons Foundation Autism Research Initiative (SFARI) has compiled a list of 1,115 human genes (https://gene.sfari.org/database/human-gene/), referred to as the "SFARI List" below, for which there is evidence for association with autism based on genome-wide association studies (GWAS). Hypothesis-driven experiments to determine cause and effect need to be done in animal models, which display a range of behaviors that are remarkably similar to the core symptoms of autism [15][16][17] . These autistic-like behaviors have been reported in transgenic animals lacking ASD-linked genes such as Cntnap2 and Shank2 18,19 , supporting the conclusion that pathogenic mutations in these genes are causative for syndromic autism and leading to the widespread use of these transgenic mice as animal models 20,21 . Systemic administration of VPA, chlorpyrifos or induction of MIA in genetically normal pregnant rodents also leads to increased autistic-like behaviors in their offspring 12,14,16,17 . Unlike transgenic mouse models, these environmental toxicity models of autism provide the opportunity to control the timing of exposure of the fetus to the toxic stressor, as will be discussed below. Gene expression analyses of the brains of mice in the VPA and MIA models have been conducted using RNA microarray analysis or next-generation RNA sequencing (RNA-seq). In most of those animal studies, the fetus was exposed to the drug during mid-gestation, but RNA expression was analyzed in postnatal or adult animals (e.g., see refs 22−26).
There are two competing, but not mutually exclusive, theories about the role of differential gene expression in autism. In one, abnormal gene expression in the child or adult at the time of behavioral testing is responsible for autistic-like behavior. The second posits that abnormal gene expression, due to genomic variants or in utero environmental stressors, interferes with one or more critical steps in the "program" controlling fetal brain development. Such errors could carry forward throughout life, leading to a brain with subtle anatomical or connectivity defects that underlie the abnormal development of an autistic brain. This has been referred to as a "presymptomatic signature" 27 . The present study focused on the second hypothesis by examining altered gene expression in fetal brains when the environmental stressor (VPA) is still present; these genes need not be continuously dysregulated throughout life. Consequently, genes associated with critical steps in early brain development such as neurogenesis, neuron fate specification, axon and dendrite growth, and synaptogenesis are of particular interest.

The reported incidence of autism is about 4-times higher in males than in females 28 . Sex differences in neural function are generally considered to be mediated by sex hormones acting from late prenatal brain development through to the adult. Since the time of VPA administration in the present study (E12.5) is just prior to the maturation of gonads and production of sex hormones 29 , any sex differences in gene expression observed likely would be due to sexually-dimorphic gene expression rather than to hormonal effects.

The complexity and variability of the behavioral symptoms of autism, together with identification of over 1,100 ASD-associated gene variants (SFARI List), make a mechanistic understanding of the causes of the disorder a daunting task. One approach to this problem is to compare the results of multiple studies using GWAS results from human patients and gene expression results from animal models of autism, looking for common differentially-expressed genes. The subset of genes in common from both types of studies can provide a short(er) list of candidate genes that could be subjected to future hypothesis-driven studies to establish causal relationships between gene dysregulation and ASD symptoms. Meng et al. 30 have taken an analogous approach, identifying genes that are dysregulated by long-term VPA treatment of human forebrain organoids in vitro and that are also linked to autism.

The goal of the present study was to address this question using the VPA mouse model, in which pregnant mice receive a single i.p. injection of VPA at gestational day 12.5 (E12.5). Fetal brains were processed for RNA-seq three hours after VPA administration; male and female fetuses were analyzed separately. Pharmacokinetic studies have demonstrated that injected VPA dissipates within 3−5 hours due to metabolism of the drug by the maternal liver 31,32 ; VPA levels in the fetal brain follow a similar trajectory 33,34 .
Consequently, with this protocol, the fetal brain receives a brief, transient exposure to VPA at a critical time for early brain development. Thus, VPA-induced molecular events occurring on E12.5 are both necessary and sufficient for the autistic-like behaviors in VPA-treated animals assessed 5−6 weeks later. Analysis of these data revealed that VPA induced a significant increase or decrease in the expression of approximately 7,300 genes, of which 399 are among the 1,115 genes on the SFARI List and at least 255 additional genes are known to play fundamental roles in the development and function of the nervous system.

MATERIALS AND METHODS: Timed-pregnant C57BL6 mice were generated by the University of Maryland School of Medicine Division of Veterinary Resources. All animal procedures were approved by the University of Maryland Baltimore, Institutional Animal Care and Use Committee. VPA was obtained from Sigma. One male was paired overnight with two females; the day of separation was designated E0.5. On E12.5, pregnant females received i.p. injections of VPA (400 mg/kg) in sterile saline or saline alone. Seven pregnant females received VPA and 7 received saline. Three hours after VPA administration, pregnant females were euthanized by cervical dislocation and decapitation. Fetuses were transferred to ice-cold saline, decapitated, and the entire brain was removed to Trizol and disrupted for 60 sec with a Bead Beater using 0.2 mm beads. The 105 individual fetal brain samples were frozen on dry ice and stored at −80 °C prior to RNA extraction and quality control (RIN = 10 for all samples). Sex was determined by analyzing Sry and Gapdh RNA derived from the torso of each fetus by PCR. Males were identified by the presence of both Sry and Gapdh; Gapdh but not Sry RNA was expressed in females 35 . One male and one female brain from each of the 7 pregnancies at each condition were processed for RNA-seq by the University of Maryland Institute for Genome Sciences.

Libraries were prepared from 25 ng of RNA using the NEB Ultra II Directional RNA kit. Samples were sequenced on an Illumina NovaSeq 6000 with a 150 bp paired-end read configuration. The quality of sequences was evaluated using FastQC 36 . The alignment was performed using HiSat (version HISAT2-2.0.4) 37 with Mus musculus GRCm38 as reference genome and annotation (version 102). Aligned bam files were used to determine the number of reads per gene using HTSeq 38 . On average, we sequenced 187,000,000 reads; 96.6% of them properly mapped to the reference: 87% mapped to exons, 2.4% to introns and the remaining 10.6% to intergenic regions.

Differential gene expression between mice treated with VPA and saline controls was analyzed using DESeq2 39 , which models gene counts using the negative binomial distribution. P-values were corrected for false discovery rate (FDR) using the Benjamini-Hochberg procedure; pFDR ≤ 0.025 and log2-fold change ≥ 0.32 were used as the criteria for significance. Differentially expressed genes due to VPA treatment were tested for enrichment in SFARI autism risk genes (https://gene.sfari.org/database/human-gene/) using Fisher's Exact Test. Significance of sex differences in FC was analyzed by 2-way ANOVA using the Limma-Voom tool 40 .
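As an illustration of the selection criteria described above, the following sketch restates them in Python. This is not the study's analysis code (differential expression used DESeq2 and Limma-Voom in R); the input format, with assumed per-gene fields 'name', 'log2fc' and 'pvalue', is invented for the example. Note that log2(1.25) ≈ 0.32, so the log2-fold-change cutoff matches the ±1.25-fold threshold applied in the Results.

```python
# Hypothetical sketch of the stated criteria: Benjamini-Hochberg FDR correction,
# pFDR <= 0.025 with |log2 fold change| >= 0.32 (~1.25-fold), then a Fisher's
# exact test for enrichment of SFARI autism risk genes among significant genes.
from statsmodels.stats.multitest import multipletests
from scipy.stats import fisher_exact

def select_significant(genes):
    """genes: list of dicts with keys 'name', 'log2fc', 'pvalue' (assumed format)."""
    pvals = [g["pvalue"] for g in genes]
    _, pfdr, _, _ = multipletests(pvals, method="fdr_bh")  # BH-adjusted p-values
    return {g["name"] for g, p in zip(genes, pfdr)
            if p <= 0.025 and abs(g["log2fc"]) >= 0.32}

def sfari_enrichment(all_genes, significant, sfari):
    """One-sided Fisher's exact test on the 2x2 table:
    (significant vs. not significant) x (SFARI vs. not SFARI)."""
    not_sig = all_genes - significant
    table = [[len(significant & sfari), len(significant - sfari)],
             [len(not_sig & sfari), len(not_sig - sfari)]]
    odds_ratio, p = fisher_exact(table, alternative="greater")
    return odds_ratio, p
```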
RESULTS: RNA-seq. 19,721 individual genes were analyzed in fetal mouse brain by RNA-seq. Male and female brains were analyzed separately. The raw data showing base counts (mean of male and female counts) for each gene in each of seven independent fetal brain samples of each sex, each from a different pregnancy, are shown in Supplemental Table S1, i.e., n = 7 fetal brains from 7 different pregnancies for each sex/condition. Only genes with base counts ≥ 100 were analyzed further. Genes that increased or decreased in response to VPA by < 1.25-fold and > −1.25-fold, and those with adjusted p-values (pFDR) > 0.025, were also filtered out; a gene was included if VPA increased or decreased its expression by ≥ 1.25-fold or ≤ −1.25-fold in only one sex. Shown in Supplemental Table S2 are the 6516 genes significantly affected by VPA (pFDR ≤ 0.025) in both sexes, 498 in females only and 280 in males only (listed in three separate sections in Table S2). These apparent sex differences were due to variability in the replicates of one of the sexes (i.e., pFDR > 0.025).

Throughout this report, a positive fold-change (FC) indicates that VPA increased gene expression; a negative FC indicates that VPA decreased expression, such that the FC is the ratio of control to VPA gene expression. For example, FC = −3.0 indicates that V/C = 0.33, a 67% reduction by VPA.

Curating the mouse genes dysregulated by VPA in the fetal brain. The 7294 genes that were significantly altered by VPA (Table S2) were further analyzed in two ways. First, they were merged with the genes on the SFARI List, identifying 399 common genes (Table S3). It should be noted that inclusion on the SFARI List is based on GWAS; in many cases, a mechanistic role in brain development or in the etiology of autism has not been established. Second, we identified 255 genes (Table S4) significantly altered by VPA, not on the list of SFARI risk genes but having a documented role in brain development, including intracellular signaling, neurogenesis, and excitation-inhibition balance, as well as 15 additional categories (c.f., Table 3 and Supplementary Discussion). As will be discussed below, although these 255 genes have not been associated with autism in GWAS, it is plausible that changes (either positive or negative) in the expression of their gene products in the fetal brain might adversely affect the trajectory of brain development, leaving a permanent "signature" 27 on brain structure and connectivity leading to autistic-like behavior in the adult.

No significant sexually-dimorphic effects of VPA. Figure 1 is a plot of the log2 fold-change (FC) of males versus females for the 654 genes in Tables S3 and S4. Deviations from the regression line indicate possible sex differences; however, in no case was the effect of VPA found to be significantly different between males and females. Twenty genes (magenta) were upregulated by > 5-fold or downregulated > 80% by VPA (average of males and females).

Limitations: 1. The results reported here are restricted to VPA-induced changes in RNA levels; the extent to which these changes reflect commensurate changes in the expression levels of the proteins encoded by these genes is not known. Although in many cases RNA and protein levels are correlated, the extent to which RNA levels of a given gene track its corresponding protein levels may vary.
2. These results represent VPA-induced changes in gene expression at a single point in time (3 hr after VPA administration on E12.5). Whether the changes are transient and reverse as the VPA levels dissipate, as has been reported for Bdnf 41 , or are persistent, is not known. It is also not known whether the same VPA-induced changes would be observed after VPA administration on a different gestational day. 3. There are many genes for which expression is increased or decreased by ≥ 1.25-fold or ≤ −1.25-fold in response to VPA but the FDR-corrected p-value did not reach the ≤ 0.025 criterion applied in this study. Higher statistical power would be needed to determine whether these changes are significant. 4. The ± 1.25 FC cutoff used here is arbitrary; it is possible that smaller changes in the expression of one or more critical genes have profound effects on fetal brain development, and these would not be identified in this analysis. 5. Only a fraction of the approximately 7,300 genes whose expression is significantly affected by VPA were curated in this study. It is likely that other genes, in addition to those shown in Tables S3 and S4, could play a role in mediating the effects of VPA on fetal brain development.

Reproducibility: The stimulation by VPA of Bdnf expression (Table S4) has been independently confirmed by quantitative RT-PCR 35,41 . By separately analyzing gene expression in male and female brains, the results of this study were effectively replicated with independent biological samples (7 of each sex ± VPA, each from a different pregnancy). With very few exceptions, male and female gene expression levels were similar (c.f., Figure 1), demonstrating reproducibility across independent samples.

DISCUSSION: The overarching hypothesis for this research is that NDDs involving ID, such as ASDs, are the result of abnormal connections within and among multiple brain regions. Normal connectivity is established beginning during fetal brain development, as the various brain regions become populated with specific classes of neurons that subsequently connect with other neurons to establish the complex neuronal networks that underlie cognition and behavior. Although these networks are refined during late prenatal and postnatal brain development, it is likely that certain early prenatal developmental epochs are particularly vulnerable to alterations in the neurodevelopmental "program". Although it is by no means clear that the fetal rodent brain accurately recapitulates the autistic human brain during prenatal development, the VPA model enables experimental control of the timing of exposure to an environmental factor that causes abnormal behavior resembling autism. In the present experimental paradigm, VPA is administered to the pregnant dam at E12.5, a time when multiple early neurodevelopmental events critical for brain organization and connectivity are occurring. These include proliferation of neural progenitors (NPs), determination of the neuronal fate of these NPs, migration of newborn neurons away from the proliferative ventricular zones, differentiation of NPs to establish a mature neuronal phenotype, extension and branching of axons and dendrites, and establishment of synaptic connections. The goal of this study was to identify genes that are: (a) rapidly and significantly up- or down-regulated by VPA in the fetal mouse brain (Table S2) and
(b) known to be linked to autism in GWAS (Table S3) or to play a role in embryonic neurodevelopmental processes, perturbation of which has the potential to alter brain connectivity in the postnatal and adult brain (Table S4). The set of genes meeting these criteria would provide potential targets for future hypothesis-driven approaches to understanding the underlying proximal causes of defective brain connectivity in NDDs such as autism. It was not practical to vet each of the nearly 7,300 VPA-dysregulated genes for a potential role in abnormal brain development. However, as described below and in the Supplemental Discussion, the 255 genes in Table S4 and some of those in Table S3 have been reported to play substantial roles in specific aspects of fetal brain development. Consequently, it is plausible that the up- or downregulation of one or more of those genes could contribute to the abnormal connectivity in subjects with NDDs. The validity of this potential contribution could be tested in animals by determining the effect of manipulating the expression of individual (or combinations of) genes in the fetal brain on subsequent behavior.

A common subset of genes in GWAS and VPA-fetal mouse brain studies. Several GWAS have identified genes linked to autism (see SFARI List). Here we tabulated the findings of four GWAS studies [42][43][44][45] and one report categorizing autism risk genes potentially involved in neurogenesis 46 . As shown in Table 1, many of the genes identified in these reports are also dysregulated by VPA in the fetal mouse brain (Tables S3 and S4). Genes discussed in the respective papers are listed in each column in Table 1; genes that are also dysregulated by VPA in the present study are shown in bold. Five VPA-regulated genes in Tables S3 and S4 that also appear in all five of the published reports are in red (Arid1b, Dyrk1a, Pogz, Pten, Tbr1). Six genes in Tables S3 and S4 that also appear in four of the five other studies are in blue (Adnp, Ash1l, Chd2, Kmt5b, Tcf7l2, Wac). There were four genes (yellow) identified in all five previously published studies that were not affected by VPA in the fetal mouse brain. Chd2, which is listed in three of the four published studies, and Chd3 (50% reduced by VPA) are both SFARI genes and have been reported to have similar functions to Chd8. Of the 11 red and blue genes in Table 1, three (Adnp, Dyrk1a, Pten) appear in Table 2 as VPA-regulated, autism-associated genes involved with the structural stability of neurons 47 (see below).

VPA dysregulation of high-confidence (hc)ASD genes in mid-fetal layer 5/6 cortical projection neurons. Willsey et al. 48 identified nine hcASD genes that are expressed in layer 5/6 projection neurons in the fetal human brain. Three of these genes (Dyrk1a, Pogz, Tbr1) were downregulated by VPA in this study (Table S3). The authors used the nine hcASD genes as seed genes for co-expression network analysis, which revealed 10 probable ASD genes, of which two (Bcl11a, Nfia) were down-regulated and one (Aph1a) was up-regulated by VPA (Table S4). All of these genes are on the SFARI List (Table S3), and the three seed genes were identified in all 5 reports analyzed in this study (Table 1). These findings raise the possibility that dysregulation of one or more of these genes in developing layer 5/6 projection neurons in the cortical plate of the fetal brain contributes to permanent connectivity defects underlying autistic-like behaviors.

Autism-associated genes and structural stability of neurons.
Lin et al. 47 tabulated genes linked to autism in GWAS that have also been reported to be involved in the structural stability of neurons. These genes were further sorted into three categories, viz., "neurite outgrowth", "spine/synapse formation" and "synaptic plasticity". 61 genes were assigned to these three categories, although some genes appeared in two or all three categories, resulting in 29 different genes among the three categories (Table 2). Shown in red bold are 10 of these genes (35%) that were significantly up- or downregulated by VPA (i.e., in Table S3). Dysregulation of one or more of these genes (due to gene variants or fetal exposure to VPA) could alter the structure and function of synapses throughout brain development.

Examination of Tables 1 and 2 revealed that Pten and Dyrk1a are dysregulated by VPA and are common to both the Lin et al. study 47 and all five of the GWAS. Consequently, these two genes may be good candidates for investigating the molecular basis for the proximal causes of autism.

Biological roles of genes dysregulated by VPA. Table 3 shows the biological function of 189 different genes that were significantly dysregulated by VPA in the E12.5 fetal brain. One or more developmental errors caused by these changes could induce a "signature" of circuit defects in the fetal brain, resulting in abnormal behavior in juveniles and adults. Many of the genes affected by VPA are involved in intracellular signaling pathways, which could mediate a wide variety of developmental processes. Some examples of these processes are discussed in the following sections ("Intracellular Signaling", "Neurogenesis", "Excitation-Inhibition"). The rationale for inclusion of the genes in the remaining 15 categories, together with supporting references, is provided in the Supplemental Discussion. In this discussion, the change in gene expression induced by VPA in the fetal brain at E12.5 (expressed as fold-change) is shown in parentheses (mean of males and females).

Intracellular signaling. As summarized in Figure 2, the genes encoding at least 10 members of the canonical signaling pathways downstream from receptor tyrosine kinases are significantly up-regulated by VPA in the fetal brain (green); four are downregulated (red). Pten (+1.4) (Phosphatase and tensin homolog), a negative regulator of the PI3K-AKT-mTor signaling pathway, has a particularly strong association with autism, appearing in all GWAS (Table 1) and associated with the structural stability of neurons (Table 2) 47 .
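The fold-change values quoted in parentheses throughout this Discussion (e.g., Pten (+1.4), Dyrk1b (−2.2)) follow the signed convention defined in the Results. A minimal helper, written here only to make that convention explicit and not part of the original analysis, converts a signed FC into an expression ratio and a percent change:

```python
def fc_to_ratio(fc):
    """Signed fold-change convention used in this paper:
    positive FC means V/C = FC; negative FC means FC = -C/V, i.e. V/C = 1/|FC|."""
    if -1 < fc < 1:
        raise ValueError("signed FC is only defined for |FC| >= 1")
    return fc if fc > 0 else 1.0 / abs(fc)

def percent_change(fc):
    """Percent change in expression induced by VPA relative to control."""
    return (fc_to_ratio(fc) - 1.0) * 100.0

# Worked example from the Results: FC = -3.0 -> V/C = 0.33, a 67% reduction.
print(round(fc_to_ratio(-3.0), 2), round(percent_change(-3.0)))  # 0.33 -67
```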
Twelve additional genes associated with intracellular signaling are included in Table 3. Thus, abnormal signaling via these pathways is a potential mechanism by which VPA could interfere with the establishment of normal brain connectivity. Meta-analysis of GWAS and copy number variant studies of autism-related genes revealed that three signaling networks, regulating steroidogenesis, neurite outgrowth and excitatory synaptic function, were enriched 48 . A-kinase anchoring proteins (AKAPs) functionally integrate signaling cascades within and among these networks 49 ; VPA decreased and increased Akap8 (−1.3) and Akap8l (+1.6) expression in fetal mouse brains. Dyrk1b (−2.2) (which encodes dual-specificity tyrosine phosphorylation regulated kinase 1B) is a mediator of double-stranded DNA break repair 50 and regulates hedgehog signaling by activating the mTor/AKT pathway 51,52 . Dyrk1b has been linked to metabolic syndrome and autism 53 . The 55% reduction in Dyrk1b expression induced by VPA could contribute to autistic-like behavior by dysregulating hedgehog or mTor/AKT signaling. Ppp1r1b (+3.7) encodes the dopamine- and cAMP-regulated neuronal phosphoprotein (DARPP-32). DARPP-32 amplifies and/or mediates many actions of cyclic AMP-dependent protein kinase at the plasma membrane and in the cytoplasm, with a broad spectrum of potential targets and functions 54 .

Neurogenesis. Excitatory neurons in the mammalian cortex are generated from proliferating radial glia cells (RGs), which undergo mitosis at the ventricular surface to expand the pool of neuroblasts and consequently determine the final number of neurons 55 . Neurogenesis proceeds according to a "program" that is tightly regulated by intercellular signals from a variety of brain cells as well as the meninges. Disturbances in this regulation, for example by environmental factors such as VPA or by gene mutations, have the potential to alter the neuronal population and configuration of the developing brain, leading to permanent defects in connectivity that could lead to altered behavior. An example of this disruption is the effect of VPA exposure at E12.5 on the number of neocortical neurons born on E14.5, measured one day before birth (E18.5) 56 . Thus, gene expression changes on E12.5 alter neurogenesis at E14.5, long after the VPA has dissipated.

Retinoic acid (RA) plays a number of roles in the regulation of cortical neurogenesis 57,58 . RA is produced in the dorsal forebrain meninges and acts at receptors on the endfeet of RGs 59 . The latter study used a transgenic mouse with reduced expression of Foxc1 (−1.9), leading to defects in forebrain meningeal formation and loss of neuron progenitor production due to failure of proliferating RGs to exit the cell cycle. This would be predicted to lead to a delay in the generation of postmitotic neurons and, consequently, an expansion of the proliferating RG/neuroprogenitor pool. Downregulation of Nr2f1 (−1.8) and Sfrp2 (−1.8) would be predicted to influence the balance between proliferation and neurogenesis 60 . Multiple studies have implicated Eomes (Tbr2) (−1.8) in the regulation of cortical neurogenesis 61−66 , while Neurog1 (−2.2) has been reported to play a role as a negative regulator of cortical neurogenesis 67 .

The conversion from symmetrical to asymmetrical mitoses associated with the generation of post-mitotic neurons is regulated by the Plk1 (−1.7)-Lrrk1 (−2.9)-Cdk5rap2 (−1.3) cascade 68 .
Suliman-Lavie et al. 69 reported that Pogz (−2.2) is a negative regulator of transcription, and Pogz deficiency upregulates expression of genes associated with ASD, resulting in disruption of embryonic neurogenesis; Pogz was found to be associated with autism in all five GWAS (Table 1). Several members of the heterogeneous nuclear ribonucleoprotein family have been linked to NDDs, including Hnrnph1 (−1.6), Hnrnpk (−1.3), Hnrnpu (−1.6); expression of these genes in radial glia has been reported to be critical during neurodevelopment 70 .

Comparison of the mechanisms controlling neurogenesis in rodents and humans has revealed that gyrification of the human brain is driven by changes in the expression of regulatory genes and growth factors 71 . Stahl et al. 72 reported that, over the period E12 to E16, decreased expression of the DNA-associated protein, Trnp1 (+1.5), leads to radial expansion of the cortex and the appearance of gyri-like folding of the mouse brain. In the present study, VPA induced a 50% increase in the expression of Trnp1 in the E12.5 fetal mouse brain; this would be predicted to inhibit or delay the shift to radial growth of the cortex. Fibroblast growth factor 2 (FGF2) has also been reported to induce gyrification in the mouse brain, having an effect opposite to that of Trnp1 73 . In the present study, VPA caused a 65% reduction in the expression of Fgf2 (−2.9) (Table S4). This would also be predicted to bias cortical neurogenesis toward tangential and away from radial growth, which could contribute to gyrification. These findings, together with the present results, lead to a hypothesis that increased Trnp1 and decreased Fgf2 expression, induced by VPA in the fetal brain at E12.5, cause a delayed shift in radial growth, resulting in autism-like behavioral defects. This hypothesis could be tested by experimentally decreasing Trnp1 and increasing Fgf2 expression in the mouse fetal brain exposed to VPA; these maneuvers would be predicted to normalize behavior.

WNT/β-catenin signaling plays several roles in regulating neurogenesis in the developing neocortex 74 . Wnt3a (−3.8) regulates the timing of differentiation of neocortical intermediate progenitors into neurons 75 . The 74% reduction in Wnt3a expression induced by VPA has the potential to disrupt the normal timing of cortical neurogenesis in the fetal brain. Zinc finger and BTB domain-containing 16 [Zbtb16 (−1.8)] has been reported to regulate cortical neurogenesis, and Zbtb16 knockout mice display a thinning of neocortical layer 6 and a reduction of TBR1-expressing neurons as well as increased dendritic spines and microglia 76 . The VPA-induced reduction in Wnt3a or Zbtb16 expression, or both, at E12.5 has the potential to alter the developmental program, resulting in altered cortical connectivity and, ultimately, behavioral deficits.
Excitation-Inhibition. A widely discussed hypothesis is that autism is caused by an imbalance between neuronal excitation and inhibition due to increased excitation, reduced inhibition, or both 60,79,80 . Consequently, alterations in the developmental program in the fetal brain that result in too many excitatory (glutamatergic) or too few inhibitory (GABAergic) neurons (or synapses) could contribute to autistic behavior. Camk2a (+1.9), which is associated with excitatory synapses, is upregulated by 90% in response to VPA; CAMK2A is upregulated in the ASD superior temporal gyrus 81 . Two related genes, Maf (−1.6) and Mafb (−1.6), which are both downregulated by about 40% by VPA, have been reported to play redundant roles in the generation of interneurons from the fetal medial ganglionic eminence; their deletion results in decreased numbers of cortical somatostatin-releasing, inhibitory interneurons 82 . Gad1 (−2.5) and Gad2 (−3.3) encode isoforms of the enzyme that synthesizes the inhibitory transmitter, GABA. VPA reduced Gad1 and Gad2 levels by 60% and 70%, respectively; GAD1 and GAD2 are downregulated in the superior temporal gyrus of ASD patients 81 . Insyn1 (−1.4) encodes a component of the dystroglycan complex at inhibitory synapses; loss of Insyn1 alters the composition of the GABAergic synapses, excitatory/inhibitory balance, and cognitive behavior 83,84 . Using cortical assembloids together with CRISPR screening to analyze interneuron generation and migration, Meng et al. 85 found that Csde1 (−2.0) is required for interneuron generation. Taken together, these findings are consistent with VPA increasing net brain excitation by decreasing the number of inhibitory interneurons generated during fetal brain development. These reductions in gene expression may contribute to a decrease in inhibition in mice exposed to VPA in utero 86 .

SUMMARY: The approach for this study can be described as "hypothesis generating", in that we began without any preconceived idea (hypothesis) as to the mechanism by which a brief, transient dose of VPA, administered to the pregnant mouse at E12.5, can cause abnormal, autistic-like behavior in her offspring many weeks later. The analysis identified approximately 7,300 genes, expression of which was significantly affected by VPA. We then identified those genes significantly affected by VPA that were (a) also linked to autism by GWAS (SFARI List; Table S3) or (b) not on the SFARI List (Table S4) but known to play a role in critical steps of early brain development. Interference with one or more of these steps in the fetal brain has the potential to interfere with the "program" directing brain development, creating a persistent pathological "signature" that leads to abnormal neuronal circuitry in the adult, long after the increase in VPA and the initial changes in gene expression have dissipated. Of these 654 genes, at least half have known mechanisms of action associated with brain development or function, of which 189 (Table 3) are discussed in detail below and in the Supplementary Discussion. An initial expectation was that one or more of these genes would exhibit sexually-dimorphic dysregulation by VPA, thereby suggesting a plausible underlying mechanism for the sex difference in the incidence of ASDs; however, no significant sexually-dimorphic effects of VPA were observed.
Several genes affected by VPA in the fetal mouse brain appear in multiple GWAS studies (c.f., Tables 1 and 2), including Adnp, Arid1b, Ash1l, Cdh2, Cdk5l, Dyrk1a, Kmt5b, Mecp2, Nr2f1, Pogz, Pten, Tbr1, Tcf7l2, and Wac; moreover, nine of these genes were implicated in neurogenesis and cell fate determination in the embryonic brain 76 . This "short list" is a potential starting point for future hypothesis-driven studies to determine whether dysregulation of one or more of these genes by VPA is a proximal cause of the behavioral abnormalities identified in the adult animals. At least 20 genes encoding components of intracellular signaling pathways are dysregulated by VPA in the E12.5 brain (Figure 2); disruption of these signaling pathways also has the potential to disrupt multiple developmental processes and may contribute to the autistic-like behavior induced by VPA. Considering that neurogenesis and the fate determination of excitatory and inhibitory neurons are occurring in the E12.5 mouse brain, genes involved in the regulation of these processes that are dysregulated by VPA (c.f., Table 3) are of particular interest. Nevertheless, it remains possible that dysregulation of one or more of the other genes altered by VPA contributes to the proximal cause of the autistic-like behavior. The role of these candidate genes in causing the autistic-like endophenotype could be tested by determining the effect of manipulating gene expression (singly or in combination) on VPA-induced behavioral abnormalities.

Nf1 (−1.3) encodes NF1, which is mutated in neurofibromatosis type 1, an inherited neurocutaneous disorder associated with NDDs including autism. Nf1 deletion results in the specific loss of parvalbumin-expressing inhibitory cortical interneurons 87 . In the present study, Nf1 was downregulated by about 25% in fetal mouse brains exposed to VPA. Semaphorin-4a [Sema4a (−1.6)] and Semaphorin-4d [Sema4d (+1.9)] promote inhibitory synapse development via the postsynaptic Plexin-B1 receptor encoded by Plxnb1 (−1.8) 88 . Hapln1 (−3.7) and Hapln4 (+25.7) encode extracellular matrix proteins that are components of perineuronal nets (PNNs) 89,90 . Ramsaran et al. 91 reported that Hapln1 mediates the functional maturation of hippocampal parvalbumin interneurons through assembly of PNNs; this mechanism mediates the development of memory precision during early childhood. Hapln4 is a selective regulator of the formation of inhibitory GABAergic synapses between Purkinje and deep cerebellar nuclei neurons 92 ; the cerebellum is one of many brain regions implicated in the etiology of autism 93 . The 70% decrease in Hapln1 expression and the massive, 25-fold increase in Hapln4 induced by VPA would be expected to alter PNN density, which could disrupt the connectivity of neuronal circuitry, leading to cognitive behavioral deficits.

Figure 2. Schematic diagram of multiple signaling pathways downstream from receptor tyrosine kinases.

Table Legends:

Table 1. Autism-risk genes identified in five published studies 42−46 . Genes dysregulated by VPA in the fetal brain in the present study (Tables S2 and S3) are indicated in bold. The percentage of VPA-regulated genes is shown for each study. Red: genes identified in all five published papers and affected by VPA in the present study; blue: genes identified in four of the five published papers and affected by VPA in the present study; yellow: autism-associated genes identified in all five published papers but not affected by VPA in the present study.

Table 2. Genes linked to autism in GWAS that have also been reported to regulate "neurite outgrowth", "synapse/spine formation" or "synaptic plasticity" 47 . Genes in red bold are also up- or down-regulated by VPA in the fetal brain (Table S3). Lin et al. 47 did not distinguish among the multiple known isoforms of cadherins (CDHs); Cdh11 is downregulated 60% by VPA in the fetal brain (c.f., Table S3).

Table 3 (next page). Biological roles of 189 different genes that are dysregulated by VPA in the fetal brain and associated with autism or prenatal neurodevelopmental processes (from Tables S3 and S4). Note that some genes are included in more than one category. The number in parentheses shows the FC induced by VPA (mean of males and females); highlighted text indicates that VPA increased gene expression. The basis for this categorization, with references, is given below and in the Supplemental Discussion.

REFERENCES:
Angara K, Ling-Lin Pai E, Bilinovich SM, Stafford AM, Nguyen JT, Li KX, et al. Nf1 deletion results in depletion of the Lhx6 transcription factor and a specific loss of parvalbumin+ cortical interneurons. Proc Natl Acad Sci USA 2020; 117: 6189-.
Soda T, D'Angelo E, Prestori F. The Cerebellar Involvement in Autism Spectrum Disorders: From the Social Brain to Mouse Models. Int J Mol Sci 2022; 1.
American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders (5th ed., text rev.). 2022.
Nested-PCR using MPB 64 fragment improves the diagnosis of pleural and meningeal tuberculosis

Fluids in which Mycobacterium tuberculosis is seldom found, such as pleural and cerebrospinal liquids, are good candidates to be studied using PCR techniques. We detail our experience with a PCR assay applied to pleural and cerebrospinal fluids using the primer MPB64. Seventy-three specimens were analyzed: 30 pleural fluids (PF), 26 pleural biopsies (PB) and 17 cerebrospinal fluids (CSF). The gold standard for the diagnosis of tuberculous meningitis was a positive culture for M. tuberculosis in CSF. Tuberculous pleural effusion was diagnosed when cultures of PF and/or PB were positive for M. tuberculosis, or the PB histology showed granulomas. Our results, compared to the gold standards employed, showed a sensitivity of 70%, specificity of 88%, positive predictive value of 82% and negative predictive value of 80%. The high specificity of the MPB64 fragment, while still retaining a good sensitivity, makes it very well suited for pleural and cerebrospinal tuberculosis diagnosis.

Key-words: Tuberculosis. Diagnosis. PCR technique. Central nervous system. Pleura.

1. Medical School, Marilia, SP, Brazil. 2. Pulmonary Disease Department, Universidade de Campinas, Campinas, SP, Brazil. 3. Department of Clinical Pathology, UNICAMP, Campinas, SP, Brazil. 4. Hematology, UNICAMP, Campinas, SP, Brazil. 5. Internal Medicine, UNICAMP and Laboratory of Cancer Molecular Genetics-FCM/UNICAMP, Campinas, SP, Brazil. Address to: Dra Laura Sterian Ward. GEMOCA/Med Int/Clin Med/FCM/UNICAMP, Cidade Universitária Zeferino Vaz, 13083-970 Campinas, São Paulo State, Brazil. Telefax: 55 19 289-4107. E-mail: ward@unicamp.br. Received for publication on 16/7/99.
Tuberculosis continues to be a serious health problem in Brazil and, as in other countries, the number of cases is increasing in patients with human immunodeficiency virus (HIV) infection 6 . Pleural and central nervous system tuberculosis are frequently suspected in these patients, and we often have to deal with these challenging diagnoses in our service. Laboratory methods play a crucial role in establishing the diagnosis and monitoring the therapy. In order to establish a conclusive diagnosis of tuberculosis, it is necessary to demonstrate the presence of Mycobacterium tuberculosis in body fluids or tissues. Microscopy for acid-fast bacilli is, at present, the mainstay of routine clinical laboratories for any rapid diagnostic approach to a patient under clinical suspicion of tuberculosis. However, the technique has low sensitivity and cannot identify the Mycobacterium species 7 . Traditional culture methods with identification of the causative Mycobacterium continue to be considered definitive in terms of diagnosis. However, these techniques are laborious and may take as long as 12 weeks to yield results 3 7 . Besides, the sensitivity of the culture can be as low as 50% or less 3 7 . Radiometric culture systems may improve the sensitivity and are faster, but they still require at least 2 weeks to confirm the diagnosis and are expensive 11 . With the development of new techniques, such as the detection of microorganisms by hybridization with probes, introduced in the 1970s, and immunological procedures, limitations in the sensitivity and/or specificity of established techniques have become apparent 13 . Hence, molecular amplification technology emerged as the most revolutionary development to reach clinical and virology laboratories this decade. The advent of nucleic acid probe methods, more than a decade ago, was welcomed as a way to speed up the identification problem. However, it soon became clear that a more sensitive detection method, able to amplify the targets when only very few organisms were present, was needed 2 . PCR-based amplification methods allow the search for organism-specific nucleic acid sequences regardless of the physiological requirements or viability of the organism. In some situations, such as in pleural or cerebrospinal infection, PCR stands out because of its speed, sensitivity and specificity 15 16 18 . A recent evaluation of PCR sensitivity using the primers described by Eisenach et al 5 suggests that it allows the detection of three copies of the M. tuberculosis genome/ml 1 5 . The method may therefore be used for the early detection of M. tuberculosis growth on liquid medium. We present here a reliable, simple and fast PCR method for tuberculosis detection in human fluids.

MATERIAL AND METHODS

Seventy-three specimens were obtained from 56 patients of the Hospital das Clínicas, Medical Science Faculty of the State University of Campinas (UNICAMP), São Paulo, Brazil. Thirty were pleural fluids (PF), 26 pleural biopsies (PB) and 17 cerebrospinal fluids (CSF). All patients presented exudative pleural effusions or symptoms of meningeal disease. In both circumstances, tuberculosis had to be ruled out. After signing an informed consent, the patients had their clinical specimens analyzed using bacterioscopy, traditional culture methods, microscopic examination of the pleural biopsies and PCR. The gold standard used for the diagnosis of meningeal tuberculosis was a positive culture for M. tuberculosis in the cerebrospinal fluid.
For pleural tuberculosis, we considered positive those specimens with a positive culture for M. tuberculosis in the pleural fluid and/or in the biopsy, or the demonstration of granulomas in the biopsy fragment 15 16 18 . The fragments of pleural biopsies were cultured, typed for Mycobacterium tuberculosis and also submitted to histologic examination.

Two DNA extraction methods were used: one, simple and fast, consisted of heating the sample for 10 minutes at 100 °C. Afterwards, the samples were used directly in the PCR reaction (PCR1). In parallel, we extracted DNA from another aliquot of the same sample using overnight proteinase K digestion, phenol/chloroform extraction and ethanol precipitation (PCR2) 14 . We used a set of primers specific for a 240 bp fragment of the M. tuberculosis/bovis complex, called MPB64: sense 5' TCC GCT GCC AGT CGT CTT CC 3' and antisense 5' GTC CTC GCG AGT CTA GGC CA 3' 4 8 . The PCR products obtained were not satisfactory, possibly because most of the samples had less than 10-100 tubercle bacilli. Therefore, we designed another pair of primers to amplify an inner sequence: sense 5' ATT GTG CAA GGT GAA CGT AG 3' and antisense 5' AGC ATC GAG TCG ATC GCG GA 3'. PCR mixtures were prepared with 10 µl of the product of the extraction, 50 pmol of each primer, 100 µM dNTPs, 10 mM Tris-HCl (pH 9.0 at 25 °C), 50 mM KCl, 1.5 mM MgCl2, and 4 U Taq polymerase (Promega Co, Madison, WI) in a final volume of 100 µl. Amplifications were carried out for 35 cycles with 1 minute denaturation at 94 °C, annealing at 55 °C for 1.5 minutes and primer extension at 72 °C for 3 minutes. We used a sample of distilled water as a negative control and samples of M. tuberculosis DNA extracted from a known culture (strain H37 from Institute Adolpho Lutz, Brazil) as positive controls. In order to perform a nested PCR, an aliquot of 10 µl was removed from the initial reaction and directly added to the new reaction, carried out for 30 cycles at 94 °C for 1 minute, 55 °C for 1 minute and 72 °C for 1 minute. PF and CSF samples were also cultured in the egg-based (Lowenstein-Jensen) media and the culture supernatant was also used in the PCR. The PCR products were then analyzed by electrophoresis using a 2% agarose gel stained with ethidium bromide and examined under ultraviolet light.

Statistical analysis. Data were analyzed by the χ2 test. The level of significance was taken as p < 0.05. The sensitivity, specificity, positive and negative predictive values of the proposed test were calculated according to standard methods 10 .

RESULTS

The results of all methods for pleural tuberculosis diagnosis are compared in Table 1. Among the 19 cases investigated, 6 were positive with PCR1 and 13 cases were positive with PCR2. Three cases that were negative by the gold standard methods were positive by PCR1 and also by PCR2 (the same three cases), suggesting these patients could be harboring a tuberculosis disease not detectable by standard means. The PCR2 extraction method provided better results than the PCR1 method (χ2; p < 0.01).

Table 3 shows PCR2 results compared with gold standard methods for both pleural and meningeal tuberculosis. Using this nested PCR method, we achieved a sensitivity of 70%, specificity of 88%, positive predictive value of 82% and negative predictive value of 80% 10 . These data were not different in the CSF group compared with the PF group (χ2; not significant).
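The accuracy figures above follow from the standard 2x2 comparison of the nested-PCR result against the gold standard. The sketch below restates those textbook formulas in Python; the cell counts are hypothetical, chosen only so that the computed values approximately reproduce the reported percentages (the study's actual per-cell counts are in its tables):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 diagnostic-test metrics versus a gold standard."""
    sensitivity = tp / (tp + fn)  # true positives among gold-standard positives
    specificity = tn / (tn + fp)  # true negatives among gold-standard negatives
    ppv = tp / (tp + fp)          # positive predictive value
    npv = tn / (tn + fn)          # negative predictive value
    return sensitivity, specificity, ppv, npv

# Hypothetical counts for illustration only (not the study's data); they give
# roughly the reported 70% / 88% / 82% / 80% after rounding.
sens, spec, ppv, npv = diagnostic_metrics(tp=14, fp=3, fn=6, tn=24)
print(f"sens={sens:.0%} spec={spec:.0%} ppv={ppv:.0%} npv={npv:.0%}")
```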
DISCUSSION

Although considered primarily a pulmonary disease, tuberculosis can affect any organ system. Central nervous system involvement is potentially devastating and occurs with escalating frequency in both immunocompetent and immunologically incompetent populations. When we are dealing with specimens such as pleural and cerebrospinal fluids, known for their low positivity on bacterioscopy, most commonly used diagnostic methods present low sensitivity and/or are time consuming. Culture and histologic examination of fragments obtained by pleural biopsy can increase the accuracy of the diagnosis; however, they require an invasive procedure. Microscopic examination of CSF for acid-fast bacilli also has low sensitivity in meningeal tuberculosis and, especially in patients without AIDS, positive results are rare 9 . On the other hand, pleural and meningeal tuberculosis need to be promptly and reliably diagnosed. PCR techniques emerged as a very useful tool in these cases, dramatically increasing diagnostic sensitivity 3 . Moreover, M. tuberculosis detection by molecular methods may also play a role in laboratory safety and, therefore, in laboratory costs, since after the initial extraction procedure, only non-infectious materials are handled 15 . PCR is expected to be more specific and sensitive than the routine procedure for diagnosis, but it is also more costly 1 . Cost-effectiveness comparisons of PCR versus smear examination showed no advantages of the former for the diagnosis of tuberculosis 12 . However, PCR can be of great value detecting very few bacilli when a rapid diagnosis is imperative, as in pleural and meningeal infections. Also, the largest contributing cost component is the cost of the PCR kit 12 . We may be able to substantially reduce this cost by standardizing home-made methods. New studies involving fine needle aspirates may also be envisaged, improving the diagnosis in many cases 17 .

The MPB-64 insertion element has been widely demonstrated to be highly specific for the M. tuberculosis complex 4 8 . Tested against the IS6110 and the 65 kDa HSP fragments, it gave fewer false positive results 8 . Some handicaps had to be overcome, such as the challenging mycobacterial DNA extraction from liquids where bacilli are very sparse. The phenol-chloroform-proteinase K method provided material of good quality for PCR. The use of nested amplification increased both the sensitivity and specificity of the PCR process.

In conclusion, we validated a nested-PCR technique using the MPB64 fragment in the diagnosis of pleural and meningeal tuberculosis in our specimens, confirming it to be a powerful diagnostic tool with a good sensitivity (70%), a high specificity (88%), a positive predictive value of 82% and a negative predictive value of 80%. We demonstrated that the method is reliable, fast and specific, comparing favorably with all other similar methods reported in the literature 4 5 8 .

Table 1 - Pleural tuberculosis diagnostic methods comparison: gold standard procedures (pleural fluid culture, biopsy culture and granuloma detection in the biopsy), PCR1 (without phenol/chloroform extraction) and PCR2 (with phenol/chloroform extraction). The sensitivity of each method is shown in the last line of the table.
EVALUATION OF THE ANTICOAGULANT EFFECT OF VITAMIN K ANTAGONISTS IN PATIENTS WITH NON-VALVULAR ATRIAL FIBRILLATION

Background/Aim. Despite the introduction of new oral anticoagulants (dabigatran, rivaroxaban, apixaban), vitamin K antagonists (VKA), such as warfarin and acenocoumarol, are still the most widely used oral anticoagulants for the treatment of non-valvular atrial fibrillation (NVAF). The time in therapeutic range (TTR) represents a measure of the quality of the anticoagulant effect of these drugs, and it is considered that lower TTR values are associated with adverse effects of therapy. The aim of this study was to evaluate the effectiveness of VKA therapy in patients with NVAF and to identify factors affecting the anticoagulation efficacy. Methods. A retrospective study was conducted on a population of 725 outpatients with NVAF, treated with VKA and followed in the Blood Transfusion Institute of Niš, Serbia, from January to December 2017. Laboratory control of the INR was done from capillary blood of patients on Thrombotrack Solo (Axis Shield, Norway) and Thrombostat (Behnk Elektronik, Germany) instruments. The target therapeutic INR was between 2.0 and 3.0. For each patient, all available INR values were evaluated to calculate the individual TTR according to the Rosendaal method. Results. The study included a total of 725 patients with NVAF who had 6,105 INR measurements, i.e., 8.13 ± 2.47 INR measurements per patient. The mean value of TTR was 60.15 ± 17.52%, but 49.72% of patients had a TTR less than 60%. Patients were at high risk of thrombosis in 6.15% of the time (INR < 1.5) and at high risk of bleeding in 2.2% of the time (INR > 4.5). The most significant independent factors affecting the quality of VKA therapy were gender, arterial hypertension, diabetes mellitus and the use of amiodarone and antiplatelet drugs (aspirin, clopidogrel). Conclusion. The TTR is an undoubtedly useful indicator of the effectiveness of VKA treatment. The most important predictors of poorer efficacy of VKA therapy are: arterial hypertension, diabetes mellitus, patients' gender and the use of amiodarone and antiplatelet drugs (aspirin, clopidogrel). To improve the quality of VKA therapy, education of patients and better collaboration with them, as well as successful teamwork among clinicians, are also imperative.

Introduction

Despite the implementation of new oral anticoagulants (NOAC) for the treatment of patients with atrial fibrillation or venous thromboembolism, vitamin K antagonists (VKA), such as warfarin, acenocoumarol and phenprocoumon, are still the most widely used oral anticoagulants. The most common indications for their use are atrial fibrillation, mitral or aortic stenosis, mitral or aortic prosthetic valve, venous thromboembolism and intracavitary thrombosis 1,2 . This therapy is long-lasting, for months and years, and in some cases until the end of life. The mechanism of action of these drugs is based on their competition with vitamin K and reduction of the levels of vitamin K-dependent coagulation factors (FII, FVII, FIX, FX), the anticoagulant protein C and its co-factor protein S 3 .
The use of VKA must be regularly and frequently monitored in the laboratory in order to ensure the adequacy of therapy and to avoid sub-dosing or drug overdose. The most commonly used test for the control of oral anticoagulant therapy is the prothrombin time (PT), expressed in the INR system, which provides an internationally standardized monitoring of the treatment. The therapeutic range for INR is from 2.0 to 3.5, depending on the indication for which the drug is used 4 . Therapeutic ranges are generally set up on the basis of clinical trials and are determined in order to achieve the minimum anticoagulant effect required for the prevention of recurrent thrombosis or the extension of existing thrombotic episodes. The treatment carries, on the one hand, the risk of bleeding, and on the other hand, the risk of thrombosis, so warfarin and other VKA have a narrow therapeutic index and should be dosed within strictly defined ranges 3,5 .

The Time in Therapeutic Range (TTR) is commonly used to evaluate the quality of VKA therapy and is an important tool for assessing the risks of this therapy. TTR estimates the percentage of time a patient's INR is within the desired therapeutic range and is widely used as an indicator of anticoagulation control [6][7][8] . Numerous studies have shown that higher TTR correlates with good clinical outcomes, and that there is a strong correlation between TTR and adverse events (bleeding, thrombosis). But although TTR is routinely assessed, there is no consensus on an acceptable target for TTR in practice. The ACTIVE-W study suggested a minimum TTR of 58% in order to show a benefit of oral anticoagulant therapy over antiplatelet therapy in terms of preventing vascular events 9 , the RE-LY study on Portuguese patients showed a mean TTR of 61% 10,11 , Thrombosis Canada states that good INR control is achieved when TTR is more than 60% 12 , but there are studies that report a higher TTR level of 74% as a measure of effective anticoagulation 8,13 . It is known that many factors correlate with TTR, and the most important are age, sex, smoking, concomitant drugs, alcohol, and comorbid medical and psychiatric conditions 14 .

The aim of this study was to evaluate the effectiveness of VKA therapy in patients with NVAF and to identify factors affecting the anticoagulation efficacy.

Methods

A retrospective study was conducted on a population of 725 outpatients with atrial fibrillation, treated with VKA (warfarin (Farin), acenocoumarol (Sintrom, Sinkum, Acenokumarol)) and followed in the Department for Hemostatic Disorders Testing at the Blood Transfusion Institute of Niš from January to December 2017. The study included patients of both sexes who had a strictly determined diagnosis of NVAF and an indication for the use of VKA with the target INR (2.0-3.0), patients who were expected to take VKA throughout the whole period of the study, and whose control testing of INR would be done only at the Institute. We excluded patients who had discontinued treatment for any reason at any time of the investigation, patients who had an interruption in taking VKA for any reason, patients who had any INR control testing done at another facility, patients whose target INR changed during the investigation, as well as patients with INR > 6.0. We characterized the demographic and clinical characteristics of the patients, as well as the use of other drugs (β-blockers, antiplatelet drugs, statins, amiodarone, ACE inhibitors).
Laboratory control of the INR was done from capillary blood of patients on Thrombotrack Solo (Axis Shield, Norway) and Thrombostat (Behnk Elektronik, Germany) instruments. For each patient, we evaluated all available INR values to calculate the individual TTR according to the Rosendaal method 15 . This method uses linear interpolation to assign an INR value to each day between successive observed INR values (INR-DAY software program, Dr FR Rosendaal, Leiden, Netherlands). The primary outcome was to determine the TTR, and the secondary outcomes were to determine the time under (INR < 2.0) and over (INR > 3.0) the therapeutic range, the time with increased thrombotic risk (INR < 1.5) and the time with increased hemorrhagic risk (INR > 4.5), as well as to determine independent factors for an increased risk of worse anticoagulation therapy.

Statistical analysis was performed using the Statistical Package for Social Science (SPSS Software GmbH, Germany), version 15.0. The results are presented in tables and graphs, using mean values and standard deviations (SD). Qualitative characteristics of the investigated variables are given as frequency (N) and percentage (%). The continuous data were analyzed using the Chi-square test. Multivariate logistic regression analysis was performed to identify independent risk factors for TTR < 60%. The results were considered to be statistically significant at p < 0.05. Since this is a "post-hoc" analysis of a prospective observational registry, we cannot exclude the presence of unmeasured selection bias, and the statistical analyses were not specified before the data were seen, which could be a limitation of the study.

Results

From the total of 725 patients in this study, there were 430 men (430/725 or 59.40%) and 295 women (295/725 or 40.60%). The average age of patients in the study was 71.05 ± 10.42 years, range 22 to 88 years. There was no statistically significant difference in the age structure of patients by gender (t = 1.125; p = 0.043). Table 1 shows the main characteristics of the patients. The mean TTR was 60.15 ± 17.52%. More than a fifth of the time, patients had an INR under the therapeutic range (INR < 2.0 in 21.05% of the time), while in 18.10% of the time patients had INR > 3.0. Patients were at high risk of thrombosis (INR < 1.5) in 6.15% of the time, and in 2.20% of the time they were at high risk of bleeding. Time in therapeutic range and time out of therapeutic range in the investigated patients are shown in Figure 1. During the period of examination there were no major bleedings, while 65 patients (65/725 or 8.96%) had minor bleedings, mainly in the form of bruises, hematoma and epistaxis, whereas 4 patients (4/725 or 0.55%) had haematuria and 3 patients (3/725 or 0.41%) had bleeding from the gastrointestinal tract. After adjusting the dose of VKA, the bleedings stopped.

Figure 2 shows the distribution of TTR values, where we can see that 49.72% of patients had a TTR less than 60%, which means that almost half of the patients were at increased risk for serious complications of treatment. Table 3 shows the logistic regression model of independent factors for the assessment of increased risk of a poor effect of anticoagulation therapy. The whole model was highly significant (χ2 (df = 9, N = 725) = 20.637; p < 0.001) and explained 57.81% of the variance of the efficiency of VKA. The factors that gave a statistically significant contribution to the model were: gender, arterial hypertension, diabetes mellitus and the use of amiodarone, aspirin and clopidogrel.
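The Rosendaal calculation described in the Methods can be sketched in a few lines: linearly interpolate the INR between successive measurements, assign one value to each day, and report the fraction of days inside the target range. The following is a minimal illustrative re-implementation with invented example data, not the INR-DAY program used in the study:

```python
from datetime import date

def rosendaal_ttr(measurements, low=2.0, high=3.0):
    """Time in therapeutic range by Rosendaal linear interpolation.

    measurements: chronologically ordered list of (date, INR) tuples.
    Returns the fraction of interpolated days with low <= INR <= high.
    """
    in_range = total = 0
    for (d0, inr0), (d1, inr1) in zip(measurements, measurements[1:]):
        span = (d1 - d0).days
        for day in range(span):
            inr = inr0 + (inr1 - inr0) * day / span  # linear interpolation
            in_range += low <= inr <= high
            total += 1
    return in_range / total if total else float("nan")

# Invented example: three INR checks over six weeks, target range 2.0-3.0.
obs = [(date(2017, 1, 2), 1.8), (date(2017, 1, 16), 2.6), (date(2017, 2, 13), 3.4)]
print(f"TTR = {rosendaal_ttr(obs):.0%}")  # ~60% for these made-up values
```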
Discussion

Anticoagulant drugs are used in the treatment or prevention of thromboses and thromboembolic complications. Traditional VKA, which have been in use for over 50 years, have been the gold standard in therapy for all that time. They provide the necessary protection from thromboembolic events and have proven to be sufficiently effective over many years of use. One of the most common indications for VKA therapy is atrial fibrillation, and guidelines state that patients who are at low risk may be treated only with aspirin, while in patients at high risk it is recommended to use VKA 2,16,17 . Anticoagulant therapy has reduced the rate of stroke by 64% and mortality by 26% in this group of patients 18 . But VKA therapy has disadvantages, the most important of which are: unpredictable response, narrow therapeutic window, routine monitoring, slow onset and offset of action, frequent dose adjustment, numerous interactions with food and drugs, resistance to warfarin, and the procoagulant effect of warfarin at the beginning of therapy. However, the most severe complication of VKA therapy is intracranial hemorrhage (ICH), whose rate is about 1% in clinical studies 19 .

The efficiency and safety of VKA depend strongly on the TTR value, which is a measure of the period in which the patient was in the optimal INR range. However, although the TTR is generally accepted as a measure for monitoring the anticoagulant effect of these drugs and the successful conduct of this therapy, there are no firm data on what an acceptable value of TTR is. Recent trials related to the introduction of new oral anticoagulants have provided data on actual TTR values in different countries of the world. In the ROCKET-AF study the mean TTR was 55.2%, but the values in Western Europe and North America were significantly higher, 63% and 64%, respectively 20 . In the ARISTOTLE study the mean TTR was 66% 21 , and in the RE-LY study 67.2%, with the highest values of 77% in Sweden and 74% in Finland and Australia 10,11 . On the other hand, Gateman D et al calculated a mean TTR in the St. Paul Family Health Network in Ontario of 58.05% 8 , while the mean TTR in the study of Ciurus T et al was 76%, which is considered to represent excellent anticoagulation control 1 . According to our study, the mean value of TTR was 60.15% during a follow-up of one year; it is lower than that reported in the big clinical trials, but still in line with much of the existing data in the literature. Also, the value is greater than the minimum TTR of 58% at which there is a benefit of anticoagulant therapy over antiplatelet therapy in terms of preventing vascular events 9 . An especially important result of our study was the fact that 49% of patients had a TTR of less than 60%, indicating that almost half of the patients were at increased risk of serious adverse events, both bleeding and thrombosis.
This fact calls for a deeper analysis of the management of anticoagulant therapy in our institution, which involves studying the relationship between patient and transfusion physician, identifying and understanding the factors which may influence the quality of the therapy, the behavior of the patients in accordance with established criteria, as well as the modification of VKA therapy in accordance with co-morbidities and other drugs that must later be introduced into therapy. INR values that are out of the therapeutic range require control within a short period of time, which increases the number of patients on a daily and monthly basis, increasing the cost of treatment; in addition, such values are a risk factor for complications of VKA treatment which may be potentially very serious for the patients.

Great variations in the values of TTR show that the anticoagulant effect of VKA is affected by a great number of factors. Our investigation has shown that gender, arterial hypertension, diabetes mellitus and the use of amiodarone, aspirin and clopidogrel were associated with a lower probability of staying within the target INR. The strongest independent factor for poor anticoagulation control was the use of amiodarone, which is the most widely used antiarrhythmic in atrial fibrillation. It is known that amiodarone has a negative impact on the anticoagulant effect of VKA, because it inhibits the hepatic metabolism of warfarin, potentiating its anticoagulant effect and resulting in high INR values and an increased risk of bleeding 22,23 . Concomitant use of antiplatelet therapy (aspirin and/or clopidogrel) has the same effect, as it also potentiates the anticoagulant effect of VKA and increases the risk of bleeding. A large number of studies have shown that although this combination of drugs can potentially prevent both thromboembolism and atherothrombotic events, it is also associated with an increased risk of severe bleeding and requires careful consideration of all the risks and benefits 24,25 . A large, nationwide investigation in Denmark showed that the risk of severe bleeding is increased 1.8-fold in patients taking VKA and aspirin, 3.5-fold in patients taking VKA and clopidogrel, and 4-fold in patients taking triple therapy 26 . Looking at the same problem from the other side, our recent investigation of different preparations of acetylsalicylic acid in patients with stable coronary disease also showed that there is an increased effect of aspirin in patients receiving anticoagulant therapy, and hence an increased risk of bleeding 27 .
Gender also stands out as a significant predictor of poor anticoagulation, showing that women respond more poorly to VKA treatment, so it is far more difficult to achieve good control in women than in men. The reason for this effect is unclear, but previous studies have confirmed this fact and have shown that women are at greater risk of AF-related stroke during VKA treatment, as a result of the poor anticoagulant effect of warfarin 14,28,29 . The impact of arterial hypertension on anticoagulant therapy has not been precisely defined, although it has been studied in numerous investigations. Apostolakis et al have shown that hypertension is associated with lower TTR 14 , while the Veterans AffaiRs Study to Improve Anticoagulation (VARIA) 30 did not confirm this relationship. Our investigation has shown that arterial hypertension is a predictor of poor anticoagulation, and a possible explanation of this influence may be associated with drug interactions 31 . Finally, diabetes mellitus, as a predictor of a poorer effect of VKA, is associated with increased levels of the procoagulant clotting factors (FII, FVII) and a decrease of anticoagulants, such as thrombomodulin, with an abnormal fibrinolytic pathway and decreased fibrinolysis 32,33 . In these patients there is most often also a disorder of renal function, which leads to abnormal elimination of these drugs and a poorer anticoagulant effect.

Because of the variable effects of VKA and the impact of a number of factors on this therapy, a new era of anticoagulation has developed, which is crucial for all patients who do not have sufficient anticoagulant protection or whose TTR is less than 60%. These are the direct oral anticoagulants (DOAC) or new oral anticoagulants (NOAC) (also called target-specific anticoagulants): on one side, dabigatran, which is a direct inhibitor of thrombin, and on the other side the inhibitors of FXa: rivaroxaban, apixaban, edoxaban. A number of meta-analyses have shown that these drugs have a better safety profile than the VKA and a lower incidence of bleeding, especially intracranial or gastrointestinal, have fewer interactions with food than VKA, achieve a faster antithrombotic effect, and require no regular monitoring because of their predictable pharmacokinetics [34][35][36] . Compared with warfarin, dabigatran is associated with a reduced risk of ischaemic stroke, intracranial haemorrhage and mortality, but with an increased risk of major gastrointestinal bleeding. It is the only anticoagulant with a specific antidote, idarucizumab. Inhibitors of FXa are recommended for patients with mild renal impairment (only 1/3 of the drug is renally eliminated), a high risk of bleeding, and/or potential drug-drug interactions.

Conclusion

The TTR is undoubtedly a useful and beneficial indicator of the effectiveness of VKA anticoagulant treatment. The most important predictors of poorer VKA therapy efficacy are arterial hypertension, diabetes mellitus, patients' gender and the use of amiodarone and antiplatelet (aspirin, clopidogrel) drugs. To improve the quality of VKA therapy, education of patients and better collaboration with them, as well as successful team-work with clinicians, are also imperative.

Fig. 1 - Time in therapeutic range (TTR) and time out of therapeutic range in investigated patients (%).
During the one-year follow-up of patients on VKA therapy a total of 6105 INR measurements were done, which is 8.13±2.47 INR measurements per patient. The average number of days between INR measurements was 34.89±17.26. The characteristics of anticoagulant therapy during the investigated period are shown in Table 2.

Table 3. Logistic regression model of independent factors for assessing the efficiency of VKA.
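As an illustration of the kind of multivariate logistic regression behind Table 3, the following sketch fits a model for the binary outcome TTR < 60%. It is not the original SPSS analysis; the column names and the synthetic data frame are hypothetical stand-ins for the study variables.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical patient-level table; in the study these would be the
# recorded demographics, comorbidities and co-medications.
rng = np.random.default_rng(0)
n = 725
df = pd.DataFrame({
    "female": rng.integers(0, 2, n),
    "hypertension": rng.integers(0, 2, n),
    "diabetes": rng.integers(0, 2, n),
    "amiodarone": rng.integers(0, 2, n),
    "aspirin": rng.integers(0, 2, n),
    "clopidogrel": rng.integers(0, 2, n),
    "ttr": rng.uniform(20, 95, n),
})
df["poor_control"] = (df["ttr"] < 60).astype(int)  # outcome: TTR < 60%

predictors = ["female", "hypertension", "diabetes",
              "amiodarone", "aspirin", "clopidogrel"]
X = sm.add_constant(df[predictors])
model = sm.Logit(df["poor_control"], X).fit(disp=False)

# Odds ratios with 95% confidence intervals, as reported in Table 3
print(np.exp(model.params))
print(np.exp(model.conf_int()))
```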
Local KPZ behavior under arbitrary scaling limits

One of the main difficulties in proving convergence of discrete models of surface growth to the Kardar-Parisi-Zhang (KPZ) equation in dimensions higher than one is that the correct way to take a scaling limit, so that the limit is nontrivial, is not known in a rigorous sense. To understand KPZ growth without being hindered by this issue, this article introduces a notion of "local KPZ behavior", which roughly means that the instantaneous growth of the surface at a point decomposes into the sum of a Laplacian term, a gradient squared term, a noise term that behaves like white noise, and a remainder term that is negligible compared to the other three terms and their sum. The main result is that for a general class of surfaces, which contains the model of directed polymers in a random environment as a special case, local KPZ behavior occurs under arbitrary scaling limits, in any dimension.

The Kardar-Parisi-Zhang (KPZ) equation was introduced to model the growth of a generic randomly growing surface. If f(t,x) is the height of a d-dimensional surface at time t ∈ R_{≥0} and location x ∈ R^d, the KPZ equation prescribes that the evolution of f is governed by the equation

∂_t f = ν∆f + (λ/2)|∇f|² + √D ξ,    (1.1)

where ξ is a random field known as space-time white noise, and ν, λ and D are the parameters of the model. Formally, space-time white noise is a distribution-valued centered Gaussian random field, with covariance structure

E(ξ(t,x)ξ(s,y)) = δ(t−s)δ^{(d)}(x−y),

where δ and δ^{(d)} are the Dirac delta functions on R and R^d, respectively. (See Section 3 for a precise definition of space-time white noise.) It is difficult to give a rigorous meaning to the KPZ equation, mainly due to the well-known difficulties in defining products of distributions. This problem now has a complete solution in dimension one, using a variety of techniques, such as the Cole-Hopf solution [8], regularity structures [51,52], paracontrolled distributions [47,50], energy solutions [43][44][45]48], and renormalization group [59]. Moreover, many one-dimensional discrete processes have been shown to have a KPZ scaling limit, as in [1,2,10,11,32,33,66,71,72]. All of this is only a small sample of the enormous literature that has grown around rigorous 1D KPZ. For surveys, see [26,67,68].

There are some recent constructions of distribution-valued solutions of the KPZ equation in dimensions greater than one [16,21,24,25,29,35,46,60,61]. These solutions are 'physically trivial', being equivalent to solutions of a linear stochastic differential equation, called the stochastic heat equation with additive noise. A 'nontrivial' solution of the KPZ equation in d ≥ 2 has not yet been constructed, although a promising breakthrough has occurred very recently for the related 2D stochastic heat equation with multiplicative noise, which is formally the 'exponential' of 2D KPZ [17]. A more detailed discussion of all this is in the forthcoming sections.

A fundamental roadblock in constructing nontrivial solutions of the KPZ equation in d ≥ 2 is that we do not know how to take scaling limits of approximate solutions to reach a nontrivial limit. Even in dimension one, there can be many different scaling limits. See, for example, [1, Section 7] for a discussion of the various ways of taking scaling limits of 1D directed polymers, only one of which has been made fully rigorous. But in many 1D models, we know at least one way of taking a scaling limit that leads to a nontrivial solution of the KPZ equation. In higher dimensions, the question becomes less tractable.
Physicists believe that for 2D models, the celebrated 'Family-Vicsek scaling' [38] is the correct one, and leads to a function-valued, rather than distribution-valued, solution of the 2D KPZ equation. This has been verified in numerical simulations [55,58,69] for discrete models, but remains out of the reach of rigorous mathematics. (See the end of Subsection 1.4 for a more detailed discussion.)

1.2. Local KPZ growth. The goal of this paper is to take a small step towards understanding KPZ in d ≥ 2 without running into the issue of constructing scaling limits, building on a framework introduced recently in the series of papers [18,19,22]. (Even in d = 1, this new framework may be useful in going beyond exactly solvable models; this will appear in forthcoming work with Arka Adhikari.) Since the 'correct' way to scale is still mysterious, the following workaround is proposed. Consider a general class of growth models, which contains at least one model of widespread interest. Then show that, irrespective of how we take a scaling limit, the growth is always locally like the KPZ equation (1.1), breaking up as the sum of a Laplacian term, a gradient squared term, a noise term, and a residual term that is negligible compared to the other three terms and their sum. Surprisingly, this turns out to be doable. The details are as follows.

The first step is to give a precise definition of local KPZ growth. Take any d ≥ 1. Suppose that we have a collection of random functions {f_ε}_{ε>0} from Z_{≥0} × Z^d into R. A general 'rescaling' of f_ε is defined as follows. Let α(ε), β(ε) and γ(ε) be positive real numbers depending on ε, with α(ε) and β(ε) tending to zero as ε → 0. Based on these coefficients, the rescaled version of f_ε is the function f^{(ε)} : R_{>0} × R^d → R defined as

f^{(ε)}(t,x) := γ(ε) f_ε(⌈t/α(ε)⌉, ⌈x/β(ε)⌉),

where ⌈u⌉ denotes the smallest integer greater than or equal to u when u ∈ R, and denotes the vector (⌈u_1⌉, ..., ⌈u_d⌉) when u = (u_1, ..., u_d) ∈ R^d. Note that this means space and time are rescaled so that successive time points are separated by α(ε) and neighboring points in space are separated by β(ε). The factor γ(ε) is just a multiplicative factor meant to ensure that the limit of f^{(ε)} as ε → 0 (on some appropriate space of functions or distributions) does not blow up to infinity or shrink to zero. This is why we need α(ε) and β(ε) to tend to zero, but there is no restriction on γ(ε).

Let A = {0, ±e_1, ..., ±e_d} be the set consisting of the origin and its nearest neighbors in Z^d. Define the 'local average' of f^{(ε)} at a point (t,x) as

f̄^{(ε)}(t,x) := (2d+1)^{−1} Σ_{a∈A} f^{(ε)}(t, x + β(ε)a),

the 'approximate time derivative' as

∂_t^{(ε)} f^{(ε)}(t,x) := α(ε)^{−1} (f^{(ε)}(t + α(ε), x) − f^{(ε)}(t,x)),

the 'approximate Laplacian' as

∆^{(ε)} f^{(ε)}(t,x) := β(ε)^{−2} Σ_{a∈A} (f^{(ε)}(t, x + β(ε)a) − f^{(ε)}(t,x)),

and the 'approximate squared gradient'¹ as

|∇^{(ε)} f^{(ε)}(t,x)|² := β(ε)^{−2} Σ_{a∈A} (f^{(ε)}(t, x + β(ε)a) − f̄^{(ε)}(t,x))².

The above definitions are inspired by the fact that if α(ε) → 0, β(ε) → 0, and f^{(ε)} converges in some strong sense to a smooth function f as ε → 0, then the approximate time derivative, the approximate Laplacian, and the approximate squared gradient converge (up to constant factors) to ∂_t f, ∆f and |∇f|². Of course, we do not expect f^{(ε)} to converge to a smooth limit in general.

For the definition of local KPZ behavior below, and for use in the rest of the paper, recall the meanings of the o_P and O_P notations. If {X_ε}_{ε>0} and {Y_ε}_{ε>0}

¹In the definition of the approximate squared gradient, one may object that it is more natural to take f^{(ε)}(t,x) as the term to be subtracted off. The reason behind choosing the local average instead is technical and becomes apparent in the proofs.
Similarly, we say that In other words, {X ε /Y ε } ε>0 is a tight family of random variables. Definition 1.1 (Local KPZ behavior). Let all notation be as above. We will say that f (ε) has 'local KPZ behavior' as ε → 0 if for some strictly positive ν(ε), λ(ε), and D(ε), which can vary arbitrarily with ε, some collection of 'noise fields' ξ (ε) : such that the following conditions hold: (1) The noise field ξ (ε) converges in law to white noise on R >0 × R d as ε → 0 (see Section 3 for the definition of this convergence). (2) The remainder term R (ε) (t, x) is o P of the first three terms on the right and their sum, meaning that R (ε) (t, x) divided by any of the first three terms, or by their sum, tends to zero in probability as ε → 0. Just to fully clarify the second condition and remove any scope for confusion, we note that it means that for any fixed (t, x), the quantities and x) all tend to zero in probability as ε → 0. In our examples, we will have that for fixed (t, x) and ε, the noise term ξ (ε) (t, x) is independent of the Laplacian and gradient squared terms. But we omit this from the definition of local KPZ behavior, so as to leave open the possibility of other examples where the independence criterion does not hold. It may seem as if ν(ε), λ(ε), and D(ε) should not be allowed to vary with ε if we want something analogous to (1.1). However, this is not true. In the KPZ literature, it is understood that the coefficients in (1.1) can be allowed to vary when taking a scaling limit, and even be allowed to tend to zero or blow up to infinity. This is especially true in dimensions higher than one. For example, the Family-Vicsek scaling for 2D surfaces [38] requires this (see further discussion in Subsection 1.4). The important point is that we want the time derivative to decompose into a linear combination of the Laplacian, the squared gradient, a noise term that behaves like white noise, and a negligible error term. This is captured by our definition of local KPZ growth. Having defined the notion of local KPZ growth, we define in the next subsection a class of discrete growth models that will be shown to have local KPZ growth under arbitrary scaling limits. 1.3. A class of growing random surfaces. Fix some d ≥ 1. Recall that we defined A = {0, ±e 1 , . . . , ±e d } to be the set consisting of the origin and its nearest neighbors in Z d . Let φ : R A → R be a function. Let z = {z t,x } t∈Z >0 ,x∈Z d be a collection of i.i.d. random variables, which will be called the 'discrete noise field', or simply the 'noise field' when there is no scope for confusion with the noise field ξ (ε) from Definition 1.1. Given ε > 0, consider a function f ε : Z ≥0 × Z d → R growing as follows: f ε (0, x) = 0 for all x, and for each t ≥ 0, Imagine f ε (t, x) to be the height of a d-dimensional random surface at time t and location x. The above recursion says that the height at time t + 1 is a function of the heights at x and its neighbors at time t, plus an independent random fluctuation. Since the function φ 'drives' the growth of f , we will sometimes refer to φ as the 'driving function' (as in [18,19]). Let 1 ∈ R A denote the vector of all 1's. For u ∈ R A , let u denote the average of the coordinates of u. For u, v ∈ R A , let us write u ≥ v if u a ≥ v a for each a ∈ A. We make the following assumptions about φ. • Equivariance under constant shifts. We assume that for all u ∈ R A and c ∈ R, φ(u + c1) = φ(u) + c. 
Besides being physically natural, this assumption has a long history in the literature on convergence of approximation schemes for partial differential equations, starting with [5]. It is also part of the framework introduced in [18,19,22].

• Zero at the origin. We assume that φ(0) = 0. There is no loss of generality in this assumption, since equivariance ensures that if φ(0) ≠ 0, and we define φ̃(u) := φ(u) − φ(0), and f̃_ε is defined using φ̃ in (1.2), then f̃_ε(t,x) = f_ε(t,x) − tφ(0) for all t and x. This assumption, too, is physically natural and has appeared in related prior work [5,18,19,22].

• Monotonicity. We assume that φ(u) ≥ φ(v) whenever u ≥ v.

• Symmetry. We assume that φ(u) remains unchanged under any permutation of the coordinates of u. This is a strengthening of the assumption of 'invariance under lattice symmetries' from [18].

• Regularity. We assume that φ is differentiable everywhere, and twice continuously differentiable in a neighborhood of the origin. As noted in [18,22], this assumption is needed for convergence to KPZ. In the absence of this assumption, the local growth may resemble some other equation, as in [22].

• Nondegeneracy. We assume that the Hessian matrix of φ at the origin is nonzero. This assumption is needed to ensure the presence of the gradient squared term in the KPZ limit. If the Hessian at the origin is zero, we may have a different kind of local growth, as in [22].

• Strict Edwards-Wilkinson domination. The Edwards-Wilkinson surface growth model [37] is described by equation (1.2) with φ(u) = ū. We assume that our surface grows at least as fast as the Edwards-Wilkinson surface, meaning that φ(u) ≥ ū for all u. Moreover, we assume that this domination is strict, in the following sense: If {u_n}_{n≥1} is a sequence such that φ(u_n) − ū_n → 0, then u_n − ū_n1 → 0. This is one of the two key assumptions that allow us to deduce local KPZ behavior under arbitrary scaling limits.

In addition to the above assumptions on φ, we also make the following set of assumptions on the noise field (in addition to the fact that it is a field of i.i.d. random variables).

• Zero mean. We assume that the noise variables have zero mean.

• Boundedness. We assume that the noise variables are bounded. That is, there is some constant B such that |z_{t,x}| ≤ B almost surely. This is the second key assumption that ensures local KPZ growth under arbitrary scaling limits.

• Absolute continuity. We assume that the law of the noise variables is absolutely continuous with respect to Lebesgue measure. We will refer to this condition by simply saying that the noise variables are 'continuous'.

Under the above conditions on the driving function and the noise field, it turns out that the discrete surface f_ε has local KPZ growth under arbitrary scaling limits. This result is stated in the next subsection.

1.4. Results. Let f_ε be as in the previous subsection, and suppose that all of the stated assumptions on φ and the noise field are satisfied. Let α(ε), β(ε) and γ(ε) be positive real numbers depending on ε, such that α(ε) and β(ε) tend to zero as ε → 0. As in Subsection 1.2, define the rescaled function f^{(ε)}(t,x) := γ(ε) f_ε(⌈t/α(ε)⌉, ⌈x/β(ε)⌉). The following theorem shows that f^{(ε)} has local KPZ behavior under any scaling where ε is sent to zero as the lattice spacing goes to zero. This is the main result of this paper.

Theorem 1.2. Under the assumptions on the driving function φ and the noise field z stated in the previous subsection, f^{(ε)} has local KPZ behavior as ε → 0, in the sense of Definition 1.1, for any choice of α(ε), β(ε) and γ(ε) such that α(ε) and β(ε) tend to zero as ε → 0.
Moreover, the coefficients ν(ε), λ(ε) and D(ε) of Definition 1.1 turn out to be the following:

ν(ε) = β(ε)²/((2d+1)α(ε)),  λ(ε) = (q−r)β(ε)²/(2α(ε)γ(ε)),  D(ε) = σ²ε²γ(ε)²β(ε)^d/α(ε),

where σ² is the variance of the noise variables, q is the value of the diagonal elements of Hess φ(0) (which are all equal due to the symmetry of φ), and r is the value of the off-diagonal elements of Hess φ(0).

To appreciate the meaning of Theorem 1.2, consider the following. It is not hard to guess that local KPZ behavior must be a consequence of Taylor expansion. But to invoke Taylor expansion, we need that for two neighboring points x and y, f_ε(t,x) ≈ f_ε(t,y). This is trivially true for t = 0, since f_ε(0,x) = 0 for all x. Using (1.2), one can then deduce by a crude inductive argument that this continues to hold as long as t does not exceed a threshold determined by ε. But for Theorem 1.2 to be true, we need that f_ε(t,x) and f_ε(t,y) continue to be close to each other for neighboring x and y even if t is allowed to vary arbitrarily as ε → 0. This is encapsulated by the following result, which is the key step in proving Theorem 1.2. Recall the O_P notation defined above Definition 1.1.

Theorem 1.3. Let {t_ε}_{ε>0} and {x_ε}_{ε>0} be completely arbitrary collections of elements of Z_{>0} and Z^d, respectively. Then for any a ∈ A, f_ε(t_ε, x_ε + a) − f_ε(t_ε, x_ε) = O_P(√ε) as ε → 0.

We will see later that the proof of Theorem 1.3 does not provide much intuition for why the result is true. An intuitive explanation is indicated by the next result. For a function f : Z^d → R and 1 ≤ i ≤ d, define δ_i f(x) := f(x + e_i) − f(x). Let δf := (δ_1 f, ..., δ_d f) be the 'gradient' of f. Heuristically, it is possible that for fixed ε, the 'gradient field' δf_ε(t,·) converges in law to a stationary process as t → ∞. It is not clear whether this is true, but the following result gives strong evidence in favor, under one extra assumption.

Theorem 1.4. Suppose that in addition to the hypotheses of Theorem 1.2, the following holds: Whenever |u_n − ū_n1| → ∞, we have φ(u_n) − ū_n → ∞. Then for any fixed ε > 0, the sequence of gradient fields {δf_ε(t,·)}_{t∈Z_{≥0}} is a time-homogeneous Markov chain on the state space (R^d)^{Z^d} (endowed with the product topology, which makes it a Polish space), with at least one translation-invariant stationary probability distribution. Moreover, it is a tight family.

In this context, it is worth noting that there has been an enormous amount of effort in the last two years on understanding stationary KPZ growth. Stationary solutions of the stochastic Burgers equation, which is supposed to be the equation for the gradient field of the solution of the KPZ equation, have been constructed in dimension one [3,36] and also in dimensions two and three [34]. Stationary solutions of the 1D 'open KPZ' equation were recently constructed in [28] and further developed and analyzed in [6,12,13]. Convergence to stationarity has been studied in [63][64][65]75]. It would be interesting if similar results can be proved in the general setting of Theorem 1.4. Incidentally, the reason why the existence results in [3,34,36] are restricted to d ≤ 3, while Theorem 1.4 holds in any dimension, is that the discreteness of both space and time in Theorem 1.4 makes it easy to overcome the problems of ill-posedness inherent in continuous-time differential equations. Although the white noise is smoothed in space in [3,34,36], which makes the space effectively discrete, it remains unsmoothed in time, giving rise to genuine technical limitations. This is the same reason why ε can be arbitrary in Theorem 1.4, while it needs to be small enough for the results of [3,34,36].
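To make the setup concrete, here is a small numerical sketch of the growth recursion (1.2), using the log-mean-exp driving function of Subsection 1.5 as an example. The choice of driving function, the noise law, the periodic boundary, and all parameter values are illustrative assumptions, not part of Theorem 1.2.

```python
import numpy as np

rng = np.random.default_rng(1)

def phi(stack):
    """Driving function phi(u) = log of the mean of exp(u_a) over the
    neighborhood A, applied pointwise; stack has shape (2d+1, n) in 1D."""
    m = stack.max(axis=0)  # stabilized log-sum-exp
    return m + np.log(np.exp(stack - m).mean(axis=0))

def grow(eps, n=256, steps=2000):
    """Iterate f(t+1, x) = phi((f(t, x+a))_{a in A}) + eps * z(t+1, x)
    on a 1D periodic lattice, starting from f(0, .) = 0."""
    f = np.zeros(n)
    for _ in range(steps):
        nbrs = np.stack([f, np.roll(f, 1), np.roll(f, -1)])  # A = {0, -e1, +e1}
        z = rng.uniform(-1.0, 1.0, n)  # bounded, mean-zero, continuous noise
        f = phi(nbrs) + eps * z
    return f

f = grow(eps=0.1)
# Theorem 1.3 predicts neighbor differences of order sqrt(eps), uniformly in t:
print("typical |f(t, x+1) - f(t, x)|:", np.abs(np.diff(f)).mean())
```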
A large number of other recent results related to the above theorems concern the local behaviors of solutions of the 1D KPZ equation and related processes, such as the Airy sheet, the KPZ line ensemble, the Brownian landscape, and the KPZ fixed point [7,27,30,31,[63][64][65]70]. These results contain much more information than Theorem 1.2, but for one-dimensional processes. Again, it would be interesting if some analogous refined results can be proved in the general setting considered above.

Theorem 1.2 can be viewed as a KPZ universality result. Roughly speaking, KPZ universality is the notion that the KPZ equation arises as the scaling limit of a large and varied class of growing random surfaces for which exact formulas are not available. Significant progress on 1D KPZ universality has been made in recent years [32,49,53,54,[73][74][75], although much remains to be understood. In dimensions higher than one, almost nothing is known. Theorem 1.2 is a small step towards understanding the universal nature of KPZ growth in general dimensions in the absence of integrability.

As a final remark, suppose that d = 1, and we want the coefficients ν(ε), λ(ε) and D(ε) to not depend on ε. Then by the formulas from Theorem 1.2, we need that β(ε)² ∝ α(ε), β(ε)² ∝ α(ε)γ(ε), and α(ε) ∝ ε²β(ε)γ(ε)², where the proportionality constants may depend on d and the law of the noise variables. The first two conditions show that γ(ε) must be a constant, and then plugging this into the third condition and using the first condition again, we get α(ε) ∝ ε⁴. Then using the first condition one final time, we have β(ε) ∝ ε². So, when d = 1, the only way to ensure that ν(ε), λ(ε) and D(ε) do not vary with ε is to have α(ε) ∝ ε⁴, β(ε) ∝ ε² and γ(ε) = constant. We will see later that for directed polymers in random environment, this gives the 'intermediate disorder' scaling limit constructed in [1]. In forthcoming work with Arka Adhikari, it will be shown that a class of 1D surfaces (of the type considered in this paper) converge in law to this universal scaling limit (known as the Cole-Hopf solution of the 1D KPZ equation) under the above scaling of space and time.

Intriguingly, for d ≥ 2, the same logic shows that there is no way to get constant coefficients as ε → 0, if we insist on α → 0 and β → 0 as ε → 0. This suggests that at least one of the coefficients ν, λ and D must tend to zero as ε → 0 for a KPZ scaling limit in d ≥ 2. Indeed, numerical simulations (such as in [55,58,69]) suggest that for d = 2, it may be possible to obtain a function-valued scaling limit by taking (in what is known in physics as the Family-Vicsek scaling [38]) ν ∼ β^{2−z}, λ ∼ β^{2−z−a}, and D ∼ β^{2+2a−z} for certain exponents a and z. Scaling arguments based on Galilean invariance [4] suggest that these exponents should satisfy a + z = 2. If we assume this, then we obtain the scaling ν ∼ β^a, λ ∼ 1, and D ∼ β^{3a}. Theorem 1.2 has something interesting to say here: Let d = 2. Suppose we take some β = β(ε) → 0 as ε → 0, and let α = β^z for some exponent z. Suppose that in this setting, we want to have λ ∼ 1 in Theorem 1.2. Then the formula for λ implies that γ ∼ β^{2−z}. Plugging this into the formulas for ν and D, we get ν ∼ β^{2−z} and D ∼ ε²β^{3(2−z)}, exactly matching the Family-Vicsek scaling except for the ε². However, this may not be an issue, since ε is often taken to scale like |log β|^{−1/2} in 2D (as in [15][16][17]21]), and therefore has no role to play in the exponents.
Thus, it is possible that Theorem 1.2 may provide a launchpad to an eventual rigorous proof of the Family-Vicsek scaling relation. In this context, it should also be noted that the numerical works are exclusively for discrete models. There does not seem to be any numerical work for the continuum KPZ equation, although there is a considerable body of theoretical physics results (see, e.g., [14]).

1.5. Application to directed polymers. Fix some d ≥ 1, and let

φ(u) := log((2d+1)^{−1} Σ_{a∈A} e^{u_a}),    (1.3)

where A = {0, ±e_1, ..., ±e_d}, as before. It is straightforward to verify that φ is equivariant under constant shifts, zero at the origin, monotone, symmetric, twice continuously differentiable, and has a nonzero Hessian matrix at the origin. Moreover, its Hessian matrix is positive semidefinite everywhere, which shows that φ is convex. Thus, the following lemma shows that φ strictly dominates Edwards-Wilkinson growth.

Lemma 1.5. If a driving function φ : R^A → R is equivariant under constant shifts, zero at the origin, monotone, symmetric, C² in a neighborhood of the origin, has a nonzero Hessian matrix at the origin, and is also convex, then φ satisfies the strict Edwards-Wilkinson domination condition. Moreover, it satisfies the additional condition of Theorem 1.4.

Let z = {z_{t,x}}_{t∈Z_{>0},x∈Z^d} be a collection of i.i.d. random variables (called 'noise variables' below), and for each ε > 0, let f_ε be the discrete random surface generated according to (1.2) with zero initial condition, using the driving function φ displayed in (1.3). A simple induction shows that

f_ε(t,x) = log((2d+1)^{−(t−1)} Σ_{P∈P_t} exp(ε Σ_{i=0}^{t−1} z_{t−i, x+p_i})),

where P_t is the set of all lazy random walk paths of length t starting at the origin, that is, the set of all P = (p_0, ..., p_{t−1}) ∈ (Z^d)^t such that p_0 = 0 and |p_i − p_{i−1}| ≤ 1 for each i, where |·| is the Euclidean norm. This is the log-partition function of the (d+1)-dimensional directed polymer model [23] on lazy random walk paths of length t−1 at inverse temperature ε, in the random environment z. By Lemma 1.5, Theorem 1.2, Theorem 1.4, and the above observations about φ, we get the following result.

Theorem 1.6. Suppose that the noise variables are continuous, bounded, and have mean zero. Then f_ε has local KPZ growth under any scaling limit, in the sense of Theorem 1.2. Moreover, the gradient fields {δf_ε(t,·)}_{t∈Z_{≥0}} satisfy the conclusions of Theorem 1.4.

Recall that for d = 1, the only way to get the coefficients of the local KPZ equation in Theorem 1.2 to not depend on ε is to have α(ε) ∝ ε⁴, β(ε) ∝ ε² and γ(ε) = constant. Translating this to polymer language, note that for fixed (t,x), f^{(ε)}(t,x) is the log-partition function for polymers of length α(ε)^{−1} at inverse temperature ε. Thus, α(ε) ∝ ε⁴ means that for polymers of length n, the inverse temperature needs to be proportional to n^{−1/4}. This is the 'intermediate disorder regime' considered in [1]. It is interesting that this is the only possible way to scale so that we get constant coefficients in the local KPZ equation of Theorem 1.2. It is also intriguing that for d ≥ 2, there is no way to scale so that the coefficients in the local KPZ equation do not vary with n while the inverse temperature goes to zero as n → ∞.
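As a sanity check on the identity above, the recursion (1.2) with the log-mean-exp driving function (1.3) can be compared against a brute-force sum over lazy paths for small t. The following sketch does this in d = 1; the noise law and the lattice window are illustrative assumptions.

```python
import itertools
import numpy as np

d, eps, T = 1, 0.3, 4
rng = np.random.default_rng(2)
L = 3 * T  # lattice window large enough that boundary effects cannot reach x = 0
z = {(t, x): rng.uniform(-1, 1) for t in range(1, T + 1) for x in range(-L, L + 1)}

# Recursion: f(t+1, x) = log mean_a exp(f(t, x+a)) + eps * z(t+1, x)
f = {x: 0.0 for x in range(-L, L + 1)}
for t in range(1, T + 1):
    f = {x: np.log(np.mean([np.exp(f[x + a]) for a in (-1, 0, 1)])) + eps * z[(t, x)]
         for x in range(-L + t, L - t + 1)}

# Brute force: normalized partition sum over lazy paths (p_0, ..., p_{T-1}), p_0 = 0
total = 0.0
for steps in itertools.product((-1, 0, 1), repeat=T - 1):
    p = np.concatenate([[0], np.cumsum(steps)])
    total += np.exp(eps * sum(z[(T - i, int(p[i]))] for i in range(T)))
print(f[0], np.log(total / 3 ** (T - 1)))  # the two numbers should agree
```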
In this context, it is worth noting (as pointed out by one of the referees) that in the case of the continuous polymer model (Brownian motion paths in a white noise environment regularized in space, as in [62]), it is actually straightforward to show that under any arbitrary scaling, the rescaled log-partition function is the solution of a regularized KPZ equation which features similar scaling-dependent coefficients and a noise that converges to white noise. One can see this thanks to the Feynman-Kac and Itô formulas and the Brownian/white noise scaling properties (see, e.g., [62, Section 2.3], where it is done for the diffusive scaling, but any other scaling also works).

In d = 2, there are very few results about scaling limits of the directed polymer model. A scaling limit for the partition function (rather than the log-partition function considered here) of the (2+1)-dimensional model has been obtained in [15], and the convergence of polymer paths to Brownian motion in the subcritical regime has been recently proved in [41]. There are a number of closely related results about 'continuum polymers', which have been used to construct distribution-valued solutions of the KPZ equation. Rough calculations indicate that one might be able to obtain scaling limits of discrete directed polymers using similar arguments. For example, the results from [16,21,46] indicate that for d = 2, a distribution-valued solution of the KPZ equation may be obtained, in the language of this paper, by taking α(ε) ∝ e^{−Cε^{−2}} for sufficiently large C, β(ε) ∝ α(ε), and γ(ε) ∝ ε^{−1}, as ε → 0. One might argue, though, that these constructions are not really solutions of the 2D KPZ equation, because it has been shown that they reduce to solutions of the stochastic heat equation with additive noise. The recent work [17], which gives a non-Gaussian construction 'at criticality', offers a more promising avenue to the construction of a 'true' distribution-valued solution of the 2D KPZ equation. For d ≥ 3, similar rough calculations indicate that the constructions in [20,24,25,35,60,61] correspond to taking the scaling limit of discrete directed polymers keeping the inverse temperature fixed (and small) while sending the spatial and temporal lattice spacings to zero in a certain way. This approach does not fit into our framework.

A final remark about phase transitions for 2D polymers, proved in [15]: One may wonder why a phase transition in the temperature parameter, as proved in [15] for the 2D polymer model, does not manifest itself in Theorem 1.6. The possible reason is that Theorem 1.6 is about the relation between local spatial and temporal derivatives of the height function, and not about the height function itself. So, although the height may behave differently in different regimes, the behavior of its infinitesimal growth will exhibit no such transition, according to Theorem 1.6.

1.6. A generalized discrete KPZ equation. Let z = {z_{t,x}}_{t∈Z_{>0},x∈Z^d} be a collection of i.i.d. random variables, and let f_ε be defined by (1.2) with zero initial condition, using a driving function of the form φ(u) = ū + c Σ_{a∈A} ψ(u_a − ū), where c > 0 is a constant and ψ : R → R is a fixed function. The recursion (1.2) can then be rewritten as

f_ε(t+1,x) − f_ε(t,x) = (2d+1)^{−1} Σ_{a∈A} (f_ε(t,x+a) − f_ε(t,x)) + c Σ_{a∈A} ψ(f_ε(t,x+a) − f̄_ε(t,x)) + ε z_{t+1,x},    (1.4)

where f̄_ε is the local average

f̄_ε(t,x) := (2d+1)^{−1} Σ_{a∈A} f_ε(t,x+a).    (1.5)

In other words, the discrete time derivative of f_ε equals the sum of the discrete Laplacian, a noise term, and functions of discrete spatial derivatives. We may refer to this as a 'generalized discrete KPZ equation'. Choosing ψ(x) = x² would make it exactly like a discrete KPZ equation, but that ψ does not satisfy the bounded derivative condition required for the result stated below (which prevents φ from satisfying the monotonicity assumption).
The ψ displayed in equation (1.4) is assumed to be nonnegative and continuous, to vanish only at the origin, to be C² near the origin with ψ''(0) ≠ 0, to have a uniformly bounded derivative, and to be bounded away from zero as |x| → ∞; the constant c is taken to be positive and sufficiently small. Under these conditions, which are exactly the ones verified in the proof of Theorem 1.7 in Section 4, f_ε has local KPZ growth under arbitrary scaling limits, and if moreover ψ(x) → ∞ as |x| → ∞, the gradient fields satisfy the conclusions of Theorem 1.4; this is the content of Theorem 1.7. This concludes the statements of results. The rest of the paper is organized as follows. A list of open problems is given in the next subsection. A sketch of the proof of Theorem 1.2 is in Section 2. Section 3 contains a discussion of space-time white noise. All proofs are in Section 4.

1.7. Open questions. The main open question is to construct nontrivial solutions of the KPZ equation in d ≥ 2, and then show that discrete processes such as directed polymers converge to these nontrivial solutions under appropriate scaling limits. This is a very hard problem, completely out of the reach of existing technology. Theorem 1.2 gives hope that something like this can eventually be proved, because it shows that local KPZ behavior holds for any scaling limit; and so, once an appropriate scaling is identified, convergence would probably hold, and the challenge would only be to prove nontriviality of the limit.

Another class of open problems is to understand the stationary probability measures of the gradient fields, which are guaranteed to exist by Theorem 1.4. Is the stationary measure unique? If not, what is the set of all stationary measures? What initial conditions lead to which stationary limits? What can be said about rates of convergence?

The C² assumption on the driving function φ is restrictive, but is crucial for the notion of local KPZ universality considered in this paper. Is it possible to have a different formulation that allows driving functions that are not C²? Such driving functions arise in many important models, such as last-passage percolation, ballistic deposition, etc. (e.g., see [19,22]). For the same reason, it would be nice to be able to extend the framework of this paper to asynchronous updates, where each site is given an independent Poisson clock and the height is updated whenever the clock rings (such as in [42]). Removing the boundedness assumption on the noise variables is also a worthy goal. It is not clear how the proof technique of this paper can be extended to unbounded noise variables without introducing some constraints on how α(ε), β(ε) and γ(ε) can vary with ε. Finally, in the setup of Subsection 1.6, it would be interesting to see if local KPZ behavior under arbitrary scaling limits holds when ψ(x) = cx² for some constant c, which is the 'true' discretization of the KPZ equation. One can make c vary with ε if that helps in reaching a nontrivial scaling limit.

2. SKETCH OF THE PROOF OF THEOREM 1.2

First, note that by the equivariance property of φ,

f_ε(t+1,x) = f̄_ε(t,x) + φ(q_ε(t,x)) + ε z_{t+1,x},

where f̄_ε is the local average of f_ε defined in equation (1.5), and

q_ε(t,x) := (f_ε(t,x+a) − f̄_ε(t,x))_{a∈A}.

Now, if for some x (possibly depending on ε and the noise variables), f_ε(t,x) ≈ f_ε(t,x+a) for all a ∈ A, then by Taylor expansion, and using the facts that φ(0) = 0, ∇φ(0) has all coordinates equal (by symmetry), and Hess φ(0) has all diagonal elements equal and all off-diagonal elements equal (again, by symmetry), it follows that

φ(q_ε(t,x)) = K|q_ε(t,x)|² + remainder,

where K is a constant depending on φ, and the remainder term is negligible compared to the first term on the right. (The first-order term vanishes because the coordinates of q_ε(t,x) sum to zero.) Together with the preceding display, this gives

f_ε(t+1,x) − f_ε(t,x) = (2d+1)^{−1} Σ_{a∈A}(f_ε(t,x+a) − f_ε(t,x)) + K|q_ε(t,x)|² + ε z_{t+1,x} + remainder,    (2.1)

which is local KPZ behavior, except that it holds only under the crucial assumption that f_ε(t,x+a) ≈ f_ε(t,x) for all a ∈ A. (We also need that the remainder term is negligible compared to the Laplacian term, the noise, and the sum of the Laplacian, noise and gradient squared terms, but let us ignore that for the time being.) This condition holds trivially at time t = 0, since f_ε(0,x) = 0 for all x.
If ε is small, it continues to hold for t = 1, and inductively, for all t up to a threshold depending on ε. But to get local KPZ behavior under arbitrary scaling limits, we need to have f_ε(t,x+a) ≈ f_ε(t,x) for all a ∈ A even if t and x are allowed to vary arbitrarily as ε → 0. The argument for this is outlined below.

The first step is to show, using a random walk representation introduced in [19], that for any x ∈ Z^d and 1 ≤ s ≤ t,

∂f_ε(t,x)/∂z_{s,y} ≥ 0 for all y ∈ Z^d, and Σ_{y∈Z^d} ∂f_ε(t,x)/∂z_{s,y} = ε.

As a straightforward consequence of this identity, it follows that if z_{1,y} is replaced by 0 for each y, then the value of f_ε(t,x) changes by at most Bε, where B is a constant upper bound on the magnitude of the noise variables. (Here we use the assumption that the noise variables are bounded.) Note that this bound has no dependence on t and x. For each t and x, let g_ε(t,x) be the value of f_ε(t,x) after replacing all z_{1,y} by zero. Note that g_ε(1,x) = 0 for each x. Thus, g_ε is just like f_ε, except that instead of starting with an all zero initial condition at time 0, we start with an all zero initial condition at time 1. Thus, g_ε(t+1,x) has the same law as f_ε(t,x). By the conclusion of the previous paragraph, this implies that

|E(f_ε(t+1,x)) − E(f_ε(t,x))| ≤ Bε.

The above deduction is the first main trick in the proof. The second trick is the following. Since the law of f_ε(t,x) is the same for all x, the above inequality gives

|E(f_ε(t+1,x)) − E(f̄_ε(t,x))| ≤ Bε.

Since the noise variables have mean zero, E(f_ε(t+1,x)) = E(φ((f_ε(t,x+a))_{a∈A})). Combining, and using equivariance of φ under constant shifts, we get

E(φ(q_ε(t,x))) ≤ Bε,

where recall that q_ε(t,x) = (f_ε(t,x+a) − f̄_ε(t,x))_{a∈A}. Note that the vector q_ε(t,x) belongs to the hyperplane H := {u ∈ R^A : ū = 0}. By Edwards-Wilkinson domination, φ is nonnegative everywhere on this hyperplane. Thus, for any η > 0, Markov's inequality gives

P(φ(q_ε(t,x)) > η) ≤ Bε/η.

By strict Edwards-Wilkinson domination, φ(u_n) → 0 implies u_n → 0 on H. This is equivalent to saying that for any δ > 0, there exists η(δ) > 0 such that if u ∈ H and φ(u) ≤ η(δ), then |u| ≤ δ. Thus,

P(|q_ε(t,x)| > δ) ≤ Bε/η(δ).

Note that this bound has no dependence on t and x. Thus, if ε → 0 and t_ε, x_ε vary arbitrarily with ε, we have |q_ε(t_ε,x_ε)| → 0 in probability. This allows us to apply
When φ is a bounded measurable function on R n and f ∈ S(R n ), let φ, f denote the usual L 2 inner product of φ and f . It is easy to check that this defines a continuous linear functional on S(R n ). Thus, bounded measurable functions may be viewed as tempered distributions. There is a natural topology on S ′ (R n ), called the 'strong dual topology', defined as follows. Recall that a subset B of a topological vector space is said to be bounded if for any open neighborhood V of the origin, there is some λ > 0 such that B ⊆ λV . The strong dual topology on S ′ (R n ) is generated by the family of seminorms It turns out that S ′ (R n ) is a countable union of Polish spaces under this topology. On such spaces, the usual notion of convergence of probability measures remains unchanged -a sequence {µ n } n≥1 of probability measures on the Borel σ-algebra of S ′ (R n ) is said to converge to a probability measure µ if F dµ n → F dµ for every bounded continuous function F : An important fact about the weak convergence of S ′ (R n )-valued random variables (called 'random distributions') is that a sequence {Φ n } n≥1 of random distributions converges in law to a random distribution Φ if and only if Φ n , f converges in law to Φ, f for every Schwartz function f . This is a nontrivial result, due to Fernique [39,40]. For a simplified proof, see [9]. We will use this fact below. 6 It is a consequence of the Minlos-Bochner theorem (see [9]) that if E is a continuous, symmetric, positive semidefinite bilinear form on S(R n ) × S(R n ), then there is a unique centered Gaussian measure ν on S ′ (R n ) whose covariance kernel is E. This means that for any φ ∈ S ′ (R n ) and f ∈ S(R n ), φ, f is a centered Gaussian random variable, and the covariance of φ, f and φ, g is E(f, g). In our context, we wish to define space-time white noise on R >0 × R d . So let us take n = d + 1 and consider R >0 × R d as a subset of R n . Define the bilinear form It is easy to verify that this is a continuous, symmetric, positive semidefinite bilinear form. Thus, there is a unique centered Gaussian measure ν on S ′ (R n ) whose covariance kernel is given by E. This is the law of space-time white noise on R >0 × R d . Let us now construct the field ξ (ε) needed for Theorem 1.2. With all notation as in Theorem 1.2, define, for (t, x) ∈ R >0 × R d and ε > 0, where σ is the standard deviation of the noise variables. Since any realization of ξ (ε) is a bounded measurable function, we can view ξ (ε) as a random tempered distribution. The following proposition shows that it converges in law to white noise as ε → 0. Proposition 3.1. As ε → 0, the field ξ (ε) converges in law to white noise on R >0 × R d , in the sense defined above. Proof. Take any f ∈ S(R × R d ). By the discussion above, we have to show that ξ (ε) , f converges in law to a Gaussian random variable with mean zero and variance x)dtdx 6 I thank Abdelmalek Abdesselam for telling me about this result, and also about the strong dual topology on S ′ (R n ), which I was unaware of. denote the average value of f in B n,v . Then by the decay properties of f , it is not hard to justify that Thus, ξ (ε) , f is a linear combination of i.i.d. random variables. The required central limit theorem for ξ (ε) , f now follows by standard methods (e.g., using characteristic functions) and the decay properties of f . The details are omitted. PROOFS First, we prove Theorem 1.2 and Theorem 1.3. 
Throughout, we will assume that the conditions on φ and the noise variables stated in Section 1.3 hold. Fix a realization of f_ε. Then, for any t ∈ Z_{≥0} and x ∈ Z^d, define a random walk on Z^d as follows. The walk starts at x at time t, and goes backwards in time, until reaching time 0. If the walk is at location y ∈ Z^d at time s ≥ 1, then at time s−1 it moves to y+a with probability ∂_a φ((f_ε(s−1, y+b))_{b∈A}), for a ∈ A, where ∂_a φ is the derivative of φ in coordinate a (which exists, by our assumption that φ is differentiable everywhere). By [19, Lemma 3.1], these numbers are nonnegative and sum to 1 when summed over a ∈ A. Therefore, this describes a legitimate random walk on Z^d, moving backwards in time. (Incidentally, when φ corresponds to the polymer model, the law of the above random walk, conditional on the noise variables, is given by the classical polymer measure. Therefore, this random walk generalizes the polymer random walk to a general class of growth models.) The following result is a special case of [19, Proposition 3.2].

Proposition 4.1. Fix a realization of f_ε. Take any 1 ≤ s ≤ t and x, y ∈ Z^d. Let {S_r}_{0≤r≤t} be the backwards random walk defined above, started at x at time t. Then ∂f_ε(t,x)/∂z_{s,y} = ε P(S_s = y).

This yields the following corollary.

Corollary 4.2. For any 1 ≤ s ≤ t and x ∈ Z^d, if the noise variables {z_{s,y}}_{y∈Z^d} are all replaced by zeros, then the value of f_ε(t,x) changes by at most Bε.

Proof. This is a consequence of Proposition 4.1 and the fact that if f is a differentiable real-valued function on R^n for some n, and |∇f(x)|_1 ≤ ε for all x, then |f(x) − f(0)| ≤ ε|x|_∞, where |·|_∞ denotes the ℓ^∞ norm. This holds because, by the multivariate mean-value theorem, f(x) − f(0) = x·∇f(y) for some y on the line joining x and 0.

The above corollary allows us to prove the following lemma.

Lemma 4.3. For any t ∈ Z_{≥0} and x ∈ Z^d,

|E(f_ε(t+1,x)) − E(f_ε(t,x))| ≤ Bε,

where B is a constant upper bound on the magnitude of the noise variables.

Proof. Let g_ε(t,x) be the value of f_ε(t,x) after replacing all z_{1,y} by 0. Note that g_ε(1,x) = 0 for each x. Thus, g_ε is just like f_ε, except that instead of starting with an all zero initial condition at time 0, we start with an all zero initial condition at time 1. This implies that g_ε(t+1,x) has the same law as f_ε(t,x), which gives

|E(f_ε(t+1,x)) − E(f_ε(t,x))| = |E(f_ε(t+1,x)) − E(g_ε(t+1,x))|.

By Corollary 4.2, the quantity on the right is bounded by Bε.

As a corollary, we obtain the following important bound.

Corollary 4.4. For any t ∈ Z_{≥0} and x ∈ Z^d,

E(φ((f_ε(t,x+a) − f̄_ε(t,x))_{a∈A})) ≤ Bε,

where f̄_ε is the local average defined in equation (1.5).

Proof. Since f_ε starts from an all zero initial condition, it follows that E(f_ε(t,y)) does not depend on y. Thus, E(f̄_ε(t,x)) = E(f_ε(t,x)). Since the noise variables have mean zero, E(f_ε(t+1,x)) = E(φ((f_ε(t,x+a))_{a∈A})). Using the above two displays, the equivariance of φ, and Lemma 4.3, we get the desired inequality.

Our next goal is to show that φ(u) − ū grows at least quadratically in the distance of u from ū1 when φ(u) − ū is small enough. We need two technical lemmas.

Lemma 4.5. ∇φ(0) = (2d+1)^{−1}1.

Proof. By symmetry, all coordinates of ∇φ(0) are equal. Differentiating the identity φ(t1) = t (which follows from equivariance and φ(0) = 0) at t = 0 gives 1·∇φ(0) = 1, and the claim follows.

Lemma 4.6. All diagonal entries of Hess φ(0) are equal to some number q, and all off-diagonal entries are equal to some number r. Moreover, q + 2dr = 0, and q, r and q − r are all nonzero.

Proof. The symmetry of φ ensures the equality of all diagonal entries of Hess φ(0), and also the equality of all off-diagonal entries. Next, for t ∈ R, let g(t) := φ(t1). By the equivariance property, g(t) = φ(0) + t = t, and hence g''(t) ≡ 0. On the other hand, a simple calculation using solely the identity g(t) = φ(t1) shows that g''(0) = 1·Hess φ(0)1. Therefore, we get 1·Hess φ(0)1 = 0, which is the same as q + 2dr = 0. By the nondegeneracy assumption, at least one of q and r is nonzero. But then, the identity q + 2dr = 0 implies that both of them must be nonzero. Consequently, q − r = −2dr − r = −(2d+1)r is also nonzero.

Armed with the above lemmas, we are now ready to prove the following key fact.
Lemma 4.7. There are positive constants M and c, depending only on φ, such that φ(u) − ū ≥ c|u − ū1|² for every u ∈ R^A with φ(u) − ū ≤ M.

The proof uses the assumption of strict Edwards-Wilkinson domination.

Proof. Suppose that the claim is not true. Then for any positive M and c, there is some u such that φ(u) − ū ≤ M, but φ(u) − ū < c|u − ū1|². For each n, find such a point u_n for M = c = 1/n. Since φ(u_n) ≥ ū_n (by Edwards-Wilkinson domination), this implies that φ(u_n) − ū_n → 0 and u_n − ū_n1 ≠ 0. Thus, we can divide throughout by |u_n − ū_n1|², and get

(φ(u_n) − ū_n)/|u_n − ū_n1|² → 0.    (4.1)

Let y_n := u_n − ū_n1. Since φ(u_n) − ū_n → 0, strict Edwards-Wilkinson domination gives us y_n → 0. Also, note that ȳ_n = 0, φ(0) = 0, and by Lemma 4.5, ∇φ(0) = (2d+1)^{−1}1. So, by the equivariance property of φ and Taylor expansion (recalling that y_n → 0),

φ(u_n) − ū_n = φ(y_n) = ½ y_n·Hess φ(0)y_n + o(|y_n|²)

as n → ∞. Dividing both sides by |y_n|², and letting z_n := y_n/|y_n|, we get

(φ(u_n) − ū_n)/|y_n|² = ½ z_n·Hess φ(0)z_n + o(1),

which, by (4.1), implies that z_n·Hess φ(0)z_n → 0. But |z_n| = 1 for each n, and so, passing to a subsequence if necessary, we may assume that z_n → z for some z with |z| = 1. Then z·Hess φ(0)z = 0. By Lemma 4.6, this is the same as

(q − r)|z|² + (2d+1)²r z̄² = 0,

where q and r are as in Lemma 4.6. But z̄_n = 0 for each n, and so z̄ = 0. Also, by Lemma 4.6, q − r ≠ 0. Thus, the above display shows that z = 0, giving a contradiction to the prior observation that |z| = 1. This completes the proof.

Henceforth, let us fix two collections {t_ε}_{ε>0} and {x_ε}_{ε>0} in Z_{>0} and Z^d, respectively. We make no assumptions about these collections; they can be completely arbitrary. Let us define some quantities whose behaviors, as ε → 0, will be of interest to us. Let

A_ε := f̄_ε(t_ε,x_ε) − f_ε(t_ε,x_ε),  B_ε := ½(q−r)|Q_ε|²,  C_ε := ε z_{t_ε+1,x_ε},  D_ε := f_ε(t_ε+1,x_ε) − f_ε(t_ε,x_ε) − A_ε − B_ε − C_ε,

where Q_ε := (f_ε(t_ε,x_ε+a) − f̄_ε(t_ε,x_ε))_{a∈A}. We now prove a series of lemmas about these quantities. A general fact that we will use a number of times is the following.

Lemma 4.8. If X_ε = o_P(c_ε) and Y_ε = O_P(d_ε), then X_ε Y_ε = o_P(c_ε d_ε).

Proof. Take any δ, η > 0. Choose K > 0 such that P(|Y_ε| > K d_ε) ≤ η for all ε. Then

P(|X_ε Y_ε| > δ c_ε d_ε) ≤ P(|X_ε| > (δ/K) c_ε) + P(|Y_ε| > K d_ε).

This shows that lim sup_{ε→0} P(|X_ε Y_ε| > δ c_ε d_ε) ≤ η. Since η is arbitrary, the left side must be equal to zero. Since δ is arbitrary, this proves that X_ε Y_ε = o_P(c_ε d_ε).

Lemma 4.9. As ε → 0, B_ε = O_P(ε).

Proof. First, note that by the equivariance property of φ,

f_ε(t_ε+1,x_ε) = f̄_ε(t_ε,x_ε) + φ(Q_ε) + ε z_{t_ε+1,x_ε},

where Q_ε = (f_ε(t_ε,x_ε+a) − f̄_ε(t_ε,x_ε))_{a∈A}. Using Taylor expansion, the assumption that φ(0) = 0, the observation that 1·Q_ε = 0, and the formulas for ∇φ(0) and Hess φ(0) from Lemma 4.5 and Lemma 4.6, we get

φ(Q_ε) = (½(q−r) + h(|Q_ε|))|Q_ε|²,

where h is a function such that h(x) → 0 as x → 0. Since q ≠ r and B_ε = ½(q−r)|Q_ε|², the above identity and the fact that h(x) → 0 as x → 0 show that there is some δ > 0 such that B_ε ≤ 2φ(Q_ε) whenever |Q_ε| ≤ δ. Also, as shown in Section 2, Corollary 4.4 together with strict Edwards-Wilkinson domination and Markov's inequality gives P(|Q_ε| > δ) ≤ Bε/η(δ). Since E(φ(Q_ε)) ≤ Bε by Corollary 4.4 and φ(Q_ε) ≥ 0, another application of Markov's inequality yields, for any K > 0,

P(B_ε > Kε) ≤ P(|Q_ε| > δ) + P(2φ(Q_ε) > Kε) ≤ Bε/η(δ) + 2B/K,

which proves that B_ε = O_P(ε).

Lemma 4.10. As ε → 0, D_ε = o_P(B_ε) and D_ε = o_P(ε).

Proof. By the displays in the proof of Lemma 4.9, D_ε = φ(Q_ε) − B_ε = h(|Q_ε|)|Q_ε|². Since |Q_ε| → 0 in probability and B_ε = O_P(ε), both claims follow from Lemma 4.8.

Lemma 4.13. As ε → 0, ε = O_P(A_ε).

Proof. Note that ε^{−1}A_ε can be written as bz_{t_ε,x_ε} + R_ε, where R_ε and z_{t_ε,x_ε} are independent, and b = −2d/(2d+1). Since the law of z is absolutely continuous with respect to Lebesgue measure, it is a standard fact that for any δ > 0 there is some η > 0 such that P(z ∈ S) < δ for any Borel set S with Lebesgue measure less than η. This shows that for any K > 0 and r ∈ R,

P(|bz_{t_ε,x_ε} + r| ≤ K^{−1}) ≤ f(K),

where f(K) is a function only of K, with no dependence on r, that tends to zero as K → ∞. Since f(K) has no dependence on r and ε, we can take expectation over r on the left side and arrive at the desired result.

Lemma 4.14. As ε → 0, ε = O_P(A_ε + B_ε + C_ε).

Proof. Note that ε^{−1}(A_ε + B_ε + C_ε) can be written as z_{t_ε+1,x_ε} + W_ε, where W_ε and z_{t_ε+1,x_ε} are independent. The rest of the proof proceeds exactly as in the proof of Lemma 4.13.

Corollary 4.15. As ε → 0, ε = O_P(A_ε), ε = O_P(C_ε), and ε = O_P(A_ε + B_ε + C_ε).

Proof. The first and third claims are Lemma 4.13 and Lemma 4.14. The second claim holds since ε^{−1}C_ε = z_{t_ε+1,x_ε}, whose law does not depend on ε and which is nonzero with probability one.

We now have all the ingredients for the proofs of Theorem 1.2 and Theorem 1.3.

Proof of Theorem 1.2. Fix (t,x) ∈ R_{>0} × R^d, and let t_ε := ⌈t/α(ε)⌉ and x_ε := ⌈x/β(ε)⌉, so that

∂_t^{(ε)} f^{(ε)}(t,x) = (γ(ε)/α(ε))(f_ε(t_ε+1, x_ε) − f_ε(t_ε, x_ε)).

Similarly, for any a ∈ A, f^{(ε)}(t, x + β(ε)a) = γ(ε) f_ε(t_ε, x_ε + a). This implies that

∆^{(ε)} f^{(ε)}(t,x) = (2d+1)γ(ε)β(ε)^{−2} A_ε.

Similarly, note that

|∇^{(ε)} f^{(ε)}(t,x)|² = 2(q−r)^{−1}γ(ε)²β(ε)^{−2} B_ε.

Let ξ^{(ε)} be defined as in equation (3.2). Then note that

(γ(ε)/α(ε)) C_ε = √(D(ε)) ξ^{(ε)}(t + α(ε), x),

and the time-shifted field (t,x) → ξ^{(ε)}(t + α(ε), x) converges in law to white noise whenever ξ^{(ε)} does. Finally, let

R^{(ε)}(t,x) := (γ(ε)/α(ε)) D_ε.

Using all of the above, and the definition of D_ε, we get

∂_t^{(ε)} f^{(ε)}(t,x) = ν(ε)∆^{(ε)} f^{(ε)}(t,x) + λ(ε)|∇^{(ε)} f^{(ε)}(t,x)|² + √(D(ε)) ξ^{(ε)}(t + α(ε), x) + R^{(ε)}(t,x),

with ν(ε), λ(ε) and D(ε) as in the statement of Theorem 1.2. By Lemma 4.10 and Corollary 4.15 (combined via Lemma 4.8), D_ε is o_P of A_ε, B_ε, C_ε, and A_ε + B_ε + C_ε, and hence R^{(ε)}(t,x) is o_P of the first three terms on the right and of their sum. By Proposition 3.1, ξ^{(ε)} converges in law to white noise as ε → 0. This completes the proof.
Proof of Theorem 1.3. Note that for any a ∈ A,

f_ε(t_ε, x_ε + a) − f_ε(t_ε, x_ε) = (Q_ε)_a − (Q_ε)_0, so that |f_ε(t_ε, x_ε + a) − f_ε(t_ε, x_ε)| ≤ 2|Q_ε|,

and apply Lemma 4.9 and the fact that q ≠ r, which together give |Q_ε| = O_P(√ε).

Next, let us prove Theorem 1.4. The proof requires the following lemmas.

Lemma 4.16. A family {f_n}_{n≥1} of random elements of (R^d)^{Z^d} is tight if and only if {f_n(x)}_{n≥1} is a tight family for each x ∈ Z^d.

Proof. If {f_n}_{n≥1} is a tight family, then the continuity of the projection f → f(x) shows that for any x, {f_n(x)}_{n≥1} is a tight family. Conversely, suppose that {f_n(x)}_{n≥1} is a tight family for each x. Fix some δ > 0. Then for every x, there is a compact set K_x ⊆ R^d such that P(f_n(x) ∉ K_x) ≤ 2^{−|x|}δ for all n. Let K := Π_{x∈Z^d} K_x. Then K is a compact set under the product topology, and for any n,

P(f_n ∉ K) ≤ Σ_{x∈Z^d} P(f_n(x) ∉ K_x) ≤ Cδ,

where C := Σ_{x∈Z^d} 2^{−|x|} does not depend on n. This completes the proof.

Lemma 4.17. For any fixed ε > 0, 1 ≤ i ≤ d and x ∈ Z^d, the family {δ_i f_ε(t,x)}_{t∈Z_{≥0}} is tight.

Proof. Note that q_ε(t,x) ∈ H, where H := {u ∈ R^A : ū = 0}. By Edwards-Wilkinson domination, φ(u) ≥ 0 for all u ∈ H. Moreover, by the additional condition of Theorem 1.4, we have that for any K > 0, there is some L > 0 such that if u ∈ H and |u| > L, then φ(u) > K. Thus, by Corollary 4.4 and Markov's inequality,

P(|q_ε(t,x)| > L) ≤ K^{−1} E(φ(q_ε(t,x))) ≤ K^{−1} Bε.

By the inequality displayed in the proof of Theorem 1.3, this proves the tightness of {δ_i f_ε(t,x)}_{t∈Z_{≥0}}.

Lemma 4.18. For any fixed ε > 0, the sequence {δf_ε(t,·)}_{t∈Z_{≥0}} is a time-homogeneous Markov chain on (R^d)^{Z^d}.

Proof. Note that by the equivariance property of φ,

δ_i f_ε(t+1,x) = φ((f_ε(t,x+e_i+a) − f_ε(t,x))_{a∈A}) − φ((f_ε(t,x+a) − f_ε(t,x))_{a∈A}) + ε(z_{t+1,x+e_i} − z_{t+1,x}),

and the height differences appearing on the right are functions of δf_ε(t,·). This shows that δf_ε(t+1,·) is a function of δf_ε(t,·) and {z_{t+1,x}}_{x∈Z^d}, from which it is clear that {δf_ε(t,·)}_{t∈Z_{≥0}} is a time-homogeneous Markov chain.

Let T denote the transition kernel of the Markov chain from Lemma 4.18. That is, for a probability measure μ on (R^d)^{Z^d}, Tμ denotes the law of the chain after taking one step from an initial state with law μ.

Lemma 4.19. The kernel T is weakly continuous; that is, if μ_n → μ weakly, then Tμ_n → Tμ weakly.

Proof. Let Ψ be a bounded continuous function from (R^d)^{Z^d} into R. Let {μ_n}_{n≥1} be a sequence of probability measures on (R^d)^{Z^d} converging weakly to a probability measure μ. Let ν_n := Tμ_n and ν := Tμ. For each n, let f_n be a (R^d)^{Z^d}-valued random variable with law μ_n. Let z := {z_x}_{x∈Z^d} be a collection of i.i.d. random variables having the same law as our noise variables, independent of the f_n's. Then, since φ is differentiable everywhere, and hence continuous, it is not hard to see that there is a continuous function Φ : (R^d)^{Z^d} × R^{Z^d} → (R^d)^{Z^d} such that the state of the chain after one step from g, with noise z, equals Φ(g,z). Now, note that (f_n, z) converges in law to (f, z), where f has law μ and is independent of z. Since Ψ∘Φ is a bounded continuous function, this implies that

E(Ψ(Φ(f_n, z))) → E(Ψ(Φ(f, z))), that is, ∫Ψ dν_n → ∫Ψ dν.

Thus, ν_n → ν weakly, which completes the proof.

We are now ready to prove Theorem 1.4.

Proof of Theorem 1.4. Let γ_t be the law of δf_ε(t,·). Define the Cesàro averages

μ_t := t^{−1} Σ_{s=0}^{t−1} γ_s.

By Lemma 4.16 and Lemma 4.17, {γ_t}_{t∈Z_{≥0}} is a tight family. From this, it follows that {μ_t}_{t∈Z_{≥0}} is also a tight family. Therefore, by Prokhorov's theorem, it has a weakly convergent subsequence. Passing to this subsequence if necessary, let us assume that μ_t converges weakly to some μ. We claim that μ is an invariant probability measure for the Markov kernel T. To see this, let ν_t := Tμ_t and λ_t := Tγ_t. Let Ψ : (R^d)^{Z^d} → R be a bounded continuous function. Then by the linearity of T,

∫Ψ dν_t = t^{−1} Σ_{s=0}^{t−1} ∫Ψ dλ_s.

But for each t, λ_t = γ_{t+1}. Thus,

∫Ψ dν_t = ∫Ψ dμ_t + t^{−1}(∫Ψ dγ_t − ∫Ψ dγ_0).

By the boundedness of Ψ, the second term on the right goes to zero as t → ∞. By assumption, μ_t → μ, and so by Lemma 4.19, ν_t → ν := Tμ. Combining, we get that ∫Ψ dν = ∫Ψ dμ. Since Ψ is an arbitrary bounded continuous function, this shows that ν = μ. Thus, μ is an invariant probability measure for the kernel T. The translation invariance of μ follows from the translation invariance of each γ_t.

Next, let us prove Lemma 1.5. We need the following lemma.

Lemma 4.20. Suppose that the hypotheses of Lemma 1.5 hold. Then there exists δ > 0, depending only on φ, such that for any u ∈ R^A with ū = 0, we have φ(u) ≥ ¼(q−r)|u| min{δ, |u|}, where q and r are as in Lemma 4.6.
Moreover, we have that q > r.

Proof. By Lemma 4.6 (whose proof does not use Edwards-Wilkinson domination), we have that for any u with ū = 0,

u · Hess φ(0) u = (q − r)|u|².  (4.5)

Since Hess φ(0) is positive semidefinite due to the convexity of φ, this immediately shows that q ≥ r. By Lemma 4.6, q ≠ r. Thus, q > r.

In the following, |M| denotes the Euclidean norm of a matrix M, that is, the square root of the sum of squares of the entries. Since φ is C² in a neighborhood of the origin and q > r, there exists δ small enough such that φ is C² in the open ball of radius 2δ centered at the origin, and |Hess φ(u) − Hess φ(0)| < (q − r)/2 for all u in this ball. We are now ready to complete the proof of Lemma 1.5.

Finally, let us prove Theorem 1.7.

Proof of Theorem 1.7. The claims follow from Theorem 1.2 and Theorem 1.4, if we can just verify that φ satisfies the necessary conditions. It is easy to see that φ is equivariant under constant shifts, symmetric, and zero at the origin. A simple calculation shows that any mixed partial derivative of φ at the origin is equal to C(d) c ψ″(0), where C(d) is a nonzero constant depending only on d. Since c > 0 and ψ″(0) ≠ 0, this shows that φ satisfies the nondegeneracy condition. Next, note that

∂φ/∂u_a = (2d + 1)⁻¹ + c ψ′(u_a − ū) − c(2d + 1)⁻¹ Σ_{b∈A} ψ′(u_b − ū).

By the uniform boundedness of |ψ′|, the above expression shows that φ is monotone if we choose c small enough; specifically, if c ≤ (4d|ψ′|_∞)⁻¹. Next, let us show that φ satisfies the strict Edwards-Wilkinson domination condition. Since c > 0 and ψ ≥ 0 everywhere, we have that φ(u) ≥ ū for all u. Next, take any sequence {u_n}_{n≥1} such that φ(u_n) − ū_n → 0. Suppose that {u_n − ū_n 1}_{n≥1} is an unbounded sequence. Since ψ(x) is bounded away from zero as |x| → ∞ and ψ ≥ 0 everywhere, this implies that φ(u_n) − ū_n cannot converge to zero, contradicting our hypothesis. Thus, {u_n − ū_n 1}_{n≥1} must be a bounded sequence. Since ψ is continuous and nonnegative, and the only point where it is zero is the origin, we conclude that any convergent subsequence of {u_n − ū_n 1}_{n≥1} must converge to zero. Thus, u_n − ū_n 1 → 0. This proves that φ satisfies the strict Edwards-Wilkinson domination condition. Finally, if ψ(x) → ∞ as |x| → ∞, then it is clear, by the nonnegativity of ψ, that the function φ satisfies the extra condition of Theorem 1.4.
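For completeness, the step that turns the Hessian control established above into the bound claimed in Lemma 4.20 can be sketched as follows. This is a reconstruction under the stated hypotheses (φ convex and C² near the origin, φ(0) = 0, ∇φ(0)·u = 0 whenever ū = 0, and q > r), not a quotation of the source:

```latex
% Case |u| <= delta, with \bar{u} = 0: Taylor with Lagrange remainder at some xi = t u, t in [0,1],
% using |Hess phi(xi) - Hess phi(0)| < (q-r)/2 on the ball of radius 2*delta:
\phi(u) = \tfrac12\, u\cdot \operatorname{Hess}\phi(\xi)\, u
        \;\ge\; \tfrac12 (q-r)\lvert u\rvert^2 - \tfrac12\cdot\tfrac{q-r}{2}\lvert u\rvert^2
        \;=\; \tfrac14 (q-r)\lvert u\rvert^2 .
% Case |u| > delta: convexity and phi(0) = 0 make t -> phi(tu)/t nondecreasing, so
\phi(u) \;\ge\; \frac{\lvert u\rvert}{\delta}\,\phi\!\Big(\frac{\delta u}{\lvert u\rvert}\Big)
        \;\ge\; \frac{\lvert u\rvert}{\delta}\cdot \frac{(q-r)\,\delta^2}{4}
        \;=\; \tfrac14 (q-r)\,\delta\,\lvert u\rvert .
% The two cases combine to phi(u) >= (1/4)(q-r)|u| min{delta, |u|}, as stated.
```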
Dirac-source diode with sub-unity ideality factor

An increase in power consumption necessitates a low-power circuit technology to extend Moore's law. Low-power transistors, such as tunnel field-effect transistors (TFETs), negative-capacitance field-effect transistors (NC-FETs), and Dirac-source field-effect transistors (DS-FETs), have been realised to break the thermionic limit of the subthreshold swing (SS). However, a low-power rectifier, able to overcome the thermionic limit of an ideality factor (η) of 1 at room temperature, has not been proposed yet. In this study, we have realised a DS diode based on graphene/MoS2/graphite van der Waals heterostructures, which exhibits a steep-slope characteristic curve, by exploiting the linear density of states (DOS) of graphene. For the developed DS diode, we obtained η < 1 for more than four decades of drain current (ηave_4dec < 1), with a minimum value of 0.8, and a rectifying ratio exceeding 10^8. The realisation of a DS diode represents an additional step towards the development of low-power electronic circuits.

4. I am not sure how the authors are claiming that an ideality factor of less than 1 is sustained for >2 decades. By the green line drawn, it is clear that an ideality factor of less than 1 is maintained for a little over 1 decade for VBG = -15 V, -30 V and -45 V.

5. How repeatable are the I-V measurements? Will the ideality factor be retained for 10 consecutive sweeps? Please include comments in the manuscript.

6. Will the device behave the same way if one were to use CVD-grown MoS2 and graphene? Please include comments in the manuscript.

Reviewer #3 (Remarks to the Author): The manuscript entitled "Dirac-Source Diode with Sub-unity Ideality Factor" has demonstrated a Dirac-source diode, which exhibits a steep-slope characteristic curve by using graphene electrodes. This topic and their findings are very interesting. However, one of the major problems is the large leakage current in the Ids-Vds measurement. As shown in Fig. 1b and Fig. 3, the Ids current crosses zero (y-axis) at a very negative Vds voltage, indicating that leakage current has been coupled into the Ids measurement. Therefore, the measured Ids may not truly reflect the diode behavior. This could be important because the Ids current (of the sub-1-ideality-factor region) is only one order of magnitude higher than the leakage current. To ensure an accurate ideality factor, a lower leakage current (below pA) or higher-resolution equipment needs to be used, which should be satisfied by a standard semiconductor analyzer. Similarly, if we look into the data in Fig. S8a, b, the top-gate leakage current also has a strong influence on the off-state current, which could be another reason for the measured SS < 60 mV/dec (which also happens in the device off-state). Hence, this top-gate leakage should also be minimized to support the authors' claim. Based on the present data, I cannot support its publication in Nature Communications. There are some other minor questions, as below.

1. The device optical image should be included in Fig. 1.

2. The diode has a local top gate, but the fabrication processes are not described in the manuscript.

3. At negative back-gate voltage, a large Schottky barrier should exist between p-type graphene and n-type MoS2, as claimed by the author: "When a negative back-gate voltage is applied, the Schottky barrier height increases".
However, the author also claims the device is dominated by the barrier at graphite and MoS2: "the device current is mainly modulated by the Schottky barrier at the interface between the graphite and monolayer MoS2". This point should be clarified further.

4. Similar to the previous question, the diode behavior only exists at negative back-gate voltage according to Fig. 2. Because the graphene/MoS2 barrier is more sensitive to the gate voltage while the graphite/MoS2 barrier is nearly insensitive to the gate voltage, does this behavior suggest that the observed rectifier behavior is governed by the graphene/MoS2 rather than the proposed graphite/MoS2 junction?

5. For the band diagram in Fig. 2b, the graphene should have a large Schottky barrier with MoS2. For the diagram in Fig. 2c, if the MoS2 is degenerately doped, the band bending is confusing at the graphite part. What is the work function of the graphite used here?

Reviewer #1 (Remarks to the Author): In this work, Myeong et al. reported an asymmetric MoS2 diode structure contacted by graphene and graphite. They exploit the tunable DOS of graphene, which acts as a cold source of electron injection, to achieve a sub-unity ideality factor in the diode. The concept of the device is similar to those reported in the Dirac-source FET literature.

Our reply: We thank the Referee for bringing up this point. In fact, the important part of the graphene that acts as a cold source of electron injection is both G1 and G2. Since the channel of the FET (MoS2) is n-type, the graphene source of both G1 and G2 should be p-type, as shown in the main Figure 1 (below) of the previous manuscript, for the graphene to inject cold electrons into the channel (see Supplementary Section 1). The difference between G1 and G2 arises since the doping of bare graphene is different from the doping of graphene on top of MoS2, as shown in Supplementary Figure S5 of the previous Supplementary Information (above). As VBG changes, the resistance of the graphene, i.e., both the bare graphene and the graphene on top of MoS2, shows two peaks, as in Figure (c) above. However, in our new device, the doping of graphene due to MoS2 was negligible (see below). It seems that the doping level changes from sample to sample depending on the initial doping of the bulk MoS2 crystal we use to exfoliate monolayer MoS2. Therefore, we have now removed G1 and G2 and instead put an integrated region G (graphene), which again should be p-type for cold charge injection into the n-type MoS2 channel. We have modified the main Figure accordingly. In the revised manuscript, we studied the effect of different doping levels on the switching properties of the DS diode by simulations, as shown in Fig. S7 and S8 below, and added a discussion at line 1, page 7 of the Supplementary Information:

"We also studied the impact of the doping level of graphene on the switching properties of the DS diode, as shown in Fig. S7(b). The ideality factor of the DS diode with p-type graphene is less than one in the bias-voltage region between -0.1 V and -0.3 V, and the current is increased over four orders of magnitude. In contrast, the ideality factor becomes larger than one when graphene is intrinsic or n-type, as shown in Fig. S8(a), because the Dirac point is then always below the top of the channel barrier and cannot filter the high-energy thermionic current. Fig. S8(b) shows that there is an obvious phase transition of the ideality factor from sub-unity to over-unity as graphene is doped from p-type to n-type."
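The doping-type dependence described in this reply can be illustrated with a toy model of the injected current. The sketch below is illustrative only and is not the authors' simulation code: it assumes a linear graphene DOS, Fermi-Dirac occupation, a rigid shift of the graphene bands with bias, and a fixed channel barrier edge; the function name ds_current and all parameter values are invented for illustration.

```python
import numpy as np

kT = 0.0259  # thermal energy at 300 K, eV

def ds_current(u, E_f0=-0.45, E_d0=0.0, E_b=-0.20):
    """Toy Dirac-source diode current at forward-bias magnitude u (in volts).

    Graphene bands shift rigidly with bias (E_f = E_f0 + u, E_dirac = E_d0 + u);
    graphene is p-type (E_f0 < E_d0) and E_b is the fixed channel barrier edge.
    """
    E = np.linspace(E_b, E_b + 1.5, 6000)        # injection energies above barrier, eV
    E_f, E_d = E_f0 + u, E_d0 + u
    dos = np.abs(E - E_d)                        # linear graphene DOS
    occ = 1.0 / (1.0 + np.exp((E - E_f) / kT))   # Fermi-Dirac occupation
    return np.sum(dos * occ) * (E[1] - E[0])     # crude quadrature of dos * occ

u = np.linspace(0.0, 0.08, 100)
current = np.array([ds_current(v) for v in u])

# Local ideality factor eta = (1/kT) / (d ln I / dV); eta < 1 means a steep slope.
eta = (1.0 / kT) / np.gradient(np.log(current), u)
print(f"local ideality factor stays between {eta.min():.2f} and {eta.max():.2f}")
```

In this toy model both the Boltzmann tail and the linear DOS prefactor grow under forward bias, so the logarithmic slope exceeds 1/kT and the extracted ideality factor sits below one, mirroring the p-type case of Fig. S7(b); making the graphene n-type in the model removes the DOS bottleneck and pushes the ideality factor back above one.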
Our reply: As shown in the main Figure 1 of the new (and old) manuscript, the graphene region should be p-type for the graphene source to inject cold electrons into the n-type diode channel (MoS2). The mechanism by which sub-unity diode behavior occurs is as follows. As the bias voltage applied to the graphene increases in the positive direction (reverse bias) from negative forward bias (main Figure 1c), the electrochemical potential of the graphene electrode decreases due to the negative electron charge. Therefore, the energy above which electrons inject becomes closer to the Dirac point as the bias voltage increases (towards reverse bias), and the diode reaches the off-state. Due to the linear DOS, this super-exponential decrease of the carrier density (please see Supplementary Figure 1) allows a steeper slope of the diode on-off transition in our DS Schottky diode than in the conventional ideal Schottky diode. In the revised manuscript, we performed more calculations to study different operation regimes and added a discussion at line 8, page 7 in the Supplementary Information:

"Next, we studied the electron-doped region of graphene to realize a sub-unity ideality factor. There are two important factors in achieving an ideality factor less than one in a DS diode: the doping type of graphene and the Schottky barrier height between graphite and ML MoS2. In order to realize a sub-unity ideality factor in the electron-doped region of graphene, a negative gate voltage has to be applied to achieve carrier transport by the valence band of ML MoS2. We first fix the Schottky barrier height (ФB = 0.48 eV) between graphite and ML MoS2 to the value in the hole-doped region of graphene, and apply n-type graphene with a p-type Ohmic contact between graphene and ML MoS2. Fig. S9(a, b) show that such a device cannot reach an ideality factor less than one when the current is larger than 1×10^-10 μA/μm, because there is a larger p-type Schottky barrier height between graphite and ML MoS2."

3. The FET mechanism in Fig. S8. Our reply (see Supplementary Figure S10): Here we show ID versus the control gate (instead of the top gate), which has an overlap with graphene. As shown in the band diagram below, as the DS FET turns from on-state to off-state, the DOS of graphene outside the control-gate region decreases, which agrees well with previous reports of DS-FET operation (Qiu, C. et al.), while not all the electrons above EF in graphene contribute to the current. The injected current density from graphene is given by an energy integral of the graphene DOS weighted by the occupation, of the form J ∝ ∫ |E − EDirac| f(E − EF) T(E) dE, where EDirac is the Dirac point and EF is the Fermi level of graphene. So, as the channel barrier gets lower than the Dirac point, the available density of states from graphene around E = ECBM (ECBM is the conduction band edge of MoS2) increases due to the linear DOS factor |E − EDirac|. So, the injected current increases super-exponentially and the device works as a DS-FET.

Our reply: As the referee points out, we tried to demonstrate the DS diode with a p-type transition-metal dichalcogenide such as WSe2 or MoTe2. However, we have realized that those p-type materials cannot be used to realize a p-type DS Schottky diode, at least in the device structure we use. We might be able to devise a new device structure in the future to realize a p-type DS diode, but this work is beyond the current research we report here. The conditions which graphene and the p-type TMDC should satisfy simultaneously are as follows: (1) graphene should be n-type (since the channel is p-type), which means that a positive VBG should be applied, because the charge-neutrality point of graphene is usually near VBG = 0 V; (2) at positive VBG the channel should be p-type.
We can make the graphene n-type by applying VBG > 0; however, as far as we know, no 2D materials are p-type at positive gate voltage (please see the figures below from different references). Therefore, the conditions for a p-type DS Schottky diode cannot be satisfied with any p-type 2D materials we can use. We have fabricated a Schottky diode with graphene and the p-type TMDC WSe2. The characteristic curve is as follows. Here the graphene and graphite were used to contact p-type WSe2. We had to apply a negative gate voltage; otherwise the channel becomes either n-type or falls inside the bandgap (see the bottom graph above). We could only observe nearly ideal Schottky diode behavior, with an ideality factor slightly greater than 1, at negative gate voltages, where the graphene is p-type while the channel is p-type. Again, for this p-type Schottky diode to perform as a DS diode, the graphene should be n-type and the channel should be p-type. Both conditions cannot be met simultaneously, since a negative gate voltage should be applied for WSe2 to be p-type while a positive gate voltage should be applied for graphene to become n-type. One might ask why the graphene and MoS2 cannot be gated separately. However, it is physically almost impossible to place two gates so as to make all the graphene region n-type and all the MoS2 region p-type at the interface between them. In a real device, even if two gates are separately placed on top of the MoS2 and the graphene, they affect the other regions as well. In principle, a metal contact having a high work function should form a Schottky barrier to n-type MoS2, similar to the graphite contact. However, the deposition of metal contacts onto 2D materials has been reported to have invasive effects (Liu, Y. et al. Approaching the Schottky-Mott limit in van der Waals metal-semiconductor junctions. Nature 557, 696-700 (2018)). Thus we tried to avoid this problem by using graphite rather than metal contacts. We agree that achieving scalable (and non-invasive) contacts for MoS2 is a very important problem, although it is beyond the scope of our research.

5. The thermionic behavior in the diode and FET is temperature dependent. In order to demonstrate the breaking of the thermionic limit in a more convincing manner, low-temperature measurements are suggested.

Our reply: We used the term "thermionic behavior in a conventional diode" to mean the following (please also see Supplementary Figure 1 below). The continuous density of states (DOS) of a normal metallic source exhibits the Boltzmann-distributed electron density given by n(E) ≈ exp((EF − E)/kBT). For a Dirac source, the linearly varying DOS produces a super-exponentially decreasing electron density (given by n(E) ≈ (EDirac − E) exp((EF − E)/kBT)) with increasing energy. Therefore, the carrier distribution in the source of the DS diode and FET has a steeper, super-exponential decay with energy, overcoming the thermionic exponential distribution of the carrier density in conventional diodes and FETs. Therefore, the ideality factor and subthreshold swing of our DS diode and DS FET can have values lower than the conventional limits (ideality factor = 1 and subthreshold swing = 60 mV/dec at 300 K), which again originate from the thermionic carrier distribution of the source in the conventional diode and FET. To clarify the term "thermionic limit", we added the above statement to Supplementary Section 1.
"Therefore, carrier distribution in the source of DS diode and FET has steeper super-exponential decays with energy, overcoming thermionic exponential distribution of carrier density in conventional diode and FET. Therefore, the ideality factor and subthreshold swing of our DS diode and DS FET can have lower value than the conventional limits (ideality factor = 1 and subthreshold swing = 60mV/dec at 300K), which again originates from thermionic carrier distribution of the source in the conventional diode and FET" As the referee suggested, we have additionally performed temperature-dependent measurements. To decrease noise level (off state current level in diode measurement) from ~ 50 pA to < 10 fA (three orders of magnitude lower noise level), we newly set up the measurement equipment by replacing all the measurement cables with triaxial cables. In the new system, we cannot cool the device, but can heat up the device up to 350K. We have now included the temperature-dependent measurement in Supplementary Section 13 with Supplementary Figure. S13. We observed that ideality factor decreases with temperature, which can be understood by the analytical formula Eq. (4) in the Supplementary Information. In a conventional ideal Schottky diode, an unity ideality factor does not change with temperature, since the current = 0 (1 − exp(− ). The ideality factor of DS diode is given The energy difference between and does not chang e with temperature and is much larger than . So, the ideality factor gets smaller as the in creasing of temperature as shown in Figure. S16(c). The measured temperature dependence of ideality factor can be well described by the formula (the red solid line) in Fig. S16(b). It sh ould be noted that ideality factors can only be compared at the same temperature: the smaller the ideality factor, the better switching performance. Sub-threshold swing of DS diode increases with temperature, which is consistent with that of DS FET. In the revised suppleme ntary information section 14, we added a discussion about tem-perature dependence of ideality factor as follows: " Figure. S16(a) shows temperature dependent DS-diode measurement from 300K to 350K at fixed gate voltages VCG=0V, VTG=-0.7V, and VBG=-6V. We observe that averaged ideality factor decreases with temperature in Figure. S16(b) . The ideality factor of DS diode is simplified as follows according to Eq. The energy difference and between does not change and the ideality factor gets smaller as the increasing of temperature as shown in Figure. S16(b). The measured temperature dependence of ideality factor can be well described by the formula ( the red solid line) in Figure. S16(b). It should be noted that ideality factors can only be compared at the same temperature: the smaller the ideality factor, the better switching performance. The decreasing of ideality factor as a function of temperature does not mean the improvement of switching properties. The idea of the manuscript is good, but unfortunately the writing and the figures are quite unclear. 1. The fabrication process is not clear. For example, Figure S2: the graphene as shown in the Figure below cropped from the reference. We followed the same procedure to make edge contacts to graphene and graphite by using plasma etching and depositing Cr/Au. To make the edge contact to graphene clarified, we added "One-dimensional edge contact on graphene was formed in this process 1 ." In the Supplementary Section 2, Device Fabrication. 2. Figure 1b 3. Figure 2: Again, what is VTG here? 
Our reply: In the older version of Figure 3 in the previous manuscript, the graphene was p-type at VBG = -30 V. Main Fig. 1 shows that as a negative bias voltage is applied to the graphene, the diode enters the forward-bias state (on-state). In this negative-bias regime, electrons inject from graphene to graphite, while holes inject from graphite to graphene. Please note that hole injection is the same process as electron injection, viewed in the opposite direction.

4. Figure 3: I am not sure how the authors are claiming that an ideality factor of less than 1 is sustained for >2 decades. By the green line drawn, it is clear that an ideality factor of less than 1 is maintained for a little over 1 decade for VBG = -15 V, -30 V and -45 V.

Our reply: The red (not green, as in the previous manuscript) line indicates the ideal diode curve with an ideality factor of exactly 1. The green line indicated the average ideality factor over one decade of drain current that we observed. To avoid confusion, we denote the average ideality factor over 1 decade as ηave_1dec. Figure 3 of the new manuscript (from an additional experiment with an additional device) clearly shows that the average ideality factor is less than 1 over more than 3 decades of drain current, since the average slope over 3 decades of current is steeper than the ideal diode curve with ideality factor = 1.

5. How repeatable are the I-V measurements? Will the ideality factor be retained for 10 consecutive sweeps? Please include comments in the manuscript.

Our reply: We thank the Referee for bringing up this point. In our additional device, we performed the suggested measurement. We have found that 10 consecutive measurements show nearly the same IV curves with nearly the same ideality factor, ηave_1dec = 0.84. We added the repeatability of the device to Supplementary Figure S16 and Supplementary Section 14.

6. Will the device behave the same way if one were to use CVD-grown MoS2 and graphene? Please include comments in the manuscript.

Our reply: We thank the Referee for bringing up this point. We have changed the last sentence of the conclusion paragraph from "The realisation of a steep-slope DS diode paves the way for the development of low-power circuit elements and energy-efficient circuit technology." to "By using CVD-grown MoS2, graphene and graphite, integrated circuits using steep-slope DS-FETs and DS-diodes can be fabricated on a large scale, paving the way for energy-efficient circuit technology."

Reviewer #3 (Remarks to the Author): The manuscript entitled "Dirac-Source Diode with Sub-unity Ideality Factor" has demonstrated a Dirac-source diode, which exhibits a steep-slope characteristic curve by using graphene electrodes. This topic and their findings are very interesting. However, one of the major problems is the large leakage current in the Ids-Vds measurement, as shown in Fig. 1b and Fig. 3; similarly, the data in Fig. S8a, b show that the top-gate leakage also influences the off-state current.

1. The device optical image should be included in Fig. 1.

2. The diode has a local top gate, but the fabrication processes are not described in the manuscript.

Our reply: We had the phrase "Additional e-beam lithography and deposition processes are performed to facilitate top-gate placement." in Supplementary Section 2. However, this fabrication step was missing in Figure S2 showing the device-fabrication procedure. Now, we have added the additional final step of depositing the top gate in Supplementary Figure S2 j), as below.

3. At negative back-gate voltage, a large Schottky barrier should exist between p-type graphene and n-type MoS2. Our reply: We agree that at large negative gate voltages, a Schottky barrier also forms between p-type graphene and n-type MoS2.
However, this Schottky barrier at the graphene/MoS2 interface is much smaller than the Schottky barrier at graphite/MoS2, which is implied by the IV-curve measurements on MoS2 devices contacted by graphene contacts or by graphite contacts (please see Supplementary Figure S6). The domination of the Schottky barrier at the interface between graphite and MoS2, rather than graphene/MoS2, in the asymmetrically contacted graphene-MoS2-graphite diode can also be seen in the bias-dependent IV curve at large negative bias voltage applied to the graphene contact with the graphite contact at ground. We always apply the bias voltage to the graphene, as shown in Figure 1, while the graphite is grounded. If the graphene/MoS2 Schottky barrier dominated, the on-current should appear at positive rather than negative bias voltage at negative back-gate voltages, since for an n-type Schottky barrier a positive forward bias should be applied to the barrier electrode (see the figure below). Since the diode turns on (forward-bias condition satisfied) when we apply a negative bias to the graphene with the graphite grounded, the dominant Schottky barrier is at the graphite/MoS2 interface, not graphene/MoS2. Now, we have added Supplementary Section 8 and Supplementary Figure S10 to explain why the measured diode IV curves show the dominance of the Schottky barrier at the graphite/MoS2 interface rather than graphene/MoS2.

5. For the band diagram in Fig. 2b: Although we agree that a Schottky barrier is formed at graphene/MoS2, the Schottky barrier at the graphite/MoS2 interface dominates the device, as we explained in the answers to questions 3 and 4. Therefore, to avoid confusion, we did not include the Schottky barrier at graphene/MoS2 in the figure.

Reviewer #1 (Remarks to the Author): I appreciate the authors' efforts to answer my questions. Most of the questions have been addressed adequately by either performing new experiments or simulations. For those that the authors cannot address, explicit reasons are provided. Overall the quality of the manuscript is improved, and the presentation is clearer than the original one. I can recommend publication of this paper in Nature Communications.

Reviewer #2 (Remarks to the Author): My concerns have been addressed sufficiently by the authors. I recommend the manuscript for publication.

Reviewer #3 (Remarks to the Author): In the revised manuscript, the author has made efforts to improve the manuscript quality. However, the major questions about the SS < 60 mV/dec and the data interpretation still remain, as explained below. Therefore, I cannot support its publication. As shown in Fig. S12, SS < 60 mV/dec only exists below 3×10^-14 A, which is very close to the gate leakage current. Therefore, the gate leakage current (either positive or negative in value) may be coupled into the measured Ids, leading to an underestimated SS. Although the 4-decade SS is also below 60 mV/dec, the actual SS < 60 mV/dec region spans less than 0.5 decade, which could mislead readers. Similarly, for the ideality factor < 1 (presented in Fig. 3), the Ids current is still influenced by the gate leakage current (because the < 1 region has very small currents and can be easily influenced). Therefore, the data presented here cannot support the main conclusion of this manuscript. Another minor point is the band diagram.
Many works in the literature have theoretically and experimentally suggested that graphene is gate-tunable and that p-type graphene (especially under negative gate voltage) would form a large Schottky barrier with MoS2. For example, if a negative gate voltage is applied, the Fermi level of graphene is lower than the 4.7 eV of graphite. The author should explain why a small graphene Schottky barrier is shown in the band diagram in Fig. 2.

Reviewer #3 (Remarks to the Author): In the revised manuscript, the author has made efforts to improve the manuscript quality. However, the major questions about the SS < 60 mV/dec and the data interpretation still remain, as explained below. Therefore, I cannot support its publication.

1. As shown in Fig. S12, SS < 60 mV/dec only exists below 3×10^-14 A and is very close to the gate leakage current. Therefore, the gate leakage current (either positive or negative in value) may be coupled into the measured Ids, leading to an underestimated SS. Although the 4-decade SS is also below 60 mV/dec, the actual SS < 60 mV/dec region spans less than 0.5 decade, which could mislead readers. Similarly, for the ideality factor < 1 (presented in Fig. 3), the Ids current is still influenced by the gate leakage current (because the < 1 region has very small currents and can be easily influenced). Therefore, the data presented here cannot support the main conclusion of this manuscript.

Our Reply: The reviewer wrote that SS < 60 mV/dec and an ideality factor < 1 hold only within a very small range of current near the leakage current, namely that SS < 60 mV/dec holds for less than 0.5 decade of drain current. To show that this is not consistent with our data, we plot (below) drain current versus ideality factor for the data shown in main Figure 3c), and drain current versus SS for the data shown in Supplementary Figure 12. Here, both ideality factor < 1 and SS < 60 mV/dec hold for at least 2 decades of drain current above the leakage current. SS < 60 mV/dec holds from 10^-14 A to at least 10^-12 A. Also, ideality factor < 1 holds from 2×10^-14 A to over 2×10^-12 A, which is more than 2 decades of drain current. Since the figure of merit for a low-power transistor is known to be the average SS over 4 decades of current, we measured the average SS over 4 decades of current, as defined in the literature, and confirmed that it is below 60 mV/dec. Also, the average ideality factor over 4 decades of current is less than 1. Please note that ideality factor < 1 does not guarantee that SS < 60 mV/dec, since SS can be further improved by decreasing the thickness of the gate dielectric. An ideality factor < 1 holding for more than two decades of drain current cannot be explained by the leakage current, since the leakage current affects only the drain-current region very close to the leakage level (within roughly 0.5 decade of current or so), as the reviewer pointed out. Therefore, we believe our data, with an ideality factor less than unity over more than two decades of current and an average ideality factor over four decades of current below one, support our main conclusion.

Another minor point is the band diagram. Many works in the literature have theoretically and experimentally suggested that graphene is gate-tunable and that p-type graphene (especially under negative gate voltage) would form a large Schottky barrier with MoS2. For example, if a negative gate voltage is applied, the Fermi level of graphene is lower than the 4.7 eV of graphite. The author should explain why a small graphene Schottky barrier is shown in the band diagram in Fig. 2.
Our Reply: As shown in Figure S6 and in Nano Lett. 15, 3030-3034 (2015) (DOI: 10.1021/nl504957p), whose data are plotted below, monolayer MoS2 contacted by graphene shows Ohmic behavior at room temperature, which is consistent with our result.

REVIEWER COMMENTS

Reviewer #1 (Remarks to the Author): Reviewer #3 did raise some legitimate concerns about the extent to which the leakage current affects the ideality factor and the sub-60 mV/dec SS. To fully address reviewer #3's question, the authors should probably present the leakage current under the measurement conditions. However, I understand that for the presented devices it may be difficult, as the gate leakage may not have been recorded during the measurements. In that case, results of new devices may be presented as supplementary material to demonstrate the robustness of the sub-1 ideality factor. For the questions regarding the band alignment, I believe the authors made a reasonable explanation.

Reviewer #2 (Remarks to the Author): I think that the authors' justifications are believable and I recommend the manuscript for publication. The SS below 60 mV/dec is sustained for 4 decades of current, even though the current values are low. Upon changing cables, the leakage current that is measurable has decreased, which is expected. Hence, the SS < 60 mV/dec range has decreased. About the band diagram, I agree with the authors that the graphene/MoS2 barrier can be low enough to be Ohmic (https://doi.org/10.1021/nn501723y). Hence I agree with the authors' representation of the band diagrams.

Reviewer #3 (Remarks to the Author): Although the author did a quick response to my previous concerns, their explanation is not solid and I cannot support its publication without addressing these key questions, as explained below. In the original manuscript, in Fig. S8 (attached below), the SS < 60 mV/dec region first emerges at an Ids current level of ~100 pA to 1 nA (10^-10 A to 10^-9 A), which is very close to the leakage current of the measurement system. I raised this question in my previous comments and, thanks to the authors, they recognized this problem and replaced all the measurement cables with triaxial cables with higher resolution (as shown in the revised Fig. S12, attached below). However, after improving the measurement setup, the SS < 60 mV/dec did not show up again at Ids ~100 pA to 1 nA, and the author did not explain why this happens. Instead, it first emerges at a much lower current level of ~0.03 pA (3×10^-14 A, as highlighted in the attached figure below), which is also close to the leakage current of the system. Since the SS < 60 mV/dec region is always in the same range as the leakage current, no matter which measurement setup is used, the explanation of the working mechanism is not solid, and it implies that the leakage current impacts the SS extraction. Without solving this key question, the data presented cannot support the authors' claim and I cannot support the manuscript for publication. Regarding the band diagram: in the reference mentioned by the authors (Nano Lett. 15, 3030 (2015)), they indicate that graphene is highly gate-tunable and that an Ohmic contact only forms at large positive gate voltage (+80 V for the monolayer and +60 V for the 20-layer device, as shown in the authors' response letter). At negative gate voltage, a large Schottky barrier actually forms between graphene and MoS2.
The Schottky barrier (at negative gate voltage) is large enough to dominate the whole carrier transport, and the large barrier actually enables a new field of vertical transistors to switch the device off [Appl. Phys. Lett. 105, 083119 (2014); Appl. Phys. Lett. 106, 223103 (2015); Nat. Mater. 12, 246-252 (2013)]. Therefore, the author should carefully label the energy of each part (graphene at positive/negative gate, graphite, MoS2 conduction band and valence band) in the band diagram in Fig. 2 to analyse the band alignment and clarify this question. In particular, the band is not bending at all on the graphene side, regardless of the graphene Fermi-level difference between Fig. 2b and 2c.

Reviewer #1 (Remarks to the Author): Reviewer #3 did raise some legitimate concerns about the extent to which the leakage current affects the ideality factor and the sub-60 mV/dec SS. To fully address reviewer #3's question, the authors should probably present the leakage current under the measurement conditions. However, I understand that for the presented devices it may be difficult, as the gate leakage may not have been recorded during the measurements. In that case, results of new devices may be presented as supplementary material to demonstrate the robustness of the sub-1 ideality factor. For the questions regarding the band alignment, I believe the authors made a reasonable explanation.

Our reply to Reviewer 1: We greatly thank the reviewer for giving us advice on how to address the concerns raised by referee #3 and improve our manuscript further. As the referee points out, we have measured the leakage current of the system and of the gate, which was 2-5 fA (please see the red dots below). On the other hand, the reverse-bias current in the diode measurement and the off-current in the field-effect transistor measurement were measured to be ~50 fA and ~20 fA, respectively, which are much larger than the leakage current of the measurement system and the gate leakage. We have now added the measured leakage current along with the currents through the device in Supplementary Figure S17. Our first version of the manuscript showed a high leakage current (~50 pA), and we fully understand that there was concern about this. Because the reverse-bias current in the diode measurement and the off-current in the field-effect transistor measurement were the same as the leakage current of the measurement system (~50 pA), the measured off-current and reverse-bias current could not represent the actual off-current and reverse-bias current of the measured device (please see the figure below). The figure below shows the leakage current (red) and the current we measure in the device (black). As the leakage current of the system is very high, ~50 pA, the off-state current is dominated by this system leakage current. When the system leakage dominates the off-state current, the IV curve can be interpreted in a wrong way in the range of current near the off-state current level. For example, when we measured (as displayed by the measurement instruments) forward currents of -50 pA and -500 pA (×10 = 1 decade), the actual current flowing through the device can be regarded as -100 pA and -550 pA (×5.5 = 0.74 decade) due to the +50 pA of leakage current. We fully understood that the original data had a problem due to this large system leakage current, and therefore we performed measurements with a new system having a much lower system leakage current on a new device.
Considering the same interpretation as in the case of the high leakage current, when the measured reverse-bias and off-current level is much higher than the leakage current, the forward-bias and on-current near the off-state current level can represent the actual current flowing in the device. For example, when we measured (as displayed by the measurement instruments) forward currents of -50 fA and -500 fA (×10 = 1 decade), the actual current flowing through the device can be regarded as -52 fA and -502 fA (×9.65 = 0.98 decade) due to the +2 fA of leakage current, which is almost the same as the measured data. Therefore, we claim that our data measured with an ultralow leakage current (2-5 fA) support the steep-slope behavior in the diode and the field-effect transistor above the leakage current level.

Our reply to Reviewer 3: We thank the reviewer for pointing out this important concern and the way to improve our manuscript. The reviewer raised concerns about the change in the current level at which the steep-slope behavior occurs in the field-effect transistor measurement between the original manuscript and the re-submitted manuscript. In the first round of review, the reviewer raised a concern about the high leakage current (~50 pA), and we fully understood the issue, because the reverse-bias current in the diode measurement and the off-current in the field-effect transistor measurement were the same as the leakage current of the measurement system (~50 pA). As the reviewer pointed out, the measured off-current and reverse-bias current cannot represent the actual off-current and reverse-bias current of the measured device (please see figure 1 below). The figure below shows the leakage current (red) and the current we measure in the device (black). As the leakage current of the system is very high, ~50 pA, the off-state current is dominated by this system leakage current. When the system leakage dominates the off-state current, the IV curve can be interpreted in a wrong way in the range of current near the leakage current level. For example, when we measured (as displayed by the measurement instruments) forward currents of -50 pA and -500 pA (×10 = 1 decade), the actual current flowing through the device can be regarded as -100 pA and -550 pA (×5.5 = 0.74 decade) due to the +50 pA of leakage current. We fully understood that the original data had a problem due to this large system leakage current, and therefore we performed measurements with a new system having a much lower system leakage current on a new device. In the newly measured data with triaxial cables, the leakage current of the measurement system and the gate was measured to be about 2-5 fA (see the red dots in Figure 2 below), which is reduced by more than 3 orders of magnitude compared with the previous measurement system. On the other hand, the reverse-bias current in the diode measurement and the off-current in the field-effect transistor measurement were measured to be ~50 fA and ~20 fA, respectively, which are much larger than the leakage current of the measurement system and the gate leakage. We have now added the measured leakage current along with the currents through the device in Supplementary Figure S17. Considering the same interpretation as in the case of the high leakage current, when the measured reverse-bias and off-current level is much higher than the leakage current, the forward-bias and on-current near the off-state current level can represent the actual current flowing in the device.
For example, when we measured (as displayed by the measurement instruments) forward currents of -50 fA and -500 fA (×10 = 1 decade), the actual current flowing through the device can be regarded as -52 fA and -502 fA (×9.65 = 0.98 decade) due to the +2 fA of leakage current, which is almost the same as the measured data. Therefore, we claim that our data support the steep-slope behavior in the diode and the field-effect transistor above the leakage current level. To avoid confusion and noise in the steep-slope field-effect transistor measurement (revised Fig. S12), we re-plotted the original graph with 10 mV steps, which was previously plotted with 2 mV steps with noise (please see below, revised Fig. S12).

Regarding the band diagram: we have not observed a large Schottky barrier, which is also consistent with the literature (ACS Nano 8, 6259 (2014); Nano Lett. 15, 3030-3034 (2015), DOI: 10.1021/nl504957p). The Nano Letters paper reports Ohmic contact behavior at both positive and negative gate voltages, as shown below (Vg = -60 V to 80 V). From our measurement (Supplementary Figure S6), we observed an Ohmic contact (meaning no significant barrier causing nonlinear IV curves at room temperature) formed between monolayer MoS2 and graphene at all gate voltages. We understand that other reports in the literature describe large Schottky barriers, so this contact problem needs further study.
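The decade-bookkeeping used in these replies is easy to reproduce. The snippet below is an illustrative sketch (the function name actual_decades is invented; the numerical values are taken from the numbers quoted above) of how a constant leakage offset compresses the span of current decades actually flowing through the device:

```python
import math

def actual_decades(meas_low, meas_high, leakage):
    """Actual device-current decades behind a measured current span.

    All magnitudes in amperes; a constant leakage of opposite sign offsets the
    displayed value, so the actual device current is measured + leakage.
    """
    a_low, a_high = meas_low + leakage, meas_high + leakage
    return math.log10(a_high / a_low)

# Old setup, 50 pA leakage: a displayed decade (50 pA -> 500 pA) is really
print(f"{actual_decades(50e-12, 500e-12, 50e-12):.2f} decades")   # ~0.74
# New setup, 2 fA leakage: a displayed decade (50 fA -> 500 fA) is really
print(f"{actual_decades(50e-15, 500e-15, 2e-15):.2f} decades")    # ~0.98
```

The two printed values reproduce the 0.74-decade and 0.98-decade figures quoted in the replies, making concrete why the triaxial-cable setup, with its three-orders-of-magnitude lower leakage, leaves the measured slopes essentially undistorted.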
Complementarity, measurement and information in interference experiments

Different criteria (Shannon's entropy, Bayes' average cost, Dürr's normalized rms spread) have been introduced to measure the "which-way" information present in interference experiments where, due to the non-orthogonality of the detector states, the path determination is incomplete. For each of these criteria, we determine the optimal measurement to be carried out on the detectors, in order to read out the maximum which-way information. We show that, while in two-beam experiments the optimal measurement is always provided by an observable involving the detector only, in multibeam experiments, with equally populated beams and two-state detectors, this is the case only for the Dürr criterion, as the other two require the introduction of an ancillary quantum system, as part of the read-out apparatus.

Introduction

The debate on double-slit interference experiments, with photons or matter particles, and on the possibility of detecting, as proposed by Einstein, "which way" individual particles are taking, helped to shape the basic concept of complementarity in quantum mechanics. According to this early discussion, Young interference experiments were showing the wave nature of both radiation and matter, and any attempt to exhibit their complementary particle nature, by detecting which path an individual quantum was travelling, was regarded as implying a disturbance capable of destroying the interference pattern.* It was, however, much later noticed that "in Einstein's version of the double-slit experiment, one can retain a surprisingly strong interference pattern by not insisting on a 100% reliable determination of the slit through which each photon passes" [2]. More recently this problem has been thoroughly investigated both from a theoretical and an experimental point of view, by proposing gedanken-experiments or actually performing them, in which the quantum unitary evolution of both the system and the detector is completely under control. In many cases care is taken to have the detectors act on internal degrees of freedom, so that they do not disturb directly the centre-of-mass motion. For a two-beam experiment, the particle-detector state after the interaction is of the form

|Ψ> = c₁|ψ₁>|χ₁> + c₂|ψ₂>|χ₂>,  (1.1)

so that the intensity on the screen contains an interference term proportional to

Re[c₁c₂* ψ₁(x)ψ₂*(x) <χ₂|χ₁>].  (1.2)

Depending on the value of <χ₁|χ₂> there is a continuum between the extreme cases of no which-way detection (|χ₁> = |χ₂>), where the wave nature is exhibited by interference fringes with maximum contrast, and perfect which-way detection (<χ₁|χ₂> = 0), where the interference fringes disappear. For example, in the experimental realization [3] of Feynman's gedanken-experiment [4], the states |χ_i> describe the scattered photon needed to detect whether the atom (rather than the electron, as in the original discussion) has passed through slit 1 or 2, and the quantity <χ₁|χ₂> can be varied by changing the spatial separation between the interfering paths at the point of scattering. In the experimental setup proposed in [5], the which-way detection is performed by micro-maser cavities inserted on the beams of previously excited atoms. Atomic decay in one of the cavities provides which-way information whose predictability depends on the initial state of the cavities. However, we should point out that the detector need not be a separate physical system: the which-way information may indeed be stored in some internal degrees of freedom of the interfering particles, as happens in neutron interference experiments [6], where the spin of the neutron in one of the beams is rotated with respect to the original common direction.
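In the neutron case just mentioned, the overlap of the two detector (spin) states, which by Eq. (1.2) controls the fringe contrast, is easy to evaluate. The sketch below is illustrative only and is not taken from the paper; the function name spin_overlap and the rotation axis are arbitrary choices:

```python
import numpy as np

def spin_overlap(theta):
    """Overlap <chi1|chi2> when |chi2> is |chi1> rotated by theta about y."""
    # Spin-1/2 rotation operator: exp(-i*theta*sigma_y/2)
    sy = np.array([[0, -1j], [1j, 0]])
    R = np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * sy
    chi1 = np.array([1.0, 0.0])  # spin up along z
    chi2 = R @ chi1
    return chi1.conj() @ chi2

for theta in (0.0, np.pi / 2, np.pi):
    ov = spin_overlap(theta)
    print(f"theta = {theta:.2f}: <chi1|chi2> = {ov.real:+.3f}")
# |<chi1|chi2>| = cos(theta/2): full fringe contrast at theta = 0, none at theta = pi,
# in accordance with the continuum of cases described above.
```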
Notice that, in each of these examples, the structure of the interference fringes, as is clear from Eq. (1.2), depends on the entanglement of the system with the apparatus, from which "which-way" information may eventually be recovered by means of an appropriate measurement, and not on the fact of actually performing it. Eq. (1.1) describes only a premeasurement. Therefore the actual measurement relative to the "which-way" information may be arbitrarily delayed. As Schrödinger puts it, in his "general confession" [7], motivated by the appearance of the Einstein, Podolsky, Rosen paper [8], "entanglement of predictions" goes "back to the fact that the two bodies at some earlier time formed in a true sense one system, that is were interacting, and have left behind traces on each other". Furthermore, it should be stressed that, apart from the extreme case in which <χ₁|χ₂> = 0, no measurement can provide full information on the way that an individual quantum has taken. One is actually dealing with a problem in quantum detection theory, that is, in statistical decision theory. In order to decide what measurement should be carried out to extract the best possible which-way information, it is necessary to spell out a strategy in which an a priori evaluation criterion is given. In the pioneering work of Wootters and Zurek [2], Shannon's definition of information entropy [9] was taken as a quantitative measure of the gain in "which-way" information obtained by actually performing a measurement on the detector state. In this framework evidence was produced that "the more clearly we wish to observe the wave nature ... the more information we must give up about its particle properties". Following this suggestion, Englert [10], by using a different criterion for evaluating the available information, was able to establish, for equally populated beams, a complementarity relationship between the distinguishability D, which gives a quantitative estimate of the which-way knowledge, and the visibility V, which measures the quality of the interference fringes:

D² + V² ≤ 1,  (1.3)

with the equality sign holding if the detector is prepared in a pure state. As usual, V is defined in terms of the maximum and minimum intensity of the fringes (I_M and I_m), V = (I_M − I_m)/(I_M + I_m). D is simply related to the optimum average Bayes cost C̄_opt, traditionally used in decision theory, by the relation D = 1 − 2C̄_opt.† New problems arise in going from the case of two beams to a multibeam interference process. As shown by Dürr [11], the complementarity relationship [Eq. (1.3)] still holds when the visibility is taken to be the properly normalized deviation of the fringe intensity from its mean value, and the distinguishability, following an alternative notion of entropy introduced in Ref. [12], the maximum average rms spread of the a posteriori probabilities for the different paths (see Sec. 2). The purpose of this paper is to examine an interesting physical aspect of the problem that seems to have been overlooked so far, and it is the following: once a specific criterion to measure the which-way information is chosen, what is the actual measurement that has to be performed on the detectors, in order to extract the optimum information?

† In Ref. [10] the distinguishability is expressed in terms of the optimum likelihood L_opt for "guessing the way right".
This optimum likelihood is one minus the optimum average Bayes cost C̄_opt.

The usual attitude to address this question is to consider the set A_D of all observables A relative to the detector, and to search, among them, for the observable that delivers most information. However, it is known from quantum detection theory [13,14] that the amount of information that can be obtained in this way does not represent, in general, the absolute maximum. Sometimes, it is possible to do a better job by introducing, in addition to the detector, an ancilla, namely an auxiliary quantum system, neither interacting with the detector nor having any correlation with it. Despite the fact that the detector and the ancilla are, in all respects, independent systems, it may happen that a larger amount of information can be obtained by measuring an observable relative to the combined system. In connection with this issue, we point out that, even if the quantity D appearing in Eq. (1.3) is usually defined in relation with A_D, the proofs leading to Eq. (1.3), say in Refs. [10,11], remain valid if one includes the observables for the system formed by the detector and the ancilla together. It follows that the quantity D really refers to all possible detector+ancilla systems. Since the need for an ancilla seems to us a source of undesirable complication for the read-out apparatus, it would be interesting to know under what circumstances the ancilla is really required. In particular, it would be interesting to know if there exist criteria to measure the which-way information such that the optimal measurement turns out to be an ordinary observable relative to the detector, and the inclusion of an ancilla does not lead to any improvement. We show that, in the case of two-beam interference experiments, with either one of the two proposed measures of information, the optimal measurement does not involve an ancilla. On the contrary, in the case of multibeam experiments, it is only with the criterion introduced in Ref. [11] that the ancilla is unnecessary, while it is required for the other two criteria, in general. It is interesting to notice that the criterion for which ordinary measurements are good enough is the one that leads to the complementarity relation given by Eq. (1.3). Finally, let us notice that, while inspired by the problem of complementarity in interference experiments, our work is a contribution to the difficult problem of optimization in quantum decision theory.

The paper is organized as follows. In Sec. 2 the quantum detection problem for detector states that are not mutually orthogonal is presented, and the notion of ancilla is introduced. We review a fundamental theorem by Neumark, stating that measurements involving an ancilla, in the enlarged detector-ancilla Hilbert space, can be equivalently described by means of positive operator-valued measures (POVM) on the detector's Hilbert space, generalizing the ordinary projection-valued measures (PVM) that describe measurements not involving the ancilla. We then list the conditions that must be satisfied by any function for it to be a good measure of the amount of information provided by a POVM. The different choices present in the literature for such a function are considered, and the resulting optimization problems are studied in Sec. 3, for the case of two beams, and in Sec. 4 for multibeam interferometers. Some of the proofs are postponed to an Appendix. Final remarks and a discussion of perspectives close the paper.

2. The quantum decision problem
We consider an n-beam interference experiment: a single beam of identical microscopic systems, like photons, electrons, neutrons, atoms etc. (generically referred to as particles), is divided into n spatially separated beams by some sort of beam-splitter, like a screen with n slits. The n beams are then recombined on a screen, and the interference figure is observed. It is assumed that the intensity of the beam is adjusted so that only one particle at a time passes through the interferometer, and that the populations ζ_i of each of the n beams can be adjusted at will. We imagine now that a detector, designed to provide which-way information on individual particles passing through the interferometer, is placed along the trajectories of the beams. It is assumed that the detector also can be treated as a quantum system, and that the system-detector interaction gives rise to some unitary process. The detector will serve as a which-way detector if, once prepared in some fixed state |χ₀>, it is brought by the interaction with the particles into a new state that depends on the beam occupied by the particle. In formulae, this amounts to requiring that, after the interaction, the state of the particle-detector system is the following entangled state, generalizing Eq. (1.1):

|Ψ> = Σ_{i=1}^n c_i |ψ_i> ⊗ |χ_i>.  (2.1)

Here, |ψ_i> denote the normalized particle wave-functions for the individual beams, while |χ_i> are n normalized (but not necessarily orthogonal!) states of the which-way detectors. We define the detector's Hilbert space H_D as the linear span of the states |χ_i>:

H_D := span{|χ₁>, ..., |χ_n>}.  (2.2)

(Of course, it may very well happen that the set of all possible states of the detector, as a physical system, is actually larger than H_D.) In concrete experiments |χ_i> may in fact be internal states of the particles themselves, in which case |ψ_i> denotes the space part of the particle wavefunction. We assume that the amplitudes c_i are known in advance, such that the weights ζ_i = |c_i|² give the a priori probabilities for a particle to pass through the i-th slit. The state [Eq. (2.1)] describes a situation in which there is complete correlation between the beams and the internal states of the detector, such that, if the detector is found to be in the state |χ_i>, one can tell with certainty that the particle passed through the i-th slit. Thus the problem of determining the trajectory of the particle reduces to the following one: after the passage of each particle, is there a way to decide in which of the n states |χ_i> the detector was left? If the states |χ_i> are orthogonal to each other, the answer is obviously yes. Indeed, if we let A_D be the set of all hermitean operators in H_D, we can surely find in A_D an observable A such that

A|χ_i> = λ_i|χ_i>, with λ_i ≠ λ_j for i ≠ j.  (2.3)

If A is measured, and the result λ_i is found, one can infer with certainty that the detector was in the state |χ_i>. If, however, the states |χ_i> are not orthogonal to each other, for no choice of A can one fulfil Eq. (2.3): whichever A one picks, there will be at least one eigenvector of A having a non-zero projection onto more than one state |χ_i>. Therefore, when the corresponding eigenvalue is obtained as the result of a measurement, no unique detector state can be inferred, and only probabilistic judgments can be made. Under such circumstances, the best one can do is to select the observable that provides as much information as possible, on the average, namely after many repetitions of the experiment.
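The obstruction can be made quantitative numerically. The sketch below (illustrative, not from the paper) takes two non-orthogonal qubit detector states and verifies, over random projective measurements, the elementary bound Σ_µ sqrt(P(µ|1)P(µ|2)) ≥ |<χ₁|χ₂>| for the Bhattacharyya overlap of the two outcome distributions, so the two hypotheses can never be told apart with certainty:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two non-orthogonal detector states with <chi1|chi2> = cos(alpha)
alpha = np.pi / 6
chi1 = np.array([1.0, 0.0], dtype=complex)
chi2 = np.array([np.cos(alpha), np.sin(alpha)], dtype=complex)

min_confusion = np.inf
for _ in range(5000):
    # Random orthonormal basis of C^2, i.e. a projective measurement (PVM)
    q, _ = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))
    p1 = np.abs(q.conj().T @ chi1) ** 2   # outcome probabilities in state 1
    p2 = np.abs(q.conj().T @ chi2) ** 2   # outcome probabilities in state 2
    # Bhattacharyya overlap of the two outcome distributions
    min_confusion = min(min_confusion, np.sum(np.sqrt(p1 * p2)))

print(f"min over PVMs of sum_mu sqrt(P(mu|1)P(mu|2)) = {min_confusion:.4f}")
print(f"|<chi1|chi2>| = {abs(np.vdot(chi1, chi2)):.4f}  (lower bound, never zero)")
```

The bound follows from the triangle inequality, Σ_µ |<e_µ|χ₁>||<e_µ|χ₂>| ≥ |Σ_µ <χ₁|e_µ><e_µ|χ₂>| = |<χ₁|χ₂>|, which is why only probabilistic judgments can be made whenever the detector states overlap.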
Of course, this presupposes the choice of a definite criterion to measure the average amount F̄(A) of which-way information delivered by a certain observable A (the properties of F̄(A), and the various choices proposed so far for this quantity, are discussed later in this Section). After this choice is made, the distinguishability D of the trajectories is usually related to the supremum

F_D := sup_{A ∈ A_D} F̄(A).

It may now come as a surprise to notice, as pointed out in the Introduction, that the quantity F_D does not always represent the absolute maximum information that is actually available. Indeed, it is an intriguing feature of the quantum detection problem, for non-orthogonal states, that a larger amount of information on the state of the detector can be obtained by considering the detector in combination with an auxiliary quantum system, called an ancilla [13,14]. The ancilla does not interact with the detector, and is prepared in a fixed known state |φ₀> ∈ H_aux, such that the combined system is in one of the n uncorrelated states |χ_i> ⊗ |φ₀>, belonging to the total Hilbert space H_tot = H_D ⊗ H_aux. Let now A_tot be the set of all hermitean operators in H_tot and F_tot the supremum of F̄(A) over A_tot. Surprisingly enough, even if the detector and the ancilla are uncorrelated, it may happen that F_tot > F_D, showing that the inclusion of an ancilla may improve the amount of which-way information that can be read out from the detectors. Since the state of the ancilla is fixed once and for all, it is possible, though, to express the probabilities of the possible outcomes resulting from the measurement of any observable in H_tot in terms of quantities defined directly in H_D. We let P_µ, µ = 1, ..., N, be the orthogonal decomposition of the identity in H_tot relative to the chosen observable (we consider for simplicity an observable with a finite number N of distinct outcomes). Then the probability P_iµ that the outcome µ is observed, in the state |χ_i> ⊗ |φ₀>, is given by the well-known formula

P_iµ = Tr[P_µ (ρ_i ⊗ ρ_aux)],

where ρ_i = |χ_i><χ_i| and ρ_aux = |φ₀><φ₀|. If the trace is performed in two steps, first on the ancillary Hilbert space and then on H_D, we can rewrite the above expression as

P_iµ = Tr_{H_D}[ρ_i A_µ],  (2.5)

where

A_µ := Tr_aux[P_µ (1 ⊗ ρ_aux)]

and Tr_aux denotes the partial trace over the ancilla Hilbert space. The hermitean operators A_µ belong to A_D, and it is easy to check that they are positive, and that they provide a decomposition of the identity on H_D:

Σ_{µ=1}^N A_µ = 1.

However, in general, they are not projection operators, nor do they commute with each other. We point out also that the number N of different outcomes need not be the same as either the number n of detector states or the dimensionality of H_D. The collection {A_µ} of operators constitutes an example of a positive operator-valued measure (POVM) in H_D. More generally [13,14], a POVM is a map that associates to every (Borel) subset ∆ of the real line R a non-negative (self-adjoint) operator Π(∆), such that: i) the empty set ∅ is mapped to zero; ii) the entire real line is mapped to the identity operator; iii) the union of any number of disjoint sets is mapped to the sum of the corresponding operators. The probability P(∆) for the outcome to be in the set ∆ is given by the following expression, generalizing equation (2.5):

P(∆) = Tr_{H_D}[ρ Π(∆)],

where ρ is the detector state. The axioms i), ii) and iii) listed above ensure the consistency of the above probabilistic interpretation.
POVM's thus represent a generalization of the projection-valued measures (PVM) usually considered in Quantum Mechanics, and it is a theorem due to Neumark [16] that all POVM's on H_D can be realized by means of an appropriate ancillary system, in the way sketched above. Since any quantum system not interacting with the detector can play the rôle of the ancilla, this theorem implies that every POVM can be realized by an experimental procedure falling within the usual framework of Quantum Mechanics. Thus, in order to determine what is the maximum amount of which-way information that can be obtained by observing the detector, we should maximize F̄ over the set of all POVM's in H_D, and not just over the set of all PVM's. It is time now to define precisely the average which-way information F̄ delivered by a POVM. For any POVM {A_µ, µ = 1, ..., N} (we shall always consider POVM's with a finite number N of different outcomes, in what follows), consider the a posteriori probabilities Q_iµ for observing the µ-th outcome, when the detector is in the state |χ_i>. According to Bayes' formula:

Q_iµ = ζ_i P_iµ / q_µ , (2.9)

where q_µ is the a priori probability for the occurrence of the outcome µ:

q_µ = Σ_{i=1}^{n} ζ_i P_iµ . (2.10)

In order to measure the amount of which-way information that is gained if the µ-th outcome is observed, we consider the quantity F_µ = F(Q_µ), where Q_µ = (Q_1µ, ..., Q_nµ) and F is some function. It is reasonable to require from F the following properties: (1) F should be invariant under any permutation of its n arguments. (2) F should reach its absolute minimum when its n arguments are all equal to 1/n (which corresponds to complete lack of information on the detector state); (3) F should reach its absolute maximum when any of its arguments is equal to one, while all the others are equal to zero (which on the contrary corresponds to certain knowledge of the detector state); (4) F should be convex, i.e. for any λ ∈ [0, 1] it should hold:

F(λ Q' + (1 − λ) Q'') ≤ λ F(Q') + (1 − λ) F(Q'') . (2.11)

The intuitive meaning of this condition is clear if we interpret Q' and Q'' as giving the a posteriori probabilities of n alternative hypotheses, for two distinct tests A' and A''. For any λ ∈ [0, 1], we can consider the combination A_λ of the tests A' and A'', which consists in performing randomly either A' or A'', with relative probabilities λ and 1 − λ, respectively. Equation (2.11) then states that the test A_λ cannot carry more information than the weighted sum of the informations obtained from A' and A'', separately. The overall average information delivered by the POVM is defined as the average F̄ of the numbers F_µ, over all possible outcomes, weighted with the a priori probabilities q_µ:

F̄ = Σ_{µ=1}^{N} q_µ F_µ . (2.12)

The optimization problem consists in searching for the POVM which maximizes F̄. Notice that, among the unknowns, we have to consider also the number N of elements of the POVM. Of course, the solution depends on the choice of the function F, above. Over the past years, several different choices have been adopted. For example, as we said in the Introduction, the authors of Refs. [2,14,15] consider the negative of Shannon's entropy [9] H, which corresponds to taking:

F_µ = −H_µ = Σ_{i=1}^{n} Q_iµ log Q_iµ . (2.13)

References [10,13] use the negative of Bayes' cost function C:

C_µ = 1 − Q_j(µ)µ , (2.14)

where, for each µ, j(µ) is any index such that Q_j(µ)µ = Max{Q_1µ, ..., Q_nµ}. Finally, more recently, Dürr [11] considered the normalized rms spread K:

K_µ = [ n/(n−1) Σ_{i=1}^{n} (Q_iµ − 1/n)^2 ]^{1/2} . (2.15)

When n = 2, it is easy to check that K_µ = 1 − 2 C_µ, and thus the two criteria (2.14) and (2.15) are inequivalent only for more than two beams.
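The three criteria (2.13)-(2.15), together with the Bayes posteriors (2.9)-(2.10) and the average (2.12), translate directly into code. The sketch below is our own illustration; P is the matrix of outcome probabilities P_iµ for a given POVM.

```python
import numpy as np

def information_measures(P, zeta):
    """P[i, mu] = Tr[rho_i A_mu]; zeta[i] = a priori beam populations.
    Returns Fbar for the three criteria (2.13)-(2.15)."""
    n, N = P.shape
    q = zeta @ P                              # a priori outcome probs (2.10)
    Q = (zeta[:, None] * P) / q[None, :]      # Bayes posteriors (2.9)
    logQ = np.log(np.where(Q > 0, Q, 1.0))    # 0*log(0) treated as 0
    shannon = np.sum(q * np.sum(Q * logQ, axis=0))        # (2.13), averaged
    bayes   = np.sum(q * (Q.max(axis=0) - 1.0))           # -(2.14), averaged
    durr    = np.sum(q * np.sqrt(n / (n - 1.0)
                                 * np.sum((Q - 1.0 / n) ** 2, axis=0)))
    return shannon, bayes, durr

# Example: two equally populated beams with overlap cos(theta), measured
# with a projective measurement along the x axis of the Bloch sphere.
theta = 0.4
P = np.array([[(1 + np.sin(theta)) / 2, (1 - np.sin(theta)) / 2],
              [(1 - np.sin(theta)) / 2, (1 + np.sin(theta)) / 2]])
print(information_measures(P, np.array([0.5, 0.5])))
```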
Notice also that, while Shannon's entropy and the rms spread are strictly convex, the Bayes cost function is only convex. Solving the optimization problem is a difficult task, and so far no general solution is known. However, partial results are available. For POVM's consisting of a finite number of elements, by using the convexity of the function F, it is easy to show [15] that the optimal POVM can be chosen to consist of rank-one operators, namely:

A_µ = φ_µ |a_µ><a_µ| , (2.16)

where |a_µ> are normalized vectors and 0 < φ_µ ≤ 1. Moreover, if H_D is finite dimensional and d is its dimension, it has been shown [15] that the number N of elements of the optimal POVM can be taken to satisfy:

d ≤ N ≤ d^2 . (2.17)

Two-beam interferometers.

In this short Section, we consider a two-beam interferometer. For such a case, as pointed out in the previous Section, the criterion using the Bayes cost function [Eq. (2.14)] turns out to be equivalent to that based on the rms spread [Eq. (2.15)]. The quantum detection problem, with the Bayes cost function as measure of information, is studied at length in Ref. [13]. There, it is shown that, for any number n of linearly independent states |χ_i> and arbitrary a priori probabilities ζ_i, the optimal measurement is always a PVM. Since, in two-beam interferometers, the detector states |χ_1> and |χ_2> must be distinct for any path discrimination to be possible, they are necessarily linearly independent, and thus it follows, from the quoted result, that the optimal measurement is a PVM. To our knowledge, there is no published proof that the optimal measurement is a PVM when one uses Shannon's entropy as a measure of the which-way information. We have proven it in the special case of equally populated beams, ζ_i = 1/2. The rather elaborate proof can be found in the Appendix. When the populations ζ_i are different, we have not been able to work out an analytical proof, but a number of numerical simulations performed for various choices of the populations seem to indicate that the optimal measurement is a PVM also in this general case. In conclusion, it appears that for two-beam interferometers, with both Bayes' cost and Shannon's information as measures of which-way information, ordinary PVM's can read out the maximum which-way information from the detectors, and recourse to ancillas is superfluous. In fact, it turns out that the optimal PVM is the same for both criteria (see Eq. (7.12) in the Appendix).

Multi-beam interferometers.

In this Section we study the case of multi-beam interferometers, with n > 2 beams. We make the simplifying assumption that H_D is two-dimensional. This case is actually realized in experiments using beams of spin-half particles or photons, if the path information is stored in the internal states of the interfering particles. A further simplifying assumption that we make is that the beams are equally populated: ζ_i = 1/n. H_D is isomorphic to C^2, the set of all pairs of complex numbers. As is well known, rays of C^2 can be put in one-to-one correspondence with unit three-vectors n̂ = (n_x, n_y, n_z), via the map:

(1 + n̂ · σ)/2 |χ> = |χ> , (4.1)

where σ = (σ_x, σ_y, σ_z) is a set of Pauli matrices. Thus, assigning n pure states |χ_i> amounts to picking n unit vectors n̂_i in R^3. Whether the optimal test is a PVM or rather a POVM now depends on the choice of the function F. Below, we consider in detail the three choices for F, Eqs. (2.13)-(2.15).
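Before turning to the three cases, the correspondence (4.1) between rays of C^2 and unit Bloch vectors can be made concrete numerically; the helper functions below are our own illustration.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def bloch_vector(chi):
    """Unit three-vector n with ((1 + n.sigma)/2)|chi> = |chi>  (Eq. 4.1)."""
    rho = np.outer(chi, chi.conj())
    return np.real([np.trace(rho @ s) for s in (sx, sy, sz)])

def ket_from_bloch(n):
    """Eigenvector of n.sigma with eigenvalue +1."""
    H = n[0] * sx + n[1] * sy + n[2] * sz
    vals, vecs = np.linalg.eigh(H)
    return vecs[:, np.argmax(vals)]

theta = 0.4
chi_plus = ket_from_bloch([np.sin(theta), 0.0, np.cos(theta)])
assert np.allclose(bloch_vector(chi_plus), [np.sin(theta), 0, np.cos(theta)])
```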
a) F is the negative of Shannon's entropy H [Eq. (2.13)]. For three or more beams, it is known that the optimal test, in general, is not a PVM but rather a POVM. For example, for three states n̂_1, n̂_2 and n̂_3 forming angles of 120° with each other, and such that Σ_{i=1}^{3} n̂_i = 0, it has been shown [14] that the optimal test is provided by the following POVM with three elements:

A_µ = (1/3)(1 − n̂_µ · σ) , µ = 1, 2, 3 . (4.2)

b) F is the negative of Bayes' cost function C [Eq. (2.14)]. Here too, the optimal test is not a PVM, but a POVM. An example is again provided by the set of three symmetric pure states considered under case (a) above. It is shown in [13] that the optimal POVM is given this time by the following POVM with three elements:

A_µ = (1/3)(1 + n̂_µ · σ) , µ = 1, 2, 3 . (4.3)

Notice that the above POVM is not the same as [Eq. (4.2)], which is an example of the fact that the solution of the optimization problem depends on the choice of F. c) F is given by the rms spread K [Eq. (2.15)]. Remarkably enough, we can show that, for any number n of equally populated beams, the optimal test is always a PVM. This is in sharp contrast with what happens for the two other choices of F previously considered. To prove this claim, consider an optimal POVM, A = {A_µ; µ = 1, ..., N}. We know, from Sec. 2, that the operators A_µ must be of the form (2.16). Using Eq. (4.1), we can write:

A_µ = α_µ (1 + m̂_µ · σ) , (4.4)

where m̂_µ are N unit three-vectors, and α_µ are N positive numbers. The condition for a POVM, Σ_µ A_µ = 1, is then equivalent to:

Σ_µ α_µ = 1 ,  Σ_µ α_µ m̂_µ = 0 . (4.5)

In view of Eq. (4.4), we find:

P_iµ = Tr[ρ_i A_µ] = α_µ (1 + n̂_i · m̂_µ) . (4.6)

Using this equation, we compute Eq. (2.10) as:

q_µ = α_µ (1 + n̄ · m̂_µ) ,  n̄ = Σ_{i=1}^{n} ζ_i n̂_i . (4.7)

In order to evaluate the average information F̄(A) of A, it is convenient to rewrite the quantities q_µ K_µ as

q_µ K_µ = [ n/(n−1) Σ_{i=1}^{n} (ζ_i P_iµ − q_µ/n)^2 ]^{1/2} . (4.8)

Upon using Eqs. (4.6) and (4.7) into the above formula, we obtain, after a little algebra:

q_µ K_µ = α_µ [ n/(n−1) ( Σ_i (ζ_i − 1/n)^2 + Σ_i ((ζ_i n̂_i − n̄/n) · m̂_µ)^2 + 2 Σ_i (ζ_i − 1/n)(ζ_i n̂_i − n̄/n) · m̂_µ ) ]^{1/2} . (4.9)

We observe now that, for equally populated beams, ζ_i = 1/n, the last sum in the above equation vanishes, and the expression for q_µ K_µ becomes invariant under the exchange of m̂_µ with −m̂_µ. Consider now the POVM B = {B⁺_µ, B⁻_µ; µ = 1, ..., N}, consisting of 2N elements, such that:

B^±_µ = (α_µ/2)(1 ± m̂_µ · σ) . (4.10)

Of course, q⁺_µ K⁺_µ + q⁻_µ K⁻_µ = q_µ K_µ, by the invariance just established. It follows that the average informations for A and B are equal to each other, F̄(A) = F̄(B). Now, for each value of µ, the pair of operators B^±_µ/α_µ = (1 ± m̂_µ · σ)/2 constitutes a PVM, and thus the POVM B can be regarded as a collection of N PVM's, each taken with a non-negative weight α_µ. But then F̄(B), being equal to the average of the amounts of information provided by N PVM's, cannot be higher than the maximum information F_D delivered by a PVM. Therefore, we have proven that F̄(A) = F̄(B) ≤ F_D, which shows that the optimum measurement can always be effected by means of a PVM. We then see that, in the multibeam case, only with Dürr's measure of information can one dispose of the ancilla, at least for equally populated beams.

Conclusions

When, in an interference experiment, the which-way detector states are not mutually orthogonal, one has an incomplete knowledge of the path followed by the interfering particles. One is then faced with the problem of reading out, in an optimum way, the information stored in the detectors. The best measurement to be performed depends, in a crucial way, on the criterion used to measure the information. This is a problem in quantum decision theory, and our paper is a contribution to the task of identifying the optimum quantum test, for which no general solution is known so far. We have shown that for the two-beam case, using either Shannon's entropy or Bayes' cost function as the measure of information, the best test to be performed is given by an ordinary projection-valued measurement in the detector's Hilbert space. Actually, it turns out that both criteria identify the same measurement.
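The two three-element POVM's (4.2) and (4.3) for the symmetric trine states — whose contrasting behaviour is discussed in the next paragraph — are easy to verify numerically; the sketch below is our own check, not part of the original proofs.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

# Three symmetric Bloch vectors at 120 degrees in the xz plane, sum = 0.
angles = [0.0, 2 * np.pi / 3, 4 * np.pi / 3]
n = [np.array([np.sin(a), 0.0, np.cos(a)]) for a in angles]
rho = [(I2 + v[0] * sx + v[2] * sz) / 2 for v in n]   # pure trine states

A_shannon = [(I2 - v[0] * sx - v[2] * sz) / 3 for v in n]   # Eq. (4.2)
A_bayes   = [(I2 + v[0] * sx + v[2] * sz) / 3 for v in n]   # Eq. (4.3)

for povm in (A_shannon, A_bayes):
    assert np.allclose(sum(povm), I2)        # resolution of the identity

# Outcome mu of (4.2) never occurs for state mu: it *excludes* one beam.
print([abs(np.trace(rho[k] @ A_shannon[k])) for k in range(3)])   # ~0
# Outcome mu of (4.3) is the most likely one for state mu (prob. 2/3).
print([np.real(np.trace(rho[k] @ A_bayes[k])) for k in range(3)])
```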
In the multibeam case, only Dürr's normalized rms spread criterion leads to a PVM, while the other two lead to a POVM. Notice that in the case of three coplanar symmetric beam states one ends up with two different POVM's: the one relative to Bayes' cost [Eq. (4.3)] allows one every time to pick one beam as the most probable one, while the POVM determined by Shannon's entropy allows one to exclude one of the three beams as impossible. We see, then, that in the multibeam case Dürr's criterion seems to be favoured for two different reasons. First of all, it allows one to derive a quantitative complementarity relation, like the one given by Eq. (1.3). Second, it allows one to work with ordinary quantum mechanical measurements, and to ignore generalized POVM's involving an ancillary system. A possible relationship between these two features seems worth studying. This may be related to the fact that, as has been recently shown [17], there are problems in extending the mathematical definition of complementarity to a POVM. Our results are of limited generality in two respects: first, in the multibeam case they refer to two-state detectors; second, we always considered equally populated beams. As concerns the latter limitation, we may add that we have gathered substantial numerical evidence that our results may extend to arbitrarily populated beams; however, we lack at the moment an analytic proof. The former limitation seems more difficult to overcome. Fortunately, however, the case we have treated is physically interesting, for it includes many experimental setups in which the "which-way" detection exploits some two-state internal degree of freedom of the interfering particles.

Appendix.

Theorem: for a two-beam interferometer with equally populated beams, when one uses the negative of Shannon's entropy to measure the which-way information, the optimal measurement is provided by a PVM (precisely described in Eq. (7.12) below). More precisely, let |χ_+> and |χ_-> be the detector states for the two beams. We exclude the trivial case, when |χ_+> and |χ_-> are proportional, because then no path-reconstruction would be possible. Therefore, H_D is two-dimensional and we can represent vectors in H_D by unit three-vectors, according to Eq. (4.1). We lose no generality if we assume that the unit vectors n̂_+ and n̂_-, associated to |χ_+> and |χ_-> respectively, have the expressions:

n̂_+ = (sin θ, 0, cos θ) ,  n̂_- = (− sin θ, 0, cos θ) . (7.11)

With this parametrization for the states |χ_+> and |χ_->, our theorem states that, if the which-way information is measured by the negative of Shannon's entropy H, the optimal measurement is provided by the PVM A with elements:

A_± = (1 ± x̂ · σ)/2 , (7.12)

i.e. the projectors onto the eigenstates of σ_x. Before giving the proof of this Theorem, it is useful to prove first the following Lemma: consider, in C^2, n states |χ_i>, with coplanar vectors n̂_i, and arbitrary populations ζ_i. Then the optimal POVM has elements A_µ of the form [Eq. (4.4)], with all the vectors m̂_µ lying in the same plane containing the vectors n̂_i. The proof of the lemma is as follows. Let B be an optimal POVM. Then we know, from the theorems quoted in Sec. 2, that its elements must have rank one and so are of the form given in Eq. (4.4). Moreover, they must satisfy the POVM conditions given by Eqs. (4.5). Suppose now that some of the vectors m̂ do not belong to the plane containing the vectors n̂_i, which we assume to be the xz plane. We show below how to construct a new POVM A ≡ {A_ν, ν = 1, ..., N + p}, providing no less information than B, and such that the vectors m̂_ν^(A) all belong to the xz plane.
The first step in the construction of A consists in symmetrizing B with respect to the xz plane. The symmetrization is done by replacing each element B_µ of B not lying in the xz plane by the pair (B'_µ, B''_µ), where B'_µ = B_µ/2, and B''_µ has the same weight as B'_µ, while its vector m̂ is the mirror image of m̂_µ^(B) with respect to the xz plane. It is easy to verify that the symmetrization preserves the conditions for a POVM [Eqs. (4.5)]. Since all the vectors n̂_i belong by assumption to the xz plane, we see, from Eq. (4.6), that the probabilities P_iµ actually depend only on the projections of the vectors m̂_µ^(B) onto the xz plane. This implies, as is easy to check, that symmetrization with respect to the xz plane does not change the information F̄. We assume therefore that B has been preliminarily symmetrized in this way. Now we show that we can replace, one after the other, each pair of symmetric elements (B'_µ, B''_µ) by another pair of operators, whose vectors lie in the xz plane, without reducing the information provided by the POVM. Consider for example the pair (B'_p, B''_p). We construct the unique pair of unit vectors û_p and v̂_p, lying in the xz plane, and such that:

û_p + v̂_p = 2 (m_p^(B)x î + m_p^(B)z k̂) , (7.13)

where î and k̂ are the unit vectors along the x and z axes, respectively. Notice that û_p ≠ v̂_p. Consider now the collection of operators obtained by replacing the pair (B'_p, B''_p) with the pair (A'_p, A''_p) such that:

A'_p = α_p^(B) (1 + û_p · σ) ,  A''_p = α_p^(B) (1 + v̂_p · σ) . (7.14)

It is clear, in view of Eqs. (7.13), that the new collection of N + p operators still forms a resolution of the identity, and thus represents a POVM. Equations (7.13) also imply that this replacement does not reduce the information F̄. A further reduction procedure, applied to the symmetrized POVM, gives rise to another symmetric POVM B̃, which contains two elements less than B, but nevertheless gives no less information than B. The procedure works as follows: we pick at will two pairs of elements of B, say (B'_N, B''_N) and (B'_{N−1}, B''_{N−1}), and consider the unique pair of symmetric unit vectors û_± = ±u_x î + u_z k̂ such that:
Source mechanism identification using regional waveform inversion approach, case study: July 7, 2019 Molucca Sea earthquake

The Molucca Sea is a seismically active area in eastern Indonesia. An earthquake occurred near Ternate City, Province of North Maluku (M6.8, depth 29 km) on July 7, 2019. To investigate the mechanism of the earthquake in detail, we analyzed its moment tensor by applying regional waveform inversion. We used three-component broadband waveform data from 18 stations of the IA-net seismic network in this study. We carried out the inversion in deviatoric mode to determine the double-couple and compensated linear vector dipole (CLVD) components of the earthquake. The position and origin time of the earthquake were calculated by a space-time grid search in vertical and lateral positions. The frequency band of 0.01–0.023 Hz was used in the inversion process to reduce low-frequency instrument disturbance and the effect of an inaccurate velocity model on the synthetic seismograms. The moment tensor inversion result shows that the source mechanism of the earthquake is a transpressional fault. This result agrees well with the tectonic setting of the study area.

Introduction

The triple junction of convergence between the Eurasian, Australian, and Philippine Sea plates generates the unique tectonic setting of the Molucca Sea [1]. The unique tectonic setting of the Molucca Sea plate is termed divergent double subduction, in which a single plate has a subduction zone on both its eastern and western boundaries [2]. The Sangihe slab in the west and the Halmahera slab in the east have been subducted down to the discontinuity between the upper and lower mantle. Both subducting slabs dip at angles of approximately 45° [3,4]. The plate lies beneath the forearc basin and the accretionary complex of the Molucca Sea. The accretionary complex, being the most seismically active region, leads to the uplift of the Talaud-Mayu ridge at the center of the Molucca Sea [5][6][7]. This area is marked by intense shallow seismicity. The Molucca Sea earthquake of July 7, 2019 occurred at 15:08:40 (UTC) near Ternate City, Province of North Maluku. The event location was at latitude 0.54 degrees and longitude 126.15 degrees, presumably in the Talaud-Mayu ridge. The earthquake is categorized as a shallow-depth earthquake (29 km). Its moment magnitude of 6.8 made this earthquake suitable for determining the earthquake mechanism. The Global Centroid Moment Tensor (GCMT) project determined the earthquake mechanism with strike 341°, dip 49° and rake 34° for the 1st fault plane and strike 228°, dip 65° and rake 134° for the 2nd fault plane. In this study, we applied regional waveform inversion to determine the mechanism of the July 7, 2019 Molucca Sea earthquake. The purpose of this paper is to provide a more accurate moment tensor and earthquake mechanism for this seismic event. We compare the result of the regional waveform inversion using local seismic stations from the IA network with the GCMT solution obtained from global stations. The final result determines the source of this seismic event, whether from the subducting slab or from thrust faulting in the accretionary complex of the Talaud-Mayu ridge.

Methodology

The earthquake waveforms were obtained from the German Research Centre for Geosciences (GFZ) seismological data archive, with each station chosen from the IA network. Usable waveform data were selected based on the continuity of the records, noise level, station distance, and azimuthal coverage. Based on these selection criteria, a total of 18 stations were used to determine the earthquake source mechanism.
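As a sketch of the waveform preparation described in the Methodology below — the file names, the response inventory, and the pre-filter corners here are illustrative placeholders, not the study's exact processing chain — the instrument correction and the 0.01–0.023 Hz band-pass could be done with ObsPy as follows.

```python
from obspy import read, read_inventory

st = read("TERNATE_event.mseed")            # hypothetical IA-net record
inv = read_inventory("IA_stations.xml")     # hypothetical response metadata

st.detrend("demean")
st.taper(max_percentage=0.05)
# Remove the instrument response so traces represent ground velocity.
st.remove_response(inventory=inv, output="VEL",
                   pre_filt=(0.005, 0.01, 0.05, 0.1))
# Band-pass to the 0.01-0.023 Hz band used in the inversion.
st.filter("bandpass", freqmin=0.01, freqmax=0.023, corners=4, zerophase=True)
```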
The distribution of the seismic stations can be seen in Figure 1. We applied regional waveform inversion using the ISOLA code to model the source mechanism. The retrieved waveforms were converted from SAC format into ASCII to conform to the input requirements of the ISOLA code [8]. Instrumental correction was performed to remove the effect of the instrument response, so that the records show corrected ground velocity.

Figure 1. The focal mechanism representing the source mechanism of the July 7, 2019 Molucca Sea earthquake. The triangles are the seismic stations used in the regional waveform inversion.

An essential stage in the waveform inversion is to determine the usable frequency range. We tried several filters to avoid low-frequency disturbances and the effect of an inaccurate velocity model. The frequency band used for the inversion stage was estimated to be 0.01–0.023 Hz. The velocity model used in the inversion was adopted from the AK135 model, which is designed to provide a good fit to the full waveform of the seismic phases. The velocity model used in the inversion can be seen in Figure 2(a). The velocity model was used as input for the Green's function computation [9], with a maximum frequency of 0.05 Hz. Another input for the Green's function computation is the geometry of the trial source positions [10]. In this study, there were two inversion stages based on geometry. The first stage used a one-dimensional vertical line of trial source positions to estimate the optimum depth of the event centroid. The next stage used a plane at the previously determined depth to estimate the best lateral position of the event centroid. The Green's function computation generated the synthetic seismograms for each station to be used in the inversion. Regional waveform inversion finds the best fit between the observed displacements from the seismic records and the synthetic seismograms by the iterative deconvolution method [11]. The moment tensor was calculated at each trial source position based on the space-time grid. The calculation was carried out in deviatoric mode to resolve the double-couple and compensated linear vector dipole components of the earthquake. The dominant component should be the double couple, because the source of this seismological event comes from a faulting mechanism. The quality of the inversion process is quantified by the variance reduction (misfit) between the records from the seismic stations and the synthetic seismograms [12].

Result and Discussion

In the first stage of inversion, a vertical search was done for depths between 10 and 40 km, at intervals of 2 km. The event's best vertical location was calculated at a depth of 22 km and used as the depth reference for the lateral search. The lateral search geometry has a 15 km interval in a 7 × 7 grid with the origin location in the center. The distribution of the grid search for the inversion can be seen in Figure 2. The deviatoric solution is dominated by the double-couple component, at 71.6%, confirming that the event was caused by tectonic activity. The overall variance reduction is 0.75, which means the synthetic seismograms closely match the observed waveform data. The variance reduction was calculated for each component at every station and is shown in Figure 3. Two waveforms were not used in the inversion: the EW component of station MPSI and the NS component of station BKSI. The fit of the synthetics for these two components was very poor, so they were excluded in order to enhance the quality of the inversion result.
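The variance reduction used to quantify the fit can be computed with the standard formula VR = 1 − Σ(d − s)² / Σd²; the implementation below is our own illustration, not code from the ISOLA package.

```python
import numpy as np

def variance_reduction(observed, synthetic):
    """VR = 1 - sum((d - s)^2) / sum(d^2); VR = 1 means a perfect fit."""
    d = np.asarray(observed, dtype=float)
    s = np.asarray(synthetic, dtype=float)
    return 1.0 - np.sum((d - s) ** 2) / np.sum(d ** 2)

# Example: a synthetic with a 5% amplitude error over the full record.
t = np.linspace(0.0, 100.0, 2001)
d = np.sin(2 * np.pi * 0.02 * t)
s = 0.95 * d
print(variance_reduction(d, s))   # = 0.9975
```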
An image caption model based on attention mechanism and deep reinforcement learning

Image caption technology aims to convert visual features of images, extracted by computers, into meaningful semantic information, so that computers can generate text descriptions that resemble human perception, enabling tasks such as image classification, retrieval, and analysis. In recent years, the performance of image caption models has been significantly enhanced with the introduction of the encoder-decoder architecture from machine translation and the utilization of deep neural networks. However, several challenges still persist in this domain. Therefore, this paper proposes a novel method to address the issues of visual information loss and non-dynamic adjustment of input images during decoding. We introduce a guided decoding network that establishes a connection between the encoding and decoding parts. Through this connection, encoding information can guide the decoding process, facilitating automatic adjustment of the decoding information. In addition, the Dense Convolutional Network (DenseNet) and Multiple Instance Learning (MIL) are adopted in the image encoder, and a Nested Long Short-Term Memory (NLSTM) network is utilized as the decoder, to enhance the extraction and parsing of image information during the encoding and decoding processes. In order to further improve the performance of our image caption model, this study incorporates an attention mechanism to focus on details and constructs a double-layer decoding structure, which helps the model provide more detailed descriptions and enriched semantic information. Furthermore, a Deep Reinforcement Learning (DRL) method is employed to train the model by directly optimizing the same set of evaluation indexes, which solves the problem of inconsistent training and evaluation standards. Finally, the model is trained and tested on the MS COCO and Flickr 30 k datasets, and the results show that the model has improved, compared with commonly used models, on evaluation indicators such as BLEU, METEOR and CIDEr.

Introduction

In recent years, profound advances have been made in deep learning technology due to breakthroughs in the computing power of computers and the surge in available data (LeCun et al., 2015). Meanwhile, image caption methods based on deep learning have also seen significant improvements (Bai and An, 2018; Srivastava and Srivastava, 2018; Liu et al., 2019). Image caption is the intersection of the fields of computer vision and natural language processing, with potential value in terms of contributing to visually impaired individuals' daily life assistance, graphic conversion, automatic title generation and machine intelligence (Hossain et al., 2019; Kang and Hu, 2022). Fundamentally, it involves utilizing techniques grounded in deep learning to interpret a given image and automatically generate descriptive text, as if the machine were looking at an image and speaking. Despite its intuitive nature for humans, this process is highly challenging for machines, requiring the accurate interpretation of image content and object relationships and the synthesis of appropriate language. As such, significant research efforts are still required to achieve reliable and effective image caption models that match human-level performance (Anderson et al., 2016; Bernardi et al., 2016).
The advancement of image caption technology is of profound importance in terms of both research and practical application. Its significance is particularly evident in the following areas. Firstly, in the field of visual assistance systems, image caption can play a vital role in helping the visually impaired access crucial visual information (Jing et al., 2020; Bhalekar and Bedekar, 2022). By expressing image content comprehensively and concretely, this technology can reduce the obstacles that the visually impaired face in their learning and daily life. Secondly, due to the widespread deployment of cameras and the increasing amount of monitoring data being acquired, the workloads of surveillance personnel have become overwhelming. A system based on image caption can provide summarized information about the monitoring data, leading to more efficient work processes (Nivedita et al., 2021). Overall, with the continuous development and maturing of deep learning theory, image caption technology will undoubtedly have an increasingly significant impact on people's lifestyles, advancing progress across society and industry (Amritkar and Jabade, 2018; Kinghorn et al., 2018). Image caption has broad application prospects, and more and more researchers have begun to study this challenging task. Before the introduction of the encoder-decoder architecture, two primary approaches had emerged in the early stages: the template-based method and the search-based method. The template-based approach generates the final caption from a pre-set sentence template. Farhadi et al. (2010) use detectors to detect objects and form descriptions of images based on language templates. Other researchers use independently constructed corpora and more effective semantic analysis models to describe the images. Elliott and de Vries (2015) express target objects in images by means of visual dependency representations, select the target objects corresponding to the most appropriate features, and fill them into the template. After continuous improvement of the template-based method, although the main object of an image can be recognized accurately, the generated sentences are monotonous and lack some semantic information. The search-based method uses similarity algorithms to compute the similarity between extracted features and the images stored in a constructed image library, in order to find the images matched by the algorithm; these images have been paired with corresponding sentence descriptions in advance, which can be fine-tuned for appropriate output. Verma et al. (2013) adopt traditional image feature extraction methods to compare the extracted image features with those in the database, so as to determine the maximum joint probability output among the description tuples. Li and Jin (2016) introduce a reordering mechanism which greatly improves the model performance. The search-based method relies heavily on the constructed search image library, and the results have great uncertainty and poor robustness. The image caption model based on the encoder-decoder architecture is derived from machine translation models (Cho et al., 2014). The encoder-decoder architecture can directly realize the mapping between images and descriptions by learning, and a deep neural network model can learn these mappings from a large amount of data to generate more accurate descriptions, which gives this method a large performance improvement compared with the previous methods. The Multimodal Recurrent Neural Network (M-RNN) model proposed in Mao et al.
(2014) stands out as a pioneering approach utilizing an encoder-decoder architecture, effectively bridging the gap between image and text features through modal fusion. The Neural Image Caption (NIC) model proposed in Vinyals et al. (2015) adopts Long Short-Term Memory (LSTM) to replace the RNN, which effectively improves performance; it is also the baseline model for many subsequent methods. Deng et al. (2020) introduce an adaptive attention model with a visual sentinel, and introduce the Dense Convolutional Network (DenseNet) to extract the global features of the image in the encoding phase, which significantly improves the quality of image caption generation. Fei (2021) proposes a memory-augmented method, which extends an existing image caption model by incorporating extra explicit knowledge from a memory bank; the experiments demonstrate that this method can adapt efficiently to larger training datasets. In Shakarami and Tarrah (2020), an efficient image caption method using machine learning and deep learning is proposed; the experimental results demonstrate the superiority of the offered method compared to existing methods in terms of accuracy. Huang et al. (2019) propose an Attention on Attention (AoA) network for both the encoder and the decoder of the image caption model, which extends conventional attention mechanisms to determine the relevance between attention results and queries. Krause et al. (2017) use Faster R-CNN to acquire regional features and combine them, and then use multi-layer recurrent neural networks to get the image caption. There are several other improvements (Yang et al., 2019; Liu et al., 2020; Parikh et al., 2020; Singh et al., 2021) that are based on this encoder-decoder architecture. This kind of method is characterized by its flexibility and strong generalization ability; at present, most improvements are based on the encoder-decoder architecture. With the development of the technology, the performance of image caption models has advanced substantially compared with traditional methods (Liu et al., 2020). However, several challenges persist, including shortcomings in the encoding and decoding processes, loss of visual information during decoding, insufficient attention to detail information, and discrepancies between training objectives and evaluation indicators. To address these issues, this paper studies and optimizes an image caption model with encoder-decoder architecture. The structure of the paper is arranged as follows: Section 2 puts forward the image caption model based on guided decoding and feature fusion. Section 3 further improves the performance of the image caption model. Section 4 provides the experimental process and result analysis. Finally, the conclusion of our image caption model is in Section 5.
Image caption model based on guided decoding and feature fusion

In order to solve the problems in image caption technology, this paper proposes an image caption model based on guided decoding and feature fusion. Based on the encoder-decoder architecture, the DenseNet model is used to encode image features, and the Multiple Instance Learning (MIL) method is used to extract the image's visual-text information. The two parts together constitute the encoding process for the image's visual information, and a guided decoding module is adopted to dynamically adjust the input image's visual information during the decoding process. The decoder uses a Nested Long Short-Term Memory (NLSTM) network, which can learn more hidden information by increasing the depth of the network model.

Encoder design based on feature fusion

The Convolutional Neural Network (CNN) is a crucial model for processing visual image problems and has significantly improved with each architecture iteration. Typically, lower-level features are utilized to distinguish between various classes of basic contour information, while higher-level features are more abstract and effectively differentiate between different varieties of semantic information for the same target. From this perspective, the deeper the layers of the network model, the richer the information extracted. However, the consequent problem is that the increase in model depth causes the gradient to diminish until it disappears during the transfer process. The problem of gradient disappearance can be solved to some extent by using the Batch Normalization (BN) method (Bjorck et al., 2018). The Residual Network (ResNet) and the highway network also address the problems of gradient disappearance and model degradation by using bypass connections and gating units (Shaked and Wolf, 2017). Nevertheless, these models are prone to excessive parameters and depth redundancy. In image caption tasks, where image scenes are rich, it is necessary not only to identify targets but also to abstractly describe the interconnections between targets, so fusing the base feature map with higher-level feature maps is a good way to handle this problem. In this paper, we employ the DenseNet model for image feature extraction, based on the architecture illustrated in Figure 1. The fundamental concept of DenseNet resides in establishing connections between feature maps of varied depth, enabling the utilization of both high-level and low-level features to their fullest potential. DenseNet improves feature multiplexing by means of bypass connections, which not only deepens the network's layer depth but also amplifies the availability of image information. Furthermore, it mitigates problems related to gradient disappearance and model degradation while keeping the number of parameters lower than those of deep neural networks such as ResNet. Meanwhile, with the increase in layer depth, optimization of the network does not become more convoluted: the model's accuracy increases proportionally with an increase in parameters, without overfitting occurring.
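A minimal sketch of using a pre-trained DenseNet as the image encoder is shown below; densenet121 and the preprocessing sizes stand in for the paper's exact configuration, which is not pinned down here.

```python
import torch
from torchvision import models, transforms

densenet = models.densenet121(weights="DEFAULT")   # illustrative variant
densenet.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

def encode(image):
    """image: a PIL.Image. densenet.features yields the final feature map;
    its spatial positions serve as region features for captioning."""
    x = preprocess(image).unsqueeze(0)          # (1, 3, 224, 224)
    with torch.no_grad():
        fmap = densenet.features(x)             # (1, 1024, 7, 7)
    return fmap.flatten(2).transpose(1, 2)      # (1, 49, 1024) region features
```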
For image caption tasks, the object, attribute and relation detectors are trained separately on independent hand-labeled training data. We train our image caption models on datasets that contain multiple images and descriptive sentences corresponding to each image. Different from the tasks of image classification and object detection, in the task of image caption the description generated for an image contains not only nouns, but also verbs, adjectives, adverbs and other parts of speech. Therefore, to meet the needs of the task, it is necessary to construct a word set D composed of 1,200 common words, which contains more than 95% of the words needed in the training set; the remaining words are treated as non-essential words. Then, we need to extract the corresponding words from the image through the constructed word set. Because the datasets used in this paper do not label words with corresponding bounding boxes, and parts of speech are not marked either, typical supervised learning methods are not suitable for this task. Certainly, while image classification can provide corresponding words for a whole image, many words or semantics are only applicable to subregions of the image. Such generic classifications often fail to enhance model performance. Therefore, this study applies the MIL method to tackle tasks with one-to-many relationships (Dietterich et al., 1997). In the image caption tasks, each image corresponds to a packet. For each word w in the word set D, the packets are divided into positive packets and negative packets according to the different image areas, thus forming the input set of the whole MIL model. The classification method is as follows: if the word w in the word set D appears in the description sentence corresponding to an image I, then the packet is marked as a positive packet; if the word in the word set has no corresponding word in the description sentence, the packet is marked as a negative packet. The training set is represented in formula (1):

{(x_1, y_1), (x_2, y_2), ..., (x_N, y_N)}, y_i ∈ {−1, 1} (1)

For an input packet x_i in the training set, when y_i = 1 it is a positive packet, and when y_i = −1 it is a negative packet. Using the MIL model, the probability P_w that a packet b_i contains the word w in the word set D is calculated by the following noisy-OR formula:

P_w(b_i) = 1 − Π_{j ∈ b_i} (1 − x_ij^w) (2)

where x_ij^w represents the probability that a particular region j in an image i corresponds to the word w in the word set. Since this is image information, the Visual Geometry Group Network (VGGNet) model is used here for the calculation. The VGG16 model has a total of 16 weight layers, organized into five convolutional blocks, each followed by a max-pooling layer. After the convolutional blocks, there are 3 fully connected layers, and finally a SoftMax layer is used for classification. The input of the network model is a 224*224 RGB image. The specific calculation of x_ij^w adopts a fully connected layer with a sigmoid nonlinear activation function, as follows:

x_ij^w = 1 / (1 + exp(−(W_w^T φ(b_ij) + b_w))) (3)

where φ(b_ij) represents the features of region j in image i extracted by the seventh fully connected layer in the model, and W_w and b_w respectively represent the weight and bias for the word w, which are obtained by learning during model training. After the operation of the model, a spatial feature map of the image is obtained from the last fully connected layer, corresponding to positions in the input image, that is, to the features of different regions in the image. The visual text information of the images in the datasets is generated by the MIL model. Generally, the top 10 words with the highest probability after processing by the MIL model are selected. In this paper, the image feature extraction module and the visual information extraction module are fused by the guided decoding module, to provide a basis for the subsequent decoding process. In the NIC model of image caption, visual information is only input to the decoder at the beginning of decoding, and the strength of its information features gradually diminishes during the decoding process. The ideal decoder should be able to balance the two input streams, image vision and description, so as to avoid a reduction in decoding accuracy because one stream dominates the decoding. Therefore, a CNN model for guided decoding is constructed in this paper. The learned features are input into this network for modeling, the modeled guidance vector is fed into each time step of the decoder, and at the same time the network can accept the error signal fed back from each time step of the decoder and make corresponding adjustments. The introduction of this model structure realizes a complete end-to-end training process. The guided decoding network is a deep neural network composed of two convolutional layers and one fully connected layer, represented by CNN-g. Its model structure is shown in Figure 2.
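Returning to the MIL word detector, formulas (2) and (3) combine into a few lines of code. The sketch below is our own illustration of the noisy-OR combination (formula (2) above is reconstructed as the standard noisy-OR); the feature dimension and parameters are placeholders.

```python
import numpy as np

def word_probability(region_features, W_w, b_w):
    """Noisy-OR MIL score of formula (2), with region scores from the
    sigmoid layer of formula (3). W_w, b_w are the learned weight vector
    and bias for one word w; region_features has shape (J, d)."""
    x = 1.0 / (1.0 + np.exp(-(region_features @ W_w + b_w)))  # x_ij^w
    return 1.0 - np.prod(1.0 - x)             # P_w: any region fires

# Toy example: 5 regions with 4096-d fc7-like features.
rng = np.random.default_rng(1)
feats = rng.normal(size=(5, 4096))
W, b = rng.normal(size=4096) * 0.01, -2.0
print(word_probability(feats, W, b))
```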
After the operation of the model, a spatial feature map of the image will be obtained in the last fully connected layer, which is corresponding to the position of the input image, that is, the features of different regions in the image.The visual text information of the images in datasets is generated by the MIL model.Generally, the top 10 words with the highest probability after being processed by the MIL model are selected. In this paper, the image feature extraction module and visual information extraction module will be fused by guiding the decoding module to provide a basis for the subsequent decoding process.In the NIC model of image caption, visual information is only input to the decoder at the beginning of decoding, and the strength of its information features will gradually diminish during the decoding process.The ideal decoder should be able to balance the two-input information of image vision and description, so as to avoid the reduction of decoding accuracy because one information dominates the decoding.Therefore, a CNN model for guided decoding is constructed in this paper.By inputting the learned features into the network for modeling, the modeled guidance vector is sent into each time sequence of the decoder, and at the same time, it can accept the error signal feedback from each time sequence of the decoder and make corresponding adjustments.The introduction of the model structure can realize the complete end-to-end training process.The guided decoding network is a deep neural network composed of two convolutional layers and one fully connected layer, represented by CNN-g.Its model structure is shown in Figure 2. Decoder design based on NLSTM model Text information is a critical component of training datasets and plays a vital role in the effectiveness of decoding.To ensure optimal feature extraction and expression, it is necessary to structure raw unstructured text data using a text representation model.This allows for efficient participation in the decoder's training process. Word to Vector (Word2Vec), a highly effective word embedding model built using shallow neural networks, consists of two main structures: skip-gram and CBOW (Continuous Bag of Words).While skip-gram predicts the probability of generating surrounding words based on the current word, CBOW predicts the generation probability of the current word based on surrounding words.The complexity and variation of the semantic environment in image The simplified structure of skip-gram. caption require more precise word embedding inputs.To address this need, this paper adopts the skip-gram model.Skip-gram is a shallow neural network model composed of the input layer, hidden layer and output layer, and its simplified structure is shown in Figure 3. Wherein, each word in the input layer uses one-hot encoding, the size of the training set thesaurus is N, and the hidden layer has K hidden units.After the training is completed, any word x i in the thesaurus can be calculated to get the feature vector with this word as the central word. 
In the actual model training process, managing the number of output feature vectors can pose a challenge due to the large volume of training data involved.To address this issue, the hierarchical SoftMax method is leveraged in this paper.This method entails constructing a Huffman coded binary tree based on word frequencies, where high-frequency words are placed at the root node to minimize computations.The tree is organized hierarchically from top to bottom, with each node classified by a sigmoid activation function.The sigmoid activation function determines the probability of the left and right branches of the tree, and the goal of model training is to multiply the probability on the passed branches to reach the maximum value. In the context of processing and predicting sequence data, Recurrent Neural Network (RNN) and Long Short-Term Memory (LSTM) networks are commonly employed.When it comes to image caption tasks, RNN and LSTM serve as decoders.Among them, LSTM has proven effective in addressing the long-term dependence issue.In this paper, an enhanced NLSTM model is utilized as a decoder to decode input image features.Different from the general LSTM model, in NLSTM, the memory function c t can be obtained through model training as shown in formula (4). Where m t is a state function learned from NLSTM, and represents the state m at time t.h t and x t are the input and hidden states of the memory function, respectively.i t and f t respectively represent the input gate and forgetting gate.w c and u c are learned during training. In the NLSTM model, the specific calculation method of internal LSTM is obtained by the following formulas: Where ct is the internal memory function, xt and ht are the input layer and hidden layer states of the memory function, respectively.ĩt , f t , and õt respectively represent the input gate, forgetting gate and output gate of the internal LSTM.To achieve the gating effect in the neural network, the sigmoid function σ is commonly used as the activation function, and the Tanh function is utilized as the candidate memory function.The parameters w, ũ, and b are learned during training. The memory unit of the external LSTM is updated according to formula (10). The value of h t is then updated through the memory unit c t of the external LSTM as shown in formula (11). NLSTM uses the standard LSTM network as a gating unit to input relevant information into its memory unit, reducing internal memory burden.This enables a more deterministic time hierarchy and better handling of time series problems compared to stacked models.Finally, a SoftMax layer is used in the model to predict the output words obtained by the final model through the probability distribution of words at time t.The structure of the image caption model is shown in Figure 4.The structure of the image caption model. In Figure 4, CNN-e represents the DenseNet model used in the coding process, and CNN-g is the guided decoding network.The extracted image fusion features are represented by the formula (12). Where A represents the global image feature, M stands for the visual text information learned from multiple instances, and f g represents the model function learned by guiding the decoding model.The decoded output y t at time t is calculated by formula (13). 
Image caption combining attention mechanism and deep reinforcement learning

In order to further improve the performance of the image caption model, we build a double-layer decoding network by introducing an attention mechanism on top of the model proposed above. The output of the first layer and the image features are sent to an attention module to extract important detail features. The output of this module is fused with the output of the first layer as the input of the second layer for a second decoding pass. Meanwhile, considering the powerful perception and decision abilities of Deep Reinforcement Learning (DRL), this paper constructs a training optimization method based on DRL to improve the overall performance of the model.

Attention mechanism

Although the traditional encoder-decoder based image caption model can describe the content of an image in a short text description, it often ignores some local and detailed information in the image during the description process. However, this information is very important to the richness and accuracy of the description. When the attention mechanism was first introduced into the image caption task, it effectively improved the performance of the NIC model. The attention mechanism is inspired by the human process of observing things: people immediately focus on the important information in an image while paying less attention to, or ignoring, irrelevant or background information. In deep learning, attention is basically formed by means of masks, that is, important information in the image is distinguished by assigning different weights. After continuous training, the model can learn which regions of the image are important and pay more attention to these regions. There are two main types of attention mechanisms: hard attention and soft attention. Here, we represent the feature vectors v extracted by the encoder as shown in formula (14):

v = {v_1, v_2, ..., v_L} (14)

The output of the last convolutional layer of the DenseNet is used to represent the features of the different positions in the image. At different moments of decoding, the attention scores for the different regions of the image are calculated by formula (15):

e_ti = f_att(v_i, h_{t−1}) (15)

where h_{t−1} represents the state of the hidden layer of the decoder LSTM at time t − 1, and f_att represents a function that assigns different weights to each region of the image. The SoftMax function is used to normalize formula (15) so that the weights lie in [0,1] and sum to 1, as shown in formula (16):

a_ti = exp(e_ti) / Σ_{k=1}^{L} exp(e_tk) (16)

Finally, the visual context vectors of the different regions of the image are calculated from the weights. The visual context feature v̂_t is expressed as shown in formula (17):

v̂_t = Σ_{i=1}^{L} h_it v_i (17)

where h_it is a multivariate two-point (0/1) variable over the input vectors v, indicating whether region i is attended at time t, and a_it is the weight of region i input to the decoder at time t, as shown in formula (18):

P(h_it = 1 | v) = a_it (18)

To obtain local image details during the decoding phase, we propose a double-layer stacked decoding structure based on the attention mechanism. The image features are extracted as in formula (19):

v = f_cnn(I) (19)

where I represents the input original image after preprocessing and f_cnn represents the computational model of the DenseNet. In this model, the last fully connected layer in Figure 4 is removed, and the output of the convolutional model is reduced in dimensionality by a projection matrix.
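The soft attention computation of formulas (15)-(17) — scores, SoftMax weights, weighted context — can be sketched as a small PyTorch module; f_att is realized here as additive attention, matching the w_v/w_h/w_a parametrization used in the next paragraphs, and the dimensions are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftAttention(nn.Module):
    def __init__(self, feat_dim, hidden_dim, attn_dim):
        super().__init__()
        self.w_v = nn.Linear(feat_dim, attn_dim)
        self.w_h = nn.Linear(hidden_dim, attn_dim)
        self.w_a = nn.Linear(attn_dim, 1)

    def forward(self, v, h):
        # v: (B, L, feat_dim) region features; h: (B, hidden_dim)
        e = self.w_a(torch.tanh(self.w_v(v) + self.w_h(h).unsqueeze(1)))
        a = F.softmax(e.squeeze(-1), dim=1)         # (B, L) weights, sum to 1
        context = (a.unsqueeze(-1) * v).sum(dim=1)  # (B, feat_dim)
        return context, a

attn = SoftAttention(feat_dim=1024, hidden_dim=512, attn_dim=256)
ctx, weights = attn(torch.randn(2, 49, 1024), torch.randn(2, 512))
```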
The state of the hidden layer of the first-layer decoder at time t is calculated by formula (20):

h_t = f_nlstm(x_t, h_{t−1}, v_g) (20)

where x_t represents the input word-embedding feature vector, h_{t−1} represents the hidden layer state at time t − 1, v_g represents the input vector that guides the decoding, and f_nlstm stands for the NLSTM network used by the decoder of the first layer. In the attention module, the image features and the hidden layer state of the first-layer decoder are used as inputs; unlike the hidden layer state at time t − 1 used by the soft attention mechanism, the hidden layer state at time t is used here, as shown in formula (21):

z_t = tanh(w_v v ⊕ w_h h_t) (21)

where w_v and w_h represent parameter matrices to be learned by the model, and ⊕ represents the matrix summation operation. The weights of the attention module are calculated as shown in formula (22):

a_t = f_softmax(w_a z_t) (22)

where w_a represents a parameter matrix to be learned by the model and f_softmax represents the SoftMax operation. Based on the weights of the attention module, we obtain the visual attention features of the image v̂_t, as shown in formula (23):

v̂_t = Σ_{i=1}^{L} a_ti v_i (23)

Then, by means of a residual connection, the visual attention feature is added element-wise to the hidden layer state h_t at time t of the first-layer decoder, as shown in formula (24), and used as the input of the second-layer decoder:

x_t^2 = v̂_t ⊕ h_t (24)

An LSTM is used as the second-layer decoder for the final processing of the sequence information. The hidden layer state of the second-layer decoder is obtained by formula (25):

h_t^2 = f_lstm(x_t^2, h_{t−1}^2) (25)

where h_{t−1}^2 represents the hidden layer state of the second-layer decoder at time t − 1, and f_lstm represents the model calculation function of the second-layer LSTM. After the second hidden layer state is obtained, an evaluation module is used to predict the probabilities of the output words; it is mainly composed of a linear layer, a fully connected layer and a SoftMax layer. The linear layer is used for dimensionality reduction of the words output by the LSTM, and the fully connected layer is used for upsampling of the vectors after dimensionality reduction. Finally, the probability distribution y_t of the word output is calculated through the SoftMax operation in formula (26):

y_t = f_softmax(f_fc(h_t^2)) (26)

As the number of model layers increases, the expressiveness of the model is enhanced; however, this also leads to overfitting problems. To address this issue, this paper adopts the dropout method in the double-layer decoding structure to reduce overfitting. The main idea of this method is to deactivate part of the computing units while keeping the other computing units working on the data that flows into each unit. Figure 6 illustrates the implementation of the dropout operation in the double-layer decoding structure: at time t = 0, the input x_0 is passed into the first RNN layer, and transmission then continues in the first layer until time t = 2, during which there is no dropout operation. At time t = 2, the dropout operation is performed when the first layer passes its output to the second layer, so the computation is always coherent along the time axis. The dropout operation helps greatly in improving the robustness of the model.
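The inter-layer dropout of Figure 6 — applied between the stacked recurrent layers but never along the time axis — is essentially what PyTorch's num_layers/dropout arguments implement; a minimal sketch, with illustrative sizes:

```python
import torch
import torch.nn as nn

# dropout=0.5 is applied to the outputs of each layer except the last,
# i.e. only on the layer-1 -> layer-2 connection, at every time step.
decoder = nn.LSTM(input_size=512, hidden_size=512,
                  num_layers=2, dropout=0.5, batch_first=True)
out, _ = decoder(torch.randn(2, 15, 512))   # (batch, seq_len=15, hidden)
```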
Deep reinforcement learning

Reinforcement learning is an artificial intelligence learning method. Different from supervised learning and unsupervised learning, reinforcement learning only rewards or punishes actions according to their quality. DRL not only has the understanding ability of deep learning, but also makes use of reinforcement learning to make decisions and judgments about the environment, and it handles complex problems through an end-to-end learning process. The framework of DRL is mainly derived from the Markov Decision Process (MDP). The policy gradient algorithm is a frequently adopted technique in DRL. It offers a direct approach to optimizing the expected reward of the policy, without relying on intermediate stages, and enables the determination of an optimal policy within the given policy space. The method utilizes an approximation function to directly optimize the policy and achieve the highest expected total reward. The actor-critic architecture diagram for this algorithm is illustrated in Figure 7, with its policy gradient expressed by formula (27):

∇_θ J(θ) = E[∇_θ log π_θ(a_t | s_t) δ_t] (27)

where π_θ(a_t | s_t) represents the policy function, which is learned by the neural network in DRL, and δ_t represents the evaluation function, which is approximated by a neural network. The policy function guides the agent's actions. The guidance process is calculated according to the probability of taking an action in a certain state, and it is a mapping function from states to actions. At the same time, the optimal policy is selected to guide the value function through policy evaluation. The value function is the state value function under the guidance of the policy. The policy parameters θ_t are updated by formula (28) during the learning process, and the value-network parameters w_t are updated by formula (29):

θ_{t+1} = θ_t + α_θ ∇_θ log π_θ(a_t | s_t) δ_t (28)
w_{t+1} = w_t + α_w δ_t ∇_w v_w(s_t) (29)

where a_t and s_t respectively represent the action and state at time t. Considering the powerful perception and decision abilities of DRL, we use it to further optimize our image caption model. On the basis of the actor-critic structure, two kinds of deep neural networks, a policy network and a value network, are used to construct models for predicting the words that best describe the image in each state. Specifically, the policy network evaluates the confidence of the next predicted word based on the current state, and thus suggests the next possible action to be taken. The value network evaluates the reward scores of the actions predicted by the policy network in the current state, and decides whether to choose the actions given by the policy network according to these reward scores. In other words, the model's predictions are constantly adjusted according to the actual situation, to produce a better image caption. The model structure and prediction process are shown in Figure 8. The whole process consists of four main elements: agent, environment, action and goal. In the image caption tasks, the policy network and the value network are the agents and also the main parts of the model. The input image I and its description sentence represent the actual environment of the agents. The next predicted word x_{t+1} is the next action, and the thesaurus of all the words in the captions is the action space. Generating the image caption is the goal of this process.
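A generic one-step actor-critic update implementing formulas (27)-(29) is sketched below; this is a standard illustration with placeholder networks and optimizers, not the paper's full captioning agent.

```python
import torch

def actor_critic_step(policy_net, value_net, opt_pi, opt_v,
                      state, action, reward, next_state, gamma=0.99):
    """state/next_state: input tensors; action: LongTensor (batch, 1)."""
    v_s = value_net(state)
    with torch.no_grad():
        target = reward + gamma * value_net(next_state)
    delta = (target - v_s).detach()            # TD error: the critic signal

    # Critic update, cf. (29): move v(s) toward the bootstrapped target.
    value_loss = (target - v_s).pow(2).mean()
    opt_v.zero_grad(); value_loss.backward(); opt_v.step()

    # Actor update, cf. (27)-(28): policy gradient weighted by the TD error.
    log_prob = policy_net(state).log_softmax(-1).gather(-1, action)
    policy_loss = -(delta * log_prob).mean()
    opt_pi.zero_grad(); policy_loss.backward(); opt_pi.step()
```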
The policy network adopts the encoder-decoder architecture described above. We use s_t to represent the current state, e = {I, x_1, x_2, ..., x_t} to represent the environment, and a_t = x_{t+1} to represent the next action based on the environment. The visual feature v_g of image I is extracted by the CNN, as shown in formula (30):

v_g = f_cnn(I) (30)

Using v_g as the initial input of the decoder NLSTM, the action a_t at time t is predicted according to the hidden layer state h_t at time t and the input word x_{t−1} at time t − 1. Because the decoder adopts a sequential processing mode, the predicted word x_t is also used as the input at time t + 1, and the hidden layer state at the next time step is updated as the input is updated. The formulas are as follows:

h_t = f_nlstm(x_{t−1}, h_{t−1}) (31)
p(a_t | s_t) = f_softmax(h_t) (32)

where x_{t−1} and h_t represent the input and output of the decoder, respectively, and p(a_t | s_t) represents the probability of taking action a_t given the state s_t. In the value network, the value function v^p under the policy p is first defined; it represents the prediction of the total reward r in the state s_t, expressed by formula (33):

v^p(s_t) = E[r | s_t, p] (33)

In this paper, the output v(s) of the value network is constructed to fit the value function. The value network is based on a deep neural network, and its structure is shown in Figure 9. It mainly consists of three parts: a CNN module, an RNN module and a fully connected network module. The CNN module is used to extract the visual features of the image, and the Inception-v3 model is selected in this paper. The RNN module adopts an LSTM structure to extract the semantic features of the descriptions. The fully connected network module uses the linear regression method to obtain the reward score of the generated semantic descriptions. In the value network, when the agents complete a goal, the total reward is used to motivate the actions taken. Here, the linear mapping implemented by the fully connected module maps the image and the corresponding description into a semantic embedding space, in order to calculate the vector distance between them. The loss function m_loss of this mapping can be expressed by formula (34), where β is a penalty coefficient in the range (0,1), f_cnn(I) is the image feature extracted by the DenseNet, and f_m is the mapping function. For a given description sentence s, whose embedded characteristics depend on the final state h_T of the hidden layer, the total reward is defined as shown in formula (35). According to formula (35), the total loss r_loss is calculated in formula (36), where λ is a hyperparameter in the range (0,1).
In the image caption task, the most popular datasets adopted by most researchers are MS COCO (Lin et al., 2014) and Flickr 30 k (Young et al., 2014). The Flickr dataset consists primarily of descriptions of human activity scenarios. We use 29,000 of the Flickr images as a training set, 1,000 as a validation set, and the remaining 1,000 as a test set. In addition, 40,775 images and 30,775 corresponding image descriptions from the MS COCO dataset are added to the training set to increase the number of training samples. The deep learning framework used is TensorFlow.

First of all, it is necessary to preprocess the data in the datasets, including the images and the descriptions. The image size is uniformly adjusted to 256*256 and then trimmed to 224*224 to fit the model input, and each image is normalized so that every pixel lies in the range (0,1). The description sentences first need to be segmented, with all letters converted to lower case and spaces and punctuation removed. Then, the number of occurrences of every word in the datasets is counted, and words that appear fewer than 5 times, which have little effect on the prediction results, are tagged as UNK. Finally, it is stipulated that the length of a sentence is not more than 15 words; each sentence keeps only the characteristic values corresponding to its first 15 words. For sentences with fewer than 15 words, we pad the characteristic values up to 15, with the supplementary values set to 0. At the same time, start and end tags are placed at the beginning and end of each description sentence to mark the beginning and end of the sentence (a short sketch of this pipeline appears at the end of this subsection).

In this paper, we adopt BLEU (Papineni et al., 2002), METEOR (Banerjee and Lavie, 2005), ROUGE-L (Lin, 2004), and CIDEr (Vedantam et al., 2015), which are commonly used evaluation indicators. In the model testing phase, this paper uses beam search to choose a better generated sentence: the five sentences with the highest probability are kept at each decoding moment, that is, the beam size is set to 5.

Given that the dropout operation is used during model training, different dropout ratios can affect model performance. To determine the optimal dropout ratio for the model, this paper compares the model scores across different dropout ratios using the CIDEr evaluation indicator and presents a comparative graph in Figure 10 (a comparison of the scores of different dropout ratios on CIDEr). Analysis of the results indicates that when the dropout operation is not performed, the score of the model fluctuates greatly, which indicates that the model is too complex and overfitting has occurred. Similarly, when the dropout ratio is 0.3, the fluctuation remains high and the model convergence score is low, suggesting underfitting arising from insufficient involvement of neurons in training. In contrast, when the dropout ratio is set to either 0.5 or 0.7, the curve remains relatively stable, with a better CIDEr score at a dropout ratio of 0.5. Thus, the appropriate dropout ratio for the model is determined to be 0.5.

In this study, we conducted a comparative analysis of our model's performance against other mainstream models, namely Google NIC, Soft attention, g-LSTM, RIC, RHN, and LSTM-A5. We evaluated the models using different metrics on MS COCO and Flickr 30 k. The comparison results are presented in Tables 1 and 2.
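Returning to the preprocessing described earlier in this subsection (lower-casing, punctuation removal, an UNK tag for words seen fewer than 5 times, truncation to 15 words with zero-padding, and start/end tags), here is a small Python sketch. The token names, the id layout and the decision to pad after attaching the tags are our own choices, not prescribed by the paper.

```python
import re
from collections import Counter

MAX_LEN = 15  # sentences are limited to 15 words, as in the paper

def tokenize(caption):
    """Lower-case, strip punctuation, split on whitespace."""
    return re.sub(r"[^\w\s]", "", caption.lower()).split()

def build_vocab(captions, min_count=5):
    counts = Counter(w for c in captions for w in tokenize(c))
    kept = [w for w, n in counts.items() if n >= min_count]
    vocab = {"<pad>": 0, "<start>": 1, "<end>": 2, "<unk>": 3}
    vocab.update({w: i + 4 for i, w in enumerate(kept)})
    return vocab

def encode(caption, vocab):
    ids = [vocab.get(w, vocab["<unk>"]) for w in tokenize(caption)][:MAX_LEN]
    ids = [vocab["<start>"]] + ids + [vocab["<end>"]]
    ids += [vocab["<pad>"]] * (MAX_LEN + 2 - len(ids))  # zero-padding
    return ids
```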
As shown in Table 1, on the MS COCO dataset, the basic model proposed in this paper improves the scores of BLEU-1 and BLEU-4, which measure sentence coherence and accuracy, by nearly 0.05 and 0.03, respectively, compared with the g-LSTM model, owing to the use of the guided decoding network. At the same time, using DenseNet and MIL to process the image information also improves the score of the CIDEr evaluation index, which reflects semantic richness, by nearly 0.04 compared with Google NIC, which only uses the Inception-v3 structure as the image information extraction model. However, compared with more advanced models such as RIC and LSTM-A5, the proposed basic model still shows a certain gap in the scores of the various evaluation indexes. The reason is that the attention mechanism is not introduced, so details are insufficiently captured, and the decoder uses only a single-layer structure, so the decoding process is not sufficient.

As can also be seen from the results in Table 1, on the MS COCO dataset, the performance of the final model in this paper is superior to the comparison models on the various evaluation indicators even without the attention mechanism. Therefore, the use of DRL can significantly improve the performance of the image caption model, and when the attention mechanism is added, the model performs better still. Specifically, the BLEU scores of the proposed model are improved by 0.018 and 0.019, respectively, compared with the best results among the comparison models, which indicates that the output sentences of the proposed model have better coherence and accuracy. In terms of the METEOR score, the proposed model also shows an improvement of more than 0.03 compared with the other models. In addition, without the attention mechanism, the model in this paper is still improved by more than 0.05 compared with the g-LSTM model, so the end-to-end model structure in this paper has greater advantages than the static adjustment of g-LSTM. Compared with the Soft attention model, which also uses the attention mechanism, the performance is improved by 0.05 thanks to the double-layer mechanism guiding the decoding and the optimization by DRL. In terms of the CIDEr score, which measures semantic richness and description consistency, there is also an improvement of 0.077 over the best results among the comparison models, which shows the excellent performance of the model designed in this paper.

As shown in Table 2, because the Flickr 30 k dataset contains much less data than the MS COCO dataset, the evaluation index scores of the proposed basic model and final model are generally lower than those in Table 1. However, the basic model presented in this paper has higher evaluation index scores than the Google NIC, Soft attention, and g-LSTM models. The scores of the final model are better than those of the comparison models on most evaluation indicators; the scores on some indicators are slightly lower than those of some models, which may be caused by the poor generalization ability of the model due to the small amount of data.

After the attention mechanism is used to improve the proposed model, in order to verify its actual effect, the extracted image features and the hidden-layer state of the first-layer decoder are processed by the attention module, and the words corresponding to different regions in the image are determined according to the corresponding weights; the resulting effect diagram is shown in Figure 11.
Figure 11 shows the region of the image on which each word in the sentence focuses. The white highlights in each image, from left to right, correspond to the words from left to right in the sentence below, and the whiter the highlight, the greater the attention weight assigned. As can be seen from the images, the attribute word "green", describing color, focuses on the position of the bird's body, while the target subject "bird" focuses on the head of the bird, because the head is the area that best reflects the characteristics of the bird. The phrase "standing on" focuses on the bird's feet, which characterize the action. The word "grass" focuses on the green area where the bird is standing. This analysis shows that the double-layer decoding structure with the attention mechanism is very accurate in extracting and matching key information and local information in the image, and it also helps improve the performance of the image caption model.

Conclusion

Aiming at the problems of existing image caption models, this paper proposes an image caption model based on deep learning. Firstly, based on the NIC model, the encoder and decoder are optimized through the DenseNet and NLSTM networks. Meanwhile, this paper also introduces a guided decoding network to realize the dynamic adjustment of the encoded information in the decoding process and avoid the loss of image information. The experimental results show that, compared with several common models, the performance of the basic model designed in this paper is improved. Then, on the basis of the proposed image caption model, we introduce the attention mechanism to construct a double-layer decoding structure and increase the decoding depth to capture the details of the image. The powerful perception and decision abilities of DRL are adopted to optimize the model, which solves the problem of discrepancies between training objectives and evaluation indicators and improves the expressive ability of the image caption model. Through comparison and analysis of the experimental results against several common models, our image caption model further improves the scores of each evaluation index, and the output description of the image is more accurate and semantically rich. In future work, we will design image caption models based on the expression styles of different scenes and the language habits of different people, so that the sentences output by the model come closer to the way humans express themselves in real scenes. Meanwhile, we will continue to expand the datasets to include richer content, and further design a better model to enable zero-shot learning through textual inference.

FIGURE 5: The improved structure of the image caption model.
FIGURE 8: The model structure and prediction process based on DRL.
Assess the Achievement of the E-Learning Outcomes of Disasters and Mitigation and Disaster Management Courses

In 2009, Kocaeli University entered a process of educational reconstruction. Since 2012, within the scope of these changes, all students have been able to choose two separate courses offered under the name of university electronic elective courses (e-courses): Disasters and Mitigation and Disaster Management. These courses try to increase disaster awareness in as many students as possible and to ensure that they learn, think about and discuss both what they can do individually and the social contribution they can offer in this regard. The students assess the learning outcomes at the end of the term. This study examines to what extent the 3,302 students who had taken these courses in the past three years thought they had achieved the e-learning outcomes of these courses, and the relationship between their academic achievement in the courses and the achievement of the learning outcomes. The results show that the e-learning outcomes of these courses have been achieved "above the expected level". No relationship has been found between academic achievement and the achievement level of the learning outcomes for the Disasters and Mitigation course. This result suggests that, as the content of this course concerns individual achievements, the students believe these achievements to be more effective and important than academic achievement.

Introduction

Following the adoption of the Hyogo Framework for Action plan, various printed educational materials such as coursebooks/handbooks, guidebooks and posters, and non-printed educational materials including activities, games and practices, were developed by several different institutions and non-governmental organizations in many countries. Since the turn of the Millennium, quite a variety of educational materials aimed at disaster risk mitigation have been developed for children as a result of the rapid development of the internet and information sharing (Asharose et al., 2015).

In one of his studies, Kitagawa (2015) highlights the importance of disaster trainings given both formally and informally in Japan, considers disaster safety as a part of safety education, and emphasizes that disaster safety can be provided through disaster training, disaster management and disaster coordination. The same study also emphasizes that the contents of formal education and disaster policies are developed depending on the lessons learned from experienced disasters (Kitagawa, 2015). In a study, Tanaka (2005) argues that different approaches to and information on disaster education are necessary for people in different cultures to be motivated to undertake preparatory activities (Tanaka, 2005). In a study, Asharose et al.
(2015) mention the need for communities to carry out prevention activities to minimize disaster risk and damage, together with mitigation and preparation activities to limit the impacts of disasters. The same study also underlines the role of both direct training given by institutions based on a curriculum and indirect training acquired through a person's own daily activities in increasing the disaster awareness of society. These trainings play an important role in disasters and mitigation and in the provision of human safety as part of sustainable development (Asharose et al., 2015). In a study, Bansal and Verma (2012) highlight that, since earthquake forecasting is not yet possible, a better understanding of seismic hazard is required, and that education and awareness constitute an important programme which goes hand in hand with disaster mitigation efforts. The same study also emphasizes that interactive learning creates an opportunity for students at all levels of education (Bansal and Verma, 2012).

Training programs were developed by several institutions, such as the Ministry of National Education, Boğaziçi University Kandilli Observatory and Earthquake Research Institute, the Turkish Red Crescent, universities, municipalities and non-governmental organizations, to reduce the impacts of disasters after the 1999 earthquakes, which wounded our country deeply, and to develop appropriate behavior patterns at the time of a disaster (MoNE, 2011; Sanduvac & Petal, 2010). Unfortunately, these trainings were continued only for a short time after the great losses experienced, and they could not go beyond social activities or be integrated into the education system. In a study carried out by Karancı et al. (2005), it was stated that such short disaster preparedness trainings increased the motivation of individuals; however, they did not lead to lasting behavioral change. The same study also emphasized that education reduced anxieties about potential disasters and that these anxieties decreased as the education level increased (Karancı et al., 2005).

Educational institutions play an important role in disaster preparedness and community recovery after hazard events (Öcal and Topkaya, 2011). In the Department of Emergency and Disaster Management (college level) and the Civil Defense and Firefighting programs (associate degree level) existing in a limited number of universities in our country, Civil Defense and Protection Information, Disaster Culture, Emergency Management and Civil Defense, Disaster Management, and Disasters and Mitigation courses are taught. These departments train staff who will take an active role in the response stage of disaster management. On the other hand, there are unfortunately no courses aimed at providing disaster knowledge and creating an individual preparedness culture in most of the universities in our country. However, making a society prepared for disasters is only possible by starting from individuals and reaching an institutional and social dimension. In primary and secondary school programs, disaster awareness and prevention culture are included at all levels, even if inadequately; however, these trainings are interrupted in higher education. Elective courses can be an alternative for eliminating this deficiency at the higher education level. Considering that fewer people can be reached through elective courses taught face to face in a classroom environment, delivering these elective courses in an electronic medium using e-learning tools will provide wider reach. E-learning is
an educational activity performed using information and communication networks (Brooks et al., 2006; Aytac, 2000).

It cannot be denied that such efforts have also begun to develop in universities, which are among the most important institutions in our country, and significant steps have been taken. Several universities have made attempts to include disasters and mitigation and disaster management courses in all fields, not only in fields such as geology and geophysics. One of them is Kocaeli University, which, due to its geographical position, faces a high risk of disasters and therefore supports and leads disaster and mitigation efforts.

Through e-learning, information can be accessed from a desired place at a desired speed. Starting from this point of view, two elective courses designed with the e-learning method, as a means to disseminate disaster awareness and culture at the Umuttepe central campus of Kocaeli University regardless of branch, were opened for all students in the electronic elective course package. Demand for these courses has exceeded the available placement quotas every year from the academic year 2011-2012 up to the present. This demand demonstrates that young people living both in our province and in our country are aware of the disaster facts of our country and are willing to access information on this issue.

Universities are institutions which have quite diverse student profiles and where technological developments are optimally put into practice and current technologies are utilized. As a matter of fact, enabling academicians to use technology can be stated as one of the priorities of higher education today (Marshall, 2010, p. 184). Both academicians and students can exchange information in the education process using the technological infrastructure and continue their educational activities without limitations of time and space (Browne et al., 2010; Laurillard, 2008). Alongside these advantages, providing the students beforehand with the limits/learning outcomes of the information to be transferred through e-learning, the acquisition of the transferred information, and the assessment of the extent of these acquisitions are also very important. The learning outcomes of the e-learning module courses taught in our university, which are the subject of this study, are presented to the students at the outset. A learning outcome is a clear definition of what a student needs to know, understand and be able to do at the end of the learning process (Bingham, 1999). At the end of the term, the students were asked about the achievement level of the learning outcomes of the course and thereby allowed to evaluate the process from their own perspective. From the collected data, information was obtained about the extent to which the course outcomes were achieved by the students, and it was examined whether this information was affected by the students' gender and field of study and whether there was a correlation between the students' academic achievement and the achievement level of the learning outcomes.
The Method

This research has been organized according to the general screening model. As the subject of this research is the e-courses, a product of an effort in our university to take disaster awareness a step further, and the determination of the extent to which the learning outcomes of these courses have been achieved, it describes a special case. Therefore, the student group constituting the sample of the study consists of the students taking these courses. Tables 1 and 2 show the information about the sample. The learning outcomes of the Disasters and Mitigation and Disaster Management courses are given in Table 3. The achievement level of the learning outcomes given in Table 3 was determined with a single question. The answers of the students were obtained using the following 5-point Likert scale: 5 - Far above the level I have expected; 4 - Above the level I have expected; 3 - At the level I have expected; 2 - Below the level I have expected; 1 - Far below the level I have expected. The students' opinions about the extent to which the learning outcomes of these courses were achieved are given in Table 4.

Whether these opinions varied by gender was examined using a t-test, and the results are shown in Table 5. In Table 5, it is seen that gender did not affect the opinions of the students about the achievement of the learning outcomes of the Disasters and Mitigation course, and the achievement level of the outcomes for both groups was above the expected level (t = 0.538, p > .05). It was determined that female students taking the Disaster Management course rated their achievement level of the learning outcomes higher than male students, with a significant difference (t = 2.008, p < .05). While males thought that they had achieved the outcomes at the expected level, females thought that they had achieved them above the expected level.

The relationship between the achievement level of the learning outcomes of the Disasters and Mitigation and Disaster Management courses and the academic achievement level in these courses was examined, and the results are given in Table 6. Although there is a positive correlation between the opinions of the students taking the Disasters and Mitigation course about their achievement level of the learning outcomes and their academic achievement in this course, the correlation was not found statistically significant (Pearson's r = 0.012, p > 0.01). On the other hand, a significant positive correlation was found between the opinions of the students taking the Disaster Management course about their achievement of the learning outcomes and their academic achievement in this course (r = 0.111, p < 0.01).

As the distribution of the students according to their grade and faculty or college was not homogeneous, their achievement levels of the outcomes were not examined according to these variables. However, the departments were grouped into social science and physical science fields, and the achievement levels of the students in these two fields for the outcomes of the courses were examined; the results are given in Table 7.
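For readers who wish to reproduce this style of analysis, the SciPy sketch below runs the same two tests (an independent-samples t-test of outcome ratings by gender and a Pearson correlation between ratings and course grades) on randomly generated stand-in arrays; the study's actual survey data are not reproduced here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 200                                             # stand-in sample; the study had 3,302
scores = rng.integers(1, 6, size=n).astype(float)   # 5-point Likert ratings
gender = rng.integers(0, 2, size=n)                 # 0 = male, 1 = female
grades = rng.normal(70, 10, size=n)                 # course grades

# Independent-samples t-test: do outcome ratings differ by gender?
t, p_t = stats.ttest_ind(scores[gender == 0], scores[gender == 1])

# Pearson correlation: outcome ratings versus academic achievement.
r, p_r = stats.pearsonr(scores, grades)

print(f"t = {t:.3f} (p = {p_t:.3f}); r = {r:.3f} (p = {p_r:.3f})")
```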
Results and Recommendations

The results obtained from this study, in which the learning outcomes of the Disasters and Mitigation and Disaster Management courses taught as electronic elective courses at the bachelor's degree level at Kocaeli University between 2012 and 2014 were assessed, are summarized below.

According to the results of the analysis, the students taking these courses were of the opinion that they had achieved the learning outcomes of both courses above the expected level. It is a fact that larger masses can be reached through electronic media with the developing technology. This result supports the view that communicating information to individuals in the right way through electronic media is the shortest and fastest method. Quick steps can be taken in building a society prepared for disasters with e-learning training modules adopted by the authorized bodies in this direction.

The achievement level of the learning outcomes of the Disasters and Mitigation course did not vary by gender. This result shows that individuals need to know the potential disasters independently of gender and to improve their knowledge of individual precautions.

The achievement level of the learning outcomes of the Disaster Management course varied significantly by gender. The achievement of the outcomes was at the expected level for male students, whereas it was considered above the expected level by female students.

No significant relationship was found between the academic achievement of the students taking the Disasters and Mitigation course and the achievement level of the learning outcomes of the course. This shows that the grade obtained in the course did not affect the achievement of the outcomes: whether the students failed or passed the course, they believed these outcomes were achieved.

On the other hand, a significant relationship was found between the academic achievement of the students taking the Disaster Management course and the achievement level of the learning outcomes of the course. The content of the Disasters and Mitigation course contains information about the events which directly affect individuals when they experience disasters and how to maintain their safety against these events, whereas the content of the Disaster Management course rather concerns the functions of the authorities and their responsibilities in the precautions to be taken and the things to be done at the time of disasters. It can be said that the students prioritized their academic achievement in courses they thought they would not directly benefit from individually.

The fields the students were majoring in did not affect the achievement level of the learning outcomes of the students taking the Disasters and Mitigation and Disaster Management courses. This result can be interpreted as indicating that course preparation and presentation were organized in such a way that both physical science and social science students could understand them. It also indicates that, whatever their major, the students believed in the necessity of such courses.

The fact that we live in a region of Turkey where many types of disasters frequently occur made us consider the e-learning process as the lowest-cost and shortest way to accelerate the process of increasing disaster awareness and disaster preparedness. The effectiveness of courses prepared through e-learning is highly important. In the following period, studies can be carried out on the sustainability of the impacts and outcomes achieved.
Information about the extent to which an individual has benefited from trainings on disaster awareness and consciousness, at whatever level, is extremely important, as it affects the development of the right behavior by the individual in case of a disaster or emergency. Moreover, such assessments give educators an opportunity both to discuss and to improve the level and quality of the trainings provided. The purpose of this study is the evaluation of the learning outcomes of the Disasters and Mitigation and Disaster Management courses according to the academic achievement, gender and branches of the students. For this purpose, answers to the following questions have been sought: To what extent do the students think they have achieved the learning outcomes determined for the Disasters and Mitigation and Disaster Management courses? Is there a relationship between the students' achievement level of the learning outcomes of these courses and their academic achievement in them? Does gender have an impact on the students' achievement level of the learning outcomes? Does majoring in Social Sciences or Physical Sciences have an impact on the achievement level of the learning outcomes?

Within the scope of the studies ongoing since 2009 as part of the process of Reconstruction and Quality Improvement in Education, two separate courses on disaster training are offered as e-courses at Kocaeli University within the University Elective Courses, selectable by all students from the 2nd grade onward starting from the academic year 2011-2012. One of these courses is Disasters and Mitigation, and the other is Disaster Management. The main objective of the Disasters and Mitigation course is to enable the students to know the types of disasters and to understand the individual protection methods to use when exposed to disasters. The main objective of the Disaster Management course is to provide an understanding of the disaster-related administrative and institutional activities required in addition to the individual protection methods, and an awareness of their necessity.

Table 1. Distribution of students according to gender for the Disasters and Mitigation and Disaster Management courses.
Table 2. Distribution of students according to field of study for the Disasters and Mitigation and Disaster Management courses.
Table 3. Learning outcomes of the Disaster Management and Disasters and Mitigation courses.
Table 4. Achievement level of the designated learning outcomes of the Disasters and Mitigation and Disaster Management courses. Examining Table 4, it is seen that the students are of the opinion that the learning outcomes of the Disasters and Mitigation and Disaster Management courses were achieved above the level they had expected (Disasters and Mitigation Xavg = 3.73, Disaster Management Xavg = 3.45).
Table 5. T-test results for the learning outcomes of the Disasters and Mitigation and Disaster Management courses.
Table 6. The relationship between the achievement level of the learning outcomes and the level of academic achievement according to Pearson correlation.
Table 7. The achievement level of the learning outcomes of the courses: independent t-test results according to the students' field of study.
Optimization Study on Turning Process by Using Taguchi-Copras Method

In this article, a multi-objective optimization study of the turning process is presented. The two output parameters of the turning process taken into consideration are surface roughness and Material Removal Rate (MRR). The Taguchi method has been applied to design the experimental matrix with four input parameters: nose radius, cutting velocity, feed rate and cutting depth. The Copras method has been employed to solve the multi-objective optimization problem. Finally, the optimal values of the input parameters have been determined to simultaneously ensure the two criteria of the minimum surface roughness and the maximum MRR.

INTRODUCTION

Surface roughness has a direct influence on the workability and durability of the product, and MRR is a characteristic parameter for evaluating the productivity of a machining process. Therefore, ensuring a workpiece surface with small roughness and a large MRR is always the goal of most machining methods in general and turning methods in particular [1-3]. The MRR when turning depends on the workpiece diameter, feed rate and cutting depth, and the surface roughness when turning is influenced by many parameters such as the cutting parameters, the parameters of the cutting tools, the workpiece materials, the coolant, the stiffness of the technology system, etc. Therefore, in order to simultaneously ensure the two criteria of minimum surface roughness and maximum MRR, it is necessary to determine the optimal values of these parameters. Because of the large number of experiments required, considering all these parameters in a single study would lead to excessive complexity. Therefore, it is recommended to choose several parameters that can be easily adjusted by the machine operator. On the other hand, it is also necessary to choose the type of experimental matrix design so that the number of experiments is minimal while still respecting the principles of experimental planning.

Experimental matrix design by the Taguchi method was recommended by Dr. Genichi Taguchi in 1980 [4]. This is a very well-known matrix design method that has been applied in many studies in many different fields. It enables the design of an experimental matrix with a small number of experiments and a large number of input parameters. In addition, input parameters in qualitative form can also be included in the experimental matrix, an outstanding advantage that only this method has [5-8]. However, the Taguchi method has a major drawback: it can only solve single-objective optimization problems, through the analysis of the S/N (signal-to-noise) ratio. To overcome this drawback while exploiting the advantages of the Taguchi method when solving multi-objective problems, it is necessary to combine it with a suitable mathematical method. A number of other methods have been combined very successfully with the Taguchi method, such as Taguchi-Dear [9], Taguchi-Topsis [10-16], Taguchi-Vikor [17-19], Taguchi-Moora [20, 21], Taguchi-PSI [22] and Taguchi-RIM [23-25]. Copras is also a well-known method for solving multi-objective optimization problems. It has been applied in several cases, such as the multi-objective optimization of a mushroom growing process [26] and the multi-objective optimization of a grinding process [27].
However, to the best of the authors' knowledge, no studies have so far been published on the application of the Taguchi-Copras method to the multi-objective optimization of the turning process. Based on the above analysis, in this study the turning process experiment is conducted with an experimental matrix designed in accordance with the Taguchi method, and the Copras method is applied to solve the multi-objective optimization problem. The purpose of this study is to simultaneously ensure the minimum surface roughness and the maximum MRR.

The decision matrix entries satisfy d_ij ∈ R+ for all i = 1, …, m and j = 1, …, n. In this study, the weighting of the criteria is calculated using the entropy measure because it provides high accuracy. The weight calculation steps are performed as follows [29]:
Step 1: Calculate the values p_ij in accordance with formula (1).
Step 2: Calculate the entropy measures of each criterion, for all j = 1, …, n (2).
Step 3: Calculate the weights of each criterion, for all j = 1, …, n (3).

Copras method

The Copras method was first introduced by Zavadskas et al. in 1994 [30]. This method includes the following steps:
Step 1: Calculate the values p_ij in accordance with formula (1), for all i = 1, …, m and j = 1, …, n.
Step 2: Calculate the entropy measures e_j of each criterion C_j, for all j = 1, …, n, in accordance with formula (2).
Step 3: Calculate the weights w_j of each criterion, in accordance with formula (3).
Step 4: Calculate the normalized decision matrix X = [x_ij]_{m×n}, for all i = 1, …, m and j = 1, …, n.
Step 5: Calculate the weighted normalized decision matrix, in accordance with formula (5).
Step 6: Calculate the values P_i and R_i, in accordance with formulas (6) and (7).
Step 7: Calculate the priority values Q_i, for all i = 1, …, m.
Step 8: Rank the options: A_k < A_i if Q_k < Q_i, for all i, k = 1, …, m.

Experimental process

The experiments have been conducted on a CNC Doosan Lynx 220L lathe; the experimental material was SCM400 steel with a diameter of 28 mm and a length of 300 mm. TiAlN-coated inserts were used during the experiment. Four parameters, namely nose radius (r), cutting velocity (vc), feed rate (fd) and cutting depth (ap), have been selected as the input parameters of the experimental process. These are parameters that can be easily adjusted by the machine operator. The Taguchi method has been applied to design an orthogonal matrix with a total of 16 experiments, as shown in Table 1, in which each input parameter takes four levels of values. Surface roughness is measured as the arithmetic mean roughness (Ra). MRR is calculated in accordance with formula (9), where nw is the number of revolutions of the workpiece per minute, dw is the workpiece diameter, fd is the feed rate and ap is the cutting depth.

Results and discussion

The experimental results are presented in Table 2. In this table, the minimum value of surface roughness is 0.524 μm, in experiment #14. However, in this experiment the MRR is also very small (66.67 mm3/min), so this experiment is not the best one. The MRR in experiment #4 has the largest value (300 mm3/min). However, in this experiment the surface roughness value is also very large (1.722 μm), so this experiment is not the best one either. This shows that it is impossible to find an experiment that absolutely and simultaneously ensures the two criteria of the minimum surface roughness and the maximum MRR; it is only possible to find an experiment in which the surface roughness can be considered "minimum" and the MRR can be considered "maximum". This cannot be achieved by simply looking at the data in Table 2; it can only be done by solving the multi-objective problem.
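Before walking through the step-by-step computation in the next subsection, the entropy weighting of steps 1-3 can be sketched as follows. Since formulas (1)-(3) are garbled in the extracted text, the sketch implements the usual Shannon-entropy weighting that these steps normally denote; the four example rows reuse (Ra, MRR) pairs quoted in the text (options A1, #14, A15 and #4), while the remaining twelve experiments of Table 2 are omitted.

```python
import numpy as np

def entropy_weights(D):
    """Entropy weighting in its standard form (stand-in for formulas (1)-(3))."""
    m, _ = D.shape
    P = D / D.sum(axis=0)                          # step 1: share of each option
    e = -(P * np.log(P)).sum(axis=0) / np.log(m)   # step 2: entropy per criterion
    return (1.0 - e) / (1.0 - e).sum()             # step 3: weights

# Columns: C1 = Ra (micrometres), C2 = MRR (mm3/min).
D = np.array([
    [1.822, 30.00],    # option A1
    [0.524, 66.67],    # experiment #14
    [0.525, 166.67],   # option A15
    [1.722, 300.00],   # experiment #4
])
print(entropy_weights(D))   # weight of each criterion
```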
Multi-objective optimization of the turning process by the Copras method

To facilitate the use of mathematical symbols when performing the optimization by the Copras method, we denote the surface roughness criterion (Ra) by C1 and the MRR criterion by C2, as shown in Table 3. From the data in Table 3, the Copras method is used to calculate the following values:
Step 1: Calculate the values p_ij in accordance with formula (1). The results are presented in Table 4.
Step 2: Use formula (2) to calculate the entropy measures e_j of each criterion C_j. The results are shown in Table 5.
Step 3: Use formula (3) to calculate the weights w_j of each criterion C_j. The results are also shown in Table 5.
Step 4: Use formula (4) to calculate the normalized decision matrix, as shown in Table 6.
Step 5: Use formula (5) to calculate the decision matrix after normalization with the weights. The results are shown in Table 7.
Step 6: Use formula (6) to calculate the values of Pi and formula (7) to calculate the values of Ri. The results are shown in Table 8.
Step 7: Use formula (8) to calculate the values of Qi. The results are also shown in Table 8.

The results in Table 8 show that A15 is the best option, while option A1 is the worst one out of the 16 options considered. In option A1, the surface roughness is 1.822 μm, which is the largest of the 16 surface roughness values in Table 2, and the MRR is equal to 30 mm3/min, which is the smallest of the 16 MRR values. Thus, it is obvious that A1 is the worst option. In option A15, the surface roughness is equal to 0.525 μm; although this is not the minimum value among the 16 options (the minimum surface roughness is 0.524 μm, in option A14), it is still very small compared to the other options. The MRR in option A15 is equal to 166.67 mm3/min; although this is smaller than the MRR value in option A4, it is still rather large compared to the other options. It can therefore be confirmed that A15 is the best option, in which the surface roughness is "smallest" and the MRR is "largest". Thus, the optimal values of the nose radius, cutting velocity, feed rate and cutting depth are 1 mm, 125 m/min, 0.08 mm/rev and 1 mm, respectively.

Table 7. The normalized matrix in combination with the weights in Table 5.

Conclusion

The experimental process of turning SCM400 steel with a TiAlN insert has been conducted on a CNC lathe in accordance with a Taguchi-type experimental matrix. Nose radius, cutting velocity, feed rate and cutting depth are the parameters whose optimal values are determined. Surface roughness and MRR are the two criteria used to evaluate the turning process. The Copras method has been applied to solve the multi-objective optimization problem. A number of conclusions are drawn as follows:

In order to simultaneously ensure the two criteria of the minimum surface roughness and the maximum MRR, the value of the nose radius is 1 mm, the cutting velocity is 125 m/min, the feed rate is 0.08 mm/rev and the cutting depth is 1 mm.

The Copras method has been applied for the first time, and successfully, to the multi-objective optimization of the turning process in this study. This method has also been successful in solving multi-objective problems in a number of published studies [26, 27]. A compact sketch of its steps 4-8 is given below.
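The compact sketch referred to above assembles steps 4-8 in the common textbook form of COPRAS: normalize, weight, sum the maximizing criterion into P_i and the minimizing criterion into R_i, and combine them into the priority Q_i. Because formulas (4)-(8) are not legible in the extraction, the normalization details and the equal weights used here are assumptions rather than the paper's exact expressions.

```python
import numpy as np

def copras_rank(D, weights, benefit):
    """COPRAS priority values; a larger Q means a better option."""
    X = (D / D.sum(axis=0)) * weights          # weighted normalized matrix
    P = X[:, benefit].sum(axis=1)              # maximizing criteria (MRR)
    R = X[:, ~benefit].sum(axis=1)             # minimizing criteria (Ra)
    Q = P + R.sum() / (R * (1.0 / R).sum())    # textbook priority formula
    return Q

D = np.array([
    [1.822, 30.00],    # option A1
    [0.524, 66.67],    # experiment #14
    [0.525, 166.67],   # option A15
    [1.722, 300.00],   # experiment #4
])
benefit = np.array([False, True])     # Ra is a cost, MRR is a benefit
weights = np.array([0.5, 0.5])        # placeholder, not the entropy weights
Q = copras_rank(D, weights, benefit)
print(np.argsort(-Q))                 # option indices ranked best to worst
```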
At the same time, it also promises to be successful when applied to the multi-objective optimization of other machining processes. Determination of the optimal set of cutting tool parameters (materials of the chip, geometrical parameters, etc.) and of the lubricating and cooling parameters, to simultaneously ensure the criteria of the turning process, is work that will be undertaken by the authors of this article in the future.
Wave propagation in phononic materials based on the reduced micromorphic model by one-sided Fourier transform

A one-dimensional problem of wave propagation in phononic materials is solved under the reduced micromorphic model introduced recently. An efficient technique based on the one-sided Fourier transform is used for the solution. This allows obtaining an exact solution in closed form, which can be utilized to check approximate solutions obtained by other methods. The results are confirmed numerically by the method of finite differences. They illustrate the existence of frequency band gaps.

Introduction

Materials with periodic structures, i.e., phononic materials, are obtained by assembling individual elements of particular shape and size in a suitable manner, so as to exhibit targeted mechanical properties. Such materials have attracted great attention in the past two decades due to their ability to exhibit phenomena not found in usual materials, e.g., the inhibition of certain frequencies under wave propagation. They are presently an important component in science and technology, with extensive applications as smart materials in intelligent microstructures in acoustic and vibration engineering. Some of the potential applications of phononic advanced materials include the design and construction of acoustic filters and transducers and advanced materials for noise control. An overview of phononic advanced materials is found in [1-3].

Modelling of phononic materials requires the consideration of media with microstructure. These are naturally multiscale materials, in which many physical phenomena are exhibited at different space and time scales. This is in line with the concept of micromorphic continuum mechanics, in which a continuum has an inner structure with its own state variables. Many researchers have paid attention to studying the new phenomena resulting from the development of new models in continuum mechanics. The new models tend to be more complicated due to the complexity of the structure of the materials, the increased number of degrees of freedom and the increased number of material parameters. Thus, some authors have striven to introduce precise descriptive models to capture the new phenomena at the nanoscale. The most popular model is the micromorphic model introduced by Eringen [4, 5]. The original micromorphic theory [6] represents the dynamic balance of elastic materials with 12 equations of motion that describe the 3 displacements, 6 strains, and 3 rotations of the material's microstructure. The constitutive equations of isotropic linear elastic materials depend on 18 material coefficients in the context of the original micromorphic model. The identification of these material coefficients for various materials has been a great challenge over the past decades. Recently, micromorphic models have shown great success in modeling the mechanics of advanced materials, e.g., phononic materials and metamaterials [7-11]. For instance, band gap structures of acoustic metamaterials have been developed based on a relaxed linear elastic micromorphic model [12-14]. It was revealed that acoustic metamaterials can stop or attenuate the propagation of waves in certain frequency domains [15-17]. Furthermore, Chen and Wang [18] discussed the size effect on the band structures of nanoscale phononic crystals.
There, the transfer-matrix method used in optics was developed to compute the band structures of a nanoscale layered phononic crystal based on the nonlocal elastic continuum theory. Wu et al. [19] used the finite element method and the transfer-matrix method to study the propagation of elastic waves in one-dimensional phononic crystals with functionally graded materials. Yan et al. [20] studied time-harmonic wave propagation in nanoscale periodic layered piezoelectric materials. Recently, alternative versions of the original micromorphic theory have been developed, e.g., the relaxed micromorphic model and the reduced micromorphic model [21]. Muhammad et al. [22] studied transition and topological interface modes with topological phases for 1D phononic crystals consisting of circular aluminum beams. The same authors [23] studied 1D topological phononic crystals with interface states produced by an exchange of wave mode polarization and geometric phases, using the spectral element method with a Timoshenko beam model for flexural wave propagation. Recent work in this field studies different aspects of wave propagation in media with periodic microstructures [24-27].

The reduced micromorphic model was developed by eliminating redundant microstructural degrees of freedom. It depends on eight material coefficients only, and these coefficients are related to the material microstructure [28]. In this paper, a first attempt is undertaken to solve a boundary-value problem based on the reduced micromorphic model suggested by Shaat [21], in which the author demonstrates the effectiveness of his reduced model and its ability to account for many features of wave propagation and band gaps in phononic materials. An efficient technique based on the "one-sided Fourier transform" is presented to solve a problem of wave propagation in a slab of phononic material, in order to describe the effect of microstructure on the propagation. This technique was used earlier [29, 30] in problems of elasticity and thermoelasticity for rectangular domains. It allows obtaining an analytic solution in closed form, in contrast to other methods, for example the Laplace transform, which needs numerical inversion in most cases of interest. This exact solution can be used to check the validity of approximate solutions obtained by other methods. Moreover, the method can deal with cases where the behavior of the solution for large times is not known a priori, and with arbitrary initial and boundary conditions, cases that pose difficulties for Fourier cosine or sine transforms and for spectral methods. The obtained numerical results reveal the existence of frequency band gaps, as expected. They are confirmed numerically by using the finite difference method. No attempt is undertaken to compare the present results with those obtained by other methods.
Reduced micromorphic model: review

In the context of the reduced micromorphic model, the deformation of a material is described using the following kinematical variables [21]: ε_ij, the components of the infinitesimal strain tensor; s_ij, the components of a microstrain tensor representing the deformation of the material's microstructure; γ_ij, the components of the coupling tensor that accounts for the difference between the microstrain s_ij and the macrostrain ε_ij fields; and χ_ijk, the components of the gradient of the microstrain tensor s_ij, symmetric over the last two indices. The tensors τ_ij and m_ijk are introduced to capture the effects of the microstrain on the macro-scale deformation of multiscale materials. According to the reduced micromorphic model, the constitutive equations relate τ_ij and m_ijk to these kinematical variables, and the equations of motion hold together with the natural boundary conditions. Here λ_m and μ_m are the microscopic Lamé moduli representing the stiffness of the material's microstructure, e.g., of a grain or unit cell [21]; λ and μ are the elastic moduli of the material confined between two unit cells; λ_c and μ_c are two elastic moduli adjusting the coupling between the microscopic and macroscopic stiffnesses; ℓ_1 and ℓ_2 are two length scales; ρ is the mass density of the macro-scale material; ρ_m is the mass density of the material particle; J is a micro-inertia; f_i and H_jk are the body forces and body higher-order moments, respectively; and t_i and m_jk stand for the external surface force and couple applied to the medium.

The governing equations for the displacement and the microstrain fields are Eqs. (8). The non-vanishing components of the kinematic relations, Eqs. (1)-(3), reduce to Eq. (9), where for brevity we replace u_1 by u and s_11 by s. In view of Eq. (9), Eqs. (5)-(7) take a one-dimensional form, and the field equations (8) yield the coupled system (11), showing the coupling between the macroscopic mechanical displacement and the microstrain: (i) the effect of the microstrain on elastic wave propagation through the coefficient β_2, which resembles the effect of temperature in classical thermoelasticity; and (ii) the effect of the macroscopic displacement on the microstrain through the coefficient β_4. The coefficients appearing in Eqs. (11) are given in terms of the macroscopic and microstructural material parameters by the expressions (12).

It is easy to verify that the system of partial differential equations (11) has two sets of characteristic curves, dx ± √β_1 dt = 0, corresponding to the coupled elastic wave travelling with speed √β_1, in addition to two degenerate characteristics dx = 0. Details are omitted for the sake of brevity. On the characteristics with positive slope, a compatibility condition holds which may be useful when integrating by the method of characteristics.

The problem will be solved in a thick slab 0 ≤ x ≤ ℓ, t > 0, occupied by a material with specified material parameters, under the initial and boundary conditions (13), where u_0 and s_0 are the amplitudes of the applied waves, and a and b are parameters to be chosen during the numerical calculations. The exponential introduced in the boundary conditions is for convenience only and can be removed altogether if sufficient conditions for the existence of the Fourier transform are secured. In the present paper, we use a technique based on the one-sided Fourier transform to find the solution of the system of partial differential equations (11) with the initial and boundary conditions (13).
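To make the boundary regime (13) concrete, the sketch below evaluates a pulse of the form quoted later for the discretized boundary condition, f(t) = A(1 − cos(bt))e^{−at} on 0 ≤ t ≤ 2π and zero afterwards, and numerically computes its one-sided Fourier transform F(ω) = ∫₀^∞ f(t)e^{iωt} dt, anticipating definition (14) of the next section. The parameter values a = 0.1 and b = 1.0 follow the text; the unit amplitude and the quadrature grid are our own choices.

```python
import numpy as np

a, b, A = 0.1, 1.0, 1.0
t = np.linspace(0.0, 2.0 * np.pi, 4001)          # pulse support; f = 0 beyond
f = A * (1.0 - np.cos(b * t)) * np.exp(-a * t)

def one_sided_ft(f, t, omega):
    """Trapezoid-rule quadrature of int f(t) exp(i*omega*t) dt over the support."""
    g = f * np.exp(1j * omega * t)
    return 0.5 * np.sum((g[1:] + g[:-1]) * np.diff(t))

omegas = np.linspace(0.0, 2.0, 201)
F = np.array([one_sided_ft(f, t, w) for w in omegas])
# Real and imaginary parts of F play the roles of the cosine and sine
# transforms of Eqs. (17)-(18) in the next section.
print(np.abs(F).max())
```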
The one-sided Fourier transform has shown good efficiency in solving rather general types of initial-boundary-value problems for systems of PDEs, although it is not quite popular in the literature (see [45, 46], for example). This transform has been successfully used in the study of analytic signals [47], from which negative frequencies are filtered out. It provides a natural way of investigating frequency band gaps, as will become clear from the upcoming results. Through a change of the unknown function, the method has the flexibility to deal with cases where the behavior of the solution for large times is not known a priori, and with arbitrary initial and boundary conditions. Again, it can deal with a multitude of boundary conditions.

Let us define the integral transform of a function f(x, t) as in Eq. (14). It is to be noted that taking the complex conjugate of the function F amounts simply to changing the sign of ω in this definition. Substituting Euler's formula e^{iωt} = cos(ωt) + i sin(ωt) (15) into definition (14), the real and imaginary parts of the function F(x, ω) are obtained as Eqs. (17)-(18), and the inverse expressions for Eqs. (17)-(18) are given by Eqs. (19)-(20). The formula (21) for the transform of the second time derivative is easily derived and will be used in the sequel.

Applying the suggested technique to the system of PDEs (11) gives the system (22) of ordinary differential equations for the transformed functions U and S. Here a is a constant controlling the attenuation of the incident wave, and b is a constant determining the location of the hump on the curve representing the integral in Eqs. (23), for the concrete parameter values a = 0.1 and b = 1.0 (Fig. 2). For bounded solutions, α(ω) must be real. The expression for λ(ω) depends on the choice of the material parameters of the considered material. The two functions U(x, ω) and S(x, ω) are obtained as Eqs. (27) and (28); the coefficients involved in these expressions are not all independent. In fact, the functions U(x, ω) and S(x, ω) in Eqs. (27) and (28) are solutions of the field equations (22) in the ω-domain. Applying the boundary conditions (23) gives the solution of system (22) in the form (29) together with

S(x, ω) = S(0, ω)(cos(α(ω)x) − cot(α(ω)ℓ) sin(α(ω)x)).   (30)

The functions in (29) and (30) have simple poles at the zeros of the equation sin[α(ω)ℓ] = 0. We have plotted in Fig. 3 the function U(0.01, ω)/U(0, ω) to show the type of behavior of the kernels in the integrals yielding the solution. In particular, there is a denumerable set of frequencies at which this function does not exist.

Now the two functions U(x, ω) and S(x, ω) must be inverted using Eqs. (19) or (20), giving Eqs. (31)-(32) or (33)-(34). Both choices are equally good; we opt for the former for definiteness. The integrals in Eqs. (31)-(32) or (33)-(34) are evaluated numerically.

Finite difference method

For the numerical treatment, we consider a domain in the (x, t) plane discretized with steps Δx = h and Δt = k. The values of the unknown functions at any grid point (x, t) = (ih, jk) are denoted by f(x_i, t_j) = f_{i,j}. The finite-difference representations of the various partial derivatives are the standard ones (40) [41, 42]. Substituting relations (40) into Eqs. (11) gives the difference equations, with a coefficient combination involving 4h and β_4; together with the initial and boundary conditions (13), they take a discretized form in which N and M are the numbers of nodes along the x and t axes, respectively. The applied boundary condition for the microstrain, for instance, is discretized as

g(t_j) = s_0 (1 − cos(b t_j)) e^{−a t_j} for 0 ≤ t_j ≤ 2π, and g(t_j) = 0 for t_j > 2π.
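A minimal explicit leapfrog scheme of the kind just outlined is sketched below. The right-hand sides of Eqs. (11) are not fully legible in the extraction, so the coupled form u_tt = β1 u_xx − β2 s_x, s_tt = β3 s_xx − β4 u_x − β5 s and all coefficient values are placeholders; only the central differencing pattern, the step sizes h and k, and the pulsed boundary regime follow the text.

```python
import numpy as np

L, T = 100.0, 10.0
h, k = 1.0, 0.005                       # the steps used in the paper
x = np.arange(0.0, L + h, h)
b1, b2, b3, b4, b5 = 1.0, 0.1, 1.0, 0.1, 0.05   # placeholder coefficients

u_old = np.zeros_like(x); u = np.zeros_like(x)
s_old = np.zeros_like(x); s = np.zeros_like(x)

def pulse(tj, amp=1.0, a=0.1, b=1.0):
    """Boundary pulse of conditions (13); zero after one period."""
    return amp * (1 - np.cos(b * tj)) * np.exp(-a * tj) if tj <= 2 * np.pi else 0.0

for j in range(int(T / k)):
    # Central differences; the wrap-around from np.roll only affects the two
    # boundary nodes, which are overwritten below.
    uxx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / h**2
    sxx = (np.roll(s, -1) - 2 * s + np.roll(s, 1)) / h**2
    ux = (np.roll(u, -1) - np.roll(u, 1)) / (2 * h)
    sx = (np.roll(s, -1) - np.roll(s, 1)) / (2 * h)
    u_new = 2 * u - u_old + k**2 * (b1 * uxx - b2 * sx)
    s_new = 2 * s - s_old + k**2 * (b3 * sxx - b4 * ux - b5 * s)
    u_new[0] = s_new[0] = pulse((j + 1) * k)     # driven end of the slab
    u_new[-1] = s_new[-1] = 0.0                  # quiescent far end
    u_old, u, s_old, s = u, u_new, s, s_new
```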
Numerical results and discussion

Here we consider a concrete phononic material with given values of the physical parameters. The solution is obtained by the above-mentioned method under the given boundary conditions, and a comparison between the present results and those obtained numerically by finite differences is presented. Other types of boundary conditions may be treated equally well. The space and time steps for the numerical scheme are taken as h = 1 and k = 0.005.

Worked application

The material constants for the matrix and the inclusion, together with the parameters used for the reduced micromorphic model, are listed in the accompanying table of phononic material constants.

We have calculated the displacement and the microstrain for all considered times and at many locations. Figures 5 and 6, showing the wave propagation of the displacement and the microstrain calculated numerically at the location x = 2, are in good agreement with the analytical solution based on the one-sided Fourier transform, thus confirming the validity of the obtained results. One notices a decrease in amplitude of both the displacement and the microstrain as the main hump reaches successive locations at increasing distances from the boundary, as compared with the imposed boundary functions. Curve fitting shows that each of these two amplitudes obeys an exponential decrease law in the spatial coordinate x (a sketch of such a fit is given at the end of this section). Comparison with the case of classical elasticity leads to the conclusion that this decrease in displacement is caused by the damping effect of the microstructure. Figures 7 and 8 show the distributions of the stresses as functions of time at the location x = 2. These two functions, as well as the displacement and the microstrain, are characterized by an "overdamping" which takes the form of damped oscillations about the zero value for large times. Figures 9, 10, 11 and 12 are 3-D plots of the displacement, microstrain, stress tensor and couple stress tensor as functions of (x, t), illustrating the propagation of the boundary regime. The calculations showed good stability of the numerical scheme with respect to time; ten thousand time steps were used.

Band gap phenomenon

The analytical solution by the one-sided Fourier transform, through the transformed functions, has enabled us to reveal the existence of band gaps which forbid wave propagation in certain frequency bands. In fact, the calculations show that the condition of non-negativity on the function α(ω), guaranteeing the existence of bounded solutions to the problem, is satisfied everywhere on the real axis of frequencies except when 0.053 ≤ ω ≤ 0.057. When this constraint is carried into the right-hand sides of Eqs. (29) and (30), it produces a denumerable set of band gaps. This is illustrated in Fig. 13.
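The exponential decrease law mentioned in the worked application can be recovered by a least-squares fit; the sketch below fits A·e^{−μx} to hypothetical peak amplitudes read off at successive locations (the paper's measured values are not reproduced here).

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical peak amplitudes of the main displacement hump at locations x.
x = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
amp = np.array([1.00, 0.67, 0.45, 0.30, 0.20])   # illustrative values

def exp_decay(x, A, mu):
    return A * np.exp(-mu * x)

(A, mu), _ = curve_fit(exp_decay, x, amp, p0=(1.0, 0.1))
print(f"amplitude ~ {A:.3f} * exp(-{mu:.3f} * x)")
```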
Conclusions

Two different methods were used to solve the problem: an analytical method based on the use of the one-sided Fourier transform, and a finite-difference numerical scheme. Both methods yielded identical results. The one-sided Fourier transform method has provided the opportunity to obtain an exact solution in closed form, to be used to verify the efficiency of approximate solutions obtained by other methods. The method allows more flexibility in dealing with different initial and boundary conditions, and can be used, in contrast to other methods, in cases where the behavior of the unknown function at large values of time is unknown a priori.

The transformed functions for the displacement and the microstrain, as obtained from the analytical solution, were then used to demonstrate the existence of a frequency band gap, whose location is bound to change if other boundary conditions are used. We have given plots to show how the wave energy is distributed among the frequencies at different locations. Although the elastic wave band gap is usually limited to the high-frequency range [48], our work does not preclude the existence of such band gaps at lower frequencies. It is believed that the type of boundary conditions used will be decisive in this respect. Future work in progress concentrates on the extension of the present model to two-dimensional wave propagation problems.

Funding: Open access funding provided by The Science, Technology & Innovation Funding Authority (STDF) in cooperation with The Egyptian Knowledge Bank (EKB).
LNC-ZNF33B-2:1 gene rs579501 polymorphism is associated with organ dysfunction and death risk in pediatric sepsis

Background: Sepsis is a severe systemic reaction disease induced by bacteria and viruses invading the bloodstream and subsequently causing multiple systemic organ dysfunctions. For example, the kidney may stop producing urine, or the lungs may stop taking in oxygen. Recent studies have shown that long non-coding RNAs (lncRNAs) are related to the dysfunction of organs in sepsis. This study aims to screen and validate sepsis-associated lncRNAs and their functional single nucleotide polymorphisms (SNPs). Result: Unconditional multiple logistic regression based on the recessive model (adjusted odds ratio = 2.026, 95% CI = 1.156–3.551, p = 0.0136) showed that patients with the CC genotype of rs579501 had an increased risk of sepsis. Stratification analysis by age and gender indicated that patients with the rs579501 CC genotype had a higher risk of sepsis among children aged <12 months (adjusted odds ratio = 2.638, 95% CI = 1.167–5.960, p = 0.0197) and among male patients (adjusted odds ratio = 2.232, 95% CI = 1.127–4.421, p = 0.0213). We also found a significant relationship between rs579501 and severe sepsis risk (CC versus AA/AC: adjusted odds ratio = 2.466, 95% CI = 1.346–4.517, p = 0.0035). Stratification analysis for prognosis and number of organ dysfunctions demonstrated that the rs579501 CC genotype increased the risk among non-survivors (adjusted odds ratio = 2.827, 95% CI = 1.159–6.898, p = 0.0224) and the risk of dysfunction of one to two organs (adjusted odds ratio = 2.253, 95% CI = 1.011–5.926, p = 0.0472). Conclusion: Our findings showed that the lnc-ZNF33B-2:1 rs579501 CC genotype increases susceptibility to sepsis. From the medical perspective, the lnc-ZNF33B-2:1 rs579501 CC genotype could serve as a biochemical marker for sepsis.

Background

When children are infected by pathogens, the immune system will start to attack the origin of the infection. The immune system releases chemokines into the bloodstream to fight the bacterial or viral infection; these chemokines can also attack normal organs and tissues, and this immune overreaction is called sepsis, which causes inflammation, blood flow problems, low blood pressure, trouble breathing and vital organ failure, and can even be life-threatening (Mathias et al., 2016). Sepsis in newborns and children is most often caused by bacteria in the blood. Common culprits include group B Streptococcus, Escherichia coli, Listeria monocytogenes, Neisseria meningitidis, Streptococcus pneumoniae, Haemophilus influenzae type B, and Salmonella.
Sepsis is the major cause of admissions to neonatal intensive care units (NICUs) and pediatric intensive care units (PICUs) and of death (McDonald et al., 2012; Dickson et al., 2016). Recent research has reported that 17% of worldwide mortality is associated with sepsis and that 26% of hospital mortality is due to severe sepsis (Fleischmann et al., 2016). In the United States alone, the incidence of sepsis is about 0.3% of the population; almost 72,000 children were hospitalized for sepsis, with a 25% mortality rate, throughout 2013–2014 (Balamuth et al., 2014; Ruth et al., 2014; Weiss et al., 2015). Wang et al. (2014a) reported that in Huai'an, Jiangsu, China, the incidence of sepsis among children was nearly 0.18%, and the overall case fatality rate for sepsis was 3.5%; they estimated a minimum annual incidence of more than 360,000 cases of pediatric sepsis in China. Therefore, early diagnosis and prognosis of sepsis are essential for clinical therapy.

There is overwhelming evidence that sepsis is a highly heterogeneous disease with large inter-individual differences in the disease course, and that genetic factors influence individual vulnerability to, and the severity of, infections (Frodsham and Hill, 2004). For example, the genetic variant rs2737190 lies in the promoter region of the TLR4 gene, and its GG genotype produces an improvement in the immune response (Colin-Castro et al., 2021). Another example is the 593C>T GPx1 SNP, which in sepsis patients leads to higher risks of organ dysfunction, septic shock and mortality (Majolo et al., 2015).

LncRNAs are typically defined as transcripts longer than 200 nucleotides with little or no protein-coding potential (Fatica and Bozzoni, 2014). Long non-coding RNAs (lncRNAs) may regulate gene expression at the epigenetic, transcriptional, and post-transcriptional levels. Furthermore, single nucleotide polymorphisms (SNPs) may alter the function of lncRNAs and affect susceptibility to disease; in the case of PCAT1, for instance, the rs2632159 polymorphism could increase colorectal cancer risk (Yang et al., 2019). Whether SNPs in lncRNAs can be biomarkers for sepsis susceptibility remains unclear. Several studies have reported that some lncRNA polymorphisms, such as those in SOX2OT, CCAT2, and MALAT1, may be associated with an increased risk of sepsis (Chen et al., 2018; Wu et al., 2020; Wu et al., 2021). Lnc-ZNF33B-2:1 (also known as LOC283820/AL022334.7-001/NR_136644.1/ENST00000568976) harbours rs579501, a functional SNP in its noncoding region located at chr10:43246795 (GRCh37.p13). Furthermore, one publication has reported an association of rs579501 with gastric cancer (Duan et al., 2018). This research aims to evaluate the relationship between the candidate lnc-ZNF33B-2:1 allele and sepsis susceptibility, and to assess the effect of the sepsis-associated rs579501 polymorphism on susceptibility to sepsis in the southern Chinese population, to better understand the public health risk.

Study population

We recruited 474 sepsis patients from the PICU and 678 healthy controls who visited the hospital for health checks at the Guangzhou Women and Children's Medical Center in southern China from December 2015 to December 2019 and who presented without any other diseases.
Inclusion criteria for the case group were as follows: 1) children with clinical sepsis in the PICU of Guangzhou Women and Children's Medical Center; 2) admission between November 2015 and November 2019; 3) subjects belonging to the southern area of China, based on the children's place of origin; 4) complete medical records, including complete medical histories; and 5) signed informed consent provided by the children's parents before the study. Inclusion criteria for the control group were as follows: 1) healthy children recruited from the Department of Physical Examination of Guangzhou Women and Children's Medical Center, with no previous history of sepsis; 2) recruitment between November 2015 and November 2019; 3) subjects belonging to the southern area of China, based on the children's place of origin; 4) complete medical records, including complete medical histories; and 5) signed informed consent provided by the children's parents before the study. Exclusion criteria were as follows: 1) past or present history of malignant tumors or genetic diseases; 2) an unclear description provided by the parents; 3) subjects not belonging to the southern area of China; and 4) lack of family consent to the study.

Diagnostic criteria for sepsis, severe sepsis, and septic shock were based on the international definitions (Goldstein et al., 2005). Sepsis describes the syndrome that occurs when severe infection leads to severe systemic illness. Severe sepsis occurs when a bacterial, viral, or fungal infection causes a significant response from the body's immune system, causing a high heart rate, fever, or shortness of breath. Septic shock is the most severe form of sepsis, in which underlying circulatory and cellular metabolism abnormalities are severe enough to significantly increase mortality. However, the variation between different in-patient children is large (high inter-individual variation). Based on the related pediatric sepsis literature (Dellinger et al., 2013; Weiss et al., 2015; Emr et al., 2018), the specific criteria were as follows. Sepsis: 1) evidence of the pathogen obtained by routine laboratory culture identification, or a highly suspected pathogen infection established by clinical and imaging methods, which had to include abnormal body temperature or an abnormal white blood cell count; 2) body temperature greater than 38.5 °C or less than 36 °C; 3) immature neutrophil proportion >10%. Severe sepsis: 1) tachycardia due to insufficient circulating perfusion; 2) decreased peripheral pulse, with capillary refill lasting longer than 2 s; 3) red streaks on the limbs; 4) reduced urine volume; 5) pediatric acute respiratory distress syndrome; 6) two or more organ dysfunctions. Septic shock: 1) severe sepsis with hypoperfusion; 2) hypotension meeting the criteria for children of the given age, with severe vasodilation and hypotension refractory to aggressive fluid resuscitation; 3) need for vasoactive drugs to maintain hemodynamic stability.

DNA extraction and genotyping

We genotyped the lnc-ZNF33B-2:1 rs579501 SNP (assay purchased from Applied Biosystems) using genomic DNA isolated from venous whole blood samples of sepsis patients and healthy controls. The DNA extraction and genotyping procedures have been published previously. The patients' specimens for this study were stored in the ultra-low-temperature freezer of our hospital's clinical biobank.
In addition, we quality-controlled the DNA samples: DNA integrity was checked by gel electrophoresis, DNA concentration was measured using a UV spectrophotometer, and the ratio of absorbance (OD260/OD280) was between 1.6 and 1.8.

Statistical analysis

Initially, we examined the Hardy-Weinberg equilibrium (HWE) of the samples using SAS software (version 9.1; SAS Institute, Cary, NC). Next, a chi-squared (χ²) test was employed to assess the significance of differences in frequency distributions and genotypes between patients and healthy controls. To evaluate the associations between rs579501 and sepsis risk, multivariate logistic regression was used to compute odds ratios (ORs) and corresponding 95% confidence intervals (CIs), adjusted for age, gender, sepsis subtype, prognosis, and number of organs with dysfunction. The statistical analysis procedures have been described previously.

Data source

The gene expression datasets analyzed in our study were obtained from the GEO database (https://www.ncbi.nlm.nih.gov/geo/). Data from a total of 10 sepsis children and 12 healthy controls were retrieved from the database. GSE145227 was based on the GPL23178 platform ([OElncRNAs520855F] Affymetrix Human Custom lncRNA Array). All the data were freely available online, and this study did not involve any experiments on humans or animals performed by any of the authors.

Ethics statement

The present study was approved by the Guangzhou Women and Children Medical Center Ethics Committee (2015042202) and was conducted according to the International Ethical Guidelines for Research Involving Human Subjects stated in the Declaration of Helsinki. The children's families provided written informed consent.

Results

Comparison of the rs579501 A/C polymorphism in the southern Han Chinese population with different regional and ethnic groups

There were significant ethnic differences in the genotype and allele frequencies of rs579501 between the southern Han Chinese (CHS) population and other regional and ethnic populations, with a higher proportion of the CC genotype among the CHS population.

Population characteristics

Table 2 shows the demographic characteristics of the sepsis patients and healthy controls. In total, 474 pediatric patients with sepsis and 678 healthy controls were included in our research. The average age of the sepsis patients was 35.04 ± 34.26 months (range: 1–180) and that of the controls was 35.53 ± 29.37 months (range: 1–168). A total of 63.5% of the sepsis patients were male, compared with 58.85% of the controls. The distributions of age (p = 0.1811) and gender (p = 0.111) were not significantly different between sepsis patients and controls. Among the sepsis patients, 98 children were diagnosed clinically as having sepsis, 291 as having severe sepsis, and 85 as having septic shock. According to the number of dysfunctional organs, 276 children had damage to one to two organs, and 95 children had three or more dysfunctional organs. Ultimately, 80 children suffering from sepsis died.

Association between the lnc-ZNF33B-2:1 rs579501 A/C polymorphism and the risk of sepsis

Table 3 shows the genotype distributions of the lnc-ZNF33B-2:1 rs579501 A/C polymorphism in the sepsis patients and healthy controls. The lnc-ZNF33B-2:1 rs579501 A/C genotype distribution in the healthy control group was in Hardy-Weinberg equilibrium (HWE p = 0.149).

Power calculations

We used online software to calculate the statistical power for rs579501 (https://zzz.bwh.harvard.edu/cgi-bin/cc2k.cgi) with the following parameters: a sample size of 474 patients and 678 controls.
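As an illustration of the HWE check and the recessive-model odds ratio described in the statistical analysis above, the following is a minimal sketch. The genotype counts are hypothetical placeholders (only the resulting statistics, e.g., HWE p = 0.149 and adjusted OR = 2.026, are reported in the paper), and the published ORs are additionally covariate-adjusted, which this unadjusted sketch does not reproduce.

```python
import numpy as np
from scipy.stats import chi2

def hwe_chi2(n_AA, n_AC, n_CC):
    """Chi-squared goodness-of-fit p-value for Hardy-Weinberg equilibrium."""
    n = n_AA + n_AC + n_CC
    p = (2 * n_AA + n_AC) / (2 * n)           # frequency of allele A
    q = 1.0 - p
    expected = np.array([p * p * n, 2 * p * q * n, q * q * n])
    observed = np.array([n_AA, n_AC, n_CC])
    stat = ((observed - expected) ** 2 / expected).sum()
    return chi2.sf(stat, df=1)                # 3 classes - 1 - 1 estimated allele freq

def recessive_or(case_CC, case_other, ctrl_CC, ctrl_other):
    """Unadjusted odds ratio, CC versus AA/AC (recessive model)."""
    return (case_CC * ctrl_other) / (case_other * ctrl_CC)

print("HWE p-value (controls):", hwe_chi2(300, 290, 88))        # placeholder counts
print("recessive-model OR:", recessive_or(60, 414, 55, 623))    # placeholder counts
```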
Prediction of lnc-ZNF33B-2:1 polymorphism centroid secondary structure and target microRNAs

The LNCipedia web server (https://lncipedia.org/) was used to find the lnc-ZNF33B-2:1 rs579501 A and rs579501 C allele nucleotide sequences (Supplementary Table S1). The RNAfold web server (http://rna.tbi.univie.ac.at//cgi-bin/RNAWebSuite/RNAfold.cgi) was used for the prediction of the lnc-ZNF33B-2:1 secondary structure for both the rs579501 A and rs579501 C alleles. The RNAfold prediction showed that the centroid secondary structure was markedly changed between the rs579501 A and C alleles (Figure 1); the minimum free energy (MFE) changed from −30.72 kcal/mol for the A allele to −30.98 kcal/mol for the C allele. Using the lncRNA-binding prediction software program (http://bioinfo.life.hust.edu.cn/lncRNASNP2/), we found that the conversion of A>C in the rs579501 polymorphism may create a binding site for hsa-miR-27a-5p and lead to a loss of hsa-miR-5002-5p binding (Figure 2). The miRDB web server (http://www.mirdb.org/mirdb/index.html) was used to predict the targets of miR-27a-5p, as some of these proteins have been reported to be associated with aggravation of the septic state, such as IL-1 and gasdermin A (Supplementary Table S2). We also found, from the GEO database, that sepsis patients have a higher lnc-ZNF33B-2:1 expression level (Figure 3).

Discussion

Genetic risk factors play an important role in the pathogenesis of sepsis. In our case-control study, we investigated the associations between the lnc-ZNF33B-2:1 gene polymorphism and sepsis risk in a southern Chinese population. We found that the rs579501 CC genotype was associated with an increased risk of sepsis in children and was a risk factor in male patients and in children aged 0–12 months. Furthermore, the stratified analysis revealed that the rs579501 C allele was a risk factor during severe sepsis. Moreover, patients with the rs579501 CC genotype were more susceptible to death and to dysfunction of one to two organs caused by sepsis. Interestingly, there were significant ethnic differences between CHS and other regions and populations in the genotype and allele frequencies of rs579501, with a higher proportion of the CC genotype among the CHS population. This study shows that the lnc-ZNF33B-2:1 rs579501 CC genotype was associated with an increased risk of sepsis in the CHS population.

The MFE structure of a sequence is the secondary structure that is calculated to have the lowest value of free energy. The lower the free energy, the more likely, in theory, the structure will form and the more stable it will be (Wuchty et al., 1999). In our study, we found that the rs579501 A>C substitution lowered the MFE from −30.72 to −30.98 kcal/mol, pointing to a slightly more stable secondary structure for the C allele.

(Table notes: Bold values indicate a statistically significant difference between groups (p-value < 0.05). a: χ² tests were used to determine differences in genotype distributions between the children with sepsis and the controls. b: Adjusted for age and gender.)

FIGURE 1 Bioinformatic prediction of the effect of the lnc-ZNF33B-2:1 polymorphism on the centroid secondary structure. (A) Centroid secondary structure and a mountain plot representation of the MFE structure of the rs579501 A allele; (B) centroid secondary structure and a mountain plot representation of the MFE structure of the rs579501 C allele.

FIGURE 2 Predicted target microRNAs of the lnc-ZNF33B-2:1 polymorphism. The sequence and putative binding sites of miR-5002-5p and miR-27a-5p on the different rs579501 alleles were validated using the lncRNA-binding prediction software program.
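To make the binding-site prediction concrete, the following is a minimal sketch of a 7-mer seed-match scan in the spirit of the lncRNASNP2-style check described above. The miR-27a-5p mature sequence is quoted from miRBase to the best of our knowledge; the lncRNA fragments are hypothetical placeholders, not the actual lnc-ZNF33B-2:1 sequences of Supplementary Table S1.

```python
def revcomp_rna(seq):
    """Reverse complement of an RNA sequence."""
    return seq.translate(str.maketrans("AUCG", "UAGC"))[::-1]

def has_seed_site(mirna, target):
    """True if the reverse complement of the miRNA seed (nt 2-8) occurs in target."""
    seed = mirna[1:8]
    return revcomp_rna(seed) in target

mir_27a_5p = "AGGGCUUAGCUGCUUGUGAGCA"   # hsa-miR-27a-5p (assumed miRBase entry)
lnc_fragment_a = "GGUGCUAAGACCUAG"      # hypothetical A-allele context
lnc_fragment_c = "GGUGCUAAGCCCUAG"      # hypothetical C-allele context (A>C change)
for name, frag in [("A allele", lnc_fragment_a), ("C allele", lnc_fragment_c)]:
    print(name, "has miR-27a-5p seed site:", has_seed_site(mir_27a_5p, frag))
```

In this toy example the single A>C change creates the seed match, mirroring the predicted gain of hsa-miR-27a-5p binding on the C allele.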
At first, it remained unclear how these lnc-ZNF33B-2:1 polymorphisms affect sepsis. Using the lncRNA-binding prediction software program, we found that, after the rs579501 A>C change, hsa-miR-27a-5p could functionally substitute for hsa-miR-5002-5p and bind to lnc-ZNF33B-2:1. The hsa-miR-27a-5p gene is located on chromosome 19 in humans, and a growing body of research has revealed that miR-27a-5p is involved in sepsis progression. In a sepsis-induced lung injury model, the miR-27a-5p expression level was increased, and in an LPS-induced septic mouse model, cell infiltration was attenuated by intratracheal instillation of a miR-27a-5p inhibitor, which alleviated the inflammatory phase of sepsis (Younes et al., 2020). Furthermore, Wang et al. (2014b) reported that miR-27a was upregulated and promoted the inflammatory response in sepsis. However, it remained unclear whether the binding of different miRNAs to lnc-ZNF33B-2:1 has distinct consequences; therefore, we tried to explore the underlying mechanism of the lncRNA-miRNA axis function in sepsis. GSDMA was predicted to be a target protein of miR-27a-5p by the miRNA target gene prediction website. GSDMD is the final common effector of inflammasome activation, forming membrane pores to enable pro-inflammatory cytokine release and pyroptosis. GSDMD can mediate LPS-induced septic myocardial dysfunction and drive tissue injury in lethal polymicrobial sepsis (Chen et al., 2020; Dai et al., 2021). Hu et al. identified a potent inhibitor of GSDMD pore formation that protected against sepsis (Crunkhorn, 2020). Therefore, we speculated that the rs579501 CC-miR-27a-5p-GSDMD pathway is involved in sepsis progression. Further studies will be necessary to validate these concepts.

Some limitations of this study should be noted. First, in the aforementioned analyses, we also found the rs579501 CC genotype in a higher proportion of the CHB population; because this research was subject to geographical constraints, we only analyzed the population in southern China, and the study therefore lacks regional and ethnic comparisons. Second, this study also lacks dynamic monitoring of miR-27a-5p and GSDMD levels during the follow-up period.

Conclusion

In conclusion, the present study demonstrated that the lnc-ZNF33B-2:1 rs579501 C variant is a risk factor for sepsis in southern Chinese children. The risk effect was reflected more substantially in children aged <12 months, in non-survivors, and in male patients. Moreover, the risk effect was more pronounced in the severe sepsis subgroup than in the sepsis subgroup. In this study, we sought to identify sepsis-associated lncRNAs as potential biomarkers for diagnosis and therapy, and we uncovered important clues for further study of the function and mechanism of lncRNAs in sepsis. Our study illustrates that this lncRNA polymorphism is associated with susceptibility to sepsis. Extensive functional research and additional well-designed population-based prospective studies in different ethnic groups are warranted to confirm and extend our findings.

Data availability statement

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Author contributions

All authors contributed significantly to this work.
KC, YL, KL, YW, KX, LF, WL, HZ, BW, and LP performed the research study and collected the samples and data. YX and HY analyzed the data. DC and XG designed the research study. ZL was a major contributor to writing the manuscript. DC and ZL prepared all the tables. All authors have read and approved the manuscript.

FIGURE 3 lnc-ZNF33B-2:1 expression levels in sepsis and healthy control samples (data from GSE145227), revealing that lnc-ZNF33B-2:1 shows significantly higher expression in sepsis patients (n = 10) than in normal healthy donors (n = 12).
Effectiveness of home-based "egg-suji" diet in management of severe acute malnutrition of Rohingya refugee children

Background: The prevalence of severe acute malnutrition (SAM) among Rohingya children aged 6–59 months who took shelter in the refugee camps of Cox's Bazar District, Bangladesh, was found to be 7.5%. Objective: To measure the effectiveness of a homemade diet in the management of severe acute malnutrition of Rohingya refugee children. Methods: In total, 645 SAM children (MUAC < 11.5 cm) aged 6–59 months were selected and fed the homemade diet by their caregivers for 3 months and followed up for the next 2 months. Nutrition counseling, demonstration of food preparation and the ingredients of the food (rice powder, egg, sugar and oil) were provided to the families for 3 months to cook the "egg-suji" diet to feed the children. Results: The study children were assessed for nutritional status. After intervention, energy intake from the diet increased from 455.29 ± 120.9 kcal/day to 609.61 ± 29.5 kcal/day (P = 0.001) in 3 months. The frequency of daily food intake improved from 4.89 ± 1.02 to 5.94 ± 0.26 (P = 0.001). The body weight of the children increased from 6.3 ± 1.04 kg to 9.93 ± 1.35 kg (P = 0.001), height increased from 67.93 ± 6.18 cm to 73.86 ± 0.35 cm (P = 0.001), and MUAC improved from 11.14 ± 1.35 cm to 12.89 ± 0.37 cm (P = 0.001). HAZ improved from −3.64 ± 1.35 to −2.82 ± 1.40 (P = 0.001), WHZ improved from −2.45 ± 1.23 to 1.03 ± 1.17 (P = 0.001), WAZ improved from −3.8 ± 0.61 to −0.69 ± 0.78, and MUACZ improved from −3.32 ± 0.49 to 1.8 ± 0.54 (P = 0.001) from the beginning to the end of observation. Morbidity was found in 5.12% of the children in the first month, which reduced to 0.15% at the end of follow-up. Conclusions: Nutritional counseling and the supply of food ingredients at the refugee camps resulted in complete and sustained recovery from severe malnutrition for all children.

Introduction

An estimated 622,000 Rohingya refugees fled from Myanmar to Cox's Bazar, Bangladesh, by November 2017. The total number of Rohingya refugees was 835,000, and the estimated proportion of children under 5 years was 29%, giving an estimated total of 242,150 under-five children in the camps [1]. The Rohingya refugees reached Bangladesh with few assets, and they took shelter in the hillside terrain of Cox's Bazar district [2]. Nutritional assessment in the camps showed a prevalence of 24.3% global acute malnutrition (GAM) and 7.5% severe acute malnutrition (SAM) [3]. The children of the Rohingya population in Cox's Bazar were therefore nutritionally vulnerable. For young children, severe acute malnutrition (SAM) is a high-risk condition requiring urgent rehabilitation, and referral of SAM children with complications could save the lives of millions of children [4]. On the other hand, socioeconomically privileged children were found to have higher bottle-feeding rates in most countries [5]. Nutrition education has been proven to significantly improve the nutritional status of moderately malnourished children in Bangladesh within 3 months through improving the child-feeding behavior of mothers [6]. Another study showed that an increase in the frequency of complementary feeding was associated with higher weight gain in the nutrition intervention group compared with the control group; following the intervention, the growth of the children was sustained [6,7].
Presently, many international agencies are promoting Ready-to-Use Therapeutic Foods (RUTF) for the treatment of SAM children [8,9]. However, such interventions have not been found very successful in sustaining recovery from SAM in the subcontinent [10]. As a sustainable and cost-effective strategy, nutrition education for mothers has been reported to be useful for improving and sustaining better nutritional status of their children [11]. Home-prepared diets have been tried with success in stable setups, but they have not been tried in an emergency situation like that of the displaced Rohingya children. We conducted this trial to obtain scientific evidence on the impact, feasibility and acceptability of improved home-based nutrient-dense recipes for recovery from SAM. We tested the hypothesis that providing support with food ingredients and advice on the preparation of home-based nutrient-dense recipes in the refugee camps can cure severe malnutrition in children, with sustainability, without commercial food supplements.

Methods

We identified and enrolled 650 uncomplicated SAM children aged 6–59 months with MUAC < 11.5 cm (according to WHO, children who are clinically well, without signs of infection or other indications for hospital admission, and who pass an appetite test). A retained appetite is regarded as indicating the absence of severe metabolic disturbance, and such subjects are deemed to be most appropriately managed as outpatients. Children who were taking RUTF or refused to cooperate were excluded. Anthropometric measurements such as body weight, length/height and MUAC were taken by trained research assistants to establish severe wasting (WHZ < −3 or MUAC < 11.5 cm) [12]. The purpose of the study was explained to the caregivers of the children before consent was taken. Trained research staff worked with 13 groups of Rohingya children in Cox's Bazar District: ten groups of children were in Ukhia upazila camp sites, and three groups were in Teknaf upazila camp sites. Nutrition education for caregivers was conducted weekly, with demonstrations of cooking methods and a feeding schedule for improved feeding practice of their children. A total of 645 children were finally enrolled, because one child died and four children migrated out during the first week of the study (Fig. 1). The intervention was given for 3 months, and the children were then followed up for 2 more months.

The refugee families were given shelter by the Government of Bangladesh in small tents with cooking devices and kerosene stoves. Their traditional diet in Myanmar was sutki-vat (dried fish and boiled rice). In the camp, they usually consumed rice, dried fish, meat, potato, pulses, etc.; sometimes, a few families were seen growing pumpkins around their camp. The refugees also had access to potable water, bathrooms and toilets, with usually five families allocated to one toilet. They also had tube-well facilities, where one tube well was used by the members of 10–15 tents. Some of them worked as daily workers in the camp for the distribution of relief materials. Most of the Rohingya refugees kept contact with Myanmar and continued small businesses such as selling Burmese cloths, pickles, toys, creams and cosmetics. The Rohingya people received a more than adequate amount of relief materials, some of which were exchanged for cash, and they received additional financial support from the Government of Bangladesh and international NGOs. They appeared reluctant to return to their own country, as the situation had not improved to their expected level.
Nutrition education and nutrient-dense recipe demonstration

The benefits of the ingredients of the nutrient-dense recipe "egg-suji", and how these could improve the nutritional status of their children, were explained to the mothers and caretakers, and the preparation of the recipe was demonstrated. The main components were rice powder, egg, sugar and cooking oil. Nutrition education was provided on the basis of the nutrition triangle (food security, disease control, caring practices), as mentioned in an earlier report [6]. Counseling emphasized specific messages on increasing the portion size and the frequency of feeding, the methods of preparation of the nutrient-dense recipe (egg-suji), and storage, hygiene practices and serving, to ensure safe food handling and consumption. The selected recipe "egg-suji" had already been tested in an improved-recipe trial all over Bangladesh, in which local availability and acceptability were examined; it was adapted from a traditionally used local diet, improved in nutrient density in the recipe trial at the household level, and then published in the complementary recipe book of the FAO [11]. Egg-suji has been used successfully in a hospital setup (Institute of Child and Mother Health-ICMH, Matuail, Narayanganj, Dhaka) for the management of children with severe acute malnutrition.

At the beginning of the study, the Rohingya children were offered two recipes, "khichuri" and "egg-suji", but the caretakers rejected khichuri, calling it "gaitta vat" (a mixed food of rice and lentils which, according to their home experience, causes diarrhea in children); they accepted the ingredients and recipe of egg-suji. Thereafter, the study continued with providing the ingredients of egg-suji to the respective caretakers. Ingredients of egg-suji consisting of 7 eggs, 500 g rice powder, 500 ml oil and 250 g sugar were distributed at weekly intervals for 3 months to the mothers or caregivers of the SAM children. At the beginning, egg-suji was made daily by the mothers or caregivers with one egg (50 g), two tablespoons of rice powder (suji) (30 g), three teaspoons of sugar (15 g), two teaspoons of oil (10 g) and about 200 ml of water; the mothers fed this food to the children 2–3 times a day, and this amount of egg-suji provided 324 kcal of energy. When the acceptability of the diet improved, the amounts of oil and suji were increased. In the third month of intervention, the improved egg-suji contained one egg (50 g), five tablespoons of rice powder or suji (75 g), three teaspoons of sugar (15 g), five teaspoons of oil (25 g) and water. This amount of egg-suji provided 614 kcal of energy, of which carbohydrate provided 46.1%, protein 10.1% and fat/oil 44.8% (Table 1). The cost of this amount of egg-suji was BDT 18, equivalent to USD 0.22 (2018). The mothers were encouraged to continue the family diet for the children and to continue breast-feeding where applicable. The acceptability of egg-suji increased substantially over time, and the mothers were very happy to feed their children the food which they had cooked themselves. After 3 months of intervention, the food supply was stopped and follow-up was continued for 2 months; the mothers were advised to prepare egg-suji and family foods from their own resources and feed their children. After the intervention was completed, it was advised that the rehabilitated SAM children be integrated into their normal family feeding setup.
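As a rough cross-check of the stated energy breakdown, the following is a minimal sketch of the recipe energy calculation under assumed per-100-g macronutrient compositions and standard Atwater factors; the composition values are illustrative approximations, not the entries of the Food Composition Table of Bangladesh used in the study.

```python
ATWATER = {"protein": 4.0, "carb": 4.0, "fat": 9.0}   # kcal per gram

# grams of macronutrient per 100 g of ingredient (approximate assumptions)
COMPOSITION = {
    "egg":         {"protein": 12.6, "carb": 0.7,   "fat": 9.5},
    "rice_powder": {"protein": 6.7,  "carb": 79.0,  "fat": 0.6},
    "sugar":       {"protein": 0.0,  "carb": 100.0, "fat": 0.0},
    "oil":         {"protein": 0.0,  "carb": 0.0,   "fat": 100.0},
}
recipe = {"egg": 50, "rice_powder": 75, "sugar": 15, "oil": 25}  # third-month recipe, g

kcal = {m: sum(COMPOSITION[i][m] * g / 100.0 for i, g in recipe.items()) * ATWATER[m]
        for m in ATWATER}
total = sum(kcal.values())
print(f"total = {total:.0f} kcal; "
      + ", ".join(f"{m}: {v / total:.1%}" for m, v in kcal.items()))
```

With these assumptions the script returns roughly 615 kcal, with fat supplying a little under half of the energy, consistent with the 614 kcal and the percentage breakdown reported above.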
Data collection

Trained research staff identified the SAM children (MUAC < 11.5 cm) through anthropometric measurements and collected baseline information with the help of a Rohingya lead person called a majhi (boatman). The food intake of the children was estimated by the 24-h recall method, and the mothers were interviewed about their experiences as well as any difficulties they faced. Food composition was calculated from the Food Composition Table of Bangladesh [13]. Body weights of the children were measured using an electronic weighing scale (Tanita, HD-314) with a sensitivity of 100 g. For children < 2 years, length was measured using a locally made length board with a sensitivity of 1 mm, in which a measuring tape was fixed between a moveable foot plate and a fixed head piece.

Table 1 (excerpt, composition of the daily amount of egg-suji): Fat 30.6 g; Carbohydrate 67.9 g; Dietary fiber 2.9 g; Ash 0 g.

Quality control measures

Along with counseling and the supply of food ingredients, project staff measured the anthropometric indices, e.g., the body weight, height and MUAC of the children, at given intervals. Senior investigators visited the study sites every two weeks and monthly to monitor the activities of the staff working at the Rohingya camp sites. For five percent of the collected data, the mothers were re-interviewed and the anthropometric measurements were re-examined by the investigators within 1 week for verification. Four research representatives from the Bangladesh Medical Research Council (BMRC) also visited the Rohingya camps to monitor the activities of the study as independent observers. The study was approved by the Ethical Review Committee of the BMRC.

Statistical analysis

SPSS software (version 20) and WHO Anthro software were used for the statistical analysis of the data. Multiple regression was used to measure the relationship of weight gain with the determinant variables. Repeated-measures analysis of variance (ANOVA) was used to compare the means of the time-series data of the intervention. Statistical significance was accepted when the P value was less than 0.05.

Results

Among the Rohingya children (n = 645), there were more girls than boys (59.8% vs 40.9%). The mean age of the children was 15.2 ± 6.6 months; 220 (34.1%) children were aged 6 months to <1 year, 335 (51.9%) were aged 1–2 years, and 90 (14%) were aged more than 2 years. At baseline, the mean ± SD body weight of the study children was 6.3 ± 1.0 kg, height was 67.9 ± 6.2 cm and MUAC was 11.1 ± 1.4 cm (Table 2). The energy intake of the children from family food at baseline was 346.5 ± 22.6 kcal/day. During the first month of intervention, the mean energy intake of the children from egg-suji was 455.3 ± 120.9 kcal/day, which increased to 578 ± 84.9 kcal/day in the second month and to 609.6 ± 29.5 kcal/day in the third month (P = 0.001). Over the same period, the mean feeding frequency improved from 4.9 ± 1.0 to 5.7 ± 0.8 and 5.9 ± 0.3 at the end of the second and third months of intervention (P = 0.001), and the trend continued during the follow-up period (Fig. 2). The body weight of the study children increased from 6.3 ± 1.0 kg to 9.4 ± 1.3 kg after 3 months of intervention and then to 9.9 ± 1.4 kg at the end of follow-up (P = 0.001). The height of the children increased from 67.9 ± 6.2 cm to 72.7 ± 6.2 cm after the intervention and then to 73.9 ± 0.4 cm at the end of follow-up (P = 0.001). The MUAC of the study children increased from 11.1 ± 1.4 cm to 12.6 ± 6.3 cm at the end of intervention and to 12.8 ± 0.4 cm at the end of follow-up (P = 0.001) (Table 3).
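The z-scores reported in the following paragraphs (HAZ, WHZ, WAZ, MUACZ) were obtained with WHO Anthro, which internally uses the LMS (Box-Cox) method against the WHO growth standards. The following is a minimal sketch of that computation; the L, M, S reference values used here are illustrative placeholders, not actual WHO standard entries.

```python
import math

def lms_zscore(measurement, L, M, S):
    """Box-Cox (LMS) z-score: ((x/M)**L - 1)/(L*S), or log(x/M)/S when L == 0."""
    if L == 0:
        return math.log(measurement / M) / S
    return ((measurement / M) ** L - 1.0) / (L * S)

# Example: a child weighing 6.3 kg evaluated against placeholder weight-for-height
# reference values; the result falls in the wasted range, as expected at baseline.
print(f"WHZ = {lms_zscore(6.3, L=-0.35, M=7.6, S=0.082):.2f}")
```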
The mean gain in body weight from baseline to the end of intervention was 3.1 kg, and the gain from the end of intervention to the end of follow-up was 0.6 kg; the mean height gain from baseline to the end of intervention was 4.8 cm and from the end of intervention to the end of follow-up was 1.2 cm; and the difference in MUAC between baseline and the end of intervention was 1.4 cm and between the end of intervention and the end of follow-up was 0.2 cm. Consistent with the increases in weight, height and MUAC, there was a reduction in stunting, as the height-for-age Z score (HAZ) improved from −3.6 ± 1.3 to −2.9 ± 1.4 after the intervention and to −2.8 ± 1.4 at the end of follow-up (P = 0.001). Wasting reduced significantly, as the weight-for-height Z score (WHZ) improved from −2.5 ± 1.2 to 0.7 ± 1.2 after the intervention and then to 1.0 ± 1.1 after follow-up (P = 0.001). Underweight reduced, as the weight-for-age Z score (WAZ) improved from −3.8 ± 0.6 at baseline to −1.0 ± 0.8 after the intervention and to −0.7 ± 0.8 after follow-up (P = 0.001). The mid-upper arm circumference Z score (MUACZ) improved from −3.3 ± 0.5 to −2.0 ± 0.5 after the intervention and then to 1.8 ± 0.6 after follow-up (P = 0.001) (Table 3).

At baseline, 99.1% of the children were underweight (< −2 WAZ), but the proportion reduced to 6.2% at the end of follow-up; 96.3% of the children were severely underweight (< −3 WAZ) at baseline, but at the end of follow-up the proportion had reduced to 0.5% (Table 4). Initially, 90% of the children were stunted (< −2 HAZ), reducing to 75.4% at the end of follow-up, and the proportion of severely stunted (< −3 HAZ) children reduced from 70.7% at baseline to 45.3% at the end of follow-up. At baseline, 65.3% of the children were wasted (< −2 WHZ), but the proportion reduced to 0.3% at the end of follow-up; the proportion of severely wasted (< −3 WHZ) children was 30% at baseline, but severe wasting was completely eliminated after the intervention. Initially, 100% of the children had SAM (MUAC < 11.5 cm) at baseline, but this reduced to 0.0% at the end of follow-up. In the first month of intervention, 33 (5.1%) children were found to be sick; the proportion reduced to 25 (3.9%) in the second month of intervention and further to 3 (0.5%) in the third month, and only 1 child (0.2%) became sick during the 2 months of the follow-up period. The prevalence of SAM reduced from baseline to the end of intervention and from intervention to follow-up (Fig. 3). At baseline, the WHZ distribution of the children lay to the left of the WHO standard normal curve; after the intervention, the distribution of WHZ moved to the right, towards the WHO standard Gaussian curve. Multiple regression for the weight gain of the children at the end of intervention showed significant associations (P = 0.001) with the initial weight of the children, the energy intake from egg-suji and the frequency of feeding (Table 5). During the follow-up period, from home food, 86.4% of the children consumed rice, 32.6% consumed fish/meat/egg, 25.7% took pulses, and 52.1% consumed vegetables, as well as other foods such as rice, dried fish, meat and potato (data not shown).
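For illustration of the Table 5 analysis above, the following is a minimal sketch of a multiple regression of weight gain on initial weight, energy intake from egg-suji and feeding frequency. The data are simulated placeholders (with assumed effect sizes), not the study data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 645
initial_weight = rng.normal(6.3, 1.0, n)      # kg, matching the baseline mean/SD
energy = rng.normal(610, 30, n)               # kcal/day from egg-suji (third month)
frequency = rng.normal(5.9, 0.3, n)           # feeds per day
weight_gain = (0.2 * initial_weight + 0.003 * energy      # assumed effects
               + 0.15 * frequency + rng.normal(0, 0.3, n))

X = sm.add_constant(np.column_stack([initial_weight, energy, frequency]))
model = sm.OLS(weight_gain, X).fit()
print(model.summary(xname=["const", "initial_weight", "energy", "frequency"]))
```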
Discussion

Our study showed that the feeding of a homemade recipe, together with diverse family foods, by the mothers could improve the nutritional status of the children within 3 months. The food was low in cost: USD 0.22 (BDT 18) provided 614 kcal over six servings in a whole day. According to our results, if food ingredients are made available, SAM children can be cured at home without commercial foods. During the follow-up period of the study, we did not provide any food support, yet we observed that the nutritional status of the children improved further. The recipe was made by the family itself and fed alongside the family diet. Initially, the Rohingya mothers were reluctant to accept the food ingredients, but after the role of each ingredient in recovery from SAM was explained, they began to accept them. Demonstration of the nutrient-dense food preparation at home by trained health workers helped the mothers to cook and feed their children the homemade food, with a change in child-feeding behavior. Complete recovery from SAM was a clear demonstration of women's empowerment: the mothers gained new knowledge and skills and were able to cook the right food to cure their children of severe malnutrition.

Recovery from malnutrition requires the intake of sufficient energy, protein and micronutrients. During the rehabilitation phase, egg-suji provided essential amino acids, energy and micronutrients; it was microbiologically safe, being cooked by the mothers, and scientifically sound for the growth of young children. The food was low in cost (BDT 18 for 614 kcal), whereas in Malawi, a study calculated that the food cost of treating SAM with RUTF had monopolized 25% of all child health expenditures to reach only 2% of the child population [14]. A systematic review showed that the total cost of management per SAM child was $203 when giving RUTF in Zambia, whereas in our study the total cost was only $40, i.e., 20% of the RUTF management cost [15].

The gain in body weight was faster than the gain in height during the rehabilitation period. Stunting is a sign of long-standing failure to thrive, mainly affecting skeletal growth, which is dependent on micronutrients such as calcium, phosphorus, zinc and the vitamin D content of the diet. During the 3 months of intervention, the increase in body weight by 57.6% of the initial weight was remarkable, and this was significantly correlated with the energy intake. The impact of illness was not significantly associated with gain in body weight, because only a very small proportion of the children suffered from illness during the intervention. The rate of height gain was also notable, as there was a 25.4 percentage-point (36%) reduction in severe stunting, which may be related to the intake of a high-quality diet providing essential nutrients. The composition of this therapeutic diet was close to the usual home diet, as carbohydrate provided about half of the energy, protein about one-tenth, and fat a little more than one-third. A study in India compared the acceptability of RUTF (58%) and a cereal-legume-based khichri (77%), with khichri found to be better accepted [16]. The Rohingya children with SAM were at high risk of morbidity and mortality without any support in the emergency situation, so it was unethical to keep such children in a control group without giving any nutritional support for rehabilitation. In addition to continued breast-feeding, the mothers were advised to feed their children 5 times a day, as 3 major meals and 2 snacks [17]. Frequency of feeding is considered a critical component of child-feeding practice, as it can ensure an increased amount of food intake despite the small size of a child's stomach. Other studies have reported that when caregivers fed their children frequently, the children were more likely to improve in nutritional status [6,18]. Egg-suji is derived from a family snack food for children and adults in Bangladesh.
It was also chosen for its convenience: it cooks in a short time (7–10 min), it is an attractive, tasty food in normal use, and it has high acceptability in Bangladesh [11]. While a sick child was being fed the food for recovery, adult family members were seen to be supportive and were not eager to share the sick child's egg-suji. The ingredients of egg-suji had the appropriate nutrients to promote the growth of these severely malnourished children. Egg is considered a powerhouse of nutrition due to its excellent profile as a nutrient-dense food containing a balanced source of essential amino acids and fatty acids, some minerals and vitamins, as well as a number of defensive factors that protect against bacterial and viral infections [19,20]. Egg contains methionine, a limiting amino acid, which helps in protein synthesis and the formation of enzymes and hormones. Sugar provided direct energy, reduced the risk of hypoglycemia and improved the taste of the diet. Rice powder provided energy, B vitamins and protein; oil improved the taste, enriched the energy density of the diet, provided essential fatty acids and reduced the viscosity of the diet, which helped the children to swallow. To improve feeding practices, it was helpful to empower the mothers with knowledge of the food ingredients and skills for cooking the diet "egg-suji" [21]. The homemade recipe was convenient and sustainable after the intervention period, and the research staff supervised and encouraged the mothers to sustain good child-feeding practice to overcome severe malnutrition.

Frequent morbidity is a barrier against normal growth [22]. Initially, many of our study children suffered from fever, diarrhea, pneumonia, diphtheria, skin disease, etc.; such children were referred to the nearby primary healthcare centers within the camps. Along with the improvement in nutritional status, the morbidity of the children gradually reduced, indicating improved immunity and better hygiene practices [23]. Further, a systematic review of 14 studies showed very little weight gain with RUTF (0.11 g/kg/day) compared with the homemade nutrient-dense egg-suji recipe (5.0 g/kg/day) in our study [24]. Studies have indicated that SAM children who are hospitalized with complications such as infections, leukocytosis, imperceptible pulse, pneumonia, septicemia and hypothermia have a high risk of mortality [25]. The Rohingya children were well cared for by the medical centers and free health centers of the Government of Bangladesh in their camps. It is important to note the extremely low mortality rate of our study children; it appears that the Rohingya children as a whole were well protected, with an under-five mortality rate of 0.031 per thousand during the study period [25]. Further, the study subjects were selected according to the WHO criteria, i.e., there were no immediate reasons for hospitalization, and during the rehabilitation the parents were always encouraged to seek medical advice for their children in case of illness. However, one of the study children died during the study period. The results of our study indicate that RUTF, or any other commercially made or imported ready-to-eat food containing high fat and high energy (60% from fat), is not required for the management of SAM in children [26]. Other studies have also revealed scientific evidence of better outcomes with homemade food than with RUTF for SAM treatment [23]. RUTF is extremely expensive compared with egg-suji feeding (USD 5 vs 0.22, i.e., 23 times more expensive) [15].
SAM children are abundant in the resource-poor countries of the world; therefore, the cost of treatment is a major concern where other competing needs are not met. Some limitations of our study need to be mentioned. First, we faced a communication problem with the Rohingya refugees because of language, but we overcame it with the help of locally available interpreters. Second, our working hours were short during winter due to security concerns. Third, when we stopped the food supply, the mothers appeared to lose interest in speaking with us as before. Lastly, we admit that we could not have a control group of similar SAM children, as this was not ethically approved in the refugee camp situation. This research used a standard design for the intervention and the selection of the sample; the intervention combined the supply of food ingredients with the demonstration of the nutrient-dense egg-suji recipe and knowledge-based nutrition counseling. The strengths of the study were the appropriate methods for intervention, data collection, quality control, data analysis and interpretation; moreover, the study results offer a unique opportunity to save and rehabilitate SAM children at home without consuming a large amount of resources.

Conclusion

We conclude that family-made, local, diverse and nutrient-dense food, fed by the children's own caretakers, can cure SAM in children in refugee camps and eliminate it completely. The results of this study encourage the application of such a simple and cost-effective solution for the treatment of uncomplicated SAM children.
Multilevel Methods for Uncertainty Quantification of Elliptic PDEs with Random Anisotropic Diffusion

We consider elliptic diffusion problems with a random anisotropic diffusion coefficient, where, in a notable direction given by a random vector field, the diffusion strength differs from the diffusion strength perpendicular to this notable direction. The Karhunen-Loève expansion then yields a parametrisation of the random vector field and, therefore, also of the solution of the elliptic diffusion problem. We show that, given regularity of the elliptic diffusion problem, the decay of the Karhunen-Loève expansion entirely determines the regularity of the solution's dependence on the random parameter, also when considering this higher spatial regularity. This result then implies that multilevel collocation and multilevel quadrature methods may be used to lessen the computational complexity when approximating quantities of interest, like the solution's mean or its second moment, while still yielding the expected rates of convergence. Numerical examples in three spatial dimensions are provided to validate the presented theory.

Introduction

The numerical approximation of quantities of interest, such as the expectation, the variance, or more general output functionals, of the solution of a diffusion problem with a scalar random diffusion coefficient by multilevel collocation or multilevel quadrature methods has been considered previously, see e.g. [2,8,13,14,20,23,27,34] and the references therein; in this isotropic case, the mixed smoothness required for the use of such multilevel methods has been provided in [9] for uniformly elliptic diffusion coefficients and in [26] for log-normally distributed diffusion coefficients. However, in simulations of certain diffusion phenomena in science and engineering, the diffusion that needs to be modeled may not necessarily be isotropic. One specific application we have in mind here stems from cardiac electrophysiology, where the electrical activation of the human heart is considered. It is known that the fibrous structure of the heart plays a major role for the electrical and mechanical properties of the heart. And while the fibres have a complex and generally well-organised structure, see e.g. [11,30,31,32], the exact fibre orientation may vary between individuals and also over time in an individual, for example due to the presence of scarring of the heart. More generally, we wish to be able to model diffusion in fibrous media, where the fibre direction and the diffusion strength in fibre direction are subject to uncertainty. For this setting, the following random anisotropic diffusion coefficient was defined in [24]:

A[ω](x) = a (Id − V[ω](x)V[ω](x)ᵀ/‖V[ω](x)‖₂²) + ‖V[ω](x)‖₂ V[ω](x)V[ω](x)ᵀ/‖V[ω](x)‖₂²,

where a is a given value and V is a random vector-valued field over a given spatial domain D and a given probability space (Ω, F, P). The fibre direction is hence given by V/‖V‖₂, the diffusion strength in the fibre direction is ‖V‖₂, and the diffusion strength perpendicular to the fibre direction is given by a. While we only consider this model hereafter, the techniques we use may also be applied straightforwardly to other models of random anisotropic diffusion coefficients, such as, for example, a model in three spatial dimensions built from an orthonormal frame: here, f, t and s are vector fields describing the fibre, the transverse sheet and the sheet-normal directions in the heart, which at each point in D yield an orthonormal basis.
These vector fields could, for example, be derived from measurements or be generated by an algorithm such as the Laplace-Dirichlet rule-based algorithm described in [3]. Note that, in this model, the diffusion strengths are random fields a_f, a_t and a_s, and the fibre and sheet directions are locally angularly perturbed by random fields α₁, α₂ and α₃.

We shall consider the second-order diffusion problem with this uncertain diffusion coefficient A, given, for almost every ω ∈ Ω, by

−div(A[ω](x)∇u[ω](x)) = f(x) in D, u[ω](x) = 0 on ∂D,

with the known function f as the source. The result of this article is then as follows: given spatial H^s-regularity of the underlying diffusion problem, provided by sufficient smoothness of the right-hand side f and the domain D, the random solution u admits analytic regularity with respect to the stochastic parameter also in the H^s(D)-norm, provided that the random vector-valued field offers enough spatial regularity. This mixed regularity is the essential ingredient for applying multilevel collocation or multilevel quadrature methods without deteriorating the rate of convergence, see [20] for instance.

The rest of the article is organised as follows: In Section 2, we provide basic definitions and notation for the functional analytic framework, state the model problem, and reformulate it, by using the Karhunen-Loève expansion of the diffusion-describing random vector-valued field V, into its stochastically parametric and spatially weak formulation. Section 3 then deals with the regularity of the solution of the stochastically parametric and spatially weak formulation of the model problem with respect to the stochastic parameter, given some higher spatial regularity in the model problem. We then use the fact that the higher spatial regularity can be kept, when considering the regularity of the solution with respect to the stochastic parameter, to arrive at convergence rates for multilevel quadrature, such as multilevel quasi-Monte Carlo quadrature, for approximating the solution's mean and second moment. Numerical examples are provided in Section 4 as validation; specifically, we use multilevel quasi-Monte Carlo quadrature to approximate the solution's mean and second moment in a setting with three spatial dimensions. Lastly, we give our conclusions in Section 5.

Problem formulation

2.1. Notation and precursory remarks. For a given Banach space X and a complete measure space M with measure µ, the space L^p_µ(M; X) for 1 ≤ p ≤ ∞ denotes the Bochner space, see [25], which contains all equivalence classes of strongly measurable functions v : M → X with finite norm

‖v‖_{L^p_µ(M;X)} := (∫_M ‖v(m)‖_X^p dµ(m))^{1/p} for p < ∞,  ‖v‖_{L^∞_µ(M;X)} := ess sup_{m∈M} ‖v(m)‖_X.

A function v : M → X is strongly measurable if there exists a sequence of countably-valued measurable functions v_n : M → X such that, for almost every m ∈ M, we have lim_{n→∞} v_n(m) = v(m). Note that, for finite measures µ, we also have the usual inclusion L^q_µ(M; X) ⊂ L^p_µ(M; X) for 1 ≤ p ≤ q ≤ ∞.

Let X, X₁, …, X_r and Y be Banach spaces; then we denote the Banach space of bounded linear maps from X to Y by B(X; Y). Furthermore, we recursively define the spaces of multilinear maps B(X₁, …, X_r; Y) := B(X₁; B(X₂, …, X_r; Y)) and the special case B^r(X; Y) := B(X, …, X; Y) with r copies of X. For T ∈ B(X₁, …, X_r; Y) and v_j ∈ X_j, we use the notation Tv₁ ⋯ v_r := T(v₁, …, v_r) ∈ Y.

Subsequently, we will always equip R^d with the norm ‖·‖₂ induced by the canonical inner product ⟨·, ·⟩ and R^{d×d} with the induced operator norm ‖·‖₂. Then, for v, w ∈ R^d, the Cauchy-Schwarz inequality gives us ⟨v, w⟩ ≤ ‖v‖₂‖w‖₂, with equality only if v and w are parallel, and we also have, by straightforward computation, that ‖vwᵀ‖₂ = ‖v‖₂‖w‖₂.
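The rank-one identity ‖vwᵀ‖₂ = ‖v‖₂‖w‖₂ noted above is easy to verify numerically: the spectral norm of a rank-one matrix equals the product of the vector norms. The dimension d = 3 in the sketch below is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(1)
v, w = rng.normal(size=3), rng.normal(size=3)
lhs = np.linalg.norm(np.outer(v, w), 2)        # spectral norm of v w^T
rhs = np.linalg.norm(v) * np.linalg.norm(w)
print(lhs, rhs, np.isclose(lhs, rhs))
```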
We also note that, to avoid the use of generic but unspecified constants in certain formulas, we write C ≲ D to mean that C can be bounded by a multiple of D, independently of parameters on which C and D may depend. Obviously, C ≳ D is defined as D ≲ C, and we write C ≂ D if C ≲ D and C ≳ D. Lastly, note that N denotes the natural numbers including 0 and N* the natural numbers excluding 0.

2.2. Model problem. Let (Ω, F, P) be a separable, complete probability space. Then, we consider the following second-order diffusion problem with a random anisotropic diffusion coefficient: for almost every ω ∈ Ω,

−div(A[ω](x)∇u[ω](x)) = f(x) in D,  u[ω](x) = 0 on ∂D,   (1)

where D ⊂ R^d is a Lipschitz domain with d ≥ 1 and the function f ∈ H^{−1}(D) describes the known source. The random anisotropic diffusion coefficient is given as the random matrix field A ∈ L^∞_P(Ω; L^∞(D; R^{d×d})), which satisfies the uniform ellipticity condition

a ≤ ess inf_{x∈D} λ_min(A[ω](x)) ≤ ess sup_{x∈D} λ_max(A[ω](x)) ≤ ā for almost every ω ∈ Ω,   (2)

for some constants 0 < a ≤ ā < ∞, and is almost surely symmetric almost everywhere. Without loss of generality, we assume a ≤ 1 ≤ ā. We specifically consider diffusion coefficients of the form

A[ω](x) = a (Id − V[ω](x)V[ω](x)ᵀ/‖V[ω](x)‖₂²) + ‖V[ω](x)‖₂ V[ω](x)V[ω](x)ᵀ/‖V[ω](x)‖₂²,   (3)

where a ∈ R is a given positive number and V ∈ L^∞_P(Ω; L^∞(D; R^d)) is a random vector-valued field. We note that such a field A accounts for a medium that has homogeneous diffusion strength a perpendicular to V and diffusion strength ‖V[ω](x)‖₂ in the direction of V. The randomness of the specific direction and length of V therefore quantifies the uncertainty of this notable direction and its diffusion strength. To guarantee the uniform ellipticity condition (2), we require that

a ≤ ess inf_{x∈D} ‖V[ω](x)‖₂ and ess sup_{x∈D} ‖V[ω](x)‖₂ ≤ ā for almost every ω ∈ Ω.   (4)

It is assumed that the spatial variable x and the stochastic parameter ω of the random field have been separated by the Karhunen-Loève expansion of V, yielding the parametrised expansion

V[y](x) = E[V](x) + Σ_{k=1}^∞ σ_k ψ_k(x) y_k,   (5)

where y = (y_k)_{k∈N*} ∈ □ := [−1, 1]^{N*} is a sequence of uncorrelated random variables, see e.g. [24]. In the following, we will denote the pushforward of the measure P onto □ by P_y. Then, we also view A[y](x) and u[y](x) as parametrised by y and restate (1) as: for almost every y ∈ □,

−div(A[y](x)∇u[y](x)) = f(x) in D,  u[y](x) = 0 on ∂D.   (6)

We now impose some common assumptions, which make the Karhunen-Loève expansion computationally feasible.

Assumption 2.1. The random variables (y_k)_{k∈N*} are independent and identically distributed. Moreover, they are uniformly distributed on [−1, 1].

Lastly, we note that the spatially weak form of (6) is given by

∫_D ⟨A[y](x)∇u[y](x), ∇v(x)⟩ dx = ∫_D f(x)v(x) dx   (7)

for almost every y ∈ □ and all v ∈ H¹₀(D). This also entails the well-known stability estimate: there is a unique solution u ∈ L^∞_{P_y}(□; H¹₀(D)) of (7), which fulfils

‖u[y]‖_{H¹(D)} ≲ (c_V/a) ‖f‖_{H^{−1}(D)} for almost every y ∈ □,

where c_V is the Poincaré-Friedrichs constant of H¹₀(D).

Parametric regularity and multilevel quadrature

We now derive regularity estimates for the solution u of (7) and apply multilevel quadrature to approximate the mean of u. The regularity estimates are based on the following assumption on the decay of the expansion of V.

Assumption 3.1. We assume that the ψ_k are elements of W^{κ,∞}(D; R^d) for a κ ∈ N and that the sequence γ_κ = (γ_{κ,k})_{k∈N}, given by

γ_{κ,k} := σ_k ‖ψ_k‖_{W^{κ,∞}(D;R^d)},

is at least in ℓ¹(N), where we have defined ψ₀ := E[V] and σ₀ := 1. Furthermore, we define a constant c_{γ_κ} ≥ 2 depending only on γ_κ.

We furthermore assume that the vector field V is given by a finite-rank Karhunen-Loève expansion, i.e.,

V[y](x) = E[V](x) + Σ_{k=1}^M σ_k ψ_k(x) y_k,  where y ∈ □ := [−1, 1]^M.

We note, however, that the regularity estimates will not depend on the rank M; therefore, if necessary, a finite rank can be attained by appropriate truncation. Furthermore, for the regularity estimates we also require an elliptic regularity result.

Assumption 3.2. For a diffusion coefficient in a given coefficient space R_κ and f ∈ H^{κ−1}(D), the solution of the diffusion problem lies in H^{κ+1}(D) and satisfies the elliptic regularity estimate

‖u‖_{H^{κ+1}(D)} ≤ C_{κ,er} ‖f‖_{H^{κ−1}(D)},

where C_{κ,er} only depends on D and continuously on A ∈ R_κ.
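Before turning to the regularity estimates, the following is a minimal sketch of the fibre model (3) together with a truncated Karhunen-Loève parametrisation (5): given V = V[y](x) at a point, it assembles A = a(Id − vvᵀ) + ‖V‖₂ vvᵀ with v = V/‖V‖₂ and checks the eigenstructure. The mean field, modes ψ_k and amplitudes σ_k are toy assumptions, not data from the paper.

```python
import numpy as np

def kl_field(x, y, mean, modes, sigmas):
    """Evaluate the truncated Karhunen-Loeve expansion V[y](x) at a point x."""
    V = mean(x).copy()
    for sigma, psi, yk in zip(sigmas, modes, y):
        V += sigma * psi(x) * yk
    return V

def diffusion_matrix(V, a):
    """A = a*(Id - v v^T) + ||V||_2 * v v^T, v = V/||V||_2, cf. (3)."""
    n = np.linalg.norm(V)
    vvT = np.outer(V, V) / n**2
    return a * (np.eye(len(V)) - vvT) + n * vvT

mean = lambda x: np.array([1.0, 0.5, 0.0])                  # toy E[V]
modes = [lambda x: np.array([np.sin(np.pi * x[0]), 0.0, 0.0]),
         lambda x: np.array([0.0, np.cos(np.pi * x[1]), 0.0])]  # toy psi_k
sigmas = [0.3, 0.1]

x, y = np.array([0.25, 0.5, 0.5]), np.array([0.7, -0.2])    # one sample point
V = kl_field(x, y, mean, modes, sigmas)
A = diffusion_matrix(V, a=0.5)
print("eigenvalues of A:", np.linalg.eigvalsh(A))           # a (twice) and ||V||_2
```

The printed spectrum shows the perpendicular strength a with multiplicity d − 1 and the fibre-direction strength ‖V‖₂, matching the description of the model.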
We assume from here on that A also lies in L^∞_{P_y}(□; R_κ). Note that, for κ = 0, this reduces to the stability estimate, for which the parametric regularity may be found in [24]. Therefore, we will hereafter only consider the case where κ ≥ 1. Such an elliptic regularity estimate is known, for example, for κ = 1, when the domain is convex and bounded and R_κ = C^{0,1}(D; R^{d×d}), see [16, Theorems 3.2.1.2 and 3.1.3.1]. The elliptic regularity estimate is also known to hold for κ ≥ 1 and d = 2, when the domain's boundary is smooth and R_κ = W^{κ,∞}(D; R^{d×d}), see [7].

The rest of this section is now split into three subsections. The first subsection is dedicated to introducing some useful norms as well as deriving Corollary 3.11 and Theorem 3.12. These two results are then used in the second subsection to provide, step by step, regularity estimates for the different terms that make up the diffusion coefficient, based on Assumption 3.1 on the decay of the expansion of V, which then yields regularity estimates for the diffusion coefficient itself. By using this decay as well as Assumption 3.2, we derive the regularity estimates of the solution u in Theorem 3.18. In the third subsection, we then briefly discuss what kind of convergence rates and computational complexity this regularity of u yields, for the example of approximating E[u] using multilevel quadrature methods.

3.1. Precursory remarks. For the Sobolev-Bochner spaces W^{η,p}(D; X) with η ∈ N and 1 ≤ p ≤ ∞, we introduce, for v ∈ W^{η,p}(D; X), the norms ‖v‖_{η,p,D;X}, where X is a Banach space with norm ‖·‖_X and where we make use of the shorthand ‖·‖_{p,D;X} := ‖·‖_{L^p(D;X)}. We may omit the specification of the Banach space X, for example when X is the space R, R^d or R^{d×d}. For these norms, we have the following lemma, giving a bound after applying a div_x, D_x or ∇_x (Lemma 3.3); its proof consists of two direct calculations.

The Leibniz rule also yields the following kind of submultiplicativity for these norms (Lemma 3.4). Proof. Let α, β ∈ N^d be two multi-indices. Applying the Leibniz rule, performing a change of variables, i.e., replacing α with α + β, and rearranging, we arrive at the desired estimate. By induction, we thus arrive at the following corollary for Banach spaces X_1, ..., X_r and Y (Corollary 3.5).

As we will need the Faà di Bruno formula, see [10], we just restate it here, slightly adapted to our notation and usage, for reference:

Remark 3.6. Given W : X̃ → Y and v : D → X, where X and Y are Banach spaces, X̃ ⊂ X is open with img_D v ⊂ X̃ and W, v are both sufficiently differentiable for the formula to make sense, the derivative ∂^α_x (W ∘ v) expands as a sum over P(α, r), the set of integer partitions of a multi-index α into r non-vanishing multi-indices. It also follows from [10] that, for α ∈ N^d and r ∈ N, we have #P(α, r) ≤ S_{|α|,r}, where S_{n,r} denotes the Stirling numbers of the second kind, see [1]. Lastly, as we know that Σ_{r=1}^{|α|} r! S_{|α|,r} equals the |α|-th ordered Bell number, we can bound it, see [4], giving

Σ_{r=1}^{|α|} r! S_{|α|,r} ≤ |α|! / (log 2)^{|α|}.

Lastly, the Faà di Bruno formula now yields the following lemma (Lemma 3.7), whose proof is a direct application of the formula. Furthermore, we also introduce the shorthand notation

|||v|||_{η,p,D;X} := ‖v‖_{L^∞_{P_y}(□; W^{η,p}(D;X))}, for v ∈ L^∞_{P_y}(□; W^{η,p}(D; X)),

where X is a Banach space with norm ‖·‖_X. We may also omit the specification of the Banach space X in this shorthand, for example when X is the space R, R^d or R^{d×d}. With this notation, the previous results carry over, yielding the following lemmata and corollaries. Lemma 3.3 directly transforms to Lemma 3.8, and we use Lemma 3.4 to arrive at the following result: Lemma 3.9.
Let η, η′ ∈ N, 1 ≤ p, p′ ≤ ∞, ν ∈ N^M, and let X and Y be Banach spaces; then the corresponding bounds hold for all α ≤ ν. Moreover, we will also require the following corollary, which again holds for all α ≤ ν. Lastly, from Theorem 3.12 we can then derive the following composition estimate; for α = 0, we remark that the Faà di Bruno formula and Corollary 3.10 yield the corresponding base case.

3.2. Regularity estimates. We first consider the terms B and C, which appear in the diffusion coefficient. For these we have:

Lemma 3.13. We have, for all α ∈ N^M, that |||∂^α_y B|||_{κ,∞,D} ≤ k_B γ^α_κ and |||∂^α_y C|||_{κ,∞,D} ≤ k_C γ^α_κ, with k_B := k_C := c²_{γ_κ}. Furthermore, we know that img_D B ⊂ [a², ā²] and img_D C ⊂ [a², ā²].

Proof. More verbosely, B is given by its expansion in the parameter y, from which we can derive the first order derivatives and, from those, also the second order derivatives. Since the second order derivatives with respect to y are constant, all higher order derivatives with respect to y vanish. We obviously have |||B|||_{κ,∞,D} ≤ c²_{γ_κ}. From (8) we can now derive the bound on the first order derivatives, and (9) leads us to the bound on the second order derivatives; we are finished since c_{γ_κ} ≥ 2. Starting from the analogous expansion, the bound for C clearly follows in the same way as for B. Lastly, we note that (4) directly implies that img_D B ⊂ [a², ā²] and img_D C ⊂ [a², ā²].

Next, we consider the terms D and E, for which we have:

Lemma 3.14. We know, for all α ∈ N^M, the analogous derivative bounds, with constants c_D := c_E := 2k_C / (a² log 2).

Proof. Since D and E are composite functions of C, we can employ Theorem 3.12 to bound their derivatives. For this, we remark that the t-th derivatives of the outer functions v and w are given in closed form; using Lemma 3.13, we then furthermore arrive at the estimates for |||D||| and, analogously, |||E|||, as well as, for α = 0, the corresponding base cases. Combining these estimates finally yields the assertions.

We can now consider the two terms combining these quantities, for which we have the bounds of Lemma 3.15.

Proof. We use Corollary 3.11 with the bounds from Lemma 3.13 and Lemma 3.14 to arrive at an intermediate estimate. Lastly, the combinatorial identity

(10) Σ_{β≤α, |β|=j} binom(α, β) = binom(|α|, j)

applies, with which the assertion follows.

We now can consider the complete diffusion coefficient, yielding Theorem 3.16.

Proof. We can state A in terms of the quantities above, which, by taking the norm of the derivative and using Lemma 3.15, yields the assertion.

We now define the modified sequence µ_κ = (µ_{κ,k})_{k∈N} by appropriately rescaling γ_κ; thus, we have |||∂^α_y A|||_{κ,∞,D} ≤ |α|! k_{κ,A} µ^α_κ. Now, Assumption 3.2 directly implies the following result, Lemma 3.17. However, by also leveraging the higher spatial regularity in the Karhunen-Loève expansion of the random vector-valued field, we can show that the solution u admits analytic regularity with respect to the stochastic parameter y also in the H^{κ+1}(D)-norm. This mixed regularity is then the essential ingredient when applying multilevel methods; it is established in Theorem 3.18.

Proof. By differentiation of the variational formulation (7) with respect to y we arrive, for arbitrary α, at a differentiated variational identity. Applying the Leibniz rule on the left-hand side, then rearranging and using the linearity of the gradient, and finally using Green's identity, we arrive at a recursive estimate for the derivatives of u. We note that, by the definition of c, we have c ≥ 2 and, furthermore, because of Lemma 3.17, we also have that ‖u[y]‖_{H^1(D)} ≤ c, which means that the assertion is true for |α| = 0. Thus, we can use an induction over |α| to prove the hypothesis |||∂^α_y u|||_{κ+1,2,D} ≤ |α|! µ^α_κ c^{|α|+1} for |α| > 0. This completes the proof.

3.3. Numerical quadrature in the parameter. Coming from the solution of (7), that is, u ∈ L^∞_{P_y}(□; H^{κ+1}(D)), we now wish to know the moments of u. In this section, we will therefore consider the approximation of the mean of u. The mean of u is given by the Bochner integral

E[u] = ∫_□ u[y] dP_y(y).

Therefore, we may proceed to approximate it by considering a generic quadrature method Q_N; that is,

(11) E[u] ≈ Q_N[u] := Σ_{i=1}^{N} ω_i u[ξ_i],

where the (ω_i, ξ_i) are the weight and evaluation point pairs.
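The following is a minimal, self-contained sketch of this generic quadrature with uniform weights, together with the multilevel combination Q^ML_l introduced in the next paragraphs. It assumes SciPy's Halton generator is available, and the level solver is a toy stand-in with error decaying like 2^{-k}, not the Galerkin solver used later:

```python
import numpy as np
from scipy.stats import qmc

def qmc_mean(v, M, N):
    """Equal-weight QMC estimate of E[v] with N Halton points on [-1,1]^M."""
    pts = 2.0 * qmc.Halton(d=M, scramble=False).random(N) - 1.0
    return np.mean([v(y) for y in pts], axis=0)

def mlqmc_mean(u_levels, M, N):
    """Multilevel combination Q_l^ML = sum_k Q_{N_{l-k}}[u_k - u_{k-1}],
    with u_{-1} := 0 and u_levels[k] the level-k approximation of u."""
    l = len(u_levels) - 1
    total = 0.0
    for k, u_k in enumerate(u_levels):
        u_prev = u_levels[k - 1] if k > 0 else (lambda y: 0.0)
        total += qmc_mean(lambda y, a=u_k, b=u_prev: a(y) - b(y), M, N[l - k])
    return total

# Toy stand-in: level-k 'solution' with discretisation error ~ 2^{-k}.
u_levels = [lambda y, k=k: float(np.exp(0.1 * np.sum(y)) * (1.0 + 2.0 ** -k))
            for k in range(6)]
N = [10 * int(np.ceil(2 ** (k / 0.8))) for k in range(6)]  # N_l = 2^{l/(1-0.2)} * 10
print(mlqmc_mean(u_levels, M=4, N=N))
```

The sample numbers N follow the choice N_l = 2^{l/(1−δ)}·10 with δ = 0.2 made in the numerical experiments below.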
We assume that the quadrature chosen fulfils an error estimate of the form

(12) ‖(E − Q_N)[v]‖ ≤ C N^{−r}

for some constants C > 0 and r > 0. We will employ the quasi-Monte Carlo quadrature based on the Halton sequence, i.e., with uniform weights ω_i = 1/N and evaluation points ξ_i given by the Halton points, see [18]. Then, we know that, given that there exists an ε > 0 such that γ_{κ,k} ≤ c k^{−3−ε} holds for some c > 0, for every δ > 0 there exists a constant C = C(δ) > 0 such that (12) holds for r = 1 − δ, see e.g. [22], which is a consequence of [35]. Clearly, other, possibly more sophisticated, quadrature methods may also be considered, for example other quasi-Monte Carlo quadratures, such as those based on the Sobol sequence or other low-discrepancy sequences as well as their higher-order adaptations, and anisotropic sparse grid quadratures, see e.g. [12,17,29,33].

To approximate the mean of u as in (11), we require the values u[y] for y = ξ_i. These values can be approximated by u_l[y], where u_l is the Galerkin approximation of the spatially weak formulation on a finite dimensional subspace V_l of H^1_0(D); that is, u_l is the solution of the Galerkin system for almost every y ∈ □ and all v ∈ V_l. We assume that a sequence of V_l can be chosen for l ∈ N such that there is a constant K with

(13) ‖u − u_l‖_{L^∞_{P_y}(□; H^1(D))} ≤ K 2^{−κl}.

For example, we can consider V_l to be the spaces of continuous finite elements of order κ coming from a sequence of quasi-uniform meshes T_l using isoparametric elements, where the mesh size behaves like 2^{−l}. Then, it is known from finite element theory that we have (13) with K ≂ ‖u‖_{L^∞_{P_y}(□; H^{κ+1}(D))}, see e.g. [5,6]. The combination of the error estimates (12) and (13) then leads to a single-level error of order N^{−r} + 2^{−κl}; thus, choosing N_l := 2^{κl/r} finally yields an error of order 2^{−κl}.

In contrast, the mixed regularity shown before in Theorem 3.18 allows us to consider a multilevel adaptation, which may be given as

Q^{ML}_l[u] := Σ_{k=0}^{l} ∆Q_{l−k}[u_k],

where ∆Q_0 := Q_{N_0} and ∆Q_k := Q_{N_k} − Q_{N_{k−1}}. Indeed, this is the sparse grid combination technique as introduced in [15], see also [14,20]. It thus follows that the multilevel error is again of order 2^{−κl}, up to a possible logarithmic factor. For complexity considerations, we shall consider a quadrature that is nested, i.e., we may write the i-th evaluation point simply as ξ_i, as it does not depend on N. Then, we note that Q^{ML}_l[u_0, ..., u_l] may explicitly be stated as

Q^{ML}_l[u_0, ..., u_l] = Σ_{k=0}^{l} Q_{N_{l−k}}[u_k − u_{k−1}], with u_{−1} := 0.

Computing Q_{N_l}[u_l] thus requires the values u_{l,i} := u_l[ξ_i], which can be derived by solving: find u_{l,i} ∈ V_l such that the Galerkin system holds at y = ξ_i. Generally, when considering a sequence of finite element spaces V_l as described above, the number of degrees of freedom behaves like O(2^{ld}) and computing one u_{l,i} using state-of-the-art methods will have a complexity that is O(2^{ld}). As this has to be done N_l times for the calculation of Q_{N_l}[u_l], a complexity scaling is obtained that is O(2^{l(κ/r+d)}). Therefore, for the computation of the multilevel quadrature Q^{ML}_l[u_0, ..., u_l], we arrive at an overall complexity of O(2^{l·max{κ/r, d}}), up to a possible logarithmic factor. We mention that also non-nested quadrature formulae can be used, but they lead to a somewhat larger constant in the complexity estimate, see [14] for the details.

Remark 3.19. If we redefine the N_l as N_l := l^{(1+ε)/r} 2^{κl/r} for an ε > 0, then, as proposed in [2], the logarithmic factor, which shows up in the convergence rate, can be removed by increasing the quadrature accuracy slightly faster. Note that this modification increases the hidden constant with a dependence on ε.

Remark 3.20. In the particular situation of a standard quasi-Monte Carlo method, we can consider δ′ such that δ > δ′ > 0.
Then, the quadrature error satisfies the corresponding estimate with r = 1 − δ′. With a similar argument as in [2], it follows that the logarithmic factor, which shows up in the convergence rate, is removed at the cost of replacing the constant C_δ with C_{δ′}; this adds a constant with a dependence on δ′, yielding an increased hidden constant. While we have exclusively considered the case of the mean of the solution u here, we do note that analogous statements may also be shown, for example, for the higher-order moments, see [20] for instance.

Numerical Results

We will now consider two examples of the model problem (1) with a diffusion coefficient of form (3), using the unit cube D := (0, 1)³ as the domain of computations. Therefore, in view of the H²-regularity of the spatial problem under consideration, we are only considering the situation with κ = 1. In both examples, we set the global strength a to a := 0.12 and choose the right-hand side f ≡ 1. For convenience, we define an exponential covariance kernel from which the mean and the covariance of V are built.

Example 1. For the first example, we choose the description of V such that, for j ∈ {2, 3}, the covariance in the normal direction on the parts of the boundary with x_j ∈ {0, 1} is suppressed.

Example 2. For this second example, we choose the description of V such that the covariance in the normal direction on all of the boundary is suppressed.

The numerical implementation is performed with the aid of the problem-solving environment DOLFIN [28], which is a part of the FEniCS Project [28]. The Karhunen-Loève expansion of the vector field V is computed by the pivoted Cholesky decomposition, see [19,21] for the details. For the finite element discretisation, we employ the sequence of nested triangulations T_l yielded by successive uniform refinement, i.e., cutting each tetrahedron into 8 tetrahedra. The base triangulation T_0 consists of 6 · 2³ = 48 tetrahedra. Then, we use the truncated pivoted Cholesky decomposition together with interpolation by continuous element-wise linear functions for the approximation of the Karhunen-Loève expansion, and continuous element-wise linear functions in space for the Galerkin discretisation. The truncation criterion for the pivoted Cholesky decomposition is that the relative trace error is smaller than 10^{−4} · 4^{−l}.

Since the exact solutions of the examples are unknown, the errors have to be estimated. Therefore, in this section, we estimate the errors for the levels 0 to 5 by substituting the exact solution with the approximate solution computed on the level 6 triangulation T_6, using the quasi-Monte Carlo quadrature based on Halton points with 10^4 samples. For every level, we also define the number of samples used by the quasi-Monte Carlo method based on Halton points (QMC); we choose N_l := 2^{l/(1−δ)} · 10 with δ := 0.2; see Table 1 for the resulting values of N_l. This then also implies the number of samples used on the different levels when using the multilevel quasi-Monte Carlo method based on Halton points (MLQMC). Based on these choices, we expect to see an asymptotic rate of convergence of 2^{−l} in the H^1-norm for the mean and in the W^{1,1}-norm for the variance.

Table 1. The number of samples for the first six levels and the respective parameter dimensions.

Figures 1 and 2 show the estimated errors of the solution's first moment on the left-hand side and of the solution's second moment on the right-hand side, each versus the discretisation level, for the QMC and MLQMC quadrature for the two different examples.
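For reference, here is a minimal sketch of the pivoted Cholesky decomposition with a relative trace-error stopping rule, as used above for the Karhunen-Loève approximation; the exponential kernel on random points below is illustrative, not the exact covariance of the examples:

```python
import numpy as np

def pivoted_cholesky(K, rel_tol):
    """Low-rank factorisation K ~ L L^T: pick the largest remaining diagonal
    entry as pivot, stop once trace(K - L L^T) / trace(K) < rel_tol."""
    d = np.diag(K).astype(float).copy()
    trace0 = d.sum()
    cols = []
    while d.sum() > rel_tol * trace0 and len(cols) < K.shape[0]:
        i = int(np.argmax(d))
        col = K[:, i].astype(float).copy()
        for c in cols:                 # subtract previous rank-one terms
            col -= c[i] * c
        col /= np.sqrt(d[i])
        cols.append(col)
        d = np.maximum(d - col**2, 0.0)
    return np.column_stack(cols)

# Illustrative covariance on random points in the unit cube:
rng = np.random.default_rng(0)
x = rng.random((200, 3))
K = np.exp(-np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1))
L = pivoted_cholesky(K, rel_tol=1e-4)
print(L.shape, np.abs(K - L @ L.T).max())
```

Only the pivot columns of K are actually needed, which is what makes the method attractive for large covariance matrices.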
As expected, the QMC quadrature method achieves the predicted rate of convergence in both examples, and this rate of convergence also carries over to its multilevel adaptation (MLQMC).

Conclusion

In this article, we have considered the second order diffusion problem

for almost every ω ∈ Ω: −div(A[ω](x)∇u[ω](x)) = f(x) in D, u[ω](x) = 0 on ∂D.

This models anisotropic diffusion, where the diffusion strength in the direction given by V/‖V‖_2 is ‖V‖_2 and perpendicular to it is a, which can be used to model diffusion both in media that consist of thin fibres and in media that consist of thin sheets. After having restated the problem in a parametric form by considering the Karhunen-Loève expansion of the random vector field V, we have shown that, given regularity of the elliptic diffusion problem, the decay of the Karhunen-Loève expansion of V entirely determines the regularity of the solution's dependence on the random parameter, also when considering this higher regularity in the spatial domain. We then leverage this result to reduce the complexity of the approximation of the solution's mean by using the multilevel quasi-Monte Carlo method instead of the quasi-Monte Carlo method, while still retaining the same error rate. Indeed, while the QMC method yields a scheme where the added uncertainty increases the complexity, this is not the case for the MLQMC method when considering two or more spatial dimensions. That is, given elliptic regularity and up to a constant in the complexity, adding uncertainty comes for free. The numerical experiments corroborate these theoretical findings. While we considered the use of QMC and its multilevel adaptation, one can clearly also consider other quadrature methods, such as the anisotropic sparse grid quadrature, and then reduce the complexity by passing to their multilevel adaptations. Likewise, multilevel collocation is also applicable.
2018-02-12T16:30:30.000Z
2017-06-16T00:00:00.000
{ "year": 2017, "sha1": "b6fbe4d828a149eb17494e24588ac9689a40a3ca", "oa_license": null, "oa_url": "https://edoc.unibas.ch/75457/1/20200206113950_5e3becf682883.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "b6fbe4d828a149eb17494e24588ac9689a40a3ca", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
209434532
pes2o/s2orc
v3-fos-license
Prevalence and Implications of Contamination in Public Genomic Resources: A Case Study of 43 Reference Arthropod Assemblies

ABSTRACT Thanks to huge advances in sequencing technologies, genomic resources are increasingly being generated and shared by the scientific community. The quality of such public resources is therefore of critical importance. Errors due to contamination are particularly worrying; they are widespread, propagate across databases, and can compromise downstream analyses, especially the detection of horizontally-transferred sequences. However, we still lack consistent and comprehensive assessments of contamination prevalence in public genomic data. Here we applied a standardized procedure for foreign sequence annotation to 43 published arthropod genomes from the widely used Ensembl Metazoa database. This method combines information on sequence similarity and synteny to identify contaminant and putative horizontally-transferred sequences in any genome assembly, provided that an adequate reference database is available. We uncovered considerable heterogeneity in quality among arthropod assemblies, some being devoid of contaminant sequences, whereas others included hundreds of contaminant genes. Contaminants far outnumbered horizontally-transferred genes and were a major confounder of their detection, quantification and analysis. We strongly recommend that automated standardized decontamination procedures be systematically embedded into the submission process to genomic databases.

Scientists typically re-use sequence data generated by others, and are therefore dependent on the reliability of the available genomic resources. For this reason, the problem of public data quality in molecular biology has long been identified as a crucial issue (Lamperti et al. 1992; Mistry et al. 1993; Binns 1993). The problem is even more acute nowadays with the advent of high-throughput sequencing technologies, when most datasets generated in genomic research are simply not amenable to manual curation by humans.
This brings a new challenge to current methodologies in genomic sciences, namely the development of automated approaches to the detection and processing of errors (e.g., Andorf et al. 2007; Schmieder and Edwards 2011; Parks et al. 2015; Delmont and Eren 2016; Drăgan et al. 2016; Tennessen et al. 2016; Laetsch and Blaxter 2017; Lee et al. 2017). Data quality issues in genome sequences include sequencing errors, assembly errors and contamination, among other things. Errors due to contamination are particularly worrying for several reasons. First, they can lead to serious mis-interpretations of the data, as illustrated by recent, spectacular examples. Potential problems include mis-characterization of gene content and related metabolic functions (e.g., Koutsovoulos et al. 2016; Breitwieser et al. 2019), improper inference of evolutionary events (e.g., Laurin-Lemay et al. 2012; Simion et al. 2018), and biases in genotype calling and population genomic analyses (e.g., Ballenghien et al. 2017; Wilson et al. 2018). Second, contamination is suspected to be widespread. It occurs naturally in most sequencing projects due to foreign DNA initially present in the raw biological material (e.g., symbionts, parasites, ingested food). Third, contamination errors easily propagate across databases in a self-reinforcing vicious circle. If a DNA sequence from species A is initially assigned to the wrong species B due to a contamination of B by A, it is likely to keep its incorrect status for a while, and may even be identified as a contamination of A by B when the genome of A is eventually sequenced (Merchant et al. 2014). Despite all the possible problems stemming from contamination in genomic resources, most studies addressing this issue so far have focused on one particular genome (e.g., tardigrades) and/or one particular source of contaminants (e.g., humans). Only two studies that we are aware of have consistently screened more than one genome assembly. Merchant et al. (2014) focused on the bovine genome but also applied their pipeline to eight randomly drawn draft genomes (five animals, two plants, one fungus), with contrasted results. Cornet et al. (2018) analyzed 440 genomes of Cyanobacteria and uncovered a substantial level of contamination in >5% of these. There is obviously a need for further assessment of the problem of contamination in publicly available genomic data.

Probably the research goal most sensitive to contamination is the detection of horizontally-transferred genes: nothing resembles a transferred sequence more than a contaminant does. Horizontal gene transfer (HGT) between species is a pervasive process in prokaryotes, which dramatically affects gene phylogenies and species' ability to adapt to environmental changes (Ochman et al. 2000; Koonin 2016). Whether it substantially influences genome evolution also in large eukaryotes is a matter of debate (Andersson 2005; Boto 2014). A number of examples are documented (e.g., Schönknecht et al. 2014), but a quantitative assessment of the prevalence of HGT in eukaryotes is difficult, and many HGT candidates were subsequently shown to result from contamination. Controversies over the confusion between HGT and contaminants have concerned the human genome (Willerslev et al. 2002; Salzberg 2017), the Nematostella vectensis sea anemone genome (Starcevic et al. 2008; Artamonova et al. 2015), and the Hypsibius dujardini tardigrade genome, among others. In H.
dujardini, the initial estimate of 17% of genes being of foreign origin was revisited to 1% when contamination was properly accounted for (Hashimoto et al. 2016; Koutsovoulos et al. 2016).

A straightforward way to identify contamination in a newly sequenced genome is to compare the assembled sequences to existing databases using BLAST-like algorithms. If a sequence's best match is assigned to a species that is phylogenetically distant from the target organism, then the sequence is annotated as a contaminant. There are several problems with this simple strategy. First, it does not allow one to distinguish contaminants from HGT. Second, this approach is entirely dependent on the correctness of the reference database. A best-BLAST-hit survey can only propagate, not correct, pre-existing taxonomic mis-assignments, as discussed above. Third, such an approach is also dependent on the completeness of the reference database, and on the phylogenetic position of the target organism. If the reference database is imbalanced and dominated by one or a few particular taxa (typically model organisms), then its power to properly discriminate genuine sequences from contaminants will be maximal for newly sequenced organisms closely related to the dominant taxa, and much lower for organisms distantly related to them. Solutions to these problems exist, and include (i) considering multiple BLAST hits, not just the "best" one, (ii) using an appropriately balanced reference database, and (iii) incorporating information on synteny (i.e., physical co-localization of loci on the same scaffold), and ultimately phylogeny, in addition to sequence similarity. Here we collated these ideas in an integrated framework aiming at properly quantifying the prevalence of contamination in genomic data based on reliable, existing tools. We applied this pipeline to 43 published genomes of arthropods distributed in the Ensembl database. We report that data quality is highly heterogeneous across species in this widely used database, some genomes being heavily affected by contamination. Our results also show that a careful annotation of contaminant sequences is mandatory in any subsequent attempt to detect HGT.

Foreign sequence annotation

We developed a dedicated pipeline for the simultaneous detection of contaminants and HGT candidates in published genome assemblies. This pipeline was optimized and benchmarked on arthropods, but can be applied to any other taxa, provided that an adequate reference database is available. The outline of the pipeline is presented in Figure 1. It takes as input a genome assembly and a set of predicted coding sequences (CDS). It returns a set of CDS annotations with the following categories: genuine arthropod gene, HGT candidate, contaminant candidate, orphan gene, uncertain. Five non-metazoan taxonomic groups are considered as potential sources of contaminants and HGT: eubacteria, archaea, fungi, viridiplantae and 'protists'. Each investigated genome is processed independently and without any a priori assumption about the source(s) of contamination. As discussed below, the power of this pipeline to detect foreign sequences depends on the level of fragmentation of the considered assembly.

The first step of the pipeline is a preliminary taxonomic assignment of CDS based on sequence similarity. Using DIAMOND BLASTP (v0.8.22, "more-sensitive" mode, otherwise default parameters; Buchfink et al. 2015), each CDS was blasted against a custom protein reference database (see below).
Hits with identity greater than 40%, alignment length greater than 75 amino acids and E-value lower than 10^{-10} were retained. A minimum of two such hits to two different species was required for taxonomic assignment. CDS not matching this criterion were regarded as orphan genes ('no reliable taxonomic assignment') and not considered further. For each CDS, the 10 hits with the smallest E-values were considered, or fewer if fewer than 10 hits had an E-value below 10^{-10}. A CDS was assigned to a given taxonomic group (i.e., eubacteria, archaea, viridiplantae, fungi or protists) if at least 70% of its best hits fell within this group. These were called "foreign CDS candidates". In addition, a CDS was assigned to the "confident-arthropod" group if 100% of its best hits were to a species of Metazoa, among which at least 70% to a species of Arthropoda. Finally, a CDS was assigned to "other metazoa" if at least 70% of its best hits were to species of non-arthropod metazoa, and none to a species of arthropods. CDS not matching any of these criteria were considered taxonomically unassigned. Using the 10 best hits instead of just the best one provides a robust way to account for potential contaminations and other sources of taxonomic mis-assignment in the reference database. The 70% threshold was empirically determined as providing a reasonable trade-off between sensitivity and specificity.

The second step of the pipeline is a test of synteny. All foreign CDS candidates as well as the "confident-arthropod" CDS were mapped onto the species' genomic scaffolds using GMAP (v2017-04-24; Wu and Watanabe 2005) with the option "-npaths=0". To account for the variable fragmentation of genome assemblies (i.e., N50), we allow for "chimeric alignments" (i.e., CDS whose 5′ and 3′ ends map to different scaffolds). We required a minimum alignment length of 100 bp and a minimum identity of 95%. A foreign CDS candidate was considered as an HGT candidate if it was physically linked to (i.e., mapped to the same scaffold as) at least one "confident-arthropod" CDS. A foreign CDS candidate was considered as a contaminant candidate if it mapped to a scaffold to which no "confident-arthropod" CDS mapped, and at least one other non-metazoan CDS mapped. A foreign CDS candidate was considered as "uncertain" if it did not reliably map to any scaffold or if it was the only CDS to map to a given scaffold. When present, the "confident-arthropod" tag was propagated across all scaffolds linked by chimeric alignments. This synteny-based step can also be performed at the contig scale in case of doubts regarding the scaffolding process; this should increase the proportion of foreign candidates classified as "uncertain". The corresponding script is available on GitHub (https://github.com/ClementineFrancois/Foreign-CDS-detection). The analysis of the 43 arthropod assemblies of this study took around 48 hr to run on 50 CPUs.

Evaluated genome assemblies

The 43 arthropod genomes available in Ensembl Metazoa (Release 37, as of October 2017; Kersey et al. 2017) were investigated using our dedicated pipeline. This included 36 insects, two crustaceans, four chelicerates and one myriapod (see Supplementary Table S1).

Figure 1. A simplified flow diagram of the pipeline developed for this study. Each species assembly is evaluated independently through this pipeline, which requires the set of coding sequences (CDS) as well as the genomic scaffolds of each species, and an appropriate reference database. In this diagram, boxes referring to 'data', 'reference database' and 'tools' are colored in blue, green, and red, respectively. See the main text for detailed explanations.
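To make the first, similarity-based step concrete, here is a minimal sketch of the assignment rules just described; the hit records are hypothetical (in practice they would be parsed from the DIAMOND tabular output after the identity, length and E-value filters):

```python
def assign_taxonomy(hits):
    """Classify one CDS from its filtered hits (identity > 40%, alignment
    > 75 aa, E-value < 1e-10). Each hit is a dict with keys 'evalue',
    'species' and 'group', where 'group' is one of: eubacteria, archaea,
    fungi, viridiplantae, protists, arthropoda, other_metazoa."""
    if len({h["species"] for h in hits}) < 2:
        return "orphan"  # fewer than two hits to two different species
    best = sorted(hits, key=lambda h: h["evalue"])[:10]
    frac = lambda g: sum(h["group"] == g for h in best) / len(best)
    metazoa = ("arthropoda", "other_metazoa")
    if all(h["group"] in metazoa for h in best) and frac("arthropoda") >= 0.7:
        return "confident-arthropod"
    for g in ("eubacteria", "archaea", "fungi", "viridiplantae", "protists"):
        if frac(g) >= 0.7:
            return g  # foreign CDS candidate
    if frac("other_metazoa") >= 0.7 and frac("arthropoda") == 0:
        return "other_metazoa"
    return "unassigned"
```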
For each species, the set of masked genomic scaffolds ("dna_rm.toplevel") as well as the set of all predicted coding sequences ("cds.all") were retrieved from Ensembl Metazoa. Depending on the species, the set of annotated CDS was either generated by Ensembl or imported from other reference databases relying on different annotation pipelines. Scaffolds shorter than 200 bp were discarded. The longest transcript was selected for each gene. Coding sequences shorter than 150 bp were discarded.

Custom reference database

A custom protein reference database was built to cover all domains of life; it included 937 species (4,622,809 sequences). The proteomes of 100 eukaryotic species were retrieved from Ensembl (Release 90; Zerbino et al. 2017) and Ensembl Metazoa (Release 37; Kersey et al. 2017). These included 40 metazoa (of which 20 arthropods), 20 fungi (of which 10 fungi known to infect arthropods), 20 Viridiplantae and 20 'protists'. The proteomes of 837 prokaryotic species, of which 748 eubacteria and 89 archaea, were retrieved from the Microbial Genome Database for Comparative Analysis (mbgd_2016-01; Uchiyama et al. 2014), selecting one species per genus. An additional 11 known symbionts of arthropods were subsequently included. Within each proteome, redundant sequences (>90% identity) were removed using CD-HIT (Fu et al. 2012). Information on the content of the custom reference database is provided in Supplementary Table S2.

Validation of the contaminant candidates

In two species of interest, the tetranucleotide (4-mer) frequencies of candidate contaminant CDS were visually compared to those of "confident-arthropod" CDS using a Principal Components Analysis (PCA). PCA was performed in R using the 'prcomp' function and results were plotted using the 'pca3d' package.

Validation of the HGT candidates

We took a phylogenetic approach to validate or invalidate HGT candidates in one species of interest, the pea aphid Acyrthosiphon pisum. All HGT candidates detected in the pea aphid assembly were clustered into families with Silix (v1.2; Miele et al. 2011), requiring a minimum of 60% identity (default parameters otherwise). For each family, a protein alignment of the candidate HGT sequence(s) and its (their) 50 best BLAST hits in the custom reference database was generated with MAFFT (v7; Katoh and Standley 2013). Only BLAST hits with identity greater than 40%, alignment length greater than 75 amino acids and E-value lower than 10^{-10} were considered. The alignments were cleaned using HMMcleaner (stringency parameter = 12). Phylogenetic trees were inferred using RAxML (v8.2; Stamatakis 2014) with the model 'PROTGAMMALGX' of amino-acid substitution and 100 bootstrap replicates. Phylogenetic trees were inspected by eye.

Statistical analyses

According to the recommendations of Warton and Hui (2011), all proportion data were logit-transformed prior to statistical analyses, using the 'car' R package (Fox and Weisberg 2011). The normality of the residuals was checked for all models reported in this article. All data analyses were performed with R 3.4 software (R Core Team 2018) using the vegan (Oksanen et al. 2018) and seqinr (Charif and Lobry 2007) packages.

Data availability

This study is based on publicly available data from the Ensembl database (the accession numbers are listed in Table S1).
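The tetranucleotide PCA above was run in R with 'prcomp' and 'pca3d'; an equivalent sketch in Python (assuming scikit-learn is available), computing overlapping 4-mer frequencies per CDS and projecting onto the first three components, could read:

```python
import numpy as np
from itertools import product
from sklearn.decomposition import PCA

KMERS = ["".join(p) for p in product("ACGT", repeat=4)]
INDEX = {k: i for i, k in enumerate(KMERS)}

def tetranucleotide_freqs(seq):
    """Overlapping 4-mer frequency vector (length 256) of one CDS."""
    counts = np.zeros(len(KMERS))
    for i in range(len(seq) - 3):
        j = INDEX.get(seq[i:i + 4].upper())
        if j is not None:  # skip 4-mers containing ambiguous bases
            counts[j] += 1
    return counts / max(counts.sum(), 1)

def pca_coordinates(cds_list, n_components=3):
    """Project the 4-mer frequency profiles of a list of CDS sequences
    onto their first principal components."""
    X = np.array([tetranucleotide_freqs(s) for s in cds_list])
    return PCA(n_components=n_components).fit_transform(X)
```

Contaminant CDS are then expected to fall outside the cluster formed by the "confident-arthropod" CDS in this coordinate space.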
Table S1 describes the genomic features of the 43 arthropod species from Ensembl Metazoa investigated in this study. Table S2 details the composition of the custom reference database. Table S3 describes the categorization of all CDS in the 43 arthropod genomes. Table S4 indicates the inferred function and potential donor for the six validated HGT families in the pea aphid assembly. Figure S1 shows the correlation between the log-transformed N50 of each genome assembly and the percentage of foreign CDS candidates, initially identified in the first, similarity-based step of the pipeline, which were subsequently considered as uncertain in the second, synteny-based step. Figure S2 shows the number of contaminant and HGT candidates detected in each of the 43 arthropod genomes, according to the assembly N50. Figure S3 displays the Principal Components Analysis of CDS tetranucleotide frequencies in the pea aphid and bumblebee assemblies. Figure S4 shows the distribution of the number of contaminant CDS per contaminant scaffold. Figure S5 contains the RAxML phylogenies inferred for the six validated HGT families in the pea aphid assembly.

RESULTS & DISCUSSION

Overview of the 43 arthropod genomes: contamination

We applied our newly introduced contamination/HGT annotation pipeline to 43 assemblies from Ensembl Metazoa. Detailed results are displayed in Figure 2 and Suppl. Tables S1 & S3. Out of the 43 arthropod assemblies, 28 were completely devoid of non-metazoan contamination (including the 12 Drosophila species), while 4 of them contained more than 150 contaminant CDS. The number of predicted contaminant CDS per assembly ranged from 0 to 827 among species, representing 0-5% of all CDS, and 0-8% of the CDS for which a taxonomic assignment was possible, which is probably the most meaningful measure of the prevalence of contamination (Suppl. Table S1). The most contaminated assemblies were those of the bumblebee (Bombus impatiens) and the pea aphid (Acyrthosiphon pisum). The number of contaminant scaffolds (i.e., containing at least two contaminant CDS and no genuine arthropod CDS) varied from 0 to 202 across assemblies (Suppl. Table S1). The contaminant CDS were either scattered across many small scaffolds (e.g., 448 contaminant CDS distributed across 202 scaffolds in the pea aphid) or carried by just a few long contaminant scaffolds (e.g., 827 contaminant CDS in 30 scaffolds in the bumblebee). The size of contaminant scaffolds ranged from 602 bp (in the barley midge Mayetiola destructor) to 793,321 bp (in the deer tick Ixodes scapularis), and their cumulative length represents up to 2,497,466 bp in the pea aphid Acyrthosiphon pisum.

As an evaluation of the reliability of our results, we tested the taxonomic consistency of the contaminant scaffolds detected in our analyses. Indeed, all CDS encoded on a given contaminant scaffold are expected to derive from the same organism, and thus to be assigned to the same non-metazoan group (e.g., eubacteria). Out of 408 detected contaminant scaffolds, only one was taxonomically inconsistent. This 20 kb scaffold from the Lucilia cuprina (blowfly) assembly encoded one eubacterial and two fungal CDS. It could be a chimera between two contaminant sequences. The great majority of detected contaminations originated from eubacteria (1,796 out of 1,849 contaminant candidates for the 43 species), except in the blowfly Lucilia cuprina, which was mostly contaminated by fungal sequences (41 CDS; Figure 2 and Suppl. Table S3).
The fact that no archaeal contamination was detected in any assembly (Suppl. Table S3) might at least in part reflect a taxonomic gap in the public reference databases. This problem has already been evidenced by the study of mammalian gut microbiota (Raymann et al. 2017) and likely impacts all database-dependent studies. In summary, out of 43 published genome assemblies, 15 (i.e., 35%) presented at least some traces of non-metazoan contamination, including four which were substantially contaminated. These figures are likely an underestimation of the actual prevalence of contamination because of the limitation due to the incompleteness of reference genomic databases, as discussed above. Moreover, the overall prevalence of contamination is expected to be even higher as we did not consider metazoan contaminants. Yet contamination from wet-lab technicians as well as from model organisms extensively used in research facilities (e.g., mouse, zebrafish) is likely to occur in any sequencing project. Our results are consistent with recent analyses which uncovered a similar level of contamination in published genome assemblies (e.g., Borner and Burmester 2017). In particular, Bemm et al. (BioRxiv: https://doi.org/10.1101/122309) reported from 0 to ca. 5% of bacterial contamination in Ensembl Metazoa genomes and identified the bumblebee Bombus impatiens as one of the most highly contaminated assemblies. In addition, our analyses focused on CDS, which are among the most conserved and most easily annotated sequences of a genome, i.e., probably the sequences most easily filtered for contamination by assembly pipelines. Therefore, the situation regarding contamination is probably even worse as far as non-coding sequences are concerned.

Overview of the 43 arthropod genomes: HGT candidates

In this study, potential HGT candidates were detected at a very low level in all genome assemblies. Across the 43 investigated species, the number of CDS suspected to derive from an HGT event ranged from 2 to 81 per assembly, with a median of 12 HGT candidates (Figure 2, Suppl. Tables S1 and S3). The HGT candidates represented up to 1.25% of the taxonomically-assigned CDS for a given species (in the spider mite Tetranychus urticae). These HGT candidates have yet to be validated. These results are consistent with several recent studies on this species, which evidenced an unexpectedly high level of putative HGT from bacteria and fungi (Grbić et al. 2011; Ahn et al. 2014; Altincicek et al. 2012; Wybouw et al. 2012). The proportion of HGT candidates was rather variable among species, and substantially lower than 1% in a large majority of species (median = 0.14%). Around a third of all detected HGT candidates likely originated from eubacterial donors, and another third from viridiplantae ones (respectively 286 and 295 candidates out of 756; Suppl. Table S3). Similarly to contaminant candidates, very few putative archaeal HGT were detected. These preliminary results should be considered with a high degree of caution, as HGT candidates have not been validated through a phylogenetic approach or an experimental confirmation via PCR or re-sequencing, so the prevalence of HGT in arthropod genomes is likely over-estimated. Still, these preliminary results can be compared to previous HGT studies on Metazoa. In a review including 8 metazoan species (Schönknecht et al.
2014), the number of phylogenetically-supported HGT ranged from 12 to 198 genes across species (with the repeatedly documented exception of the bdelloid rotifers, containing 2,700 HGT; see Nowell et al. 2018). In another study, Crisp and colleagues (2015) analyzed 26 metazoan genomes and identified from 2 to 100 HGT across species. Both studies evidenced the same order of HGT prevalence in metazoans as our preliminary results on arthropods.

Influence of the fragmentation level of the assembly

Our ability to detect contaminants and HGT decreases with the fragmentation of the genome assembly. Indeed, highly fragmented assemblies contain many small scaffolds, which are more likely to encode a single CDS. If detected as suspicious in the first step of the pipeline, such CDS (i.e., alone on their scaffold) would then be considered as uncertain in the second, synteny-based step, thus decreasing the power of our pipeline. The N50 was highly variable across the 43 arthropod assemblies, ranging from 2.3 kb (in the fly Megaselia scalaris) to 41.5 Mb (in the mosquito Anopheles gambiae), with a median at 742 kb (Suppl. Table S1). We found a negative correlation between genome assembly N50 and the percentage of foreign CDS candidates classified as "uncertain" at the second step of the pipeline (linear model, p-value = 0.0002, R² = 0.2854; Suppl. Figure S1). This indicates that the actual prevalence of contamination was underestimated in our study. Of note, despite the decreased power to detect contaminants and HGT in fragmented assemblies, our pipeline identified high amounts of putative contaminants and HGT in some low-N50 genomes (Suppl. Figure S2). As a matter of fact, the highest contamination levels were identified in low-N50 assemblies (Bombus impatiens, N50 = 1.3 Mb; Acyrthosiphon pisum, N50 = 431 kb).

Detailed investigation in three species

Further analyses were performed in three species of interest: the pea aphid (Acyrthosiphon pisum), the bumblebee (Bombus impatiens) and the fruit fly (Drosophila ananassae). We assessed the reliability of our sets of contaminant and HGT candidates, and discussed their origin through analyses of their tetranucleotide content, across-scaffold distribution, and phylogeny.

The case of the pea aphid (Acyrthosiphon pisum): The genome assembly of the pea aphid showed one of the highest numbers of predicted contaminant and horizontally-transferred CDS (Suppl. Table S1). We thus investigated in more detail the sets of contaminant and HGT candidates. The phylogenetic signal repeatedly described in tetranucleotide (4-mer) frequencies of CDS means that such frequency patterns convey information about the evolutionary history of the sequence (Pride et al. 2003; Teeling et al. 2004; Dick et al. 2009) and should theoretically enable discrimination between contaminant and arthropod sequences, similarly to the rationale behind the Blobtools suite (which considers the scaffold %GC; Laetsch and Blaxter 2017) or the algorithm CONCOCT (for the automated binning of metagenomic contigs; Alneberg et al. 2014). The 4-mer frequencies of the contaminant candidates identified in the pea aphid assembly, as well as those of the 'confident-arthropod' CDS, were visualized using a PCA (plotting the first three principal components). Almost all contaminant candidates fall outside of the cluster of resident arthropod genes (Suppl. Figure S3a), supporting the reliability of the set of contaminants identified in the pea aphid assembly.
In this assembly, the 448 predicted contaminant CDS derived from 202 scaffolds. Contaminant CDS were scattered across many small scaffolds harboring only a few CDS each, a pattern similar to most of the screened assemblies (Suppl. Figure S4). 99.5% of the contaminant scaffolds (201 out of 202) were of eubacterial origin. An examination of the taxonomy of BLAST hits indicated that a vast majority of contaminant sequences originated from donors of the order Enterobacterales, and showed closest matches to species of the families Enterobacteriaceae and Erwiniaceae. Interestingly, these two families contain several well-described bacterial symbionts of aphids, such as the obligate endosymbiont Buchnera aphidicola or the facultative symbionts Hamiltonella defensa and Serratia symbiotica (Oliver et al. 2010). However, none of the detected contaminant CDS blasted reliably against the genome of Buchnera aphidicola nor against the genomes of common aphid secondary symbionts (Hamiltonella defensa, Serratia symbiotica, Spiroplasma, Cardinium and Rickettsia), although these species were represented in our reference database. Symbiont-derived sequences were likely present in the raw dataset and subsequently removed from the assembly, which is a common procedure in sequencing projects (e.g., see International Aphid Genomics Consortium 2010). This targeted cleaning approach can only be applied to well-known symbionts of the focal organism. The remaining contaminant sequences might thus correspond to less-studied aphid symbionts, such as species of the genera Pantoea or Erwinia, which showed strong BLAST matches with contaminant CDS and have been identified as aphid gut symbionts (Harada et al. 1997; Gauthier et al. 2015).

The 75 HGT candidates detected in the pea aphid assembly clustered into 70 gene families, from which gene phylogenies were constructed. The 70 trees were inspected by eye, and only six of them were considered as reliably supporting an instance of HGT (Suppl. Figure S5). The other 64 trees were disregarded mainly because the terminal branch leading to the putatively-transferred sequence was too long for a reliable phylogenetic placement. Five HGT likely originated from eubacterial donors (Suppl. Table S4), including a transposase gene. The remaining HGT concerned four CDS, which were likely acquired from a fungus (Suppl. Figure S5, Suppl. Table S4). The functional annotations of the best BLASTP hits (NR database) suggest that the horizontally-transferred genes of fungal origin encode a phytoene desaturase, an enzyme involved in carotenoid biosynthesis. This result is congruent with previous studies in aphids which indicated that the phytoene desaturase gene had undergone several duplication events after its transfer from a fungal donor (Nováková and Moran 2011). This HGT event seems to be ancient and shared with the red spider mite Tetranychus urticae (Altincicek et al. 2011; Grbić et al. 2011), which is consistent with the phylogenetic tree we inferred for this gene family (cf. Suppl. Figure S5). Carotenoid pigments can confer many essential benefits (e.g., protection from oxidative damage, light detection, photoprotection, signaling) and are acquired by most animals from their diet. HGT events enabling an organism to synthesize carotenoids de novo could confer a substantial adaptive advantage to the recipient species (Bryon et al. 2017). Of note, only a minority of the suspected HGT (six out of 70) were confirmed via our phylogenetic analysis.
This confirms that evidence solely based on sequence similarity is not sufficient to demonstrate the existence of an HGT event, far from it: a phylogeny-based validation is required. For example, the controversy over the human genome demonstrated that most, if not all, putative horizontally-transferred sequences initially identified through a BLAST approach actually originated from classical vertical descent (Stanhope et al. 2001).

The case of the bumblebee (Bombus impatiens): The genome assembly of the bumblebee represents a particularly striking example of host genome contamination by symbiont sequences. In this assembly, the 827 predicted contaminant CDS derived from only 30 scaffolds. Using the same approach as described above for the pea aphid, the 4-mer frequencies of contaminant candidates and 'confident-arthropod' sequences were visualized using a PCA, which clearly separated the two sets of CDS (Suppl. Figure S3b). This pattern supported the reliability of the set of contaminants identified in the bumblebee assembly. The 827 contaminant CDS were concentrated in only 30 contaminant scaffolds harboring up to 108 CDS each, a pattern strikingly different from the other assemblies we analyzed (Suppl. Figure S4). All contaminant sequences were of eubacterial origin, and ca. 97% (799 out of 827) consistently showed high sequence similarity with two species of the Orbaceae family present in our reference database, namely Gilliamella apicola and Frischella perrara. These 799 Orbaceae CDS correspond to just 25 contaminant scaffolds, the lengths of which sum up to 2,157,077 bp. Gilliamella apicola is known to be a gut symbiont of bumblebees and its genome size is 2.2 Mb (Kwong and Moran 2013), suggesting at first sight that the whole genome of this species could be included in the bumblebee assembly. However, Martinson et al. (2014) described a new bumblebee gut symbiont sequenced concurrently with the genome of its host. This symbiont, Candidatus Schmidhempelia bombi, is another good candidate, as it was not present in our reference database, has a genome size of at least 2 Mb, and shares significant sequence similarity with Frischella perrara and Gilliamella apicola. All contaminant CDS were blasted against the three assemblies available in NCBI (Candidatus Schmidhempelia bombi str. Bimp; Gilliamella apicola str. WkB30; Frischella perrara str. PEB0191). 769 CDS out of 827 showed 100% nucleotide similarity with sequences of Candidatus Schmidhempelia bombi. The maximum sequence similarities with Frischella perrara and Gilliamella apicola were 95% and 87%, respectively. We conclude that almost the entire genome of Candidatus Schmidhempelia bombi is present in the bumblebee assembly distributed by Ensembl Metazoa, although this symbiont was described and its genome sequence published in 2014 (Martinson et al. 2014). The bumblebee assembly is therefore a textbook example of a complete symbiont genome accidentally sequenced alongside the focal organism and mistakenly incorporated into the primary assembly (Sadd et al. 2015). As of today, while both NCBI and the European Nucleotide Archive have twice updated the bumblebee assembly since March 2018 (exclusion of bacterial sequences; BIMP_2.2, GCA_000188095.4), Ensembl Metazoa is still distributing the first version of the assembly (BIMP_2.0), which includes the endosymbiont sequences, with obvious implications regarding downstream analyses.
The case of Drosophila ananassae: We focused on Drosophila ananassae because several studies demonstrated widespread HGT from Wolbachia into the genome of this species (Hotopp et al. 2007; Klasson et al. 2014). However, only three eubacterial HGT candidates were detected by our pipeline, even though four Wolbachia strains were present in our reference database. Besides, none of these HGT candidates showed any good match with Wolbachia sequences when blasted against the NCBI NR database. This discrepancy could have been explained if these HGT had occurred a long time ago, causing horizontally-transferred sequences to degenerate beyond the point where they would be recognized as CDS, in which case they would not have been screened by our pipeline. However, at least 28 of these Wolbachia horizontally-transferred sequences seem to be expressed at low abundance in D. ananassae (Hotopp et al. 2007), suggesting that they are not too degenerate to be transcribed. Another explanation would be an excessive cleaning of the Drosophila ananassae assembly, causing all foreign (HGT and contaminant) sequences to be systematically removed, regardless of their physical integration into the fly genome. This would explain why none of the previously described HGT was detected. This hypothesis is supported by the fact that no contaminant sequence was detected in any of the 12 Drosophila assemblies (Suppl. Table S1). A last hypothesis would be that horizontally-transferred sequences are still functional and present on the genomic scaffolds, but were somehow filtered out during the annotation step (prediction of CDS; imported from FlyBase for all Drosophila species) and thus were not screened by our pipeline. This hypothesis is supported by the fact that 239 proteins of the Wolbachia reference assembly (ASM367136v1, NCBI) showed good BLASTX matches with the genomic scaffolds of Drosophila ananassae (E-value < 10^{-30}). It should be noted that Ensembl has updated the gene set of D. ananassae since our analyses, adding thousands of CDS (release dana_r1.05 from FlyBase), including some with high similarity to Wolbachia sequences. This example illustrates the potential downstream impacts of the cleaning and annotation procedures implemented in genome sequencing projects, which can result in bona fide genes of interest being discarded, and therefore taken away from the genomic databases and literature. Moreover, the lack of specific documentation on the procedures implemented for each assembly makes the (frequent) successive version changes hard to track for users, although these can have a substantial impact on the distributed genomic data.

CONCLUSIONS

Identifying genes of foreign origin in a genome is a goal of major biological interest, and it is required to properly account for the problem of contamination in published genome assemblies. Applying a consistent, automated, reproducible foreign sequence annotation pipeline, we revealed considerable heterogeneity among arthropod genomes from the Ensembl Metazoa database in terms of prevalence of contaminants. Of the 43 arthropod assemblies we analyzed, 28 were completely devoid of contaminant sequences (including the 12 Drosophila species), 11 included a few, while four of them were heavily affected (>150 contaminant CDS). The highest level of contamination was detected in the bumblebee assembly, which contained 827 contaminant CDS likely originating from a single endosymbiont.
This disparity between entries of a single, widely used database is worth noting, beyond the heterogeneity of annotation procedures among genome assemblies. Some of the Ensembl Metazoa assemblies were "cleaned" to the point that previously documented HGT have been removed, whereas others included hundreds of contaminant genes. Most of the detected foreign sequences proved to be contaminants, while very few HGT were confirmed. Therefore, any analysis of HGT solely based on existing gene annotations would presumably yield results of little, if any, biological relevance. Contamination is in large part unavoidable and a major confounder of all downstream genomic analyses. While researchers should be accountable for the cleaning of their NGS datasets prior to distribution, there is inevitably some heterogeneity among labs and consortia in terms of procedures and scientific goals. Thus, we recommend that reproducible decontamination procedures (e.g., Tennessen et al. 2016; Laetsch and Blaxter 2017; this study) be systematically embedded into the submission process to genomic databases.

ACKNOWLEDGMENTS

These analyses largely benefited from the Montpellier Bioinformatics Biodiversity computing cluster platform. The authors would particularly like to thank Dr. I. Uchiyama for his help in extracting microbial genomes from the MBGD database. We are grateful to Céline Scornavacca, Paul Simion and Yoann Anselmetti for valuable help with bioinformatic scripts. We thank the editors and the anonymous reviewers for their valuable comments on an earlier version of this manuscript. This work was supported by Swiss SNF grant number PP00P3_170627.
2019-12-22T14:02:56.077Z
2019-12-20T00:00:00.000
{ "year": 2019, "sha1": "1c0b10a78c28564588bc176588c6e26cd06f1ab5", "oa_license": "CCBY", "oa_url": "https://academic.oup.com/g3journal/article-pdf/10/2/721/37180398/g3journal0721.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ca38d5cf4dc8b73384703b7cf116645fe0bd6321", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
227252937
pes2o/s2orc
v3-fos-license
Fiber Raman Amplifiers and Fiber Raman Lasers

Stimulated Raman scattering (SRS) is a nonlinear optical effect, observed for the first time in 1962, which lies at the heart of fiber Raman amplifiers and fiber Raman lasers [...].

The main benefit of distributed Raman amplification is that the signal gain may be pushed into the transmission span, preventing the signal power from decaying too far. Thus, lower signal powers can be used, nonlinear penalties are reduced, and higher loss can be tolerated [13,14]. Lumped or discrete FRAs use a fiber medium that is localized before or after transmission to fully or partially compensate for the transmission loss. Discrete FRAs are primarily used to increase the capacity of fiber-optic networks, opening up new wavelength windows which are inaccessible to EDFAs. In FRAs, the signal is amplified at the Stokes wavelength, whose frequency lies below the pump frequency by the Raman shift of the fiber; thus, by choosing the pump wavelength, gain at any desired wavelength can be obtained. Moreover, by combining multiple pumps centered at different wavelengths, a flat gain over an ultra-wide bandwidth is achievable [15]. Dispersion-compensating Raman amplifiers are interesting because they integrate two crucial tasks, dispersion compensation and discrete Raman amplification, into a single component [16].

High-power fiber lasers have achieved output powers of multiple kilowatts from a single fiber. Due to its inherent material advantages, ytterbium has been the primary rare-earth-doped gain medium, so fiber lasers are largely confined to its narrow emission wavelength region. Fiber Raman lasers (FRLs) can convert to wavelengths longer than the starting wavelength, generating several Stokes orders by cascading effects. The most important advantage of Raman lasers is that any laser wavelength can be achieved, from the ultraviolet to the infrared, with a suitable choice of the pump wavelength, provided that the wavelengths lie within the transparency region of the material and that sufficiently high nonlinearity and/or optical intensity is reached. For this reason, FRLs are currently the only wavelength-scalable, high-power fiber laser technology that can span the wavelength spectrum [17].

FRLs are similar to ordinary lasers. A first analogy is that, in FRLs, lasing occurs when the Raman-active gain medium is placed inside a cavity, for example between mirrors reflecting the first Stokes wavelength. A second analogy is that, in FRLs, the threshold power is reached when the Raman amplification over a round trip is large enough to compensate the cavity losses. However, there are also some important differences between FRLs and traditional lasers. The first difference is that the amplifying medium is based on Raman gain rather than on stimulated emission from excited atoms or ions. The second difference is that the wavelength required for pumping Raman lasers does not depend on the electronic structure of the medium, so it can be chosen to minimize absorption. Raman lasers in fiber can be classified into two general categories: in the former, the wavelength is shifted by one Raman-Stokes shift, while in the latter, called the cascaded Raman laser, the wavelength is shifted by multiple Raman-Stokes shifts. A single-wavelength-shift FRL is a fiber resonator at the Stokes wavelength, in which SRS shifts the spectrum of the pump radiation propagating through an optical fiber towards lower-frequency Stokes components.
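Because the Stokes shift is fixed in frequency (wavenumber) rather than in wavelength, converting a pump wavelength into the Stokes wavelength, or into a whole cascade of Stokes orders, is a one-line computation. The sketch below uses silica's roughly 440 cm⁻¹ peak shift for the single-shift case and, for the cascade, assumes a 1064 nm pump together with the 1330 cm⁻¹ phosphosilicate shift discussed further below; the pump wavelength is an assumption chosen for illustration:

```python
def stokes_wavelength_nm(pump_nm, shift_cm1):
    """One Raman-Stokes step: subtract the Raman shift (cm^-1) from the
    pump wavenumber 1e7 / lambda_nm (cm^-1), then convert back to nm."""
    return 1e7 / (1e7 / pump_nm - shift_cm1)

def raman_cascade(pump_nm, shift_cm1, orders):
    """Successive Stokes orders of a cascaded fiber Raman laser."""
    lams, lam = [], pump_nm
    for _ in range(orders):
        lam = stokes_wavelength_nm(lam, shift_cm1)
        lams.append(round(lam, 1))
    return lams

print(round(stokes_wavelength_nm(1455, 440)))  # ~1555 nm: a 1455 nm pump in
                                               # silica amplifies near 1550 nm
print(raman_cascade(1064, 1330, 2))            # [1239.4, 1484.0] nm: the two
                                               # phosphosilicate cascade orders
```

The second print reproduces the 1239 nm and 1484 nm cavity wavelengths of the two-cascade phosphosilicate FRL mentioned below.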
Raman lasing is obtainable in conventional single-mode telecom fibers, as well as other passive fibers, by trapping the Stokes components with reflectors and by pumping the laser with a high-power rare-earth-doped fiber laser. A multiple-wavelength-shift FRL takes advantage of SRS cascading. The pump light gives rise to the "first-order" laser light in a single frequency-shifting step, which remains trapped in the laser resonator. Afterwards, the "first-order" laser light can be pushed to very high power levels, becoming itself the pump for the generation of the "second-order" laser light, which is shifted by the same vibrational frequency as the first order. Using this technique, conversion of the pump light to an "arbitrary" desired output wavelength through several discrete steps can be performed using a single laser resonator.

Tunability is an important property for lasers. In order to obtain a tunable FRL, cavity mirrors can be integrated within the fiber as tunable fiber Bragg gratings [18]. Germanosilicate fibers are extensively used in FRLs because their Raman gain is about eight times higher than that of silica fiber. In a GeO₂-doped fiber, taking advantage of a linear cavity configuration and a purely axial compression of the fiber Bragg gratings (FBGs), a high-power and widely tunable all-fiber Raman laser can be obtained with continuous tuning over 60 nm [19]. Low-loss phosphosilicate (P₂O₅-SiO₂) fiber has a peak Raman shift of 1330 cm⁻¹; therefore, to make an FRL at 1484 nm, only two cascaded cavities (at 1239 nm and 1484 nm, respectively) are required, thereby greatly increasing the FRL efficiency. In addition, in phosphosilicate fiber, an FRL with a tuning range of about 50 nm can be obtained [20].
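The cascaded Stokes arithmetic above is easy to check numerically. The following minimal Python sketch converts a pump wavelength into successive Stokes orders using the 1330 cm⁻¹ phosphosilicate shift quoted above; the 1064 nm Yb-laser pump wavelength is an assumption, chosen because it reproduces the two cascaded cavities at 1239 nm and 1484 nm mentioned in the text.

```python
# Cascaded Raman-Stokes wavelength calculator (illustrative sketch).
# Assumption: a 1064 nm Yb fiber-laser pump; 1330 cm^-1 is the
# phosphosilicate peak Raman shift quoted in the text.

def stokes_cascade(pump_nm: float, shift_cm1: float, orders: int) -> list[float]:
    """Return the wavelengths (nm) of the first `orders` Stokes lines."""
    wavenumber = 1e7 / pump_nm            # convert nm -> cm^-1
    lines = []
    for _ in range(orders):
        wavenumber -= shift_cm1           # each Stokes order shifts down by the Raman shift
        lines.append(1e7 / wavenumber)    # convert back to nm
    return lines

if __name__ == "__main__":
    for order, wl in enumerate(stokes_cascade(1064.0, 1330.0, 2), start=1):
        print(f"Stokes order {order}: {wl:.0f} nm")
    # Prints ~1239 nm and ~1484 nm, matching the two cascaded cavities above.
```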
For applications requiring high-power lasers at wavelengths longer than 2 µm, FRLs are an attractive option [21]. Chalcogenide glass fibers have Raman gain coefficients approximately two orders of magnitude greater than those of silica. For wavelengths longer than 1.8 µm, the development of Raman fiber lasers based on chalcogenide glasses has now become technically achievable [22].

Single-frequency laser sources are difficult to implement because they suffer from poor robustness and quality (linewidth and stability) and are expensive. A distributed feedback (DFB) fiber laser based on Raman gain has a number of potential advantages: first, the possibility of generating narrow-linewidth, low-noise oscillation in wavelength bands outside the band of rare-earth-doped materials; second, Raman-based fiber laser systems do not suffer from the issues associated with high-concentration rare-earth-doped fibers, which limit their efficiency due to thermal effects [23,24].

By exploiting the multiple scattering of photons in a disordered gain medium, a random laser can be obtained, providing a coherent light source without a traditional cavity. For various scientific and medical applications, ultrafast Raman fiber lasers are an interesting option. An efficient way to obtain a high-power ultrafast Raman fiber laser is pulse pumping, but it requires real-time synchronization between the pump pulses and the laser cavity. In order to overcome this limitation, random fiber lasers with distributed Raman gain and Rayleigh feedback in standard telecommunication optical fibers have been realized. Their advantages are a simple and flexible design, quasi-CW operation, narrow-spectrum generation, high beam quality, pump energy conversion efficiencies comparable to those of conventional cavity lasers [25], and wavelength tuning [26].

Optical communication systems are often limited by fiber dispersion, which broadens the pulse, and by fiber losses. In principle, the use of fundamental solitons as information bits can solve the problem of dispersion [27]. However, due to fiber losses, the soliton width begins to increase because of the decrease in peak power during propagation inside the fiber; therefore, amplification is required in order to recover the original width and peak power. Taking advantage of SRS, solitons can be amplified by injecting CW pump radiation periodically [28]. When the wavelength of the pump pulse is close to or inside the anomalous dispersion region of an optical fiber, the Raman pulse experiences soliton effects, i.e., under suitable conditions, almost all of the pump-pulse energy can be transferred to a Raman pulse that propagates undistorted as a fundamental soliton. Raman solitons were also generated using a conventional fiber with the zero-dispersion wavelength near 1.3 µm, which led to 100-fs Raman pulses near 1.4 µm [29]. An interesting application of the soliton effects led to the development of the Raman soliton laser. Such lasers provide their output in the form of solitons with widths of about 100 fs, but at a wavelength corresponding to the first-order Stokes wavelength, which can be tuned over a considerable range [30].

In fiber-optic communications, the growing demand for transmission capacity has filled the entire spectral band of the erbium-doped fiber amplifiers (EDFAs). This dramatic increase in bandwidth rules out the use of EDFAs, leaving fiber Raman amplifiers (FRAs) as the key devices for future amplification requirements. Today, almost every long-haul or ultralong-haul fiber-optic transmission system uses FRAs. In the field of high-power fiber lasers, commercially available fiber-based Raman lasers can deliver output powers in the range of a few tens of watts in continuous-wave operation with high efficiency and broad gain bandwidth, covering almost the entire near-infrared region. Due to their tunability, compactness, and capability for multi-wavelength operation, FRLs have great commercial potential in a variety of applications. However, for both FRAs and FRLs, a number of interesting basic physical challenges remain open, and a number of new applications can be envisaged for the future implementation of high-capacity optical communication systems and for the future realization of high-power fiber lasers, respectively.

Conflicts of Interest: The author declares no conflict of interest.
On the Fraction of Quasars with Outflows

Department of Physics & Astronomy, The University of Wyoming (Dept. 3905), 1000 East University Ave., Laramie, WY 82071

Outflows from active galactic nuclei (AGNs) seem to be common and are thought to be important from a variety of perspectives: as an agent of chemical enhancement of the interstellar and intergalactic media, as an agent of angular momentum removal from the accreting central engine, and as an agent limiting star formation in starbursting systems by blowing out gas and dust from the host galaxy. To understand these processes, we must determine what fraction of AGNs feature outflows and understand what forms they take. We examine recent surveys of quasar absorption lines, reviewing the best means to determine whether systems are intrinsic and result from outflowing material, and the limitations of the approaches taken to date. The surveys reveal that, while the fraction of specific forms of outflows depends on AGN properties, the overall fraction displaying outflows is fairly constant, approximately 60%, over many orders of magnitude in luminosity. We emphasize some issues concerning classification of outflows driven by data type rather than necessarily the physical nature of outflows, and illustrate how understanding outflows probably requires a more comprehensive approach than has usually been taken in the past.

INTRODUCTION

The role of outflows from quasars and active galactic nuclei (AGN) has recently become an important feature in the overall framework of how galaxies and star formation processes evolve over cosmic time. Mergers and other interactions triggering AGN seem to provide feedback affecting the larger-scale environment. Recent efforts to include the effects of this so-called AGN feedback focus on two modes: a "radio" mode, whereby a relativistic jet heats the surrounding interstellar and intercluster media (e.g., Best 2007), and a "quasar" mode, whereby a lower-velocity but higher-mass outflow also helps to clear out post-merger shrouding gas and quenches star formation (e.g., Di Matteo, Springel, & Hernquist 2005). We focus on this second mode in this paper.

For this mode, a number of questions require addressing. How common are outflows? Do all AGN have outflows? What drives outflows? Is there a single all-governing structure of AGN? Answering these questions will help us to understand the role of AGN outflows with respect to the issue of feedback, and other important issues like chemical enrichment and accretion. In the ensuing sections we aim to achieve several goals: (1) to review the ways in which outflows are detected in AGN over all luminosity scales; (2) to comment on the merits of various catalogs of outflows; and (3) to arrive at the true (possibly property-dependent) observed frequency of outflows.

In its most basic interpretation, the observed frequency of outflows can be equated with the fraction of solid angle (from the viewpoint of the central black hole) subtended by outflowing gas. This interpretation assumes that all AGN feature outflows and that not all sight-lines to the emitting regions are occulted by the outflow. Alternatively (and equally simplistically), the frequency can be interpreted as the fraction of the duty cycle over which AGN feature outflows (assuming the outflow subtends 4π steradians). The actual conversion of the fraction of AGN featuring spectroscopic evidence of outflow to the solid angle subtended by such outflows has been treated by Crenshaw et al. (1999) and Crenshaw, Kraemer, & George (2003).
This computation involves further knowledge of the line-of-sight covering factor (that is, the fraction of lines of sight reaching the observer that are occulted by the outflow), as well as an understanding of the range of solid angle sampled by the AGN used (e.g., Type 1 versus Type 2 AGN). The true situation is likely in between these two extremes, and may also depend on properties we cannot currently constrain, such as the time since the AGN was triggered.

Additionally, we strive here to build a case that more effort should be made to consider outflows of all types together. Often, data limitations of one sort or another have led to the study of limited parts of parameter space (e.g., outflow velocity or velocity dispersion), creating artificial or at least biased divisions. There appears to be a continuous range in the properties of outflows, and these should only be regarded as fundamentally different when there is clear evidence to reach such a conclusion. Below we discuss the identification of outflows (§2) and the data-driven subcategories (§3). We show an illustrative example of how combining the different outflow subclasses may lead to a more unified physical understanding of outflows (§4). Finally, we bring together the different survey methodologies to determine an overall fraction of AGN displaying the signatures of outflows (§5) and summarize the case for more global studies of the outflow phenomenon. We adopt a cosmology with Ω_M = 0.3, Ω_Λ = 0.7, and H_0 = 70 km s⁻¹ Mpc⁻¹.

OUTFLOWS

Outflows from AGN are primarily detected in ultraviolet and X-ray absorption against the compact continuum source (i.e., the inner portions of the accretion disk) and/or the more extended broad emission-line region. In a few cases, outflows have been demonstrably observed in emission, both from the broad-line region in narrow-line Seyfert 1 galaxies (e.g., Leighly & Moore 2004; Leighly 2004; Yuan et al. 2007) and from the narrow-line region of Seyfert 1 galaxies (e.g., Das et al. 2005, 2006). [Arguably, the fact that broad emission lines in most AGN have only a single peak is also a signature of outflowing gas (e.g., Murray et al. 1995).]

For emission-line gas, reverberation mapping provides a direct means of establishing the location of the gas. For absorption-line gas, placing a distance between the gas and the ionizing continuum relies on using absorption-line diagnostics to assess the photoionization parameter (U), and having other information that constrains the density (n) of the gas. The distance, r, is related to these quantities via

r = [ (1 / (4π U n c)) ∫_{ν_LL}^{∞} (L_ν / hν) dν ]^{1/2},

where ν_LL = 3.3 × 10¹⁵ Hz is the frequency of the Lyman limit. Constraints on the density can come from time variability (if ionization/recombination dominates the variability timescale) or from the presence of excited-state lines.
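As a quick numerical illustration of the distance relation above, the sketch below evaluates r for round-number inputs; the ionizing photon rate Q (the integral of L_ν/hν above the Lyman limit), the ionization parameter U and the density n used here are illustrative assumptions only, not measurements of any particular object.

```python
# Hedged numerical sketch of r = sqrt(Q / (4*pi*U*n*c)), where Q is the
# rate of hydrogen-ionizing photons (the integral of L_nu / h*nu above
# the Lyman limit). All input values below are illustrative assumptions.
import math

C_CM_S = 2.998e10          # speed of light [cm/s]
PC_CM = 3.086e18           # one parsec in cm

def absorber_distance_cm(q_phot_s: float, u_ion: float, n_cm3: float) -> float:
    """Distance of photoionized gas from the continuum source, in cm."""
    return math.sqrt(q_phot_s / (4.0 * math.pi * u_ion * n_cm3 * C_CM_S))

if __name__ == "__main__":
    # Assumed round numbers: Q = 1e56 photons/s, U = 0.1, n = 1e4 cm^-3.
    r = absorber_distance_cm(q_phot_s=1e56, u_ion=0.1, n_cm3=1e4)
    print(f"r = {r:.2e} cm = {r / PC_CM:.0f} pc")  # ~5e20 cm, ~170 pc
```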
Density information is not typically available for intrinsic absorbers, so secondary indicators must be employed to separate intrinsic absorbers from absorption by interloping structures (e.g., IGM filaments, galaxy halos and disks). In order of decreasing utility and importance, the secondary indicators of an intrinsic origin for an absorption-line system are: (1) velocity width, (2) partial coverage, (3) time variability, (4) high photoionization parameter, and (5) high metallicity (e.g., Barlow & Sargent 1997). Not all intrinsic absorbers exhibit all of these properties, but the probability of an intrinsic origin is higher if an absorber exhibits more than one property. Likewise, with the exception of the first two indicators (and the first only in its most extreme form, see §3.1), each of these indicators has been observed in intervening material. Thus, by itself, no one indicator should be taken to imply an intrinsic origin.

Historically, the first criterion has led to three divisions in the classification of intrinsic absorbers. Outflows with the largest velocity dispersions are termed "broad absorption lines" (BALs, FWHM ≥ 2000 km s⁻¹; e.g., Weymann et al. 1991). On the other extreme, intrinsic absorbers where the velocity dispersion is sufficiently small as to cleanly separate the C IV doublet are called "narrow absorption lines" (NALs, FWHM ≤ 500 km s⁻¹; e.g., Hamann & Ferland 1999). Since there is a whole continuum of velocity widths, this has led to an in-between class known as "mini-BALs" (e.g., Churchill et al. 1999). Below we examine each of these classes in terms of their observed frequency, and note various issues in determining this number, including dependencies on quasar physical properties.

Broad Absorption Lines (BALs)

Broad absorption lines in the spectra of quasars are the most easily identifiable forms of outflows. The large velocity width is very readily associated with accelerated, outflowing gas. As such, these garnered more attention historically than their smaller velocity-width kin (e.g., Weymann et al. 1985; Turnshek et al. 1988; Turnshek 1988; Weymann et al. 1991; Voit et al. 1993). Weymann et al. (1991) established criteria, summarized in a number called the BALnicity index (BI), for determining if an absorption line constituted a BAL. The BI is a modified form of an equivalent width whereby one counts absorption that falls below 90% of the true quasar continuum and stays contiguously below this level for more than 2000 km s⁻¹. Moreover, no absorption within 3000 km s⁻¹ of the quasar redshift is counted, in order to remove possible contamination by absorption-line gas not physically associated with the quasar central engine (e.g., interstellar gas from the quasar host galaxy, or intergalactic material from the host cluster). [Note: with a minimum velocity of 3000 km s⁻¹ and a minimum contiguous width of 2000 km s⁻¹, this means that no absorption falling entirely within 5000 km s⁻¹ of the quasar redshift is counted.] This index was established using low-dispersion data of high-redshift (1.5 ≤ z ≤ 3.0) objects from the Large Bright Quasar Survey, or LBQS (Foltz et al. 1987a, 1989; Hewett et al. 1991, 1995), and was designed to yield a pure sample of objects with bona fide outflows. We note here that the utility of BI was driven purely by the data quality (signal-to-noise ratio and resolution) of the LBQS spectra, in conjunction with the desire to remove false positives (at the expense of losing some true BAL quasars).
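To make the BI recipe above concrete, here is an illustrative Python reading of it applied to a continuum-normalized spectrum on a uniform velocity grid: absorption contributes only once it has stayed below 90% of the continuum for a contiguous 2000 km s⁻¹, and nothing within 3000 km s⁻¹ of the systemic redshift is counted. The 25000 km s⁻¹ upper integration limit and the array interface are conventional assumptions; this sketch is not a drop-in replacement for any published pipeline.

```python
# Illustrative BALnicity-index calculator (after Weymann et al. 1991).
# `velocity` is the blueshift in km/s (positive = outflow) on a uniform grid;
# `norm_flux` is the continuum-normalized flux. Sketch for illustration only.
import numpy as np

def balnicity_index(velocity: np.ndarray, norm_flux: np.ndarray) -> float:
    bi = 0.0
    run = 0.0                       # contiguous width (km/s) spent below 0.9
    dv = velocity[1] - velocity[0]  # uniform grid spacing
    for v, f in zip(velocity, norm_flux):
        if not (3000.0 <= v <= 25000.0):
            continue                # skip the associated zone and beyond the limit
        if f < 0.9:
            run += dv
            if run > 2000.0:        # count only after 2000 km/s of contiguous trough
                bi += (1.0 - f / 0.9) * dv
        else:
            run = 0.0               # trough broken; reset the contiguity counter
    return bi

# Example: a synthetic trough from 5000 to 12000 km/s at 50% depth.
v = np.arange(0.0, 30000.0, 10.0)
flux = np.where((v > 5000.0) & (v < 12000.0), 0.5, 1.0)
print(f"BI = {balnicity_index(v, flux):.0f} km/s")  # positive -> a BAL by this criterion
```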
While the use of BI to define samples of BAL quasars has utility, especially in comparing results between data sets of varying quality, it excludes some fraction of real high-velocity-dispersion outflows that qualitatively appear to be BAL quasars but just fail to have positive BI. An improvement on the BI, termed the intrinsic absorption index (AI), was developed by Hall et al. (2002) to alleviate the inadequacies of BI in selecting objects where high-velocity outflows were clearly observed but which were not included as BAL quasars by the BI criteria (e.g., UM 660, PG 2302+029). The AI was designed to be more flexible and inclusive and has been very useful in its application to newer and better-quality datasets like the Sloan Digital Sky Survey (SDSS). This flexibility, while good at including objects not previously selected by BI, has increased the contamination of samples of intrinsic absorption while still not including other forms of intrinsic absorption (e.g., Ganguly et al. 2007).

The incidence of BALs has primarily been determined using optical spectra where, historically, large samples of high-redshift quasars (to get rest-frame UV coverage) could efficiently be selected (e.g., with color selection). In such surveys (Hewett & Foltz 2003; Reichard et al. 2003; Trump et al. 2006; Ganguly et al. 2007), roughly 10-25% of objects are observed to host BALs. An issue with optical/UV surveys, however, is the potential bias in the selection of quasars against those hosting BALs, due to the fact that much of the continuum is absorbed (e.g., Goodrich & Miller 1995; Goodrich 1997; Krolik & Voit 1998) and intrinsically reddened (e.g., Reichard et al. 2003). Using the LBQS, where the observed frequency of BAL quasars in the redshift range 1.5 ≤ z ≤ 3 is 15% using a BI criterion, Hewett & Foltz (2003) estimated a true BAL frequency of 22% from comparisons of the k-corrections of BAL and non-BAL quasars. The recent catalog of BAL quasars using an AI criterion from Trump et al. (2006) found a BAL frequency of 26% (in the redshift range 1.7 ≤ z ≤ 4.38). Both of these estimates are based on the C IV λλ1548, 1550 doublet, which is the most commonly used species in selecting intrinsic absorption owing to the relatively high abundance of carbon, the high ionization fraction of C³⁺ in moderately ionized gas, and the resonant absorption of the doublet.

To combat possible selection biases in the optical, one can examine quasar catalogs selected in other bands. Becker et al. (2000) examined radio-selected quasars from the FIRST Bright Quasar Survey and found a BAL quasar frequency of about 18% (though it is only 14% if only BI > 0 objects are counted, comparable to other estimates based on optical selection). This again predominantly used the C IV doublet and objects at z > 1.7. We note that a subset of radio-selected BAL quasars can be identified as polar outflows (e.g., Zhou et al. 2006; Brotherton, de Breuck, & Schaefer 2006; Ghosh & Punsly 2007). At least one of these objects, FIRST J155633.8+351758, appears to be an optically reddened and beamed radio-quiet quasar (Berrington et al. 2007). The presence of BAL outflows in such objects, as well as in edge-on FR II BAL quasars (e.g., Gregg et al. 2006), indicates that high-velocity outflows are present in a variety of geometries. There is as yet no observational signature in the absorption spectra that is correlated with orientation indicators, so any geometrically restrictive model, such as those identifying BAL outflows solely with equatorial winds, is either wrong or incomplete. Any complete picture of outflows must reflect a range of geometries. It has yet to be established observationally how often polar outflows occur compared to equatorial ones, or whether the location or dynamics differ. In addition to radio selection, one can examine the frequency of BAL quasars from infrared selection.
Recently, Dai et al. (2007) compared the catalog of BAL quasars (Trump et al. 2006) from the Third Data Release (DR3) of SDSS and the parent sample of DR3 quasars (Schneider et al. 2005) with the Two Micron All-Sky Survey (2MASS; Skrutskie et al. 2006) Point-Source Catalog (PSC). With some variation with redshift, they reported an overall true BAL quasar fraction of 43 ± 2%, markedly higher than estimates based on UV/optical data alone. Presumably, this difference accounts for the effects of dust and absorption that may bias UV/optical selection techniques against finding BAL quasars. We point out that this estimate relies heavily on the automated techniques employed in finding BAL quasars in a large dataset such as SDSS. From a critical look at 5088 1.7 < z < 2 quasars from SDSS DR2, Ganguly et al. (2007) noted several instances of false (and missed) classifications in the Trump et al. (2006) catalog. A comparison of the Ganguly et al. (2007) sample with the 2MASS PSC reveals a BAL fraction of 66/287 (23%), completely consistent with the analysis of Hewett & Foltz (2003). Blindly using the Trump et al. (2006) catalog yields a BAL fraction of 96/287 (33%), consistent with the z < 2 points from Dai et al. (2007, see their Figure 4). At face value, this implies that nearly 30% of the Trump et al. (2006)-2MASS cross-matched sample consists of false positives. We return to the issue of false negatives below.

Narrow Absorption Lines (NALs) and mini-BALs

Intrinsic NALs and mini-BALs have, within the last decade, come to light as a very powerful and complementary means of studying outflows. Unlike their very broad kin, these absorbers are generally not blended and, therefore, offer a means to determine ionization levels and metallicities using absorption-line diagnostics. Thus, NALs and mini-BALs are more useful as probes of the physical conditions of outflows. The drawback, however, is that truly intrinsic NALs and mini-BALs are more difficult to identify, since interloping structures such as the cosmic web, galaxy clusters, and galactic disks and halos have comparable velocity spreads (∼800 km s⁻¹). Historically, progress was made by statistically identifying an excess of absorbers over what is expected from randomly distributed intervening structures (e.g., Weymann et al. 1979). With improved technologies (such as high-resolution spectroscopy with large telescopes), we can now take advantage of the other secondary indicators to separate intrinsic from intervening absorption. In the following subsections, we discuss the frequency of two subclasses based on both historical and more recent studies. We distinguish between absorbers that appear near the quasar redshift (associated absorbers) and those that appear at large velocity separations.

3.2.1. Associated (z_abs ∼ z_em) Absorbers (AALs)

The term "associated" refers to narrow velocity-dispersion absorption-line systems that lie near the quasar redshift. It has been shown that the frequency of such systems is much larger than that of systems at large velocity separations (Weymann et al. 1979; Foltz et al. 1987b; Anderson et al. 1987; Aldcroft et al. 1994; Richards et al. 1999; Richards 2001). Typically, associated absorbers are defined as those lying within 5000 km s⁻¹ of the quasar redshift (Foltz et al. 1986). As such, they were historically very complementary to BAL quasars selected using BI. Updating BAL classification to reflect the better data quality usually available today does allow for some confusion among classes, at least in some cases, and this should be kept in mind.
The issue of what types of quasars host AALs was the subject of much scrutiny, with some studies claiming to see an excess of AALs (e.g., Foltz et al. 1987b), while other studies claimed no excess (e.g., Sargent, Steidel, & Boksenberg 1988). It was surmised that strong AALs (i.e., those with a large C IV equivalent width) were preferentially found in optically faint, steep-radio-spectrum quasars (Møller & Jakobsen 1987; Foltz et al. 1988). However, more recent studies have found that AALs appear (with varying frequency) in all AGN subclasses, from Seyfert galaxies (e.g., Crenshaw et al. 1999; Kriss 2006) to higher-luminosity quasars (e.g., Ganguly et al. 2007; Misawa et al. 2007), and from steep to flat radio-spectrum sources (Ganguly et al. 2001; Vestergaard 2003).

An important issue in the consideration of AALs as they relate to outflows is where the absorbing gas originates. We note here a few arguments for a direct association with outflows from the central engine. While detailed studies of individual objects have shown absorption-line components that must reside in the host galaxy far from the central engine (e.g., Hamann et al. 2001; Scott et al. 2004; Ganguly et al. 2006), on the whole there have been no documented cases of AALs that are truly redshifted with respect to the actual systemic velocity. If AALs were to originate in the host galaxy, one would expect some fraction of the absorbers to arise from infalling material. In fact, the velocity distribution of C IV AALs is sharply peaked at the C IV emission redshift (Ganguly et al. 2001), implying a close dynamical connection between AALs and the broad emission-line region. In addition, blind studies of AALs using secondary indicators find that ≥20% are time-variable (Wise et al. 2004) and that ∼33% show partial coverage (Misawa et al. 2007).

From an analysis of 59 z < 1 quasars, Ganguly et al. (2001) showed that the overall frequency of AALs was 25 ± 6%, with some variation with broad-band spectral properties. Similar frequencies have been established at higher redshift by Vestergaard (2003, 27 ± 5%) and Misawa et al. (2007, 23%), both of which made attempts to filter out contamination by intervening absorbers. Oddly, these fractions are lower than in the recent study of Ganguly et al. (2007), who find an AAL frequency of 1898/5088 (37%), although the 5000 km s⁻¹ velocity cutoff for traditional AALs was not strictly adhered to in that survey. We note that 1478/1898 (78%) of the AALs in that study were missed by the AI selection used by Trump et al. (2006). These certainly constitute false negatives from the standpoint of finding intrinsic absorption, though not from the standpoint of finding only BAL quasars. While Vestergaard (2003) did note that quasars with AALs are redder on average, a comparison of the Ganguly et al. (2007) sample with the 2MASS PSC reveals that the frequency of AALs is similar to that of the parent sample (107/287, 37%). Thus, the selection of AAL quasars is not affected by optical biases (e.g., reddening or large optical absorption) like that of BAL quasars.

High-Velocity NALs

The first observational evidence for intrinsic narrow velocity-dispersion absorption appearing at high ejection velocity (many tens of thousands of kilometers per second) came nearly a decade ago and includes PG 2302+029 (Jannuzi et al. 1996), Q 2343+125, and PG 0935+417. Models of quasar winds are generally able to explain outflows with Δv/v ∼ 1, but are challenged by these Δv/v ≪ 1 systems. One idea is that the sight-line cuts across an outflow that would produce a BAL under a different orientation (e.g., Elvis 2000; Ganguly et al. 2001), but this has yet to be demonstrated theoretically.
These systems are also interesting because they only absorb photons from the compact continuum. Thus, partial coverage indicators provide severe constraints on the geometry of the flow.

[Figure 1 caption fragment: ... intrinsic NALs from Hamann et al. (1997, black stars); and the mini-BAL in PG 2302+029 from Jannuzi et al. (1996, yellow star).]

In terms of demographics, the first assessment of the frequency of these systems came from Richards et al. (1999) and Richards (2001). From a statistical analysis examining the variation in the velocity distribution of C IV NALs with quasar radio-loudness and spectral index, Richards et al. (1999) estimated that as many as 36% of C IV NALs may arise from outflowing gas. Recently, Misawa et al. (2007) reported that only 10-17% of C IV NALs in the velocity range 5000-70000 km s⁻¹ show evidence of partial coverage. (Thus it is possible that Richards et al. (1999) overestimated the fraction of high-velocity NALs, or that 50-70% of intrinsic C IV NALs do not show partial coverage.) This is not a statement, however, on the fraction of quasars that host such outflows. Vestergaard (2003) reported that high-velocity intrinsic NALs appear in 18 ± 4% of 1.5 < z < 3.6 quasars in the velocity range 5000-21000 km s⁻¹, with about a factor of two variation between radio core-dominated (17 ± 10%) and radio lobe-dominated (33 ± 15%) morphologies. In a recent survey of 1.8 < z < 3.5 SDSS sources, Rodriguez Hidalgo et al. (2007) find that about 12% of quasars have high-velocity NALs in the velocity range 5000-50000 km s⁻¹, and ∼2.3% in the velocity range 25000-50000 km s⁻¹. This latter velocity range is often missed by surveys due purely to observational cutoffs. Over this velocity range, absorption by C IV can become confused with Si IV absorption.

AN EXAMPLE ILLUSTRATING THE MERITS OF COMPREHENSIVE OUTFLOW STUDIES

When parameter space is truncated, either intentionally (e.g., through subclass segregation or the desire to avoid false positives/negatives) or unintentionally (e.g., by data limitations), real correlations that could lead to physical understanding may be missed. The wide variety of observational techniques and the improvements in sample size now make it possible to study the outflow phenomenon in a more complete manner than ever before.

[Table 1 fragment; the surviving final row reads: Dai et al. (2007), z = 1.7-4.0, velocities 0 to +25 (presumably 10³ km s⁻¹), widths 1000 km s⁻¹, fraction 26% → 40%. Note. - The percentages in the last column indicate the fraction of AGN (in the redshift and luminosity ranges listed in columns two and three, respectively) that host intrinsic absorption (with velocities and velocity widths listed in columns four and five, respectively). An arrow indicates a percentage that has been corrected for possible selection biases. The luminosities in column three were computed assuming an h = 0.7, Ω_m = 0.3, Ω_Λ = 0.7, q = 0.5 cosmology.]

As discussed above, outflow velocity (both in terms of dispersion and range) has often been limited in many studies. By putting together different surveys, we can study the velocity properties of outflows as a function of other potentially important parameters. One recent example where this has been shown to be fruitful is in plotting the maximum velocity of absorption, v_max, against luminosity. This was originally done by Laor & Brandt (2002) for PG quasars, who, finding an upper envelope formed by the soft-X-ray-weak absorbers, argued for radiation-driven winds as quasar outflows.
More recently, others have added further samples of outflows to this plot (e.g., Gallagher et al. 2006; Misawa et al. 2007; Ganguly et al. 2007), showing that this envelope can be extrapolated to higher luminosities and similarly constrains luminous BAL quasars. We have further added to this plot in Figure 1 (with the envelope fit from Ganguly et al. 2007), including all types of intrinsic outflows and the most extreme objects in terms of both terminal velocity and luminosity. Polar BAL quasars are included as well and fall among the general BAL quasar population. Intrinsic NAL systems are well represented throughout the full luminosity range. Some data biases are present; for instance, the wavelength range of SDSS spectra limited the observed v_max to less than 30000 km s⁻¹ for most quasars studied by Ganguly et al. (2007), so the gap under the envelope for luminosities above 10⁴⁶ erg s⁻¹ is potentially not real. Similarly, searching for intrinsic absorption at velocities above 50000 km s⁻¹ becomes very difficult, since the C IV lines become blueshifted into the N V/Lyman-α region, where identification is especially challenging. Of course, the empirical fit does not take into account relativistic effects, so it is inappropriate to extrapolate it to arbitrarily high luminosities. Taken at face value, the fit implies that a quasar with luminosity λL_λ(3000 Å) = 10^47.3 erg s⁻¹ would be capable of driving an outflow at the speed of light, but this is clearly unphysical. More data, and insights establishing better criteria as to which outflows sample the envelope, are needed to improve our understanding of the dependence of the terminal velocity on quasar physical properties.

The figure also illustrates another (though more subtle) issue. While AALs are typically defined as those NALs appearing within 5000 km s⁻¹, some studies at lower redshift have opted for smaller velocity differences, claiming that 5000 km s⁻¹ is an arbitrary cut-off. The figure shows that there is a physical reason for this choice. As the figure apparently shows, no outflow appears to be driven to a velocity larger than is allowed by radiation pressure. At lower redshifts, most objects studied are also of lower luminosity (e.g., Seyfert galaxies). The figure clearly shows that objects with λL_λ(3000 Å) ≲ 3 × 10⁴⁴ erg s⁻¹ are not capable of radiatively driving outflows with velocities larger than 5000 km s⁻¹. An insight such as this is not only interesting for understanding AGN outflow physics, but is also of use to other fields that make use of intervening absorption-line systems (as it presents an additional means of separating intrinsic from intervening absorbers).

Table 1 summarizes the incidence of outflows in AGN from several recent surveys (with the redshift and luminosity ranges of the AGN listed in columns two and three, respectively, and the ranges in outflow velocity and velocity width listed in columns four and five, respectively). Inspection of the table shows that the outflow fraction depends both on the characteristics of the parent sample of AGN used and on the forms of intrinsic absorption included. If one only counts BALs observed in higher-luminosity AGN [λL_λ(3000 Å) ≳ 10⁴⁵ erg s⁻¹], then the outflow fraction is 23% (Hewett & Foltz 2003; Ganguly et al. 2007). However, this is by no means a complete assessment of outflows.

WHAT IS THE TRUE FRACTION OF OUTFLOWS?

In order to compute a more complete outflow fraction, one must deal with three issues: (1) cross-talk among classifications, (2) dependence of frequencies on quasar properties, and (3) the mode of the outflow. Here, we ignore the third issue and focus on the first two.
Crenshaw et al. (1999) surveyed outflows in lower-luminosity [λL_λ(3000 Å) ≲ 10⁴⁵ erg s⁻¹] AGN, and we use their result, 59%, as one benchmark (see also Kriss 2006, who finds a similar percentage based on O VI absorption). For higher-luminosity AGN, we start with the Hewett & Foltz (2003) percentage of 23%, as it is the purest and most well-defined sample of outflows. To this, we must add two things: the contribution from AALs, and the contribution from high-velocity NALs/mini-BALs. There is general agreement between Ganguly et al. (2001) and Vestergaard (2003) that the AAL fraction is 23-27%. As noted by Ganguly et al. (2001), quasars that host broad absorption lines also tend to host associated absorption. However, the sample of quasars employed in the Vestergaard (2003) estimates explicitly does not include BAL quasars. Thus, while there is likely some cross-talk between the class of BAL quasars and AAL quasars, the above range should minimize this effect. Thus, the outflow fraction counting BALs and AALs (integrated over "all" high-luminosity AGN) is 46-50%. For high-velocity NALs/mini-BALs, there is a more sizeable error margin (12-30%) owing to cross-talk and dependence on quasar properties. Adding this uncertain number gives our final tally: 57-80%. This fraction is surprisingly comparable to that of lower-luminosity AGN.

An alternative approach is to begin with the complete sample of outflows from Ganguly et al. (2007). Correcting their overall outflow fraction (2515/5088, 49%) for quasar selection biases (following the strategy of Dai et al. 2007), we find an outflow fraction of 60 ± 5% (66/287 + 107/287; see §3.1 and §3.2.1). This falls in the above range, and the only missing form of outflow from that sample is NALs/mini-BALs at velocities larger than ∼30000 km s⁻¹. We further note that this approach explicitly eliminates cross-talk between quasars hosting BALs and quasars hosting AALs.

SUMMARY

We conclude that, largely independent of AGN luminosity, 60% is a good reference number for the percentage of AGN with intrinsic outflows. This number may increase slightly with more thorough searches of parameter space (e.g., very high velocity outflows). Until evidence suggests otherwise, we recommend that quasar outflows be studied as a single phenomenon whenever possible, and that restrictions based on absorber subclass or data limitations be clearly stated and considered in the interpretation of results. Catalogs should clearly state their contamination issues and their limitations for particular applications as appropriate. A more comprehensive understanding of the outflow phenomenon awaits us.

We wish to thank the anonymous referee for comments that improved the quality of the paper. This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. We acknowledge support from the US National Science Foundation through grant AST 05-07781.
Democracy, political risks and stock market performance

This study examines the impacts of democracy and political risk on stock markets. Using annualized panel data for 49 emerging markets for 2000-2012, we find evidence that democracy and political risk do have an impact on stock market returns and that the relationship between democracy and political risk is parabolic, i.e., there is a threshold level of democracy after which political risk begins to decline. Our results also suggest that decreases in political risk lead to higher returns.

Introduction

There are many real-life events which suggest that stock market performance and political stability might be strongly related. However, there exists hardly any empirical research testing this relationship. The beginning of 2011 witnessed the Arab Spring, which consisted of large pro-democracy demonstrations against dictatorships in the MENA region that even escalated to civil war in Libya. The riots began in Tunisia and spread to Egypt, Libya and several other countries, leading to political instability in the entire area. Because the unrest seemed to be transmitted from one country to another, investors became more and more worried; for example, on January 27, 2011, Egypt's benchmark index, the EGX 30, dived 10%, and even the world's major markets in the USA, Europe and Asia tumbled because the protests were expected to continue moving to other oil-producing countries in the area. The unrest in Egypt lasted for all of 2011 because the Egyptian military, which seized control of the government after the revolution, refused to release power to the democratically elected government. Between January 3, 2011 and January 2, 2012, the EGX 30 index lost almost 50% of its value, dropping from 7073.12 to 3679.96.

In 2006, after several months of political crisis, the Thai military ousted the elected prime minister from power and, together with the ruling elite, appointed a new prime minister in 2008 to lead the country during the next several years, which saw more-or-less violent demonstrations between the supporters of the ousted prime minister and his opposition. The political instabilities led foreign investors to reduce their exposure to the Thai market, dragging down prices for a period; however, because the demonstrations remained peaceful, the markets calmed and began to rise.

The latest examples of the relationship between an unstable political environment and stock market performance are offered by the political turmoil in Ukraine in 2014, which led to conflict with Russia and a collapse of the Russian stock market, and the demonstrations for democracy in September 2014 in Hong Kong, which had negative impacts on the Hong Kong stock market.

The effects of political risk have been found to be statistically significant in emerging stock markets (see, e.g., Erb, Harvey and Viskanta (1996a), Diamonte, Liew and Stevens (1996) and Perotti and van Oijen (2001)). Moreover, the ever-increasing international capital flows could reinforce the impact of political turmoil on stock markets. Lensink, Hermes and Murinde (2000) support this by providing evidence that an increase in political risk leads to an increase in capital flight.
Although these studies incorporated democracy as a part of their political risk component, there has not, to our knowledge, been a study that examined whether democracy can affect the behavior of stock markets. This study aims to fill the gap by investigating the effects of democracy and political risks on stock market performance for a set of emerging markets. Several studies on democracy and political risk (see, e.g., Gleditsch and Hegre (1997); Hegre, Ellingsen, Gates, and Gleditsch (2001); Reynal-Querol (2002a,b); and Rock (2009) and their references) have observed that semi-democracies are more prone to conflicts, corruption and other political risks than full democracies and autocracies. This reflects that semi-democracies, unlike full democracies and full autocracies, have not yet established strong institutions that might prevent protests and other anti-government activities, which makes these countries more vulnerable to political instabilities.

Thus, it might be argued that democratization initially increases political risk and reduces it only after a certain threshold level of democracy has been reached. For this to hold, democracy's relationship with political risk could be described by a U-curve, which indicates that the countries at the ends of the curve have smaller political risks than the countries in the middle (see Figures 1 and 2, in which the x-axis presents the level of democracy and the y-axis represents the political risk level for several emerging markets). The quadratic polynomial in the figures describes this nonlinear relationship between democracy and political risk:

PR_i = β₀ + β₁ Dem_i + β₂ Dem_i²,

where PR_i denotes country i's political risk, Dem_i represents the democracy level and Dem_i² its square. It is notable here that although the coefficient β₁ is negative, β₂ is positive, which indicates that, after passing a threshold level, higher levels of democracy decrease political risk in this functional form.

[Insert Figures 1 and 2 here]
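Purely as an illustration of this functional form, the sketch below fits the quadratic to synthetic (democracy, risk) pairs and recovers the turning point Dem* = -β₁/(2β₂), i.e., the threshold level of democracy after which the fitted risk curve changes direction. The data are simulated; no actual Polity or ICRG values are used.

```python
# Hedged sketch: fit PR = b0 + b1*Dem + b2*Dem^2 and locate the threshold
# democracy level Dem* = -b1 / (2*b2). Synthetic data, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
dem = rng.uniform(0.0, 1.0, 200)                    # democracy, normalized to [0, 1]
# Simulated risk rating with a U shape (b1 < 0, b2 > 0) plus noise:
risk = 0.8 - 0.9 * dem + 1.0 * dem**2 + rng.normal(0.0, 0.03, dem.size)

b2, b1, b0 = np.polyfit(dem, risk, deg=2)           # highest power first
threshold = -b1 / (2.0 * b2)
print(f"b1 = {b1:.2f} (< 0), b2 = {b2:.2f} (> 0), threshold Dem* = {threshold:.2f}")
```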
The main question this study aims to answer is the following: do democracy and political risks have an effect on stock market performance, or are the markets immune to the political environment? As a by-product of our analysis, we also contribute to the political risk sign paradox (see below and Section 2.3) and identify several determinants of emerging stock market returns.

There is no commonly accepted theory relating democracy to stock market returns; thus, the question of their relationship is mainly empirical. On the one hand, consistent with ICRG (International Country Risk Guide) classifications, the lack of democracy, or democratic accountability, is part of the total political risk; thus, it should be priced into share prices together with other risks, following Erb et al. (1996a). On the other hand, Perotti and van Oijen (2001) find that political risk has a positive sign, which indicates that politically safer countries have higher excess returns than markets with more political risk; supporting this, Diamonte et al. (1996) posit that portfolios that experienced decreases in their political risk also produced larger returns than portfolios with increased political risk.

It could also be argued that democracies are generally associated with better institutions, such as the protection of private property and better enforcement of laws and regulations. However, because democracies are subject to frequent changes of government officials, they might be considered politically more unstable than autocracies with respect to governmental stability and political predictability. Conversely, this attribute might indicate that democracies are better able to adjust to political and economic environments. Semi-democracies, on the other hand, might be lacking the growth-supporting effects of democracy (a better institutional environment), but they suffer from its negative effects on stability (increased political uncertainty and corruption).

Aggregate stock market returns are fundamentally related to economic growth. The evidence for the effects of democracy on economic growth is far from unanimous, however. Among others, Tavares and Wacziarg (2001) posit that democracy has both positive and negative effects; after all the effects are accounted for, the total impact is slightly negative. Persson and Tabellini (2007), in turn, find that democracy has positive effects on economic growth. Acemoglu, Johnson, Robinson and Yared (2008) show that, after controlling for factors affecting both democracy and economic growth, the relationship between democracy and growth disappears. Instead, the authors argue that the cross-country correlation between income and democracy reflects only the common development paths of the political and economic environment. To sum up, Doucouliagos and Ulubaşoğlu (2008) provide meta-evidence from 84 democracy-growth studies that democracy's net effect on the economy is not detrimental. Moreover, Rodrik and Wacziarg (2005) indicate that even the process of democratization comes at no cost to growth, with a likely boost in growth and a reduction in economic volatility. Further evidence on the negative effects of democracy on the volatility of growth is provided in Mobarak (2005). However, regardless of the connection between economic growth and stock market performance, it is possible that democracy and political stability might have a direct impact on stock market performance over and above their impact on economic growth.

We utilize two different sources for measuring democracy: the Polity variable from Polity IV and the Democratic accountability subcomponent of the International Country Risk Guide's (ICRG's) political risk component. Political risk itself is quantified by the ICRG's political risk composite index, excluding Democratic accountability (more information on these indices can be found in Section 2 and Appendix 1). In addition to the composite index, we study its subcomponents individually to discover which risks have the most significant effects on stock market performance. These subcomponents are Government stability, Socioeconomic environment, Investment profile, Internal conflicts, External conflicts, Corruption, Military in politics, Religious tensions, Ethnic tensions, Law and order and Bureaucracy quality. We also examine two risk vectors that aggregate several political risk subcomponents. The first is Conflicts and tensions, built from Internal and External conflicts together with Religious and Ethnic tensions. The second is Quality of institutions, which incorporates Corruption, Law and order and Bureaucracy quality.
As our core sample, we study annual data on 49 emerging markets for the years 2000-2012. Using a large set of control variables for both local and global factors, we aim to capture both the effects of democracy and its interaction with political risk by using the following two methods: pooled OLS with clustered standard errors and the system GMM model of Blundell and Bond (1998).

Our results are partly mixed and emphasize the value of using several measures of democracy. While icrg yields a consistent and statistically significant relationship between democracy and its squared term and the world-market-adjusted local returns, polity does not support this. However, consistent with Perotti and van Oijen (2001), we report a positive relationship between political risk and returns, indicating that, somewhat counterintuitively, decreases in political risks are related to higher returns. In addition, the interaction effects between the icrg democracy level and political risk are negative, whereas those of squared democracy and political risk are positive. Of the control variables, the logarithm of GDP per capita, exchange rate changes, the development of the local banking and financial sector and the global inflation rate affect emerging market returns.

In addition to using two estimation methods and two measures of democracy, we also test the robustness of the results in several ways in Appendix 2: by altering the observation periods; by using the mean of our democracy measures to quantify democracy; by using a different estimation method; and by excluding markets from our core sample data based on their political risks and democracy level. The effects of the interaction terms remain rather consistent in our estimations.

The rest of the study is organized as follows. Section 2 presents our data and the descriptive statistics. Section 3 describes our estimation strategy, and Section 4 reports the estimation results. Section 5 concludes.

Data

The governmental systems and democracy levels of emerging markets vary along the entire autocracy-democracy spectrum, from more centrally led systems, such as China, to full democracies, such as Israel, when compared with the more developed countries (which are all closer to full democracies). (Although MSCI Barra has announced that it will classify Israel as a developed country as of May 2010, we include it in our dataset because it was an emerging market during most of our sample period.) Because of this, and because it has been noted in previous studies (Diamonte et al. (1996), Erb et al. (1996a), Bilson et al. (2002)) that emerging markets are more vulnerable to political instabilities than developed markets, we concentrate our analysis on emerging stock markets. As our core dataset, because of our estimation strategy and data availability, we utilize an unbalanced panel of data on 49 developing countries over the 2000-2012 period. In addition, for the robustness tests and the crisis study, we extend our data to begin in 1988, with several different starting periods, aiming to provide a comprehensive picture of the developing stock markets and their macroeconomic and political environments. Table 1 summarizes the descriptive statistics for our variables.

Table 1 here

Stock market performance

The fact that most of the emerging markets were founded and opened their stock markets to foreign investors at the beginning of the 1990s limits both the number of suitable markets and the observation period.
Democracy

Democracy is a complex political and social phenomenon, and as such the concept is challenging to measure accurately. To measure democracy, its attributes must be understood. These include, at the least, free and competitive elections with open political participation and constraints on representatives, in addition to their accountability to their electorate. There has been some criticism of the typically used measures of democracy (Munck and Verkuilen (2002) provide a comprehensive study of the conceptualization, measurement and aggregation problems related to the measures), and we acknowledge that neither of the measures we use to quantify democracy is perfect. Furthermore, Casper and Tufis (2003) warn that even highly correlated democracy measures can produce different results; thus, researchers must justify their measurement choices carefully. Therefore, to take into account as many aspects of democracy as possible and to address data selection issues, we use two different measures of democracy: the Polity index of Polity IV and the Democratic accountability index from the Political Risk Service, published in the ICRG. Both of these measures are available for the entire sample period for all of our studied markets. The data from Polity IV are available for free, whereas the ICRG data are not.

Our first measure of democracy, the Polity index, polity, is the difference between Polity IV's Democracy and Autocracy indices, ranging from -10 (full autocracy) to 10 (full democracy). Polity IV's Democracy index measures the competitiveness and openness of executive recruitment, constraints on chief executive representatives and the institutions and procedures that allow citizens to participate in politics. The values range from zero to ten, and a higher rating implies a higher level of democracy. Polity IV's Autocracy index is constructed in a similar way to the Democracy index and is based on the competitiveness of political participation, the regulation of participation, the openness and competitiveness of executive recruitment and the constraints on the chief executive. Its values range from zero to ten, with a higher value denoting higher autocracy. Although Munck and Verkuilen (2002) list several strengths of the Polity index, they also argue that the index is too minimalistic in its measurement of democracy because it lacks one important component of political participation (the right to vote), suffers from redundancy issues in some of its measures and aggregates its components too simply.

As a second measure of democracy, we use the Democratic accountability index, icrg, from the ICRG. The data measure the level of democracy by examining governance on the basis of how free and fair elections are, the presence of (opposition) political parties, the existence of legal protection of personal liberties and government accountability to its electorate. The index ranges from one to six, with a higher number denoting better democracy.
We also considered one more widely used democracy variable (used, for example, by Barro (1999), Acemoglu et al. (2008) and Asiedu and Lien (2011)), the political rights metric by Freedom House, which does not explicitly measure democracy or democratic performance. Instead, it aims to measure rights and freedoms that are related to democracy with a list of 10 questions that range from whether there are free and fair elections to the right to vote and form political parties, whether the opposition has any role to play in government and whether the freely elected government actually holds power, is free of corruption and is accountable for its actions. The highest ranking of one indicates the highest degree of freedom, whereas seven denotes the absence of political rights. Munck and Verkuilen (2002) criticize the usefulness of the index because it includes too many components (some of which are not even relevant to democracy), the measuring and coding of the components is unclear and the aggregation of the components is overly simple.

The most serious problem with the Freedom House data in our case is, however, that it incorporates several of the subcomponents (government stability, corruption, foreign and domestic military involvement in politics and ethnic tensions) of our political risk component index into its democracy index; thus, using the Freedom House data as our democracy measure might contaminate our regressions. Freedom House also provides an index for civil liberties, but this works no better for us than the political rights index because it includes subcomponents such as socioeconomic conditions, external and internal conflicts, law and order and ethnic tensions. Thus, we exclude Freedom House's democracy measurements from our dataset.

To ease the comparison between these measures, we follow Barro (1999), Acemoglu et al. (2008) and Asiedu and Lien (2011) and normalize the measures to lie between zero and one, with a higher number indicating a more democratic country. Although our two democracy variables measure slightly different aspects of democracy, their correlation is high at 0.74. However, as Table 1 shows, polity presents an average value of 0.64 for Pakistan, whereas icrg measures its democracy at a level of 0.36. Conversely, for Bahrain, polity shows only 0.08, whereas icrg's average democracy value is 0.43. To account for these differences in the democracy variables, we also consider the average of these measures as our democracy variable as a robustness check.
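The normalization itself is a simple affine rescaling of each index onto the unit interval; a minimal sketch, assuming the ranges described above (polity in [-10, 10], icrg in [1, 6]), is as follows. The example scores are invented for illustration.

```python
# Rescale each democracy index to [0, 1], higher = more democratic.
# Ranges follow the text: polity in [-10, 10], ICRG democratic
# accountability in [1, 6]; the example values are illustrative.

def normalize(value: float, lo: float, hi: float) -> float:
    return (value - lo) / (hi - lo)

polity_norm = normalize(8, -10, 10)   # e.g. Polity score of 8 -> 0.90
icrg_norm = normalize(4, 1, 6)        # e.g. ICRG score of 4 -> 0.60
mean_democracy = (polity_norm + icrg_norm) / 2  # averaged measure, used as a robustness check
print(polity_norm, icrg_norm, mean_democracy)
```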
Political risk

Political risk does not have one single definition, although it may generally be understood as the risk of unanticipated transformations in the national and international business environment as a result of political changes, such as sudden changes in taxation laws and government policies and foreign and domestic conflicts, in addition to the quality of the governing institutions. Quantifying political risk is difficult, although the events related to it are clearly visible. We rely on ICRG's Political Risk components, which provide a means of assessing the political stability of the countries on a relative basis. The index has been widely used, e.g., by Diamonte et al. (1996), Erb et al. (1996a), Bilson et al. (2002), Bekaert et al. (2011) and Asiedu and Lien (2011), to study foreign direct investment and stock market behavior. ICRG's index was originally designed to analyze potential risks to international business operations, but as share-issuing companies face identical risks, the measure can also be used to study stock market behavior. The ICRG index is constructed using subjective staff analysis of available information; in that sense, it can be considered a forward-looking measure. Thus, it may be suitable for stock market analyses because share prices reflect expectations of future income. The index is composed of 11 components, including Government stability, External conflicts, Internal conflicts, Ethnic tensions, Military in politics, Religious tensions, Socioeconomic conditions, Investment profile, Bureaucracy quality, Corruption and Law and order (in addition to Democratic accountability as the twelfth, but we study it separately). (Footnote 7: More accurate definitions of each of these terms are provided in Appendix 1, Table 1.) The political risk rating is performed by assigning risk points to these components, with the minimum points being zero and the maximum depending on the maximum weight that the particular component is given in the overall political risk assessment, which ranges from 4 to 12, with higher points denoting lower risks. In addition to the political risk composite index, we build two additional risk ratings from its subcomponents. The conflicts and tensions component sums the external and internal conflicts with the ethnic and religious tensions, whereas our quality of institutions component follows Bekaert et al. (2011) and sums corruption, law and order, and bureaucratic quality. As with the democracy measures, the data are normalized to lie between zero and one.

According to the standard portfolio model, investors demand higher return for higher risk; thus, it would be expected that our political risk components would have a negative effect on excess returns, which is actually the case in some of the previous results from Erb et al. (1996a) and Bilson et al. (2002). However, Perotti and van Oijen (2001) find a significant positive relationship between political risk and excess returns (decreases in risks lead to higher returns), which is further supported by the results from Diamonte et al. (1996) and Erb et al. (1996a) stating that emerging countries receiving upgrades to their political risk profile also receive higher returns than those being downgraded. This setting creates a political risk sign paradox because it is unclear what sign the political risk and democracy components should take. One of our intentions is to examine this paradox and study whether political risk is even a significant determinant of returns.

It might be argued that the democracy level is highly correlated with political risks. The political risk component includes a measure for Military in politics, for example, which measures the military's presence (or absence) in the governance system. Because democracies should not have any military presence in their governance, it could be expected that the correlation between these two is close to 1. To account for possible multicollinearity suspicions, we calculate the pairwise correlations between our democracy measures and the political risk component, in addition to its subcomponents, and report these in Table 2.
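A minimal sketch of how the two sub-indices and their correlations might be computed is given below, assuming one column per ICRG subcomponent (hypothetical names). The maximum points used for rescaling are the published ICRG component maxima; treat them as assumptions to be verified against the ICRG methodology rather than values stated in the paper.

```python
import pandas as pd

# Published ICRG maximum points per subcomponent (higher points = lower
# risk); these maxima are assumptions to be checked against the ICRG
# methodology, not figures reported in the paper.
ICRG_MAX = {"external_conflicts": 12, "internal_conflicts": 12,
            "ethnic_tensions": 6, "religious_tensions": 6,
            "corruption": 6, "law_and_order": 6, "bureaucracy_quality": 4}

CONFLICT_COLS = ["external_conflicts", "internal_conflicts",
                 "ethnic_tensions", "religious_tensions"]
QUALITY_COLS = ["corruption", "law_and_order", "bureaucracy_quality"]

def build_subindices(df: pd.DataFrame) -> pd.DataFrame:
    """Sum the ICRG subcomponents into the two sub-indices and rescale
    each to [0, 1] by its maximum attainable points."""
    out = df.copy()
    out["conflicts_tensions"] = (out[CONFLICT_COLS].sum(axis=1)
                                 / sum(ICRG_MAX[c] for c in CONFLICT_COLS))
    out["quality_institutions"] = (out[QUALITY_COLS].sum(axis=1)
                                   / sum(ICRG_MAX[c] for c in QUALITY_COLS))
    return out

# Pairwise correlations of the kind reported in Table 2:
# build_subindices(df)[["polity", "icrg", "conflicts_tensions",
#                       "quality_institutions"]].corr()
```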
Correlation between democracy and political risk differs slightly between the democracy measures but is not very high (polity: 0.0925; icrg: 0.2394). Of the individual subcomponents, Bureaucracy quality has the highest positive correlation, followed by Corruption, Military in politics, Religious tensions and Investment profile. Naturally, Government stability has a negative and rather low correlation with democracy because of elections. In general, however, the correlations in our basic setting are not too high to affect the estimation results.

Table 2 here

Control variables

Because we are studying return data with yearly frequency, the stock prices compress a large amount of information. We must control for changes in both the financial and economic environments in our econometric framework. A significant amount of literature has previously studied the effects of macroeconomic factors and their relationship to equity returns (see, e.g., Chen et al. (1986), Flannery and Protopapadakis (2002) and Rapach et al. (2005) and references therein) and has found monthly evidence, for example, that inflation, industrial production, term spread and interest rates are priced factors on the U.S. and other developed markets. However, because emerging markets do not report or do not possess some of these typically used factors, our choice of control variables is partly dictated by the availability of reliable data. We aim to control for both domestic and foreign factors: we capture the countries' current level of economic development with the logarithm of GDP per capita in U.S. dollars and annual GDP growth; rate the macroeconomic uncertainty of the economy with inflation measured with a GDP deflator; study the markets' relationship to changes in industrial activity with the change in industrial production; and use the narrow money growth (M1) and broad money growth (M2) metrics to measure the financial development of each country. We also include the exchange rate with the U.S. dollar to measure the foreign exchange exposure for each currency and proxy stock market openness with the ratio of market capitalization to GDP. To capture the level of banking sector development, we include a variable for domestic credit to the private sector as a percent of GDP in our dataset and use the equity market turnover to GDP ratio to proxy market liquidity.

Estimation methods

To capture the effects of democracy and political risk on stock market performance, we use two different methods; we begin with a pooled regression (clustering the standard errors across countries) and continue with system GMM, a linear dynamic panel data model that is designed for short, wide panels. It can be used for unbalanced panels and to avoid the dynamic panel data bias, in which the models contain unobservable panel-level effects that are correlated with a lagged dependent variable and render standard estimators inconsistent. The model also accommodates multiple endogenous variables by using internal instruments, which makes it a particularly attractive alternative to finding external instruments that remain valid and robust across all panels.

System GMM is a GMM-based estimator based on the work of Arellano and Bond (1991) and was developed by Arellano and Bover (1995) and by Blundell and Bond (1998).
The original Arellano-Bond estimator takes the first difference of the data and uses the lagged values of the endogenous variables as instruments. That is why it is often referred to as the difference estimator. Arellano and Bover (1995) note, however, that the lagged levels make poor instruments for first differences, particularly if the variables are close to a random walk; thus, they formulated the basis for a new, more efficient estimator, the system GMM, which gained its final form (and the conditions under which the estimator is valid) in Blundell and Bond (1998). System GMM avoids the problem of poor instruments by introducing additional moment conditions, and Hayakawa (2007) has shown theoretically that system GMM is less biased in small samples than difference GMM. However, Roodman (2009) warns that the downside of both of the estimators, and particularly of the system GMM, is that they use too many instruments, which may give a false sense of certainty because a large number of internal instruments can over-fit the endogenous variables and weaken the Hansen tests for instrument validity. This problem arises when the number of time observations in the dataset increases, in particular. Moreover, Bun and Windmeijer (2010) have shown that the weak instrument problem may be problematic also for the system GMM approach. Even more criticism of the system GMM is aimed at its requirements. For system GMM to be valid, both the country-fixed effects and omitted variables must be orthogonal to the lagged differences of the right-hand-side variables that are used as instruments for the level equation.

Because neither of these assumptions can be tested, Hauk and Wacziarg (2009) have concluded in their Monte Carlo study that an even larger problem than the weak instruments of the system GMM is the validity of its moment conditions, which leads to some bias in its results. Despite its shortcomings, because the system GMM can handle the close-to-random-walk stock returns and small samples better than difference GMM, it is used as our main method in the formal econometric tests.
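To make the distinction concrete, the standard moment conditions behind the two estimators can be sketched as follows; this is a textbook summary, not necessarily the exact instrument set used in this paper. For a dynamic panel model $y_{i,t} = \rho\, y_{i,t-1} + x_{i,t}'\beta + \eta_i + \varepsilon_{i,t}$, difference GMM (Arellano-Bond) instruments the first-differenced equation with lagged levels:

$$\mathbb{E}\left[\, y_{i,t-s}\,\Delta\varepsilon_{i,t} \,\right] = 0, \qquad s \geq 2,\ t = 3,\dots,T.$$

System GMM (Arellano-Bover/Blundell-Bond) adds level-equation conditions using lagged first differences, valid when changes in $y$ are uncorrelated with the fixed effects:

$$\mathbb{E}\left[\, \Delta y_{i,t-1}\,(\eta_i + \varepsilon_{i,t}) \,\right] = 0.$$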
The system GMM estimation procedure assumes that there is no autocorrelation in the idiosyncratic errors. Thus, for each regression, we test for autocorrelation and the validity of the instruments and report the p-values for the test for second-order autocorrelation and for the Hansen (1982) J-test statistic for overidentifying restrictions. However, as Roodman (2009) notes, the Hansen test statistic loses power when the number of instruments is large relative to the cross-section sample size (here, the number of countries). A sign of this is a p-value of 1.000 for the Hansen J-statistic. To avoid this, the typical rule of thumb is that the number of instruments should be less than the cross-section sample size, i.e., the ratio of countries to instruments should be more than one. When the number of instruments exceeds the number of countries, the assumptions underlying the dynamic panel data models may be violated. Furthermore, a low ratio between sample size and instruments raises the susceptibility of the estimates to a Type 1 error, i.e., significant results are produced even though there is no underlying association between the variables involved. The simplest solution to this problem is to reduce the instrument count. We use two methods to accomplish this. Because the instrument number increases significantly with the length of the sample period, we limit our data sample to begin in the year 2000 and limit the number of lagged levels to be included as instruments by collapsing the instrument set, as described by Roodman (2009). However, because it is not clear that this ratio really is a threshold level for reliable results, we often present the results for both the limited and unlimited instrument sets. In the robustness regressions, we also study different sample periods.

Roodman (2009) also makes an important point that researchers should not interpret the results of the autocorrelation test and Hansen's test based on the conventional significance levels of 0.05 or 0.10. These levels, although useful for defining the significance of a coefficient, are not appropriate when trying to exclude specification problems, which is based on not rejecting the tests. Thus, when the p-value obtains a value only slightly higher than 0.10, this should not be considered strong evidence for the model.

As our basic estimation method, we use the two-step GMM estimator with the Windmeijer (2005) correction because it is asymptotically efficient and robust to heteroskedasticity. However, as a robustness test, we also estimate the results with a robust one-step estimator.

Benchmark regressions

This section studies the following question: Does democracy have any effect on stock market performance? The economic reasoning of the equity market dynamics stems loosely from the APT theory. As Equation (1), the basis of our work, presents, we estimate the impacts of democracy and political risk on stock market performance controlling for a large number of economic and financial variables that we believe to be important for stock market performance.
$$r_{i,t} = \alpha_i + \phi\, r_{i,t-1} + \beta_1 D_{i,t} + \beta_2 D_{i,t}^2 + \beta_3 P_{i,t} + \beta_4 \left(D_{i,t} \times P_{i,t}\right) + \beta_5 \left(D_{i,t}^2 \times P_{i,t}\right) + \sum_k \gamma_k x_{k,i,t} + \varepsilon_{i,t} \qquad (1)$$

where $i$ refers to markets; $t$ to time; $\alpha_i$ is the country-specific effect; $r_{i,t}$ is the world market adjusted return of market $i$ at time $t$; $D_{i,t}$ and $D_{i,t}^2$ are a measure of democracy and its square; $P_{i,t}$ refers to different political risks; $D_{i,t} \times P_{i,t}$ and $D_{i,t}^2 \times P_{i,t}$ are the interaction terms; $x_{k,i,t}$ is a control variables vector comprising all other potential covariates; and $\varepsilon_{i,t}$ is an error term that captures all other omitted variables, with $E(\varepsilon_{i,t}) = 0$ for all $i$ and $t$. In effect, we are estimating the emerging stock market integration with respect to world returns as a by-product. If the emerging stock markets were completely integrated, zero coefficients on all the local factors should hold, i.e., global factors would explain all the movements in the returns. Previous studies (e.g., Bekaert (1995), Erb et al. (1996b) and Bekaert et al. (2011)) have indicated that the political factors, in particular, might be of importance for market segmentation.

In all of these forms, the lagged value is included to capture the possible persistency of the left-side variable and the mean-reverting dynamics. Our main interest, however, is in the parameters $\beta_1, \dots, \beta_5$, which measure the effects of democracy, political risk and their interactions on stock market performance.

Variables affecting emerging stock market performance

Because we have several highly correlated financial, political and economic variables, an estimation of the full model would generate a large number of insignificant regressors that increase the number of instruments and needlessly inject noise into the estimated model. Thus, our aim is to reduce the number of variables to a more manageable set that best explains the variation in integration. In this task, we follow Bekaert et al. (2011) and Bekaert et al. (2014) and employ a general-to-specific algorithm, explained in Hendry and Krolzig (2005). The algorithm consists of a process that eliminates variables with coefficient estimates that are not statistically significant over multiple steps. Concretely, we begin by estimating Equation (1) with all variables. We then eliminate the least statistically significant variable by using a significance threshold of 15%. The use of a relatively high significance level reflects the preference for keeping a model with some useless regressors instead of eliminating any important variables. We continue step-by-step estimating the model and excluding the individual variables, simultaneously testing at every step whether an already excluded variable should be included again, until we arrive at a final model specification.

However, we make a few exceptions in the selection algorithm and leave the previous returns in the model; because we are concentrating on democracy, political risk and their interaction terms, we do not eliminate these variables either, although they might be insignificant.
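The elimination loop can be sketched roughly as below. This is a simplified illustration, assuming hypothetical column names and a 'country' identifier with complete cases, using pooled OLS with country-clustered standard errors in place of the system GMM runs, and omitting the re-inclusion check described above.

```python
import pandas as pd
import statsmodels.formula.api as smf

def general_to_specific(df: pd.DataFrame, dependent: str,
                        candidates: list, forced: list, alpha: float = 0.15):
    """Backward elimination in the spirit of Hendry and Krolzig (2005).

    Iteratively drops the least significant candidate regressor until all
    remaining candidates clear the `alpha` (15%) threshold; `forced` terms
    (lagged returns, democracy, political risk, interactions) are never
    dropped.
    """
    kept = list(candidates)
    while True:
        formula = f"{dependent} ~ " + " + ".join(forced + kept)
        fit = smf.ols(formula, data=df).fit(
            cov_type="cluster", cov_kwds={"groups": df["country"]})
        if not kept:
            return fit
        pvals = fit.pvalues[kept]            # judge only the candidates
        worst = pvals.idxmax()               # least significant candidate
        if pvals[worst] <= alpha:            # all candidates clear alpha
            return fit
        kept.remove(worst)                   # drop it and re-estimate
```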
Effects of democracy and political risk

We begin by studying the direct effects of democracy and political risk on stock market performance by estimating Equation (1) without the squared term $D_{i,t}^2$ and the interaction terms $D_{i,t} \times P_{i,t}$ and $D_{i,t}^2 \times P_{i,t}$. We present the results for all of our control variables and the collapsed instrument set in Table 3, using the polity index as our democracy measure in columns (1), (3) and (5), and icrg in columns (2), (4) and (6). Columns (1) and (2) in Table 3 report the estimation results from pooled OLS, whereas columns (3)-(6) are from system GMM. Roodman (2009) provides examples and argues that a high number of instruments can both generate invalid results and weaken the Hansen test statistic. Thus, we report the results both for the full instrument set (columns (3)-(4)) and for the limited instrument set (columns (5)-(6)). The dependent variable in all the estimations is the world market adjusted returns.

Table 3 here

As shown, the signs and sizes of the coefficients remain rather similar across the estimations, but the significance levels differ. For both political risk and democracy, the sign is positive, indicating that improvements in political risks and democracy lead to higher returns. However, while the coefficients of political risk are significant in almost every case, the estimates for democracy are significant in only three of the six estimations.

Of the local variables, exchange rate changes and domestic credit supply to the private sector (banking sector development) both have negative signs, which indicates that appreciation of the local currency and increases in the credit supply would lead to smaller local returns. Moreover, financial market development, measured by the growth in broad money supply (M2), market capitalization and turnover, has a consistent and positive effect on returns. It is also found that economic development, measured by the logarithm of GDP per capita, has a negative effect on returns.

World inflation is the only global variable that is consistently significant and negative across all the estimations, which indicates that increases in global price levels negatively affect emerging market returns. For pooled OLS, together with the constant, world industrial production (positive) and term spread (negative) also have statistically significant coefficients, but these are excluded from the final system GMM models.

At the end of the table, we report observation numbers and the coefficient of determination for pooled OLS, in addition to the numbers of instruments and the instrument ratios for the dynamic panel data models. In addition, we report the p-values for the AR(2) test and Hansen's J test. The former indicates that the assumption of no serial correlation in the error term is valid for all of our estimations, whereas the latter examines the validity of our instruments and does not reject our results. All the other tests are passed at the 10% level, except the AR(2) test for models (5) and (6). Thus, the results of these models should be treated with some caution.

Interaction effects of democracy and political risk

We continue by estimating Equation (1) in its full form, including interaction terms. We proceed through the model selection algorithm for each of the estimations again and report the results in Table 4.
Again, columns (1) and (2) report the results from pooled OLS, whereas columns (3), (4), (5) and (6) are the results from system GMM with full and collapsed instrument sets. Odd columns use polity as their democracy measure, whereas even columns use icrg.

Table 4 here

What can be seen is that the coefficients differ slightly between estimation methods, and particularly the significance of democracy and its interactions varies between the democracy measures. While icrg-democracy provides consistently highly significant estimates, almost none of the coefficients involving polity is significant. This leads us to conclude that our results are dependent on the democracy measure. Also, when the interaction terms are taken into account, the political risk variable $P$ is found to be positive and statistically significant in all of the estimations. This supports the view that decreases in a country's political risk level increase local stock market returns. The results for democracy are also positive but significant for only half of the cases. In addition, Table 4 presents evidence that for icrg the coefficient of the squared democracy term $D^2$ is statistically significant and negative, which indicates that when the democracy level reaches a certain threshold, its effect on returns becomes negative. The results for polity, however, cast some doubt on this because their coefficients are not statistically significant, although they have identical signs with icrg. Based on the correlations (Table 2), the differences in results are mostly owing to Law and order, Socioeconomic conditions, Investment profile, Military in politics and Internal conflicts (the five largest differences in political risk component correlations with the democracy measures). In addition, it may be noted from the pooled OLS estimations that the coefficient of determination increases only 2-3 percentage points; thus, the total contribution of the interaction terms is rather small.

An interesting and somewhat surprising result is that the coefficient of the interaction term $D \times P$ is negative, although neither of the coefficients is negative independently. This would indicate that the higher the democracy level and the lower the political risk, the smaller the returns, which contrasts with the expectations from the previous results. We relate this result to the quadratic relationship between political risk and democracy level, which was demonstrated in Figures 1 and 2 with a fitted relationship of the form $P = \delta_0 + \delta_1 D + \delta_2 D^2$. In this relationship, $\delta_1$ is negative and $\delta_2$ is positive, which indicates that political riskiness increases until a certain threshold democracy level and then begins to decrease after that. Thus, when the squared term of democracy, $D^2$, is included in the regression, $D$ has a negative effect on political risk, which causes their interaction to be negative. Conversely, $D^2$ has a positive effect on $P$. Table 4 shows further that, estimated separately, one of these terms is negative and the other positive, but their interaction term is positive. Overall, these results suggest that, with this model specification, the democracy level, when measured with icrg, has effects on returns, both independently and interacting with political risk.
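In the notation of the reconstructed Equation (1), the threshold interpretation can be made explicit; this is a derivation implied by the quadratic specification rather than a result reported in the tables:

$$\frac{\partial r_{i,t}}{\partial D_{i,t}} = \beta_1 + \beta_4 P_{i,t} + 2\left(\beta_2 + \beta_5 P_{i,t}\right) D_{i,t},$$

so that, holding political risk fixed at $P$, the marginal effect of democracy changes sign at

$$D^{*} = -\frac{\beta_1 + \beta_4 P}{2\left(\beta_2 + \beta_5 P\right)},$$

being positive below $D^{*}$ and negative above it whenever the composite quadratic coefficient $\beta_2 + \beta_5 P$ is negative.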
The weakening of the power of Hansen's test through the large number of instruments can be observed from the Hansen test results in Table 4. When the instrument ratio is small (columns (3)-(4)), Hansen's test almost never rejects the validity of the instruments; thus, it might be more appropriate to study the results with the collapsed instrument set (columns (5)-(6)). In these results, Hansen's test does not reject any of our estimations at conventional significance levels, but the AR(2) test is rejected at the 10% level for polity. Because our pooled OLS estimations are not subject to either of these tests and continue to provide results similar to system GMM, we consider our results to be rather reliable. However, we continue to study the robustness of the results in Appendix 2.

Interaction effects of democracy and political risk components

Next, we study the effects of democracy on stock market performance more carefully, decompose the political risk component into its subcomponents and use these in Equation (1) separately as political risks. We aim to study whether these subcomponents exhibit behavior similar to the political risk component and report the results in Table 5. For each estimation method, we utilize the models found in the previous subsection; however, to conserve space, we present the results only for the icrg index estimated with system GMM with the collapsed instrument set and do not report results for the control variables. Full estimation results are available from the authors upon request.

Table 5 here

Table 5 shows that all other political risk subcomponents except Ethnic tensions and Bureaucracy quality have positive signs, and most of them are statistically significant. Of the components, the Conflicts and tensions and Quality of institutions vectors, Government stability, Investment profile, Military in politics, Religious tensions and Law and order behave similarly to the political risk component, with significant interaction terms with democracy and its squared term. None of these estimations can be rejected at the 10% level based on the AR(2) test and Hansen's J test. It should, however, be noted that, as was already mentioned in Section 2.3, Military in politics and Religious tensions have positive correlations with democracy, which might affect the results for these subcomponents.

Conclusions

We study 49 emerging financial markets to discover whether their performance is related to their country's democracy level and, in particular, to its interaction with political risk. We use two measures for democracy and two panel data methods, pooled OLS and system GMM, to capture the direct and interaction effects of democracy and political risk on the global market adjusted 12-month average returns.

We find evidence that the level of democracy of a country affects stock market returns interacting with political risk, particularly during the 2000-2012 period. We also provide (partly counter-intuitive) evidence that lower political risks are associated with higher returns, which lends support to the findings of Perotti and van Oijen (2001), Diamonte et al. (1996) and Erb et al. (1996a).
Moreover, we find several other variables that affect local returns. In part, our findings also provide evidence about the segmentation of the emerging stock markets from the world market. Nonetheless, a word of caution is in order. Our results do not pass all robustness tests, and they are found to be democracy-measure and time-period dependent. Thus, the estimations highlight the importance of using several different democracy measures in estimations that include democracy, because the results might differ among them.

Because the data on emerging market returns remain limited, more accurate results can only be obtained in the future as both the number of markets increases and the observation periods lengthen. Further analysis on the topic of democracy, political risk and stock market performance calls for a theoretical model. However, this study may operate as a pioneering empirical work on this topic, and the basic idea can be extended to other sectors in finance, such as the bond markets and FDI flows. These ideas, however, are left for future studies.

Figures:

Figure 1. Democracy (polity) and political risk. The data on democracy measured with polity and political risk measured with ICRG are averaged over a maximum period of 1988 to 2010 with several starting years (see Table 1 for the starting year for each market). Both measures are normalized to an interval from zero to one, with a higher number indicating a more democratic country and lower political risk. In total, 49 countries are represented. A quadratic curve is fitted to the data points. The OLS regression of political risk on democracy, with both democracy and its squared value as independent factors, yields the fitted relationship $P = \delta_0 + \delta_1 D + \delta_2 D^2$, with p-values of 0.000 and 0.001 for $\delta_1$ and $\delta_2$, respectively.

Figure 2. Democracy (icrg) and political risk. The data on democracy measured with icrg and political risk measured with ICRG are averaged over a maximum period of 1988 to 2010 with several starting years (see Table 1 for the starting year for each market). Both of the measures are normalized to an interval from zero to one, where a higher number indicates a more democratic country and lower political risks. In total, 49 countries are represented. A quadratic curve is fitted to the data points. The OLS regression of political risk on democracy, with both democracy and its squared value as independent factors, yields the fitted relationship $P = \delta_0 + \delta_1 D + \delta_2 D^2$, with p-values of 0.110 and 0.038 for $\delta_1$ and $\delta_2$, respectively.

Note on control variables: The global factors aim at capturing fluctuations in the world business cycle and include world inflation, changes in oil prices, world industrial production, the U.S. corporate bond spread (Moody's Baa minus Aaa bond yields) and the term-structure spread (U.S. 10-year bond yield minus 3-month U.S. Treasury bill rate). With the exception of exchange rates, industrial production and the world factors, which are provided by Datastream, and the default spread, which is provided by the Federal Reserve Bank of St. Louis, all of the other control variables are obtained from the World Bank's World Development Indicators. See Appendix 1, Table 1, for details.

Tables:

Table 1: Summary statistics. First observation is the starting year of the data for each of the markets. Local returns refer to the annual mean of local returns of MSCI country indices denominated in U.S. dollars. The democracy variables polity and icrg are from Polity IV and the International Country Risk Guide (ICRG), respectively. The data are normalized to lie between zero and one, where a higher number indicates a more democratic country. Political risk is the composite ICRG political risk index normalized to an interval from zero to one, consisting of 11 subcomponents: Bureaucracy quality, Corruption, Ethnic tensions, External conflicts, Internal conflicts, Government stability, Investment profile, Law and order, Military in politics, Religious tensions and Socioeconomic conditions. A higher number indicates a smaller political risk. The table is sorted according to polity.

Table 2: Correlations between democracy and political risk measures.

Table 3: The direct effects of democracy and political risk on stock market behavior. For more detailed data, definitions and sources, see Appendix 1, Table 1. ***, ** and * denote statistical significance at the 1%, 5% and 10% levels, respectively. In the Hansen test, the null hypothesis is that the instruments are not correlated with the residuals, whereas in the AR(2) test, the null hypothesis is that the errors in the first-difference regression exhibit no second-order serial correlation. Heteroskedasticity-robust standard errors are in parentheses.

Table 4: Interaction effects of democracy and political risk on world market adjusted local returns.

Table 5: Interaction effects of democracy and individual political risks on stock market behavior. Heteroskedasticity-robust standard errors in parentheses. ***, ** and * denote statistical significance at the 1%, 5% and 10% levels, respectively.
A school-family blended multi-component physical activity program for Fundamental Motor Skills Promotion Program for Obese Children (FMSPPOC): protocol for a cluster randomized controlled trial

Background
Fundamental motor skills (FMSs) are crucial for children's health and comprehensive development. Obese children often encounter considerable challenges in the development of FMSs. School-family blended PA programs are considered a potentially effective approach to improve FMSs and health-related outcomes among obese children; however, empirical evidence is still limited. Therefore, this paper aims to describe the development, implementation, and evaluation of a 24-week school-family blended multi-component PA intervention program for promoting FMSs and health among Chinese obese children, namely the Fundamental Motor Skills Promotion Program for Obese Children (FMSPPOC), employing behavioral change techniques (BCTs), building on the Multi-Process Action Control (M-PAC) framework, and using the Reach, Effectiveness, Adoption, Implementation, and Maintenance (RE-AIM) framework for improving and evaluating the program.

Methods
Using a cluster randomized controlled trial (CRCT), 168 Chinese obese children (8-12 years) from 24 classes of six primary schools will be recruited and randomly assigned to one of two groups by cluster randomization: a 24-week FMSPPOC intervention group and a non-treatment waiting-list control group. The FMSPPOC program includes a 12-week initiation phase and a 12-week maintenance phase. School-based PA training sessions (2 sessions/week, 90 min each session) and family-based PA assignments (at least three times per week, 30 min each time) will be implemented in the initiation phase (semester time), while three 60-min offline workshops and three 60-min online webinars will be conducted in the maintenance phase (summer holiday). The implementation evaluation will be undertaken according to the RE-AIM framework. For the intervention effectiveness evaluation, the primary outcome (FMSs: gross motor skills, manual dexterity and balance) and secondary outcomes (health behaviors, physical fitness, perceived motor competence, perceived well-being, M-PAC components, anthropometric and body composition measures) will be collected on four occasions: baseline, 12-week mid-intervention, 24-week post-intervention, and 6-month follow-up.

Discussion
The FMSPPOC program will provide new insights into the design, implementation, and evaluation of FMSs promotion among obese children. The research findings will also supplement empirical evidence, understanding of potential mechanisms, and practical experience for future research, health services, and policymaking.

Trial registration
Chinese Clinical Trial Registry; ChiCTR2200066143; 25 Nov 2022.

Supplementary Information
The online version contains supplementary material available at 10.1186/s12889-023-15210-z.

Introduction
Fundamental motor skills (FMSs) are considered the building blocks for the more advanced and complicated movements (e.g., games, sports, and recreational activities) that children will develop throughout their lives [1]. FMSs represent a degree of proficiency in a range of motor skills as well as underlying mechanisms such as motor coordination and control [1][2][3].
Commonly developed in childhood and subsequently refined into context- and sport-specific skills, FMSs can be categorized into three aspects: locomotor skills (e.g., running, jumping, and hopping), object control/ball/manipulative skills (e.g., throwing, catching, and dribbling), and stability skills (non-locomotor, e.g., balancing and twisting) [4,5]. The mastery of FMSs has been purported to be a crucial element of children's physical, social and psychological development, which occurs in an orderly and sequential manner [6,7]. Good FMSs may be the foundation of a healthier life because of positive connections with physical activity (PA) [8][9][10], contributing to greater physical fitness [2,4,11], body weight status [12,13], perceived motor competence [10,14], sports engagement [15], cognitive function [16,17], perceived well-being [18], and perceived quality of life [19].

Despite the important role of FMSs in children's holistic development, obese children are usually confronted with heightened challenges in developing FMSs [20]. Numerous studies have indicated that obese children are delayed in FMSs development and show markedly poorer performance on FMS tests compared with their peers with healthy weight [21][22][23][24][25]. This may be attributed to multifaceted factors. For example, a large body mass index (BMI) can lead to excessive pressure on children's skeletal system [26] and unfavorable changes in the major brain sites of neuroplasticity [27,28], and can decrease perceived motor capabilities [29], which subsequently inhibits the development of children's FMSs. For obese children, poor FMSs may also weaken their motivation for engaging in PA and result in a low level of physical fitness, which in turn deteriorates adiposity status and causes a series of negative consequences for other health aspects, e.g., metabolic diseases and mental disorders among children [30][31][32]. This vicious circle has been clearly illustrated in Stodden et al.'s model [33], which to some extent emphasizes the importance and necessity of promoting FMSs in obese children.

Traditionally, PA intervention programs have shown effectiveness in improving FMSs [32] and other health-related outcomes (e.g., physical fitness, cognition, and mental health) in overweight and obese children [34][35][36]. Schools are considered an ideal setting to implement PA interventions with the aim of promoting FMSs, as children spend most of their waking hours at school and schools can also provide better conditions (e.g., facilities, equipment, curriculum, health experts, peer support), easier access and maximum reach for intervention efforts [37,38]. However, school-alone settings cannot address the large amount of out-of-school time (e.g., summer holidays) and parental influences that shape children's behaviors in the home setting [39]. By contrast, family-based PA programs usually focus on the impacts of parents' support, knowledge, attitudes, motivation, and other psychosocial factors on children's behaviors and emphasize the co-activity of parents and children to promote children's FMSs development [37,40,41]. Previous studies have also provided evidence for the effectiveness of family-based PA interventions in improving children's FMSs and healthy behavioral patterns [40,41]. Nevertheless, for obese children, the development of FMSs requires more professional instruction and practice guided by qualified experts [3,42], and this cannot be fully guaranteed in family-alone interventions.
In addition, a socially cooperative and interactive environment (e.g., at school) can facilitate better development of FMSs, PA self-efficacy and social skills for obese children [43], while this cannot be fully achieved in a family-alone setting. Taken together, this suggests that a school-family blended PA intervention paradigm, which can combine the merits of both school-alone and family-alone approaches, shows great potential for promoting FMSs and related health outcomes among obese children.

Notwithstanding the advocacy for school-family integrated PA interventions, empirical evidence on obese children is still scarce, especially in China [44]. In addition, several limitations and research gaps of previous FMSs promotion programs should be further addressed. For example, previous studies show inconsistent findings on the intervention effect on balance among obese children [45,46], which may be attributed to the lack, in some studies, of specific PA sessions tailored to promoting children's balance [32]. Furthermore, findings on the long-term effects of PA interventions on obese children's FMSs were mixed in previous studies [32]. Some studies indicated a sustained intervention effect of PA interventions on FMSs (e.g., at the 36-week follow-up assessment) [47], while others found that the increase in FMSs of obese children was not maintained during the follow-up and even relapsed to baseline levels [48][49][50]. The underlying reasons may be the absence of theory-based intervention targets and behavioral change techniques (BCTs), which play a crucial role in maintaining PA intervention effects in the long run [51,52]. Moreover, the mechanisms of why PA interventions successfully improve obese children's FMSs have not been systematically examined in previous studies [53]. Identifying the mediation and moderation mechanisms of the intervention program is important and necessary, as it contributes to the future design of effective FMSs promotion programs. In addition, previous studies have also shown a series of methodological limitations (e.g., lack of randomization and blinding, lack of objective standardized measures, lack of comprehensive evaluations of intervention fidelity and quality) [32,53], which may weaken the future implementation and generalization of the study findings.

To address the research and practice gaps, the present study aims to develop, implement, and evaluate a 24-week school-family blended multi-component PA intervention program for promoting FMSs and health among Chinese obese children, namely the Fundamental Motor Skills Promotion Program for Obese Children (FMSPPOC). The particular objectives of the outcome evaluation include: (1) to examine the intervention effects of the FMSPPOC on improving the primary outcome (i.e., FMSs) among Chinese obese children; (2) to examine the intervention effects of the FMSPPOC on improving the secondary outcomes (health behaviors, physical fitness, perceived motor competence, perceived well-being, M-PAC components, anthropometric and body composition measures) among Chinese obese children; and (3) to identify the interrelationships between FMSs and physical and psychological outcomes (i.e., mediation mechanisms) and the moderating role of demographics (illustrated in the sketch below).
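For objective (3), a minimal sketch of a product-of-coefficients mediation test with a percentile bootstrap is given below. The column names are hypothetical, school-level clustering is ignored for brevity, and the protocol's own analyses (conducted in SPSS and Mplus) may differ.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def bootstrap_indirect_effect(df: pd.DataFrame, n_boot: int = 5000,
                              seed: int = 0):
    """Product-of-coefficients mediation with a percentile bootstrap CI.

    Hypothetical columns: 'group' (1 = intervention), 'fms' (candidate
    mediator, FMS score) and 'outcome' (e.g., perceived well-being).
    """
    rng = np.random.default_rng(seed)
    effects = []
    for _ in range(n_boot):
        boot = df.sample(frac=1.0, replace=True, random_state=rng)
        a = smf.ols("fms ~ group", data=boot).fit().params["group"]       # path a
        b = smf.ols("outcome ~ fms + group", data=boot).fit().params["fms"]  # path b
        effects.append(a * b)                                             # indirect effect
    lo, hi = np.percentile(effects, [2.5, 97.5])
    return float(np.mean(effects)), (float(lo), float(hi))
```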
Study design

This study will apply a two-group double-blinded cluster randomized controlled trial (CRCT) with four measurement occasions, including baseline assessment (T0), mid-intervention assessment (T1: 12 weeks after the baseline assessment), post-intervention assessment (T2: 24 weeks after the baseline assessment), and follow-up assessment (T3: 48 weeks after the baseline assessment). The two groups include: 1) an intervention group (IG), receiving the FMSPPOC intervention for 24 weeks; and 2) a waiting-list control group (WCG), receiving the FMSPPOC intervention or relevant materials (based on participants' requirements) after the completion of all data collection for the IG (see Fig. 1). The design, implementation, and reporting of the FMSPPOC will follow the SPIRIT guidelines and the CONSORT statement [54,55].

Participants

As childhood is a critical period for developing favorable FMSs that can continuously affect health and individual development in adolescence and adulthood, a health-promoting intervention that targets this age group is vitally important [56]. In pre-adolescence, neuroplasticity may be greater and change faster based on experience [57,58], and the brain may be particularly sensitive to the effects of PA while its neural circuits are still developing [59]. Therefore, to address age-related declines in PA and maximize the benefits of PA on FMSs, obese children aged 8 to 12 years were selected as the target population for this study, with obesity defined as a body mass index (BMI) at or above the 95th percentile of the sex-specific BMI-for-age growth charts (https://www.cdc.gov/obesity/data/childhood.html).

Sample size estimate

An average medium effect size (Cohen's d = 0.54) of PA intervention on FMSs in obese children was proposed based on previous studies [32]. Considering that the minimum class size is 30 and the prevalence rate of childhood obesity is approximately 20% [60], a cluster size of 6 (for each class) was selected in this study. Using an ICC of 0.01 [61], an alpha of 0.05, a statistical power of 80%, and an attrition rate of 15% [43], at least 84 participants per condition (12 clusters per condition), for a total of 168 participants in 24 clusters, will be required for this CRCT.

Recruitment and eligibility criteria

Based on the sample size estimate, this study intends to recruit 24 classes from six primary schools (grades 2-5), excluding private or special education schools or those participating in other PA-related programs. Using a stratified random sampling approach (e.g., by socioeconomic status, geographical location, grade and class size of each school), six primary schools will be recruited in Shijiazhuang, Hebei, China. An invitation letter describing the study nature and participation requirements will be delivered to the principals of eligible schools. Upon the approval of the principals, one class from each grade will be randomly selected to participate in this study. All children in the selected classes will be provided with an information pack comprising a plain language statement and a written informed consent form to be signed by their parents.
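The sample size logic described above can be reproduced approximately as follows. The exact rounding conventions behind the protocol's figure of 84 per condition are not stated, so the numbers below are indicative only.

```python
import math
from statsmodels.stats.power import TTestIndPower

# Protocol inputs: Cohen's d = 0.54, alpha = 0.05, power = 80%,
# cluster size m = 6, ICC = 0.01, attrition = 15%.
d, alpha, power = 0.54, 0.05, 0.80
m, icc, attrition = 6, 0.01, 0.15

n_plain = TTestIndPower().solve_power(effect_size=d, alpha=alpha, power=power)
deff = 1 + (m - 1) * icc                              # design effect for clustering
n_arm = math.ceil(n_plain * deff / (1 - attrition))   # inflate for dropout
n_clusters = math.ceil(n_arm / m)                     # clusters needed per arm

print(round(n_plain, 1), deff, n_arm, n_clusters)
# ~54.8 -> 1.05 -> 68 per arm -> 12 clusters with these formulas; the
# protocol's 84 per arm (12 clusters) appears more conservative.
```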
The eligibility criteria will include: (1) aged 8-12 years; (2) BMI greater than the Chinese obesity cutoffs corresponding to the 95th percentile of sex-specific and age-specific BMI reference standards [62]; (3) no previous substantial experience of participating in FMSs-related promotion programs; (4) no prior diagnosis of physical, verbal, or cognitive disorders that may prevent participation in a PA program or interrupt the outcome evaluation.

Randomization and blinding

To avoid the potential for contamination, randomization will be conducted at the school level. All eligible schools will be randomly assigned to one of the two groups prior to the baseline assessment, where each group will consist of three schools. Each school will select 28 obese children from four classes covering grades 2-5 to participate in this study, and the children in the same school will receive the same treatment. Randomization will be implemented with a ratio of 1:1, using Excel software, by a researcher who will not be involved in the participant recruitment, data collection or evaluation. Due to ethical concerns, participants cannot be blinded, as they will be informed of the study purpose and group allocation in the written informed consent form. Intervention facilitators and outcome evaluators will be blinded to the group allocation.

Fundamental Motor Skills Promotion Program for Obese Children (FMSPPOC)

FMSPPOC is a school-family blended multi-component intervention program, which aims to promote the development of obese children's FMSs in a supportive environment, provide the children with positive experiences (e.g., enjoyment, success, and accomplishment) in relation to PA, and cultivate interest in sports, so as to enhance their PA level/engagement, perceived motor competence, fitness and well-being in the future. The FMSPPOC will last 24 weeks, consisting of two parts: a 12-week initiation phase and a 12-week maintenance phase.

Theoretical backdrop: Multi-Process Action Control model

To enhance the effectiveness and implementation of the intervention, the Multi-Process Action Control (M-PAC) model will be used as the theoretical backdrop [63]. The M-PAC model postulates that individuals' behavioral change is a continued process from intention formation to behavioral initiation and maintenance, consisting of reflective, regulatory, and reflexive processes [64]. In the M-PAC framework, intention is conceived as a decisional construct (i.e., has intention/has no intention). Similar to the tenets of other psychosocial theories (e.g., the Theory of Planned Behavior, TPB; the Capability, Opportunity, Motivation, Behavior model, COM-B), the M-PAC emphasizes the influence of several determinants of intention which function in the reflective processes (i.e., consciously deliberated and expected consequences of performing a behavior) [65]. Particularly, instrumental attitude (e.g., PA is useful), affective attitude (e.g., PA is enjoyable), perceived capability (e.g., I have the ability to perform a behavior), and perceived opportunity (e.g., I have the time and access to perform PA) play a crucial role in forming a behavioral intention [66]. Furthermore, the M-PAC framework proposes that whether a behavioral intention can be successfully translated into an actual behavior is determined by the reflective processes of affective attitude and perceived opportunity as well as the enactment of regulation processes.
Particularly, higher levels of affective attitude and perceived opportunity are considered more necessary for the successful translation of behavioral intention into actual behavior than for intention formation. Similar to the Health Action Process Approach (HAPA), the M-PAC framework also suggests the importance of regulatory strategies (e.g., action planning and coping planning) in the intention-behavior translation [67]. In addition, extending previous psychosocial theories, the M-PAC framework highlights the important role of diverse impulsive components in behavioral maintenance (i.e., "continuance of actional control is thought to rely upon the development of reflexive processes") [68]. It suggests that impulsive components affect actional control most often through learned associations and are triggered through specific circumstances/cues and stimuli [69]. The M-PAC framework emphasizes the development of two crucial reflexive processes, habit (e.g., I will engage in PA automatically) and identity (e.g., I am a person who is physically active), as individuals begin to perform the behavior more regularly [63][64][65]. Therefore, a developed behavioral pattern of action control will be determined by the independent influence of reflective, regulatory, and reflexive processes [63][64][65]. Targeting the psychosocial components of the M-PAC model, a series of behavioral change techniques (BCTs) will also be adopted [70,71] in the FMSPPOC program (see Table 1).

12-week initiation phase

During the first 12-week initiation phase, the intervention consists of school-based PA sessions and family-based PA assignments for whole classes, including the obese children, to prevent stigmatization or exclusion. Particularly, the school-based PA sessions will be delivered twice a week (90 min each session) for 12 weeks (24 sessions in total). Three ball games (i.e., soccer, basketball, and volleyball) will be used as the main content of the school-based PA sessions for FMSs promotion, in consideration of effectiveness, enjoyment, feasibility, and greater adherence [72,73]. Each session will include a 5-min warm-up, a 15-min physical fitness training, a 60-min FMSs training (30-min learning and practice + 30-min game), and a 10-min cool-down. For the FMSs training part, the three types of ball games will be delivered in different weeks: weeks 1-4 for soccer, weeks 5-8 for basketball, and weeks 9-12 for volleyball.
Each ball game section will include skill instruction and learning, skill practice (guided by teachers + self), a game part, and a review/summary. [Table 1 excerpt (BCT strategies): Parents will be taught how to establish a favorable PA environment at home; parent-child co-activity will be implemented; parents will be asked to provide their child/children with verbal encouragement; a short-term goal and a long-term goal will be set; incentives will be provided based on engagement in the family-based PA assignment; ranking games will be implemented; parents will be asked to monitor the child/children's PA performance and upload it to the website. Relevant knowledge such as the long-term development of FMSs, the benefits and consequences of healthy lifestyles, and the potential risks of FMSs development will be delivered; acute PA practice will be implemented; problems will be discussed during the workshops and solutions will be confirmed by experts, PE teachers, parents and children on consensus; feedback on behaviors and goals will be provided by PE teachers, experts and parents; the values and meaning of the activity, as well as other tactics (e.g., visual and tactile symbols) of the new identity, will be emphasized.] All the movements in each section will be designed to achieve a moderate-to-vigorous intensity. Detailed content of the school-based PA sessions can be found in Table 2.

In addition to school-based PA training, all the participants will be asked to complete a 30-min family-based PA assignment at least three times per week during the first 12-week initiation phase. The purpose of the family-based PA assignment is to reinforce FMSs practice and promote PA engagement during out-of-school time. For each PA assignment, participants are asked to complete several activities together with their parents (i.e., parent-child co-activity). These activities are modified from the school-based PA sessions so that they can be easily undertaken in a family setting (e.g., they require minimal equipment and can be undertaken indoors or outdoors within limited space). Parents will also be asked to track their child/children's weekly completion times and conditions and upload this information to a designated column of the DingTalk app (https://www.dingtalk.com/en, Alibaba Group). At weekends, the research team members will check the participants' PA assignments and reward them with small red flower stickers as incentives: participants who complete PA assignments three times will receive one sticker, while those who complete the assignments five times or more will receive two stickers. The more red flower stickers participants accumulate, the more prizes (e.g., sports bracelets, suits, and sneakers) they can redeem at the end of the intervention.

12-week maintenance phase

The 12-week maintenance phase will be implemented mainly during the summer holidays following the 12-week initiation phase. This part will consist of three face-to-face workshops and three online webinars (using the DingTalk app), only for the target population of obese children. The offline workshops and online webinars will be conducted alternately and biweekly. Each workshop and webinar will last for 60 min, consisting of a 30-min expert talk, a 20-min interactive activity, and a 10-min Q&A. The main purposes of the offline workshops and online webinars will include: (1) maintaining children's FMSs practice; (2) promoting healthy lifestyles (e.g., regular engagement in PA, limited sedentary time, high-quality sleep, balanced diet) in the long run; and (3) addressing existing problems and future challenges.
The topics will include: (1) practicing FMSs in daily life (e.g., introduction of FMSs games, tips for FMSs practice); (2) PA and health (e.g., PA recommendations for children, co-activities of parent and child); (3) sedentary behavior and health (e.g., screen time recommendations, tips for interrupting prolonged sitting); (4) good sleep (e.g., sleep recommendations in terms of duration and quality); (5) eating happily and healthily (e.g., healthy food selection, healthy cooking, and "say no to snacks"); and (6) existing problems and future challenges in children's FMSs (e.g., existing problems in the daily practice of FMSs, how to maintain the development of FMSs after project completion).

Procedure and quality control

The FMSPPOC development will be facilitated through five steps. Step 1 is to form a steering group consisting of relevant stakeholders (e.g., research team members, PA instructors, primary school managers, and obese children and their parents). In Step 2, the research group will develop the intervention content and practical strategies. In Step 3 (program production), all the stakeholders will discuss and reach consensus on refining and confirming the intervention content and practical strategies, and then a two-month pilot study will be conducted to test and optimize the strategies and relevant materials. Participant recruitment and maintenance, the data collection instruments and schedule, and the adaptability and feasibility of the intervention will be fully considered in the pilot study. In Step 4, a comprehensive plan for the intervention implementation will be constructed. To ensure the safety, supportiveness, enjoyment, and efficiency of the intervention during implementation, an operational manual and training materials will be developed in this step. In Step 5, a plan to evaluate the implementation and effectiveness of the intervention will be developed. Any amendments will be made if necessary (Fig. 2).

For the project implementation, two 60-min face-to-face briefing sessions will first be provided for the intervention facilitators (i.e., PE teachers, student helpers, parents) one month prior to the FMSPPOC commencement (delivered by the investigators and/or research assistant) in the university gymnasium. One briefing session will focus on the necessary knowledge and skills training so that PE teachers and student helpers can implement the intervention plan smoothly and effectively. In addition, the implementation details, rationale, nature, and benefits of FMSPPOC will be introduced. Another briefing session will train all evaluators on the measurement of FMSs, health behaviors, and physical fitness, as well as the specification of the psychological measures.

For the main study, the school-based PA sessions will be conducted in an indoor playground at a primary school during afternoon custody time (4:00-5:00 p.m.) by qualified PE teachers with the assistance of student helpers. The attendance of participants will also be recorded. In addition, to monitor the PA intensity, three mid-tests will be conducted using downloadable, wireless Polar heart rate monitors. [Table 2: The content of the school-based PA sessions.]

For the family-based PA assignments, the relevant tasks and instructions will be introduced to parents in the beginning briefing session. The research assistants will remind parents to upload the required information each week via SMS messages and deliver the incentives correspondingly.
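As an illustration of the heart-rate-based intensity monitoring mentioned above, the sketch below classifies a session's mean heart rate by %HRmax. The age-predicted HRmax formula and the ACSM-style intensity bands are assumptions, not thresholds specified in the protocol.

```python
def intensity_band(mean_hr: float, age: int) -> str:
    """Classify a session's mean heart rate by %HRmax.

    Assumptions (not stated in the protocol): HRmax is estimated as
    220 - age, and the moderate/vigorous bands follow the widely used
    ACSM ranges (moderate 64-76% HRmax, vigorous >= 77% HRmax).
    """
    hr_max = 220 - age                 # age-predicted maximal heart rate
    pct = 100 * mean_hr / hr_max
    if pct >= 77:
        return "vigorous"
    if pct >= 64:
        return "moderate"
    return "below MVPA target"

# e.g., a 10-year-old averaging 150 bpm: 150/210 = 71% HRmax -> "moderate"
```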
For the second 12-week maintenance phase, the offline workshops will be implemented in a multi-function sports room at a primary school, while the online webinars will be conducted using the DingTalk app. These workshops and webinars will be facilitated by trained PE teachers and health experts from Hebei Normal University. Each participant will be asked to attend the workshops and webinars with at least one parent or legal guardian.

Implementation evaluation

This study will use the Reach, Effectiveness, Adoption, Implementation, and Maintenance (RE-AIM) framework [74,75] to assess the implementation of FMSPPOC among obese children. Details can be found in Table 3.

Outcome evaluation

A description of the measurements in the present study is presented in Table 4. All the measurements will be conducted on four occasions, including baseline assessment (T0), mid-intervention assessment (T1: 12 weeks after the baseline assessment), post-intervention assessment (T2: 24 weeks after the baseline assessment), and follow-up assessment (T3: 6 months after the intervention completion).

In addition, the collection of all indicators at the four time points will follow this process. Days 1-7: demographic information (filled in by parents) and ActiGraph GT3X+-measured PA, sedentary behavior, and sleep; Day 8: anthropometric and body composition measures and FMSs; Day 9: physical fitness assessment; and Day 10: psychological outcomes. The Day 8-10 data collection will be implemented by four trained student helpers with the assistance of PE teachers in an elementary school gymnasium, where the temperature will be kept constant at 20℃ and the humidity will be controlled at 50% during the assessment.

Data analysis

Quantitative data analyses will be performed using SPSS and Mplus. Descriptive statistics of all continuous variables will be expressed as mean ± standard deviation (M ± SD). Independent samples t-tests and Chi-square tests will be employed for checking the randomization, analyzing dropout, and detecting potential confounders at baseline. An intention-to-treat approach will be used for the primary analyses, with per-protocol analyses as sensitivity tests. Missing data will be imputed using a multiple imputation approach with linear and logistic regression equations. The statistical significance level will be set to p < 0.05 (two-tailed). Generalized linear mixed models will be used to examine the intervention effects.

Table 3 (excerpt): RE-AIM implementation evaluation plan

Effectiveness
- Indicator: to ensure that more than 85% of participants complete the intervention program.
- Purposes: 1. Explore the experience of participating in the program from participants' views; 2. Identify the conditions and circumstances that influence the intervention effectiveness; 3. Identify the reasons that result in variations/differences in the intervention effects across participants within the program; 4. Identify the potential contamination of the intervention effectiveness; 5. Count the numbers of sports injuries and other unintended consequences during the intervention period, and analyze the reasons.
- Question 2: How do completers compare to non-completers? Indicators: a dropout analysis will be conducted to identify baseline differences between completers and dropouts; a sensitivity test will be conducted to identify the impact of the dropouts on the evaluation of the intervention effects.
- Question 3: What are the effects of the intervention on participants?
[Table 3: Implementation evaluation based on the RE-AIM framework — recoverable content]

• Indicator: ensure that more than 85% of participants complete the intervention program.
Qualitative evaluation aims: 1. Explore the experience of participating in the program from participants' views. 2. Identify the conditions and circumstances that influence the intervention effectiveness. 3. Identify the reasons for variations/differences in the intervention effects across participants within the program. 4. Identify potential contamination of the intervention effectiveness. 5. Count the number of sports injuries and other unintended consequences during the intervention period, and analyze the reasons.

2. How do completers compare to non-completers?
• A dropout analysis will be conducted to identify baseline differences between completers and dropouts.
• A sensitivity test will be conducted to identify the impact of dropouts on the evaluation of the intervention effects.

3. What are the effects of the intervention on participants?
• Significant improvement in the primary outcome (FMSs) and other health-related physical and psychological outcomes, e.g., PA, fitness, well-being, perceived motor competence, and PA enjoyment (detailed outcome measures are shown in Table 4).

• A process evaluation framework will be used to assess the fidelity and quality of intervention implementation (e.g., exercise intensity will be monitored during the intervention; a fidelity evaluation scale will be used).
• Trained research staff will observe teachers' implementation of the FMSPPOC program (e.g., percentage of lesson content conveyed; whether lessons are presented in the recommended order) and parents' involvement in the family-based PA assignments.
• Self-reported e-logs will be used to assess participants' satisfaction with the intervention implementation.
• A self-reported scoring sheet will be used to assess consistency in delivering the intervention and to evaluate the sustained commitment of facilitators and collaborators.
Qualitative evaluation aims: 1. Identify the acceptability, adaptability, and practicality of the intervention. 2. Identify any modifications made during the intervention process, and explore the reasons behind them. 3. Identify the barriers that influence the fidelity of the intervention. 4. Identify the potential facilitators and barriers that influence the intervention implementation, and provide strategies to address them in the future.

2. Did participants adhere to the intervention program?
• Analyses of the attendance rate of each session and the overall completion rate.
• Checking completion of the family-based PA assignments via self-reported e-logs.
• Self-reported e-logs completed by the intervention facilitators and collaborators.

Maintenance (main study and follow-up)
1. The extent to which the intervention outcomes are sustained.
• Data analyses of the sustained intervention effects during the 6-month follow-up.
Qualitative evaluation aims: 1. Identify the components that influence the successful sustainability of the current program. 2. Identify the modifications made to sustain the current program. 3. Identify the facilitators and barriers to seeking more collaborators in the future. 4. Identify the facilitators and barriers to maintaining the program and promoting it to a wider population in the future.
2. The number of participants sustained and the number of new participants enrolled in the program.
• Expected: more than 85% of participants sustained, and more than 300 new participants from five main districts (60-80 participants per district).
3. The number of PE teachers and parents who have sustained their implementation of the program or disseminated it to others.
• Expected: more than 70% of PE teachers and parents sustain implementation of the program or introduce it to others.
4. The degree to which the Shijiazhuang Education Bureau institutionalizes the FMSPPOC as an ongoing part of its regular activities (i.e., supplying its own funding, integrating it into programmatic activities, regularly training staff in implementation, and continuing to provide data for monitoring and evaluation).
• Expected: more than 70% of service providers/collaborators continue to implement the FMSPPOC program for at least three years.
5. The number of new service providers/collaborators, including but not limited to NGOs, sponsorships, and local government support, that join the FMSPPOC.

[Table 4: Outcome measures — recoverable content]

Gross motor skills. Evaluated using the Test of Gross Motor Development-Third Edition (TGMD-3), which has been validated in China with satisfactory reliability [76].
The TGMD-3 includes two subscales: the locomotor skills subscale, composed of six skills (run, gallop, hop, horizontal jump, slide (judged on four performance criteria), and skip (judged on three criteria)), and the ball skills subscale (previously named object control skills in the TGMD-2), composed of seven skills: one-hand forehand strike of a self-bounced tennis ball, kicking a stationary ball, overhand throw, underhand throw, two-hand strike of a stationary ball, one-hand stationary dribble, and two-hand catch. Before the assessment of each skill, an accurate verbal description and demonstration will be provided by a trained researcher. Each child will complete three trials: one practice trial and two formal trials. Only the scores of the two formal trials will be recorded for the evaluation. Children's performances will be observed and evaluated against 3-5 qualitative performance criteria for each TGMD-3 skill; each criterion will be scored 1 point (present) or 0 points (absent) using process-oriented checklists [77]. The total score for each item is the sum of both trials. Item sums will be used to calculate the locomotor (maximum 46) and ball skills (maximum 54) subscale scores, as well as the overall TGMD-3 score (maximum 100) [77].

Manual dexterity and balance. Assessed using subscales of the Movement Assessment Battery for Children-Second Edition (MABC-2, age bands 2 and 3) [78], which has demonstrated good reliability and validity in Chinese children [79]: (1) manual dexterity, composed of placing pegs, threading lace, and drawing trail; and (2) balance, which includes one-board balance, walking heel-to-toe forwards, and hopping on mats. Scoring will be consistent with the method published in the Movement ABC-2 UK manual [78].

Secondary outcomes

Health behaviors: PA, sedentary behavior, and sleep. The ActiGraph GT3X+ accelerometer (ActiGraph LLC, Pensacola, FL, USA) will be used to objectively monitor whole-day PA. All participants will be asked to wear the monitor at the waist, on an elasticized belt at the right midaxillary line. Participants will be encouraged to wear the accelerometer 24 h per day (removing it only for water-based activities, i.e., swimming/bathing) for at least 7 days, including two weekend days. Days with > 16 h of activity recordings (from midnight to midnight) will be considered valid [80], and the minimum amount of non-sleep data considered acceptable for inclusion will be at least 4 days with at least 10 h of wake wear time per day, including at least one weekend day [81]. Data will be collected at a sampling rate of 80 Hz, downloaded in 1-s epochs with the low-frequency extension filter using ActiLife software version 6.13 (ActiGraph LLC), and reintegrated to 15-s epochs for analysis. Non-wear time will be defined as a period of 20 or more consecutive minutes of zero counts [82]. Night sleep duration will be calculated in R using the GGIR package (version 2.0) default algorithm, as described by van Hees et al. [83]. Evenson cut-off points [82] will be applied to classify non-sleep time into light, moderate, and vigorous PA.

Physical fitness. Assessed following the standardized national protocol [84], involving a total of 11 physical fitness indicators, of which the 7 suitable for primary school students will be used: BMI, vital capacity, 50 m sprint, sit-and-reach, timed rope skipping, timed sit-ups (Grades 3-6 only), and 50 m × 8 shuttle run (Grades 5 and 6 only). Following the guidelines, test examiners will conduct each test according to a protocol determined a priori. Each fitness indicator score will be weighted by a grade- and sex-specific percentage.
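To make the accelerometer reduction described above concrete, the following is a minimal Python sketch of the pipeline (epoch reintegration, non-wear detection, and intensity classification). The function names and count thresholds are illustrative assumptions; the exact Evenson cut-points should be taken from reference [82], and the protocol itself uses ActiLife and GGIR rather than custom code.

```python
# Minimal sketch of the accelerometer reduction described above.
# Thresholds and names are illustrative; the protocol uses ActiLife/GGIR,
# and the exact Evenson cut-points should be taken from reference [82].
import numpy as np

def reintegrate(counts_1s: np.ndarray, epoch: int = 15) -> np.ndarray:
    """Sum 1-s epoch counts into longer epochs (here 15 s)."""
    n = len(counts_1s) // epoch * epoch
    return counts_1s[:n].reshape(-1, epoch).sum(axis=1)

def non_wear_mask(counts_15s: np.ndarray, minutes: int = 20) -> np.ndarray:
    """Flag runs of >= `minutes` consecutive minutes of zero counts."""
    window = minutes * 4                  # 4 fifteen-second epochs/minute
    mask = np.zeros(len(counts_15s), dtype=bool)
    run_start = None
    for i, c in enumerate(np.append(counts_15s, 1)):  # sentinel closes run
        if c == 0 and run_start is None:
            run_start = i
        elif c != 0 and run_start is not None:
            if i - run_start >= window:
                mask[run_start:i] = True
            run_start = None
    return mask

def classify(counts_15s, light_lo, mod_lo, vig_lo):
    """Label each wear epoch by intensity; thresholds passed explicitly."""
    labels = np.full(len(counts_15s), "sedentary", dtype=object)
    labels[counts_15s >= light_lo] = "light"
    labels[counts_15s >= mod_lo] = "moderate"
    labels[counts_15s >= vig_lo] = "vigorous"
    return labels
```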
Perceived motor competence. Assessed using the athletic competence subscale of the Self-Perception Profile for Children (SPPC) [85]. The subscale includes 6 items: three are worded such that the first part of the statement reflects low competence or adequacy, and three are worded to first reflect high perceptions of competence or adequacy. This counterbalancing is reflected in the scoring of the items, where half are scored 1, 2, 3, 4 and half are scored 4, 3, 2, 1, to ensure that children are tracking the content of the items and are not simply providing random responses or always checking the same side of all questions. The Chinese version of the SPPC has adequate reliability (ranging from 0.61 to 0.76) [86], and its structural and criterion validity are acceptable [87].

Perceived well-being. The Chinese version of the 12-item Psychological Well-Being Scale for Children (PWB-C) will be used [88,89]. The PWB-C covers six dimensions of psychological well-being: environmental mastery, personal growth, purpose in life, self-acceptance, autonomy, and positive relations with others. Items will be rated on a 4-point Likert scale ranging from 1 ("almost never") to 4 ("very frequently"). The mean score of the 12 items will be calculated, with a higher score indicating a higher level of perceived well-being.

Discussion

Obesity is an increasing problem in China and among Chinese children, yet rather little evidence is available on how to address it. There are indications that this trend is similar to those in other countries. In 2008, Stodden et al. [33] proposed a "conceptual model of children's motor development" based on partial evidence and experience, which hypothesized relationships between FMSs and multiple health indicators (e.g., PA, perceived motor competence, physical fitness, and weight status) in children. The authors point out that FMSs levels may positively or negatively affect PA and weight status in children, and that healthy or unhealthy weight status may in turn promote or restrict the development of FMSs over time, with perceived motor competence and physical fitness mediating the relationship between FMSs and PA [33]. In recent years, overweight and obese children have consistently been reported to score significantly lower than their healthy-weight counterparts on FMSs, suggesting that poor FMSs can contribute to overweight/obesity [21-25,94]. Similarly, obesity may likewise act as a constraint on FMS development and proficiency, generating biomechanical changes and adjustments in movement [94].
In previous studies, FMSs have been shown to correlate with PA in school-aged children [4], and strong positive associations have been observed between FMS proficiency in children at age 6 and leisure-time PA in adulthood at age 26 [95]. Beyond regular PA participation, additional health benefits of FMS proficiency have been associated with increased cardiorespiratory fitness [96] and perceived motor competence [10,14], as well as reduced overweight and obesity [12,13]. This indicates the importance of implementing effective interventions that allow overweight/obese children to develop their FMSs early, reducing their risk of obesity through continued PA into adolescence and adulthood.

To our knowledge, FMSPPOC is the first school-family blended, multi-component FMS-promotion program to be designed in China. It addresses the research and practical gaps identified in previous studies and has a series of strengths: 1) standard methods will be applied for the assessment of FMSs; 2) the use of a theoretical framework and behavioral change techniques will support the implementation and effectiveness of the FMS-promotion program; 3) the gold standard of scientific designs (i.e., CRCT) will be applied and comprehensive measures will be implemented, which can contribute to a robust examination of the program's effectiveness and a better understanding of the potential mechanisms; and 4) the use of the RE-AIM framework will enhance the quality of the program and enable us to broadly evaluate the implications of FMSPPOC among obese children. We anticipate that FMSPPOC will also be a new paradigm of secondary obesity prevention.

[Table 4, continued: recoverable content]

M-PAC components of PA for both parents and children. The Chinese-translated items of the M-PAC components of PA will be used [90-92]. The questionnaire package includes measures of behavioral intention, instrumental and affective attitudes, perceived capability, perceived opportunity, parental support (intentional and actual), action planning and coping planning, action control, habit strength, and identity. The response options and scoring approach will be consistent with the settings of previous studies [65,66,90,91].

Anthropometric and body composition measures. Weight and height will be measured using calibrated medical digital scales (RGT-140, Changzhou, China) and a portable stadiometer (GMCS-I, Beijing, China) to the nearest 0.05 kg and 0.1 cm, respectively, following a standardized protocol [84]. Waist circumference will be measured using a flexible plastic tape, 1 cm above the umbilicus at the horizontal level, in a standing position at the end of a normal expiration [92]. Each of the aforementioned anthropometric indices will be measured twice and the mean value used for data analysis. Bio-impedance analysis (BIA) will be conducted using a portable body composition analyzer (InBody 230, Seoul, South Korea) and Lookin'Body 120 software (DSM-BIA technology; InBody Co., Seoul, South Korea) to estimate body composition, including percent body fat (PBF), fat mass (FM), fat-free mass (FFM, kg), and skeletal muscle mass (SMM, kg). The instrument has been validated against dual-energy X-ray absorptiometry in school-age children, with satisfactory results for estimating body fat [93].

Additional information: demographics. Children's age, gender, grade (primary 1-6), ethnicity (Han or other), parental educational level (below college; college or above), and yearly household income (low: RMB < 84,000; medium: RMB 84,000-132,000; high: RMB > 132,000) [60] will be reported by parents.
In addition, we will propose and assist health and education authorities to advocate for and disseminate the blended intervention among all primary schools, to tackle unhealthy lifestyles in children and further improve the health status of obese children in China. Other countries and regions with demographics similar to China's can also learn from and benefit from this program. Of course, comparisons with other cultures will need to follow, and the underlying mechanisms will also need to be evaluated. With that, this study and comparable research can inform not only research and practice but also theory refinement and scaling-up approaches.
2023-02-21T14:52:40.015Z
2023-02-20T00:00:00.000
{ "year": 2023, "sha1": "006aac7e3b513d9de5c34577ac7c1b2963b28166", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Springer", "pdf_hash": "006aac7e3b513d9de5c34577ac7c1b2963b28166", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
266222462
pes2o/s2orc
v3-fos-license
Refractive Associations With Whole Eye Movement Distance and Time Among Chinese University Students: A Corvis ST Study

Purpose: Eye movement has been frequently studied in clinical conditions, but the association with myopia has been less explored, especially in population-based samples. The purpose of this study was to assess the associations of eye movement measured by the Corvis ST with refractive status in healthy university students.

Methods: A total of 1640 healthy students were included in the study (19.0 ± 0.9 years). Eye movement parameters (whole eye movement [WEM]; whole eye movement time [WEMT]) were measured by the Corvis ST. Spherical equivalent (SE) was measured using an autorefractor without cycloplegia. The IOL Master was used to assess axial length (AL).

Results: AL was negatively correlated with WEM and WEMT (rWEM = −0.28, rWEMT = −0.08), and SE was positively correlated with WEM and WEMT (rWEM = 0.21, rWEMT = 0.14). For the risk of high myopia, breakpoint analysis and a restricted cubic spline model showed that the knots of the significant steep downward trends of WEM and WEMT were 0.27 mm and 20.4 ms, respectively. The piecewise linear regression model revealed a significant correlation between AL, SE, and WEM when the value of WEM was below 0.27 mm. Additionally, when WEMT exceeded 20.4 ms, a significant decrease in AL and an increase in SE were observed with increasing WEMT.

Conclusions: A larger distance and longer duration of eye movement were correlated with a lower degree of myopia and shorter AL, and there was a threshold effect.

Translational Relevance: The findings might aid in understanding the pathogenesis of myopia and provide a theoretical foundation for clinical diagnosis and prediction.

Introduction

Myopia, the most common refractive error, is becoming increasingly prevalent worldwide, with a notably high incidence in East and Southeast Asia [1,2]. Approximately 10% to 20% of myopic patients will develop high myopia, which can lead to complications causing irreversible vision loss, including myopic macular degeneration, retinal detachment, and others [3]. Myopia and high myopia are primarily characterized by excessive elongation of the eye [4]. In recent years, corneal visualization technology (Corvis ST; Oculus, Wetzlar, Germany) has been used to quantitatively evaluate the biomechanical properties of the cornea, eyeball, and constant components, facilitating the study of the relevant biomechanics. The Corvis ST is a non-contact device that provides much information about the biomechanics of the cornea. Whole eye movement (WEM) and whole eye movement time (WEMT) are indicators that reflect eye movement during the measurement and represent the overall force profile of the cornea, eyeball, and constant components. When the air is released, the eyeball itself moves back slightly, and when the cornea returns to its original contour, the eyeball moves forward again. A slight but noticeable movement of the entire eye can be observed during the measurement [5].
Hwang et al. [6] proposed that eye displacement can be used to quantify the biomechanical parameters of the orbital soft tissue behind the eye, including changes in ocular fat and the extraocular muscles. WEM has some important relationships with clinical factors [9,10]. In a small clinical sample study, longer axial length (AL) was associated with smaller WEM [11], but investigations in normal populations have been lacking [13,14]. These studies suggested that eyes with longer AL generally exhibit greater compliance of the eyeball, resulting in lower ocular rigidity; during the air-puff process, the eyeball is more prone to deformation rather than posterior displacement, leading to a lower WEM [6,15,16]. WEM and WEMT are relatively new parameters, and few studies have addressed their epidemiological associations with AL and refractive status, especially in healthy populations. In the few studies to date, WEM has only been briefly correlated with AL as an indicator of corneal biomechanics, and the sample sizes were small [10,11,17,18]. Data on eye movement and refractive parameters in normal populations are lacking, and possible nonlinear associations have not been explored. Therefore, this study aimed to explore the linear and nonlinear associations between eye movements and refractive parameters in Chinese university students and to provide a basis for exploring the mechanism of myopia.

Study Population

The study comes from the Dali University Student Eye Health Study, a school population study conducted in Yunnan Province in southwestern China. The purpose of this study was to identify exposures and risk factors for common eye diseases among college students [20-23]. Informed consent was obtained from each participant before enrollment. All freshmen of Dali University were invited to participate in this questionnaire survey and eye examination in 2021. People with ocular lesions unrelated to myopia (keratoconus, acute infection, etc.) were excluded. A total of 2014 students completed the questionnaire and vision test, with a response rate of 74.7%. In addition, 369 participants with a history of corneal refractive surgery and five participants without eye movement parameters were excluded, leaving 1640 participants for the current analysis. There were no differences in age or sex between subjects and non-subjects (P > 0.05). The study was conducted in accordance with the Declaration of Helsinki and was approved by the Ethics Committee of the Affiliated Hospital of Yunnan University (February 22, 2021; approval number 2021040).

Eye Examination

The WEM and WEMT were measured by the corneal dynamic analyzer (Corvis ST; Oculus), a device that uses a non-contact tonometer, Scheimpflug geometry, and an ultra-high-speed camera to measure intraocular pressure and various corneal biomechanical parameters. When the Corvis ST delivers its air puff, a slight but noticeable movement of the entire eye can be observed as the cornea deforms and recovers, as shown in Figure 1. During the examination, the participant placed the chin on the chin rest and the forehead against the forehead rest, kept the examined eye open while fixating on the red dot on the instrument's screen, and the examiner operated the joystick to align with the cornea so that the parameters were identified automatically. Only reliable measurements identified as "OK" by the Corvis ST monitor were selected.
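The refraction protocol in the next subsection averages the first five autorefractor readings "with vector methods". As a hedged illustration of what such averaging involves, the sketch below converts sphero-cylindrical readings to standard power vectors (M, J0, J45) and averages them; the conversion formulas are the usual power-vector ones, and the numeric readings are invented placeholders rather than study data.

```python
# Sketch of vector-method averaging of autorefractor readings
# (sphere, cylinder, axis -> power vectors M, J0, J45).
# Readings below are invented placeholders, not study data.
import numpy as np

def to_power_vector(sph, cyl, axis_deg):
    """Convert a sphero-cylindrical reading to (M, J0, J45)."""
    ax = np.radians(axis_deg)
    m = sph + cyl / 2.0                  # spherical equivalent
    j0 = -(cyl / 2.0) * np.cos(2 * ax)   # Jackson cross-cylinder, 0/90 deg
    j45 = -(cyl / 2.0) * np.sin(2 * ax)  # Jackson cross-cylinder, 45 deg
    return np.array([m, j0, j45])

readings = [(-3.25, -0.50, 175), (-3.00, -0.75, 5),
            (-3.25, -0.50, 180), (-3.00, -0.50, 170), (-3.25, -0.75, 178)]
mean_vec = np.mean([to_power_vector(*r) for r in readings], axis=0)
print("mean (M, J0, J45):", mean_vec)    # M is the averaged SE
```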
The refractive status was measured using an autorefractor (KR800; Topcon Optical Company, Tokyo, Japan) without cycloplegia. Spherical equivalent (SE) was calculated as the spherical power plus half of the cylindrical power. Myopia was defined as an SE less than −0.5 diopter (D), and high myopia was defined as an SE less than −6.0 D. For the refractive measurement, the first five valid readings were used and averaged with vector methods to give a single estimate of refractive error; for the spherical and cylindrical components, all five readings had to be at most 0.50 D apart. AL was measured using the IOL Master (Zeiss Meditec, Dublin, CA, USA); the measurement was conducted three times and the average value used. These eye examinations were carried out by professional ophthalmologists.

Covariate Variables

Sociodemographic characteristics, including age and sex (men or women), were recorded using a self-administered questionnaire. Weight and height were measured by trained school nurses using standard procedures; weight was read from a digital scale to the nearest 0.1 kg, and height to the nearest 0.1 cm. Body mass index (BMI) was calculated as weight (kg) divided by height (m) squared (kg/m²).

Statistical Analyses

Because of the strong correlation between the biometric parameters of the left and right eyes (Pearson correlation coefficient > 0.90), only right eyes were analyzed. Pearson correlation coefficients were used to test the relationships of WEM and WEMT with age, SE, and AL. t-tests were used to identify differences in systemic factors and ocular parameters between men and women. Linear regression models were used to examine the associations of WEM and WEMT with SE and AL. Considering that the associations of eye movements with refractive parameters might be nonlinear, we also explored the associations using a restricted cubic spline model and breakpoint analysis. In restricted cubic spline analysis, the independent variable is divided into intervals and a cubic polynomial is fitted within each interval. Breakpoint analysis describes the change in the dependent variable relative to the independent variable by fitting multiple segments.

All analyses were performed using SPSS 23.0 (SPSS Inc., Chicago, IL) and R version 4.2.3. A P value < 0.05 was considered statistically significant.

Results

A total of 1640 students (510 men and 1130 women) aged 15.7 to 24.4 years were enrolled, with a mean age of 19.0 ± 0.9 years. Basic and ocular characteristics, as well as sex differences, are shown in Table 1. The mean AL and SE of the overall population were 24.84 mm and −3.70 D, and the mean WEM and WEMT were 0.21 mm and 20.89 ms, respectively. Men tended to have greater weight, height, and BMI, lower myopia, longer AL, greater WEM, and longer WEMT than women (all P < 0.05, Table 1).

The scatter plots of WEM and WEMT against age, AL, and SE are shown in Figure 2.
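As a schematic of the two nonlinear approaches described in the Statistical Analyses subsection, the following sketch fits a restricted-cubic-spline term and a piecewise linear model with a pre-specified knot to synthetic data. The variable names, the patsy cr() spline basis, and the knot value are our illustrative choices; the study itself used SPSS 23.0 and R.

```python
# Schematic of the nonlinear analyses described above. All data are
# synthetic and all names/knots illustrative; the study used SPSS and R.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
wem = rng.uniform(0.10, 0.35, 500)           # synthetic WEM values (mm)
al = 26.0 - 5.0 * np.minimum(wem, 0.27) + rng.normal(0.0, 0.4, 500)
df = pd.DataFrame({"wem": wem, "al": al})

# Restricted cubic spline: cr() builds a natural cubic spline basis.
spline_fit = smf.ols("al ~ cr(wem, df=4)", data=df).fit()

# Piecewise (breakpoint) linear model: the I() term adds a change of
# slope above the knot, so the two segments can have different slopes.
knot = 0.27
piece_fit = smf.ols("al ~ wem + I((wem - knot) * (wem > knot))",
                    data=df).fit()
print(spline_fit.rsquared, piece_fit.params)
```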
The values of both WEM and WEMT increased with increasing age (rWEM = 0.07, rWEMT = 0.08, both P < 0.05; Figs. 2A, 2B). Table 2 shows the associations of WEM and WEMT with SE and AL. In both the crude and the age- and sex-adjusted models, the associations with SE and AL were statistically significant for each unit or quartile increase in WEM and WEMT (all P < 0.05). Specifically, after adjusting for sex and age, we found that for each 1 mm increase in WEM, the SE value increased by 10.86 D and the AL decreased by 7.42 mm, whereas for each 1 ms increase in WEMT, the SE value increased by 0.54 D and the AL decreased by 0.21 mm.

For the risk of high myopia, restricted cubic spline analysis showed that the knots of the significant steep downward trends of WEM and WEMT were 0.27 mm and 20.4 ms, respectively. These breakpoints were used as cutoff values in further analysis. In the restricted cubic spline model adjusted for sex and age, the risk of high myopia decreased as WEM values increased below 0.27 mm and as WEMT values increased above 20.4 ms (Fig. 3). Consistently, we further analyzed the associations of WEM and WEMT with AL and SE in a piecewise linear regression model. The results revealed that the association between AL, SE, and WEM was not significant when the value of WEM exceeded 0.27 mm (Figs. 4A, 4B). However, a significant decrease in AL and an increase in SE were observed as WEMT increased beyond 20.4 ms (Figs. 4C, 4D).

Discussion

Our study provides population-based data on the relationship of eye movement with myopia and AL. The findings suggest that larger WEM and WEMT are associated with a lower degree of myopia and shorter AL, especially at WEM values less than 0.27 mm and WEMT values greater than 20.4 ms. In addition, eye movements better explain changes in AL than they explain changes in SE.

In the current study, the mean values of WEM and WEMT were lower than those reported by Abdi et al. [24] and Li et al. [25]; the younger age of our subjects may be the largest part of the explanation. In addition, WEM and WEMT were positively associated with age, consistent with previous studies [24,26]. This correlation may be due to changes in retrobulbar fat composition with age: as individuals age, retrobulbar fat decreases, and eye displacement and time increase [27]. Based on a small sample, Hwang et al. [6] found that females had a larger WEM than males, whereas our study found the opposite. Currently, there is no biological mechanism supporting sex differences in eye movements, so the observed differences might be chance findings. Additionally, variations in sample size, age, refractive error, and orbital structures could contribute to the inconsistent results [29,30]. Shorter WEMT has been independently associated with more severe visual field defects in normal-tension glaucoma [8,14], and WEM was smaller in patients with Graves orbitopathy or thyroid orbitopathy than in healthy subjects [15,31]. As we know, AL and refractive status are closely related to these diseases, but few studies have directly examined eye movement in relation to AL and refractive status. We explored this relationship directly and found that AL and SE are related to eye movements. The explanation rests mainly on globe compliance and optic nerve traction. Previous studies have shown that optic nerve traction exerted on the eyeball during eye movements deforms the optic nerve head, and this effect is more pronounced during adduction than during abduction [32,33].
When looking at close objects, the eyes are in an adducted state, which may cause transient axial elongation [34]. In highly myopic eyes, the eyeball is overly elongated, which "relieves" some of the inherent optic nerve traction, so that the same air puff induces less eye movement [16,35]. Besides, based on growth, deformation, and stress linkages within the eyeball, forces generated by the extraocular muscles during eye movements may be responsible for axially convergent elongation [16]. However, the direction of the extraocular muscle forces is not axial, so it is unlikely that these forces cause scleral remodeling producing posterior dilation of the globe [36]. In cases of axial elongation and high myopia, the deformability of the eyeball increases as it elongates axially; that is, the elongation is typically accompanied by a reduction in collagen fiber bundles within the sclera, making it thinner and more elastic [39]. The cornea and sclera are nonlinear, anisotropic, and viscoelastic soft tissues, and the extraocular fat and muscles also exhibit significant viscoelasticity [39,40]. Therefore, eye movements are also generally nonlinear. Interestingly, our results show a threshold effect in the relationship of eye movement with AL and SE; that is, the relationship is only meaningful when WEM is less than 0.27 mm and WEMT is greater than 20.4 ms. This result reflects the nonlinear nature of eye movement, but the specific values and the phenomenon itself have not been reported in previous studies, and the mechanism needs to be explored further. Besides, in the visual system, a series of muscles in our eyeballs are responsible for controlling eye movement [12]. Lin et al. [41] found that inflammation is associated with the development of myopia. In the active inflammation phase, edema and swelling of the extraocular muscles, connective tissue, and orbital fat can lead to increased intraocular pressure through ocular hyperemia and increased adventitial venous pressure, followed by decreased orbital compliance leading to slowed eye movements [31,42,43]. Although both SE and AL are parameters representing the degree of myopia, our analysis showed that eye movements explained more of the variation in AL than in SE. A study by Hagen et al. [44] proposed that sustained elongation of AL is compensated for by the refraction of the lens, which involves a reduction in lens power and a deepening of the anterior chamber, thereby delaying the onset of myopia. Additionally, it has been suggested that AL is a better indicator of eye size than SE [45]. Furthermore, it is worth exploring the possibility of a mechanical or physical relationship between eye movements and AL. This relationship warrants further investigation to determine the exact cause and underlying mechanism.
The current study highlights the nonlinear relationship between eye movements and refractive error, suggesting that ocular movement parameters have the potential to serve as new indicators for assessing the risk of myopia in future diagnostic and predictive work.

This study explored the relationship between eye movement and myopia in a large sample of healthy people, but it still has some limitations. First, the study design is cross-sectional, so causal relationships cannot be inferred; it is also possible that different refractive statuses lead to changes in eye movements. Second, the population in this study included only young people aged 15-24 years, which limits the generalizability of the conclusions to other age groups; expanding the participant pool to a more diverse age demographic could offer more valuable insights in future research. Finally, some other factors that may affect the observed associations, such as genetics and visual accommodation, were not considered in our analysis; future studies should control for these confounding factors as far as possible.

Conclusions

This study explored the correlation between eye movement and refractive status among Chinese university students. The findings showed that eye movement distance and duration were negatively correlated with longer AL and more myopic refractive error, and that there was a threshold effect. In addition, changes in eye movements are more closely related to AL than to SE. These results provide baseline data for ophthalmologists and other populations and have implications for understanding the role of eye movements in relation to refractive errors and for developing clinical abnormal values for eye movement indicators.

[Figure 1 caption: Schematic diagram of whole eye movement by Corvis ST.]
[Figure 3 caption: Association of WEM and WEMT with the risk of high myopia. (A) WEM vs. high myopia; the reference was 0.27 mm of WEM. (B) WEMT vs. high myopia; the reference was 20.4 ms of WEMT. OR, odds ratio; CI, confidence interval; adjusted for gender and age.]
[Table 1 caption: Characteristics and gender differences of the included participants in the study. SD, standard deviation.]
[Table 2 caption: Linear regression analyses of associations between WEM, WEMT, and refractive parameters. CI, confidence interval.]

…orbitopathy. J Endocrinol Invest. 2021;44:453-458.
44. Hagen LA, Gilson SJ, Akram MN, Baraas RC. Emmetropia is maintained despite continued eye growth from 16 to 18 years of age. Invest Ophthalmol Vis Sci. 2019;60:4178-4186.
45. Rezapour J, Tran AQ, Bowd C, et al. Comparison of optic disc ovality index and rotation angle measurements in myopic eyes using photography and OCT based techniques. Front Med. 2022;9:872658.
2023-12-16T05:10:55.554Z
2023-12-01T00:00:00.000
{ "year": 2023, "sha1": "047e789ce87a4b7005ef01f465b7c5a8cbffef20", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1167/tvst.12.12.13", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "047e789ce87a4b7005ef01f465b7c5a8cbffef20", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
15424975
pes2o/s2orc
v3-fos-license
Synergistic effect of a rehabilitation program and treadmill exercise on pain and dysfunction in patients with chronic low back pain

[Purpose] The present study examined the influence of treadmill exercise added to a low back pain rehabilitation program on low back extensor strength, pain, and dysfunction in chronic low back pain patients. [Subjects and Methods] Twenty men aged 22-36 years with chronic low back pain were randomly divided into experimental and control groups of 10 patients each. Both groups underwent a low back pain rehabilitation program lasting 30 min per session, thrice a week for 8 weeks. The experimental group was prescribed an additional 30 min of treadmill exercise without a slope at a speed of 3.0-3.5 km/h, at which patients could walk comfortably. Low back extensor strength was tested using the MedX lumbar extension machine, pain level was tested using the visual analog scale, and dysfunction was tested using the Oswestry Low Back Pain Disability Questionnaire. [Results] Changes in low back extensor strength by angle showed significant interaction effects between measurement time and group at 12°, 24°, and 36°. The visual analog scale and Oswestry Questionnaire scores showed a decreasing trend after the experiment in both groups; however, there was no interaction effect of the additional treadmill exercise in the experimental group. [Conclusion] The combination of a low back pain rehabilitation program and treadmill exercise has a synergistic effect, to some extent, on the improvement of low back extensor strength and should be considered for the treatment and rehabilitation of low back pain patients.

INTRODUCTION

Over 70% of the population in developed countries experiences low back pain, and its incidence is increasing in developing countries as well; low back pain is therefore a global problem 1) . As varied and complex psychological and social factors are involved, the pain has no definite cause in about 85% of patients 2) . Low back pain is divided into acute or chronic according to its duration: if the pain lasts for over 12 weeks, it is defined as chronic low back pain, and 5-10% of all acute low back pain patients develop chronic low back pain 3) . Once low back pain becomes serious, physical activities are restricted, and ultimately muscle atrophy occurs as the muscles remain unused for long durations 4) . Particularly in chronic low back pain patients, the worsening of low back pain caused by reduction in the muscle area and muscular atrophy leads to secondary damage and relapse 5) . Many low back pain patients develop recurring low back pain due to lumbar instability caused by muscular weakness around the spine and soft tissue damage in the trunk. Ultimately, the reduction in muscular strength, muscular endurance, and flexibility in the lumbar spine and the limited range of motion of joints cause relapse of chronic low back pain 6) . In particular, chronic low back pain patients have more severe atrophy of the low back extensors than of the flexors 7) . Additionally, aerobic exercise capacity tends to decrease in patients with chronic low back pain owing to the limitation on motion; this must be considered in rehabilitation exercise 8) . The reasons for low back pain are still under discussion, and over many years, various exercise methods have been suggested for treating low back pain, including flexion exercise, isometric flexion exercise, extension exercise, passive extension exercise, and intensive dynamic back exercise 9,10) .
Although these exercise programs have been recommended and developed over several years in low back pain clinics and rehabilitation centers, a satisfactory outcome has not yet emerged. Despite greatly different opinions on the pathogenesis of low back pain among researchers, therapeutic exercise is known to be effective in general 11) . In addition, therapeutic exercise in which patients participate actively is effective for low back pain after the acute phase, and in particular, different types of exercises applied by physiotherapists show very positive outcomes 12) . Rehabilitation programs for the relief of low back pain include complex exercises such as selective exercises involving control of the trunk muscles, muscle strengthening, muscle stretching, and aerobic exercises 13) . Recent studies report that aerobic exercise is effective in reducing depression, pain, and dysfunction in low back pain patients 14) . However, some studies report that aerobic exercise combined with traditional physical therapy has no additional effect on pain and dysfunction 15) . Therefore, there are conflicting results concerning the effects of aerobic exercise in low back pain patients, and there is also a lack of clear evidence on this matter. The purpose of this study was therefore to determine the additional effect of treadmill exercise combined with a low back pain rehabilitation exercise program on low back extensor strength, pain, and dysfunction in patients with chronic low back pain.

SUBJECTS AND METHODS

This study involved 20 men aged 22-36 years with chronic low back pain who visited a hospital rehabilitation medicine department; they were randomly divided into an experimental group and a control group, with 10 patients in each group. Patients who complained of low back pain lasting for over 3 months were selected as subjects, and patients with musculoskeletal diseases that impaired gait, heart diseases, neurological disorders, or structural spine deformity were excluded. All subjects voluntarily signed the consent form after the intent and purpose of the study were explained to them. Kyungwoon University approved this study, which complies with the ethical standards of the Declaration of Helsinki. The physical characteristics of the subjects in each group are shown in Table 1. The low back pain rehabilitation program was conducted for 30 minutes, thrice a week for 8 weeks, for both the control and experimental groups. The program consisted of 14 exercises including flexion and extension, and it was conducted under the continuous guidance and supervision of an expert in a low back pain treatment room. The experimental group performed an additional 30 min of treadmill exercise, conducted without a slope at a speed of 3.0-3.5 km/h, which allowed patients to walk comfortably; the patients were instructed to straighten their back and make initial contact with the heel. For the low back extensor strength test, the MedX lumbar extension machine (H-10000; MedX, Ocala, FL, USA) was used, and measurements were taken before and after the experiment for both groups. Before starting the low back extensor strength test, the tester moved the machine manually and conducted the range of motion (ROM) exercise 3 times to check that there were no limitations in ROM at the designated degrees of flexion for the lumbar spine.
The test was conducted to measure the maximum isometric torque of the lumbar spine extensors according to degree of flexion (72°, 60°, 48°, 36°, 24°, 12°, 0°), starting from the 72° position (Table 2). To evaluate the level of low back pain in the patient group, the visual analog scale (VAS) was used: on a 10-cm-long horizontal straight line with "No pain" and "Very severe pain" written at the left and right ends, respectively, the patients marked the pain intensity they were currently feeling directly with a pen. The Oswestry Low Back Pain Disability Questionnaire was developed by Fairbank et al. 16) to measure the relief and aggravation of the symptoms of low back pain patients. In this study, a revised questionnaire was used 17) ; this tool consists of a total of 10 detailed questions concerning low back pain intensity, self-management, lifting motion, sitting, standing, sleeping, social life, traveling, and job performance. For each question, a 6-point scale was provided for scoring functional performance ability. The total score ranges from a minimum of 10 points to a maximum of 60 points, and the higher the score, the more severe the dysfunction. Two-way repeated-measures analysis of variance (ANOVA) was conducted for all data analyses in order to express the average and standard deviation values and to test group trends before and after the exercise program. Comparisons of changes in the average values of the measurement variables before and after the 8-week exercise program were analyzed using the paired-samples t-test. In addition, the variation (∆) between the baseline measurements and the measurements after 8 weeks of exercise was calculated (∆ score = change in score from before to after the 8 weeks of exercise). All statistical analyses were conducted using the PC version of the Statistical Package for the Social Sciences (SPSS version 21.0; IBM Corporation, Armonk, NY, USA), and statistical significance was set at p < 0.05.

RESULTS

According to angle, the changes in low back extensor strength showed significant interaction effects of measurement time and group at 12° (p = 0.034), 24° (p = 0.029), and 36° (p = 0.011). However, at 48°, 60°, and 72°, both the control group and the experimental group showed no interaction effect despite showing an increasing trend in low back extensor strength after the experiment compared with before the experiment. The VAS and Oswestry Low Back Pain Disability Questionnaire scores showed a decreasing trend after the experiment compared with before the experiment in both the control group and the experimental group. However, there was no interaction effect of the additional treadmill exercise in the experimental group compared with the control group (Table 3).

DISCUSSION

Low back pain is one of the most widespread diseases in modern society. Rapid economic development and a sedentary lifestyle have brought about a reduction in physical activity and changes in physical function and posture, leading to the incidence of low back pain. In most low back pain patients, muscular weakness and imbalance around the lumbar spine caused by lack of exercise are the major factors that cause activity impairment. This is because muscular weakness and imbalance lead to low back pain, and this pain limits the range of motion and prevents the proper exertion of muscular strength.
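The statistical workflow described in the methods above (a two-way repeated-measures design with group as a between-subjects factor, plus paired pre/post t-tests and ∆ scores) can be sketched in Python as follows. The study itself used SPSS 21.0; the pingouin-based translation, the file name, and the column names are illustrative assumptions.

```python
# Sketch of the analysis described above: a group x time mixed-design
# ANOVA plus paired pre/post comparisons. Data layout and column names
# are illustrative; the study itself used SPSS 21.0.
import pandas as pd
import pingouin as pg
from scipy import stats

# Long format: one row per subject per time point.
df = pd.read_csv("extensor_strength_long.csv")  # hypothetical file

# Mixed ANOVA: 'time' is within-subjects, 'group' is between-subjects.
aov = pg.mixed_anova(data=df, dv="strength_12deg",
                     within="time", subject="subject_id", between="group")
print(aov)  # the Interaction row tests the time x group effect

# Paired t-test within one group: pre vs. post values, plus the mean
# change score (the ∆ described in the methods).
exp = df[df["group"] == "experimental"].pivot(
    index="subject_id", columns="time", values="strength_12deg")
t, p = stats.ttest_rel(exp["pre"], exp["post"])
delta = (exp["post"] - exp["pre"]).mean()
print(t, p, delta)
```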
Generally, low back pain patients reduce their physical activity to avoid the pain caused during the performance of their daily routine activities 18) , leading to a vicious circle of relapse and chronicization of low back pain, which in turn reduces aerobic exercise capacity (cardiorespiratory fitness) 19) . The reduction of activities of daily living (ADL) after the occurrence of low back pain causes sarcopenia and reduces muscular strength, muscular endurance, and cardiopulmonary function, increasing the likelihood of the development of metabolic risk factors 20) . Therefore, performing aerobic exercise to improve low back extensor strength is important. In this study, treadmill exercise was added for patients performing a low back pain rehabilitation program for 8 weeks to determine whether the addition was effective. The changes in low back extensor strength showed a significant interaction effect by angle at 12°, 24°, and 36°, which was considered to be the effect of the treadmill exercise conducted under the supervision of experts who ensured that the patients performed the exercise with their backs straight and made initial contact with the heel. In other words, the low back extensors are activated and muscular strength increases when walking properly with improved lordosis; these findings are attributed to the training of the supporting muscles involved in the angle changes of the extensors that are frequently used in the lumbar region during walking exercise. Kankaanpää et al. 21) found that the low back extensors of chronic low back pain patients were weak and easily fatigued, and another study 22) concluded that walking training should be considered in rehabilitation programs for chronic low back pain patients, since the stability between their lumbar segments was low and the coordination between the low back extensors was poor. From a rehabilitative perspective, it can be said that walking programs are helpful for simultaneously stabilizing the trunk and controlling the motion of the limbs by strengthening the waist and abdominal muscles supporting the spine. In this study, the VAS and Oswestry Low Back Pain Disability Questionnaire scores of the experimental group showed no interaction effect of the additional treadmill exercise compared with those of the control group. Tritilanunt et al. 23) found that chronic low back pain patients who performed an aerobic exercise program for 3 months showed a significantly reduced pain index compared with those who performed only lumbar flexion exercise. Also, Chatzitheodorou et al. 14) found that chronic low back pain patients who performed high-intensity aerobic exercise for 12 weeks showed significantly reduced pain, dysfunction, and psychological burdens compared with those who received only conservative physical therapy. However, Chan et al. 15) found that aerobic exercise combined with 8 weeks of physical therapy had no additional effect on the alleviation of pain and dysfunction, similar to the findings of this study. The lack of an additional effect of treadmill exercise in this study is attributed to the fact that both groups underwent back mobilization, performed abdominal stabilization exercise, and received back care advice based on ergonomic principles in addition to traditional physical therapy. However, as many studies have reported an additional effect of aerobic exercise, further systematic long-term studies with many subjects are required to confirm the findings of the present study.
Considering all the outcomes of this study, the changes in low back extensor strength by angle showed an additional effect of treadmill exercise at 12°, 24°, and 36°, whereas the pain level and dysfunction index showed a decreasing trend after the experiment in both groups, with no significant difference between groups. Treadmill exercise combined with traditional physical therapy therefore appears to have a partial effect on the improvement of low back extensor strength.
2018-04-03T02:32:36.234Z
2015-04-01T00:00:00.000
{ "year": 2015, "sha1": "375097b27e9676af0ce22fadc68249aca5d4b7bc", "oa_license": "CCBYNCND", "oa_url": "https://www.jstage.jst.go.jp/article/jpts/27/4/27_jpts-2014-718/_pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "375097b27e9676af0ce22fadc68249aca5d4b7bc", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
15013492
pes2o/s2orc
v3-fos-license
Cosmic-Ray Positrons: Are There Primary Sources?

Cosmic rays at the Earth include a secondary component originating in collisions of primary particles with the diffuse interstellar gas. The secondary cosmic rays are relatively rare but carry important information on the Galactic propagation of the primary particles. The secondary component includes a small fraction of antimatter particles, positrons and antiprotons. In addition, positrons and antiprotons may also come from unusual sources and possibly provide insight into new physics. For instance, the annihilation of heavy supersymmetric dark matter particles within the Galactic halo could lead to positrons or antiprotons with distinctive energy signatures. With the High-Energy Antimatter Telescope (HEAT) balloon-borne instrument, we have measured the abundances of positrons and electrons at energies between 1 and 50 GeV. The data suggest that indeed a small additional antimatter component may be present that cannot be explained by a purely secondary production mechanism. Here we describe the signature of the effect and discuss its possible origin.

Introduction

Over the last 30 years, a number of efforts have been aimed at the study of cosmic-ray electrons and positrons with balloon-borne instruments. At energies between about 1 and 10 GeV, early measurements [1,2] found that the "positron fraction" e+/(e+ + e−) is essentially in agreement with a prediction [3] where all positrons are assumed to be of secondary origin and propagate according to the prescription of the simple "leaky-box" model of the Galaxy. This is illustrated in Figure 1a, which shows a compilation of measurements [1,2,4-13] of the positron fraction as a function of energy between 0.05 GeV and 50 GeV. The leaky-box prediction is shown as a solid curve. At energies around 10 GeV and above, as shown in Figure 1a, several measurements [4,6,7] reported a significant excess of positrons over the fraction expected from secondary sources. This spurred a number of interpretations, ranging from inefficient production of primary electrons at high energies [14,15] to hypothetical new sources of positrons [16-23].

The HEAT balloon-borne instrument was designed and optimized to improve the accuracy with which cosmic-ray electrons and positrons at energies from about 1 to 50 GeV can be detected. The instrument and its performance during two balloon flights, in 1994 and 1995 respectively, are described in detail elsewhere [13,24-26]. A compilation of positron fraction measurements is shown in Figure 1a, where the HEAT results from the two flights combined are shown as filled squares. The overall proton rejection factor achieved was nearly 10^5. Backgrounds due to atmospheric secondary electrons and positrons were estimated by Monte Carlo techniques and compared with measured growth curves [24]. Such backgrounds amounted to 1-2% and 20-30% of the total electron and positron signals, respectively. The uncertainty in the secondary corrections translated into a systematic uncertainty of ∼0.01 in the positron fraction, comparable to the statistical uncertainty; however, any systematic error in the correction would affect all data similarly, resulting in an overall normalization shift of the positron fraction distribution and preserving any structure observed.

Secondary Production

The HEAT results shown in Figure 1a did not confirm the previously reported rise in the positron fraction starting at about 10 GeV.
However, the data deviate from the predictions of a purely secondary production mechanism in two ways. First, at energies below about 5 GeV the positron fraction was in excess of expectations. For this low-energy region, another recent measurement [12] also reported a positron fraction significantly higher than measured in the 1960s and 1970s. A possible explanation of this effect would be a solar modulation mechanism that depends on the charge sign of the particle and changes from one solar cycle to the next [10]. The second feature of the HEAT results is an indication of some structure in the energy dependence of the positron fraction above 7 GeV. This cannot be easily explained in terms of conventional secondary production mechanisms. As shown in Figure 1b, a slight enhancement in the positron fraction between about 7 and 20 GeV is observed, which may suggest a primary source of high-energy positrons. This feature appears in the HEAT data from each flight taken separately.

Figure 1b shows two predictions for interstellar secondary production in the energy region of interest. First, the leaky-box prediction [3] is shown as a solid red curve, with a band of uncertainty (hatched area) indicating the various uncertainties in the parameters of the model and in the overall normalization. In this model, the spectrum of cosmic-ray positrons from secondary sources is calculated in the leaky-box approximation from equation (1), in which t(E) is the mean cosmic-ray age at energy E, related to the rigidity-dependent mean Galactic escape length; n is the mean density of interstellar nuclei; P_e(E) is the rate of production of positrons in interstellar nuclear interactions; and dE/dt is the rate of energy loss from synchrotron, inverse Compton, bremsstrahlung, and ionization processes. The positron fraction is obtained by dividing the predicted positron spectrum by the measured all-electron spectrum. A more recent calculation [27], shown as a dashed curve in Figure 1b, uses a more realistic Galactic diffusion model to predict the positron fraction from secondary production. Qualitatively, it predicts the same behavior: a smooth, monotonic decrease of the positron fraction without spectral features.

The HEAT data cannot be well fit by the secondary-production curves of Figure 1b. The confidence level for the leaky-box prediction is essentially zero (χ² = 96.5 for 9 degrees of freedom), while that for the diffusion prediction is 0.9% (after adjustments to take into account statistical runs in the data [28]). Although the band of uncertainty in the predictions is wide, all smooth curves within it yield similarly poor agreement with the data. If the structure seen in the data is real, it would indicate the onset of something new, such as an exotic source of high-energy positrons. Here we consider several possible models.

Annihilating Dark Matter WIMPs

First, it has been proposed that annihilating Galactic-halo dark-matter WIMPs (Weakly Interacting Massive Particles) are a source of high-energy positrons [20,22,23,29,30]. As most dark matter candidates are Majorana particles, direct annihilation into e+e− pairs is suppressed. In order to account for an observable e+e− line, a large total WIMP annihilation cross-section is required. The WIMP density would then likely be low and not a major contributor to the present-day cosmological mass density [23].
One exception is a model by Kamionkowski and Turner (hereafter referred to as KT) [20], in which WIMPs with mass mχ greater than 80 GeV/c² or 91 GeV/c² can annihilate through resonant production of W+W− or Z0Z0 pairs. The resulting electrons and positrons are propagated in a leaky-box model. The model predicts enhancements in the positron fraction near energies of mχ/2 (due to direct decays of the gauge bosons into e±) and mχ/20 (continuum radiation due to more complex decay chains through intermediate production of τ±, π±, quarks, etc.). If the experimental feature we observe is real, it could be a signature of the low-energy continuum-radiation peak at around mχ/20. Figure 2a shows the resulting best fit, with the fit results summarized in Table 1. The resulting confidence level of 74% is markedly better than for fits to strictly secondary production models.

In recent work by Baltz and Edsjö (hereafter referred to as BE) [30], positron production by annihilating dark-matter neutralinos is revisited, and a large fraction of the Minimal Supersymmetric Standard Model (MSSM) parameter space is sampled. Again, decays and/or hadronization of the annihilation products are simulated and positron fluxes calculated, but a more complex diffusion model than in the KT scenario is used for the propagation of the electrons and positrons. Here again, the predicted enhancement in the positron fraction is allowed to be renormalized by a factor obtained by fitting to the HEAT data. Two typical resulting best-fit curves are shown in Figure 2a, as dotted and dot-dash curves for 336 or 130 GeV/c² neutralinos, respectively (for details of the assumed MSSM parameters for these and other models, see [30]), and the fit results are summarized in Table 1. Once again, an improvement is obtained compared to secondary models, but the resulting confidence levels of 22% and 42% are not as high as for the best-fit KT model; this is mainly a result of the different propagation model used.

Pair Creation Near Discrete Sources

Second, primary positrons could arise when e+e− pairs are created by electromagnetic processes, for instance through the conversion of high-energy γ rays in the polar-cap region of Galactic radio pulsars [19]. In this model, the positron production rate P_e(E) of equation (1) is replaced by a pulsar source term whose normalization k is given in terms of the Galactic pulsar birth rate b_30 (in units of 30 yr), the effective time t_max during which the pulsar emits γ rays (in units of 10^4 yr), the ratio f_+ of positrons escaping the pulsar per γ ray produced, and the total interstellar mass M (in units of 5 × 10^9 solar masses). By enhancing the baseline positron fraction from secondary sources with this kind of contribution, with k left as a fit parameter, we obtain the best-fit curve shown as a dashed line in Figure 2b, with k = 0.15, comparable to reasonable expectations [19]. The fit results are summarized in Table 1. Although the resulting confidence level of 50% is larger than that for purely secondary sources, the shape of the enhancement, a slow monotonic rise with energy, is not as compatible with the data as the local enhancement obtained with some WIMP-annihilation models. This model predicts that the positron fraction should rise with energy beyond 10 GeV, reaching an asymptotic value of 0.5; this could be verified with measurements extending to higher energies.

Another electromagnetic process would be the interaction of very high-energy γ rays with optical and/or UV radiation in the vicinity of discrete sources [16], resulting in e± pair production.
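As a quick numerical check of the threshold formula quoted in the next paragraph, E_th = (m_e c²)²/ε_0, the sketch below evaluates E_th for several characteristic photon energies; this calculation is ours, not the source's. For ε_0 = 30 eV, the best-fit value reported below, the threshold falls near 9 GeV, i.e., within the 7-20 GeV region where the HEAT data show structure.

```python
# Illustrative check of the gamma-gamma pair-production threshold
# E_th = (m_e c^2)^2 / eps0 quoted in the text (our calculation,
# not one appearing in the source).
M_E_C2_EV = 0.511e6          # electron rest energy in eV

def pair_threshold_ev(eps0_ev: float) -> float:
    """Threshold energy for a gamma ray on a photon of energy eps0."""
    return M_E_C2_EV**2 / eps0_ev

for eps0 in (10.0, 30.0, 100.0):   # optical/UV photon energies in eV
    e_th_gev = pair_threshold_ev(eps0) / 1e9
    print(f"eps0 = {eps0:5.1f} eV  ->  E_th = {e_th_gev:5.1f} GeV")
# eps0 = 30 eV gives E_th ~ 8.7 GeV, within the 7-20 GeV feature.
```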
In this scenario, the positron production rate P_e(E) of equation (1) becomes a pair-production source term in which E_th = (m_e c²)²/ε0 is the threshold energy for gamma rays interacting with ambient photons of characteristic energy ε0, x = 4E/E_th, τ_γγ is the optical depth accumulated by the gamma ray before escaping the source (a free parameter), and the numerical factor is calculated from formulas assuming a gamma-ray power-law index α ≈ 2.1. By adding a primary positron component from this effect to the baseline from secondary sources, assuming various mean values for the parameter ε0 and allowing the strength of the source to remain a free parameter, we obtain the best-fit curve shown as a dotted line in Figure 2b. The best fit occurs for ε0 = 30 eV, as summarized in Table 1, which is in agreement with reasonable expectations [16], and requires a relatively weak source strength. The resulting confidence level of 75% is once again better than that for purely secondary sources. Positron Production in Giant Molecular Clouds A third possibility is the generation of electrons and positrons in hadronic processes. In one model [18], hadronic cosmic rays can enter and interact within giant molecular gas clouds, resulting in the secondary generation of mostly π± and K±, which ultimately decay into muons and thereafter into electrons and positrons. Fermi reacceleration due to fluctuations in the magnetic field in the turbulent gas could then boost the energy of the e±. In this model, if the typical field strength in the cloud is B and the minimum turbulence scale is L_min, a characteristic magnetization momentum p* = eB L_min/c is defined. Particles with momentum greater than p* tend to escape the cloud, so that the spectrum of particles accelerated inside the cloud shows an enhancement near p*. For reasonable choices of the parameters in the model, it is possible to obtain p* = 10 GeV/c, and a positron fraction curve is obtained with an enhancement starting near 10 GeV. If we add to the baseline secondary positron fraction such a primary component from giant molecular clouds, and allow the strength of the effect to be a free parameter, the resulting best fit is the solid curve of Figure 2b. A relatively weak source is sufficient (see Table 1) to fit the data with a confidence level of 80%. Other Positron Sources Other primary positron sources have been suggested as well. For example, e+e− pair production in the magnetosphere of pulsars could be followed by particle acceleration to relativistic energies in the pulsar wind driven by low-frequency electromagnetic waves [17]. Or else β+ radionuclei such as ⁵⁶Co ejected during a supernova blast, possibly followed by shock acceleration in the envelope [21], could result in an enhanced high-energy positron population. The uncertainties in the models and in the data are such that none of these models can yet be ruled out. Conclusions Table 1 footnote: The "source amplitude factor" is an arbitrary normalization that indicates the best-fit strength of the effect compared to the one predicted by the authors of the model. Figure 1: a The positron fraction measured with the HEAT instrument … and 50 GeV. The solid curve is a model calculation [3] assuming that all positrons are from secondary sources and propagate according to a simple Galactic leaky-box model. "HEAT combined" refers to the combination [13] of the data sets from the two HEAT flights. b The positron fraction measured with the HEAT instrument, shown on a vertical linear scale.
Figure 2: a The HEAT positron fraction compared with best-fit model predictions. The solid curve is a leaky-box secondary model prediction [3], surrounded by an estimated band of uncertainty shown as the cross-hatching. The dashed curve is a secondary model prediction using Galactic diffusion [27]. The dotted and dot-dash curves are best fits for 336 and 130 GeV/c² neutralinos, respectively, in the model of Baltz and Edsjö [30]. b The HEAT positron fraction compared with best-fit model predictions from astrophysical sources of positrons that are in addition to secondary production mechanisms. The dashed curve is the positron enhancement resulting from high-energy γ rays converting to e+e− pairs near the magnetic poles of pulsars [19]. The dotted curve represents a positron enhancement due to high-energy γ rays interacting with low-energy optical or UV photon fields [16]. The solid curve shows the enhancement from cosmic-ray interactions within giant molecular clouds [18].
2014-10-01T00:00:00.000Z
1999-02-10T00:00:00.000
{ "year": 1999, "sha1": "b4d7fba5f334b53e70ee97ee444cab05d037ad4b", "oa_license": null, "oa_url": "http://arxiv.org/pdf/astro-ph/9902162", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "0fd7ddb4fd53df48da6da560ffb06eb5a7271804", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
139625466
pes2o/s2orc
v3-fos-license
Research on the Qualities of Cellulosic Yarn Cellulosic fibre is a kind of renewable fibre that has attracted more and more attention in textile processing recently. Yarn spinning is the first fundamental process in textile processing. Therefore, in this paper, taking viscose fibre and tencel fibre as examples, the qualities of cellulosic yarn were studied. Three kinds of pure viscose and tencel yarn, 14.6 tex (40S), 9.7 tex (60S) and 7.3 tex (80S), were spun on a ring spinning system modified with lattice apron compact spinning (LACS) and complete condensing spinning (CCS), respectively. The spun yarn qualities, yarn evenness, breaking strength and hairiness, were tested and comparatively analysed. Then two kinds of cellulosic blend yarn, including 14.6 tex, 9.7 tex and 7.3 tex JC/R 60/40 yarn, and 14.6 tex, 9.7 tex and 7.3 tex JC/T 70/30 yarn, were spun on a ring spinning system modified with CCS. The spun yarn evenness, breaking strength and hairiness were tested, and the cross sections of the spun yarns were presented using a Y172 Hardy's thin cross-section sampling device. The results show that for both the pure viscose and tencel yarn, compared with LACS, CCS gives better yarn evenness, a little lower yarn breaking strength and a little more hairiness, while the uniformity of the yarn qualities is improved in all cases. For the cellulosic blend yarn, compared with the pure cellulosic yarn, yarn evenness is worse, especially for the cotton and tencel blend yarn. Introduction Cellulosic fibre is a kind of renewable fibre which is produced from cellulose [1]. Recently, cellulosic fibre has attracted more and more attention in textile processing due to its advantages with respect to health and environmental protection. The cellulosic fibres commonly used are tencel, viscose, modal, and so on. Yarn spinning is the first fundamental process in textile processing [2,16]. At present, ring spinning is the most widely used spinning method [3]. In ring spinning, yarn is formed on a spinning triangle, the geometry of which influences the distribution of fibre tension in the spinning triangle and affects the properties of the spun yarn directly [4]. On the spinning triangle, yarn hairiness is produced since the head and tail of the border fibres cannot be rolled into the yarn body easily. Therefore, taking appropriate measures to change the spinning triangle geometry actively and thus improve the quality of yarn has attracted great interest in yarn spinning [5,6,14]. Compact spinning is one of the most widely used improvements of traditional ring spinning at present, in which a fibre condensing device is employed on a ring spinning frame in order to decrease the spinning triangle, due to which the spun yarn quality shows great improvement [7-9]. The pneumatic compacting system is the most widely used compact system at present, which is implemented by air flow to condense the fibre bundle [10]. Compact spinning with a perforated drum and with a lattice apron are the two main kinds of pneumatic compacting devices, in which the perforated drum and lattice apron are used for fibre condensing, respectively [11]. Complete condensing spinning (CCS) is a kind of compact spinning with a perforated drum, in which a perforated drum with a strip-groove structure on the surface is fitted [12]. Lattice apron compact spinning (LACS) is one of the most widely used compact spinning systems at present [13,15]. Therefore, in this paper, taking viscose fibre and tencel fibre as examples, the qualities of cellulosic yarn spun by the compact spinning systems CCS and LACS were investigated and comparatively analysed.
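The yarn counts quoted above pair a tex value with a traditional English cotton count (40S, 60S, 80S). As a side note, the standard conversion Ne = 590.5/tex reproduces these labels; the short sketch below is illustrative and not part of the paper.

```python
# Convert linear density in tex to English cotton count (Ne = 590.5 / tex).
# 590.5 is the standard conversion constant; the pairings below match the
# 40S/60S/80S labels used for the yarns in this study.
TEX_TO_NE = 590.5

for tex in (14.6, 9.7, 7.3):
    print(f"{tex:>5} tex ≈ Ne {TEX_TO_NE / tex:.0f}")
# 14.6 tex ≈ Ne 40, 9.7 tex ≈ Ne 61, 7.3 tex ≈ Ne 81 (nominal 40S/60S/80S)
```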
Pure viscose yarn CCS is a kind of pneumatic compact spinning in which the front roller of the original ring spinning frame is replaced by a perforated drum with a strip-groove structure on the surface (Figure 1). The perforated drum is made of stainless steel with 56 HRC hardness, and its diameter is 50 mm, which is different from that used in COM4. There are two top rollers above the perforated drum, the front top roller and the output top roller, and a corresponding front nip and output nip can be produced. The area between the front nip and the output nip is the condensing zone. In spinning, airflow can be directed on fibres located in the condensing zone. Meanwhile, to improve the condensing effect, a guiding device is installed above the perforated drum. In the following section, pure viscose yarns were spun on the ring spinning system modified with LACS and CCS, respectively, and the spun yarn qualities were tested and comparatively analysed. Combed viscose roving of 512 tex was used as raw material. Then 14.6 tex, 9.7 tex and 7.3 tex pure viscose yarns were spun on the ring spinning system modified with LACS and CCS, respectively. The detailed spinning parameters are shown in Table 1. Taking ten bobbin yarns as measuring samples, all were conditioned for at least 48 hours under standard conditions (65 ± 2% RH and 20 ± 2 °C). The evenness (CV), hairiness index and breaking strength of the spun yarn were tested. For each bobbin yarn, the hairiness was tested ten times using a YG173A hairiness tester (Suzhou Changfeng Textile Electromechanical Technology Co., Ltd., China) at a speed of 100 m/min, with a test time of 1 minute, and the average of the ten test results was taken as the hairiness of that bobbin yarn. The breaking force was also tested ten times on a YG068C fully automatic single yarn strength tester (Suzhou Changfeng Textile Electromechanical Technology Co., Ltd., China) at a speed of 500 mm/min, and the average of the ten test results was taken as the breaking force of that bobbin yarn. Meanwhile, the evenness was obtained with an Uster Tester 5-S800 (Uster Technologies, Switzerland) evenness tester at a speed of 400 m/min for a test time of 1 minute, and the average value of the evenness for one bobbin yarn was taken. Finally, the average values over the ten bobbin yarns were taken as the corresponding qualities of the spun yarn; the results are given in Tables 2-4. From the spinning parameters in Table 1 we can see that, compared with LACS, the negative pressure in CCS is lower for the same spun yarn count; that is, compared with LACS, CCS is beneficial for reducing energy consumption. Evenness is one of the most important properties of yarn. The results of the spun yarn evenness measurements are presented in Table 2. According to Uster Statistics 2013, the spun yarn evenness reaches the 25% level. As shown in Table 2, it is obvious that compared with LACS, CCS gives better evenness CVm and CVb, and the corresponding -50% thin places, +50% thick places and +200% neps are also a little lower; that is, compared with LACS, CCS is beneficial for improving viscose yarn evenness. Yarn strength is another of the most significant properties of yarn. The results of the spun yarn breaking strength measurements are given in Table 3.
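The per-bobbin averaging just described lends itself to a simple computation; as a sketch (with invented readings, not the paper's data), the mean and coefficient of variation of the repeated tests can be obtained as follows.

```python
# Sketch of the averaging protocol: ten tests per bobbin, ten bobbins per yarn.
# The readings below are invented placeholders, not measured values.
import numpy as np

readings = np.random.default_rng(0).normal(loc=15.2, scale=0.4, size=(10, 10))
# rows = bobbins, columns = repeated tests of one bobbin

per_bobbin_mean = readings.mean(axis=1)      # average of ten tests per bobbin
yarn_mean = per_bobbin_mean.mean()           # average over the ten bobbins
cv_percent = 100 * per_bobbin_mean.std(ddof=1) / yarn_mean

print(f"yarn quality = {yarn_mean:.2f}, CV = {cv_percent:.1f}%")
```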
According to Uster Statistics 2013, the spun yarn breaking strength reaches the 25% level. As shown in Table 3, it is obvious that compared with LACS, CCS gives a little lower yarn breaking strength, but the breaking strength CV of the yarn is a little better. For the elongation at break of the yarn, the difference between LACS and CCS is tiny, but the elongation at break CV of CCS is a little lower; that is, compared with LACS, although the average breaking strength of yarn spun using CCS is decreased, the uniformity is increased. Hairiness is also one of the most important properties of spun yarn. The results of the spun yarn hairiness measurements are given in Table 4. According to Uster Statistics 2013, the spun yarn hairiness reaches the 50% level. From Table 4, it is evident that compared with LACS, CCS produces a little more hairiness, both the harmful long hairiness (≥3 mm) and the beneficial short hairiness (1-2 mm), but the hairiness CV is improved a little. That is, compared with CCS, LACS is more beneficial for reducing yarn hairiness, including 1-2 mm short hairiness and ≥3 mm long hairiness. Pure tencel yarn In the next section, pure tencel yarns were spun on the ring spinning system modified with LACS and CCS, respectively. Combed tencel roving of 512 tex was used as raw material. Then 14.6 tex, 9.7 tex and 7.3 tex pure tencel yarns were spun. The detailed spinning parameters are shown in Table 1. The results of the spun tencel yarn evenness measurements are presented in Table 5. According to Uster Statistics 2013, the spun yarn evenness reaches the 25% level. From Table 5, it is obvious that compared with LACS, CCS gives a little better evenness CVm and CVb, and the corresponding -50% thin places, +50% thick places and +200% neps are also a little lower. That is, compared with LACS, CCS is beneficial for improving tencel yarn evenness. The results of the spun tencel yarn breaking strength measurements are given in Table 6. According to Uster Statistics 2013, the spun yarn breaking strength reaches the 25% level. From Table 6, it is obvious that for the breaking strength, compared with LACS, the yarn spun on CCS is a little lower, but the breaking strength CV is a little better. For the elongation at break of the yarn, the difference between LACS and CCS is tiny, but the elongation at break CV of CCS is a little lower. The results of the spun yarn hairiness measurements are given in Table 7. According to Uster Statistics 2013, the spun yarn hairiness reaches the 50% level. From Table 7, it is evident that compared with LACS, CCS produces a little more hairiness, both the harmful long hairiness (≥3 mm) and the beneficial short hairiness (1-2 mm), but the hairiness CV is improved a little. In summary, the test results for the pure viscose and tencel yarn qualities show that compared with LACS, CCS gives better yarn evenness, a little lower yarn breaking strength and a little more hairiness, with the uniformity of the yarn qualities all being improved. The possible reason is that in the condensing zone of LACS, the
fibres are made to cling to the surface of the lattice apron under the force of negative airflow, and the right border fibres begin to flip to the left and cover the left border fibres, producing weak twisting and making the fibre strand compact, see Figure 2.b. However, in the condensing zone of CCS, the fibres are made to cling to the surface of the perforated drum and are arranged parallel to each other. Then the fibres move forward along with the rotation of the perforated drum, and the width of the strand decreases gradually under the transverse condensing force, making the fibre strand compact, see Figure 2.a, which is beneficial for improving the uniformity of spun yarn qualities. For viscose fibre, the length of the fibre is a little larger, and the friction between fibres is also larger, which makes fibre compaction towards the center of the yarn body a little more difficult. Therefore, compared with the twist condensing in LACS, the transverse condensing force of the parallel condensing in CCS is a little smaller, making it more difficult for border viscose fibre to roll into the yarn body, and producing a little more hairiness correspondingly. For the yarn breaking strength, in the condensing process of LACS, weak twist is produced, making the twist of the final yarn also increase and possibly the yarn strength larger. For the yarn evenness, the parallel condensing in CCS can make the fibre condensed more stable, which is beneficial for improving yarn evenness. Cellulosic blend yarn In this section, cellulosic blend yarns were spun on a ring spinning system modified with CCS. JC/R 60/40 yarn (60% combed cotton fibre and 40% viscose fibre) and JC/T 70/30 yarn (70% combed cotton fibre and 30% tencel fibre) were spun. For the JC/R 60/40 yarn, combed roving of 517 tex was used as raw material, and 14.6 tex, 9.7 tex and 7.3 tex yarns were spun. For the JC/T 70/30 yarn, combed roving of 478 tex was used as raw material, and 14.6 tex, 9.7 tex and 7.3 tex yarns were spun. Details of the spinning parameters are shown in Table 8.
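A quantity implicit in these parameters is the total draft, the ratio of roving to yarn linear density; the sketch below computes it for the blends listed (the draft formula is standard, and the values follow directly from the tex numbers quoted above).

```python
# Total draft = roving linear density / yarn linear density (both in tex).
rovings = {"JC/R 60/40": 517, "JC/T 70/30": 478}
yarn_counts = (14.6, 9.7, 7.3)

for blend, roving_tex in rovings.items():
    drafts = ", ".join(f"{roving_tex / t:.0f}x at {t} tex" for t in yarn_counts)
    print(f"{blend}: {drafts}")
# e.g. JC/R 60/40: 35x at 14.6 tex, 53x at 9.7 tex, 71x at 7.3 tex
```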
The results of the cellulosic blend spun yarn qualities tested, i.e. yarn evenness, breaking strength and hairiness, are given in Tables 9-11, respectively. From the test results, we can see that compared with the pure viscose and tencel yarn, the blend yarn evenness is worse, especially for the cotton and tencel blend yarn. The properties of the fibres used are presented in Table 12, from which we can see that compared with the viscose and cotton fibre, the wet initial modulus of tencel fibre is the largest, but its dry and wet elongation at break is the smallest, which makes tencel fibre liable to brittle fracture in the drafting zone, produces more neps, and possibly makes the yarn evenness worse. Meanwhile, compared with the pure yarn, the blend yarn breaking strength is also slightly worse. For the hairiness, the difference between the pure cellulosic yarn and the cellulosic blend yarn is tiny. To study the blend yarn qualities further, cross sections of the spun yarns were prepared using a Y172 Hardy thin cross-section sampling device and are presented in Figures 3 and 4. From the figures, we can see that in the yarn body the arrangements of the two fibres are centralised, that is, fibre migration in the yarn body is low. The possible reason is that in CCS, the spinning triangle is reduced greatly, and the difference in fibre tensions in the spinning triangle is decreased greatly. Therefore the uniformity of fibres in the yarn body is improved greatly, which is beneficial for improving yarn qualities. Conclusions In this paper, taking viscose fibre and tencel fibre as examples, the qualities of cellulosic yarn were studied. Three kinds of pure viscose and tencel yarn, 14.6 tex, 9.7 tex and 7.3 tex, were spun on a ring spinning system modified with LACS and CCS, respectively. The spun yarn qualities, yarn evenness, breaking strength and hairiness, were tested. The results show that for both pure viscose and tencel yarn, compared with LACS, CCS gives better yarn evenness, a little lower yarn breaking strength, and a little more hairiness, while the uniformity of the yarn qualities is improved in all cases. The possible reason is that in LACS, weak twisting is produced, which makes the fibre strand compact, while in CCS, the transverse condensing force of the parallel condensing acts on the fibres and makes the fibre strand compact, which is beneficial for improving the uniformity of the spun yarn qualities. Two kinds of cellulosic blend yarn, including 14.6 tex, 9.7 tex and 7.3 tex JC/R 60/40 yarn as well as 14.6 tex, 9.7 tex and 7.3 tex JC/T 70/30 yarn, were spun on a ring spinning system modified with CCS. The spun yarn evenness, breaking strength and hairiness were tested. It is shown that compared with the pure cellulosic yarn, the cellulosic blend yarn evenness is worse, especially for the cotton and tencel blend yarn. The possible reason is that compared with the viscose and cotton fibre, the wet initial modulus of tencel fibre is the largest, but its dry and wet elongation at break is the smallest, which makes the tencel fibre liable to brittle fracture in the drafting zone, produces more neps, and makes the yarn evenness worse. Then, using a Y172 Hardy thin cross-section sampling device, cross sections of the spun yarns were obtained. It is shown that in the compact yarn body, the arrangements of the two fibres are centralised, that fibre migration in the yarn body is low, and that the uniformity of fibres in the yarn body can be improved greatly.
2019-04-30T13:06:54.489Z
2018-02-28T00:00:00.000
{ "year": 2018, "sha1": "3517634472e3ddcc3acedc32449e727d4a85ad9e", "oa_license": null, "oa_url": "https://doi.org/10.5604/01.3001.0010.7793", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "cdb44fdbaabc50f71db73af9efc23d558f1475eb", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Materials Science" ] }
255218103
pes2o/s2orc
v3-fos-license
The Main Physicochemical Characteristics and Nutrient Composition during Fruit Ripening of Stauntonia obovatifoliola Subsp. Urophylla (Lardizabalaceae): Stauntonia obovatifoliola Hayata subsp. urophylla is a novel edible and healthy fruit in China, commonly known as "Jiuyuehuang" (September yellow). The fully ripe fruit of S. obovatifoliola subsp. urophylla has a soft fruit pulp texture, golden flesh, and a sweet flavor which is very popular with the locals. In this paper, we have investigated the fruit appearance quality, physicochemical quality, and nutritional quality of S. obovatifoliola subsp. urophylla harvested at six stages (S1: 60 DAFB, S2: 90 DAFB, S3: 130 DAFB, S4: 160 DAFB, S5: 190 DAFB, S6: 205 DAFB). An increase in fruit size (including single fruit weight, fruit length, and fruit diameter) was related to the ripeness stage of fruit development. The total soluble solids, firmness, dry matter, sugar and starch showed remarkable changes as the fruit approached ripening (S5-S6 stage). The main sugar components in the fruit were fructose, glucose, and maltose. The contents of fructose, glucose, and total sugars in S. obovatifoliola subsp. urophylla fruit progressively increased from the S1 to the S6 stage, increasing sharply from the S4 to the S5 stage. The contents of maltose and starch both showed an increasing trend from the S1 to the S4 stage but decreased sharply at the S5 stage. The vitamin B, vitamin C, total phenolics, total flavonoids, and amino acid levels showed an overall downward trend during fruit development. To our knowledge, this is the first study to compare the physicochemical characteristics, nutrient composition, and antioxidant content across the different fruit development stages. The results of this study may provide a scientific basis for clarifying the growth and development characteristics of S. obovatifoliola subsp. urophylla fruit and for the further utilization of these excellent medicinal and edible germplasm resources. Introduction Stauntonia obovatifoliola Hayata subsp. urophylla (Hand.-Mazz.) H. N. Qin is a perennial woody liana. It belongs to the genus Stauntonia, has large, edible fruits, and is endemic to China [1]. The species is widely distributed in the Yangtze River basin provinces of China (Guangdong, Guangxi, Hunan, Jiangxi, Fujian, Zhejiang, etc.) (Figure 1), normally occurring at an altitude of 500-1500 m at forest edges, along roadsides, or along streams in valleys [2]. Its fruits are usually called September yellow ("Jiuyuehuang" in Chinese) by local villagers, because the fruit turns yellow when it ripens in the Chinese lunar September [1]. The fully ripe fruit of S. obovatifoliola subsp. urophylla has a soft fruit pulp texture, golden flesh, and a sweet flavor, tasting like a mixture of persimmon and litchi. S. obovatifoliola subsp. urophylla fruits are rich in sugars, crude proteins, vitamins, amino acids, mineral elements, fat, total dietary fiber, and reducing sugars [3,4]. In addition to its edible value, S. obovatifoliola subsp. urophylla is also a traditional Chinese medicinal material. Its stems and fruits are rich in phenolics, flavonoids, triterpenes, steroids, and polysaccharides, and it is usually used as an important medicinal material to treat rheumatic arthralgia, neuralgia, headache, heat strangury, trigeminal neuralgia, sciatica, and trauma pain [5-9]. These excellent nutritional and chemical compositions indicate that S.
obovatifoliola subsp. urophylla is an alternative functional fruit that is worthy of development and utilization. In recent years, S. obovatifoliola subsp. urophylla, as a characteristic health fruit, has been widely cultivated in China. Even so, S. obovatifoliola subsp. urophylla is still an underexploited wild fruit in the infancy stage of domestication. Only a few studies on S. obovatifoliola subsp. urophylla breeding, germplasm evaluation and cultivation have been attempted, with most efforts devoted to phytochemical analyses [1,3,5,10,11]. Moreover, most reports on S. obovatifoliola subsp. urophylla are found in local floras with only a brief description [12-16]. The lack of information on the change patterns of the fruit's physiological quality and physicochemical characteristics during fruit ripening impedes the development and utilization of S. obovatifoliola subsp. urophylla. Fruit ripening is a very important physiological process in the later stage of fruit development, usually accompanied by tremendous changes in physical and chemical characteristics, such as sugar composition, fruit texture and color change, release of aromatic substances, cell wall degradation, and so on, which ultimately affect the quality of the fruit [17-20]. The analysis of these physical and chemical changes occurring during the development and ripening of fruit gives an insight into the underlying biochemical and physiological processes taking place [21] and is especially useful for determining fruit maturity. Ripening is the process by which fruits attain their desirable color, flavor, nutritional quality, and textural properties, and harvesting at the appropriate ripeness can effectively guarantee the commercial value of the fruit and prolong its storage time. For instance, yellow peach will have a poor flavor if harvested too early, while the fruit can present a strong characteristic flavor when harvested late; however, if olive fruit is harvested too late, the sugar content in the pulp decreases and the pulp is prone to fibrosis [22,23]. As another example, in order to prolong the shelf life, winter jujube is usually harvested at the white maturity stage [24]. However, few studies are relevant to the change patterns in the physicochemical indicators and nutrient composition of S. obovatifoliola subsp. urophylla at different fruit maturity stages. Therefore, it is necessary to explore the change patterns of the fruit's physiological parameters and physicochemical characteristics during ripening, which provide basic physiological information for a better understanding of the ripening process of S. obovatifoliola subsp. urophylla fruit. Since the optimum harvesting period is a considerable evaluation criterion for both food industries and consumers, the aim of the present study was to investigate the changes in the physicochemical characteristics, antioxidant content, and nutritional composition of S. obovatifoliola subsp. urophylla fruit at different maturity stages. The results of this work provide the basic dynamic change patterns of the fruit's physicochemical characteristics and identify a proper maturity stage for harvesting S. obovatifoliola subsp. urophylla fruit with better quality, longer shelf life, and better market acceptability simultaneously.
Plant Materials The plant materials were grown from seeds collected from Jiangxi Province, China, with relatively good comprehensive traits. A total of 56 plants were planted at a spacing of 3 × 1.5 m in a fruit garden in Jiujiang City (29°38′ N, 115°59′ E), Jiangxi Province, China. Fruits of S. obovatifoliola subsp. urophylla free from insects, pests, and diseases were randomly harvested at six developmental stages (S1: 60 days after full bloom (DAFB), S2: 90 DAFB, S3: 130 DAFB, S4: 160 DAFB, S5: 190 DAFB, S6: 205 DAFB) until the fruits became fully ripe (S6 stage) in November (Figure 2). The S. obovatifoliola subsp. urophylla trees were four years old. Open pollination and a sprinkler irrigation system were used in the experimental farm. Because a single tree of S. obovatifoliola subsp. urophylla was unable to provide enough fruit samples for all six stages, different trees were used at each stage in this experiment. In total, twenty fruit trees were used in this study, and ten fruits were harvested from each tree; thirty fruits were harvested at each stage. The fruits harvested from the same tree were considered one sample at each stage. The pulp of three fruits was mixed into one biological replicate for the analysis of sugars, vitamin B, vitamin C, total phenolics, total flavonoids, starch, and amino acid content. We cut the fruit into slices with a knife and removed the peel and seeds to separate the pulp, and the pulp was then quickly frozen in liquid nitrogen and stored at −80 °C until further analysis. In total, three biological replicates were prepared for each stage. The first sampling time was in June 2021, and the last sampling time was in November 2021. During this sampling period, the average daily temperature in Jiujiang was 20-28 °C, and the total precipitation was 471.80 mm. Determination of Fruit Physical Parameters The single fruit weight, fruit length, fruit diameter, and fruit shape index were evaluated immediately after the fruit samples were picked. The single fruit weight was recorded using an electronic balance. The fruit length and fruit diameter were measured using a digital caliper, and the ratio of fruit length to fruit diameter was taken as the fruit shape index. The firmness of S. obovatifoliola subsp. urophylla fruit was determined using a digital fruit firmness tester (GY-4, Zhejiang, China) equipped with a 3.5 mm cylindrical probe, and the results were expressed as kg/cm². The dry matter content and moisture content of S. obovatifoliola subsp. urophylla fruit were measured by the drying-weighing method. Thin slices of fresh fruit were weighed using an electronic balance and then dried at 70 °C in an oven until their weights became constant. The dry matter content is calculated as the percentage of dry weight to fresh weight. Conversely, the moisture content is the percentage of lost weight to fresh weight. Determination of Biochemical Parameters The total soluble solids content (TSS) of S. obovatifoliola subsp. urophylla fruit was measured with an ATAGO (PAL-1) handheld digital refractometer, and the results were expressed in °Brix. The content of titratable acid (TA) was measured by the acid-base neutralization titration method, and the values were calculated as % citric acid. The protein content of S. obovatifoliola subsp. urophylla fruit pulp was measured by the Kjeldahl method, and the protein content was expressed as N × 6.25 [25].
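The drying-weighing calculation described above is simple enough to state explicitly; the sketch below uses invented weights purely for illustration.

```python
# Dry matter and moisture content from the drying-weighing method described above.
# fresh_g and dry_g are invented example weights, not measured data.
def dry_matter_and_moisture(fresh_g, dry_g):
    dry_matter_pct = 100.0 * dry_g / fresh_g            # % of fresh weight
    moisture_pct = 100.0 * (fresh_g - dry_g) / fresh_g  # weight lost on drying
    return dry_matter_pct, moisture_pct

dm, mc = dry_matter_and_moisture(fresh_g=10.00, dry_g=2.79)
print(f"dry matter = {dm:.1f}%, moisture = {mc:.1f}%")
# 27.9% dry matter reproduces the S4 peak value reported in the Results.
```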
Determination of Carbohydrates The sugar components (glucose, fructose, maltose, and sucrose) were measured by high-performance liquid chromatography (HPLC-E2695, Waters, Milford, MA, USA) equipped with a differential refraction detector; sample preparation and chromatographic conditions followed the previous study [26]. The starch content of S. obovatifoliola subsp. urophylla fruit pulp was also determined by the method described in our previous study [26]. Briefly, approximately 1 g of fruit pulp sample and 20 mL of 80% ethanol were combined and vortex-mixed to promote particle dispersion. Subsequently, the sample was incubated in a water bath at 80 °C for 30 min and stirred continuously until cooled. The sample was then filtered with 80% ethanol, and the filter residue was washed into a centrifuge tube with hot distilled water and placed in a boiling water bath for gelatinization. We then added 2 mL of cold 9.2 mol/L perchloric acid, incubated for 15 min in a boiling water bath, and filtered the mixture. The filtrate was poured into a 100 mL volumetric flask, and the filter residue was washed with distilled water. The absorbance was read at 490 nm. Determination of Vitamin B, Vitamin C, Total Phenolics, and Total Flavonoids The vitamin B1, B2, B3, B6, and vitamin C contents of S. obovatifoliola subsp. urophylla fruit pulp were measured by HPLC (HPLC-E2695, Waters, Milford, MA, USA) equipped with a diode array detector. Briefly, approximately 0.5 g of fruit pulp sample was weighed into a 15 mL centrifuge tube, and 5 mL of 0.1% aqueous hydrochloric acid was added. After 2 min of shaking, the sample was extracted ultrasonically (20 kHz, 30 °C) for 20 min and then centrifuged at low temperature (4 °C) at 8000 r/min for 10 min. After the supernatant was filtered through a 0.45 µm membrane, the sample was analyzed by HPLC. The detection conditions were as follows: a C18 chromatographic column (4.6 mm × 250 mm, 5 µm) was used for the analysis; the mobile phase consisted of monopotassium phosphate and acetonitrile (80% and 20%, respectively) at a flow rate of 1.00 mL/min. The column temperature was 30 °C, and the injection volume was 10 µL. The total phenolics content of the S. obovatifoliola subsp. urophylla fruit pulp was determined with Folin-Ciocalteu reagent following the method of Razzaq et al. [27], and the total flavonoids content was measured following the method of Zhao et al. [28]. The detailed detection methods can be found in our previous study [26].
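Colorimetric starch quantification of the kind described (absorbance at 490 nm) is normally evaluated against a standard curve; the sketch below shows that step with invented calibration points, since the paper does not list its calibration data.

```python
# Linear standard-curve evaluation for a colorimetric assay (A490 vs. concentration).
# The calibration points and the sample absorbance are invented placeholders.
import numpy as np

std_conc = np.array([0.0, 0.2, 0.4, 0.6, 0.8])   # mg/mL glucose standards
std_a490 = np.array([0.02, 0.18, 0.35, 0.51, 0.68])

slope, intercept = np.polyfit(std_conc, std_a490, 1)   # least-squares line
sample_a490 = 0.44
sample_conc = (sample_a490 - intercept) / slope
print(f"sample ≈ {sample_conc:.2f} mg/mL")
```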
Determination of Amino Acids The amino acid content of the S. obovatifoliola subsp. urophylla fruit pulp was measured by HPLC. Briefly, approximately 0.5 g of ground sample was put into a hydrolysis tube, 6 mL of 6 mol/L hydrochloric acid and 3-4 drops of phenol were added, and the tube was then filled with high-purity nitrogen and sealed. The hydrolysis tube was placed in a constant-temperature drying oven and hydrolyzed at 115 °C for 23 h. We then took the hydrolysis tube out of the oven and cooled it, adjusted the pH to 7 with sodium hydroxide solution, and added water to 7 mL for further analysis. Subsequently, 10 µL of extract solution was transferred into a derivatization tube, and 70 µL of AccQ-Fluor borate buffer was added with vortex mixing. Then another 20 µL of AccQ-Fluor derivatization reagent was added to the tube, and vortex mixing was maintained for 10 s. We left the tube at room temperature for one minute and then heated it in the oven at 55 °C for 10 min. Finally, we took the tube out of the oven, cooled it to ambient temperature, and transferred the contents to a sample vial for instrumental determination. An AccQ•Tag column (3.9 × 150 mm) was used for the amino acid analysis. The detection conditions were as follows: mobile phase A, sodium acetate buffer solution; mobile phase B, acetonitrile; mobile phase C, pure water. Gradient elution was performed according to Table 1. The column temperature was 37 °C, and the injection volume was 10 µL. The excitation wavelength of the fluorescence detector was 250 nm, and the emission wavelength was 395 nm. Statistical Analysis SPSS v20.0 software was used to analyze the significance of differences in the data (one-way ANOVA, LSD test, p < 0.05), and the results were expressed as means ± standard errors. Dynamic Changes of Appearance Quality during Fruit Development The measurement results for the appearance quality of S. obovatifoliola subsp. urophylla fruit at different development stages are shown in Table 2. The single fruit weight began to increase rapidly at 60 DAFB; the growth trend slowed down after 160 DAFB, entering a slow growth stage, and the single fruit weight increased to 192.72 ± 10.51 g during the period from the S1 to the S4 stage. As the fruit approached ripening, the single fruit weight increased more slowly and reached its highest value at the S6 stage. The growth patterns of the fruit length and fruit diameter were basically the same: their rapid growth period was from 60 DAFB to 130 DAFB, after which they entered a slow growth period. The fruit shape index did not change much throughout the development period; the fruit length was about twice the fruit diameter, indicating that fruit length and fruit diameter developed simultaneously during fruit growth. The total soluble solids content fluctuated below 2 °Brix in the early stages (S1-S4), with no significant differences between those stages; thereafter, the total soluble solids content of the fruit pulp increased sharply during the S5 and S6 stages and reached a maximum value of 13.52 °Brix at the ripening stage S6 (Figure 3A). Although the titratable acidity content maintained an increasing trend during the whole fruit development period, it remained at a low level, fluctuating in the range of 0.62-0.84% during the period from the S1 to the S6 maturity stage (Figure 3B).
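A common way to summarize these two flavor parameters is the sugar-acid ratio TSS/TA; the quick calculation below uses the ripe-stage values reported in the text, taking both TA range endpoints since the exact S6 acidity is not singled out.

```python
# Sugar-acid ratio at the ripe stage from the reported values:
# TSS = 13.52 °Brix at S6; TA fluctuated between 0.62% and 0.84% citric acid.
tss_s6 = 13.52
for ta in (0.62, 0.84):
    print(f"TA = {ta}%  ->  TSS/TA = {tss_s6 / ta:.1f}")
# A ratio of roughly 16-22 is consistent with the sweet, low-acid flavor
# described for the ripe fruit.
```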
The dry matter content increased continuously in the early stages of fruit development (S1-S4) and reached a maximum value of 27.90% at the S4 stage but decreased slightly during the period from S5 to S6 (Figure 3C). The fruit firmness of S. obovatifoliola subsp. urophylla was significantly (p < 0.05) affected by the fruit development stage. The fruit firmness also maintained an increasing trend during the early fruit development period and reached a maximum value of 127.53 kg/cm² at the S4 stage but decreased sharply during the period from S5 to S6, reaching a minimum value of 15.75 kg/cm² at the S6 stage (Figure 3D). Dynamic Changes of Carbohydrate Contents during Fruit Development Carbohydrate contents in S. obovatifoliola subsp. urophylla fruit showed significant changes during fruit development and ripening (Figure 4). Only three (fructose, glucose, and maltose) of the four tested soluble sugars were detected in the fruit pulp of S. obovatifoliola subsp. urophylla; sucrose was not detected by HPLC. The fructose contents were very low before the S5 stage, fluctuating from 0.10 g/100 g FW to 0.33 g/100 g FW. As Figure 4A shows, the fructose content accumulated rapidly during the S5 and S6 stages and reached a maximum value of 5.35 g/100 g FW at the mature stage (S6). Similarly, glucose showed the same accumulation pattern as fructose, mainly accumulating during the S5 and S6 stages and reaching a maximum value of 7.02 g/100 g FW at the S6 stage (Figure 4B). The maltose content of S. obovatifoliola subsp. urophylla fruit accumulated continuously from the S1 to the S3 stage (0.72-1.22 g/100 g FW) and reached a maximum value at the S3 stage; although it decreased slightly at the S4 stage, a sharp and significant decline during the S5 and S6 stages was detected (0.16 g/100 g FW and 0.08 g/100 g FW, respectively) (Figure 4C). The total sugar contents of the S. obovatifoliola subsp. urophylla fruit were relatively low and changed little in the early stages of fruit development (S1-S4), began to increase significantly from the S5 stage, and reached a maximum value at the S6 stage (12.45 g/100 g FW) (Figure 4D). The starch content of S. obovatifoliola subsp. urophylla fruit accumulated continuously from the S1 to the S4 stage (0.21-1.09 g/100 g FW) and reached a maximum value at the S4 stage, but decreased sharply at the S5 stage and finally reached a minimum value at the mature stage (S6) (0.07 g/100 g FW) (Figure 4E). Dynamic Changes of Vitamin B, Vitamin C, Total Phenolics, Total Flavonoids, and Protein Contents during Fruit Development The changes in antioxidant component and protein contents during S.
obovatifoliola subsp. urophylla fruit development are shown in Table 3. The vitamin B1 content increased from the S1 to the S2 stage and reached a maximum value at the S2 stage (20.24 ± 1.68 mg/100 g FW); thereafter, the vitamin B1 content declined continuously from the S2 to the S6 stage, with a particularly sharp and significant decline between the S5 and S6 stages. The vitamin B2 content had its maximum value at the S1 stage (191.21 ± 2.99 mg/100 g FW), then fluctuated between 31.08 ± 1.27 mg/100 g FW and 38.65 ± 1.57 mg/100 g FW; subsequently, the vitamin B2 content declined slightly and tended to be stable during the S5 and S6 stages. The vitamin B3 content also had its maximum value at the S1 stage (10.53 ± 0.71 mg/100 g FW), then fluctuated between 3.08 ± 0.16 mg/100 g FW and 4.68 ± 0.28 mg/100 g FW, and finally reached a minimum value at the S6 stage (2.03 ± 0.30 mg/100 g FW). As displayed in Table 3, the vitamin B6 content declined continuously across the six maturity stages as the fruit developed, finally reaching a minimum value at the mature stage (S6) (0.13 ± 0.01 mg/100 g FW). The vitamin C content showed the same accumulation pattern as vitamin B1: it increased from the S1 to the S2 stage and reached a maximum value at the S2 stage (608.58 ± 7.28 mg/100 g FW). Thereafter, the vitamin C content declined continuously from the S2 to the S6 stage, with a particularly sharp and significant decline between the S4 and S5 stages, and finally tended to be stable during the S5 and S6 stages. As shown in Table 3, the protein content declined continuously during the whole development period and reached a minimum value at the S6 stage (0.48 ± 0.00 g/100 g FW). The total phenolics content declined from the S1 to the S2 stage, increased at the S3 stage, and declined again at the S4 stage, whereas a significant rise was found from the S4 to the S5 stage; thereafter, there was no significant difference between the S5 and S6 stages. Similarly, the total flavonoids showed the same accumulation pattern as the total phenolics in the early stage of fruit development (S1-S3) and then showed a continuing downward trend until reaching a minimum value at the mature stage (13.85 ± 0.80 mg/100 g FW). Dynamic Changes of Amino Acid Composition during Fruit Development We detected the contents of 17 common free amino acids in the fruit pulp of S.
obovatifoliola subsp. urophylla at different development stages, and the results are shown in Table 4. In general, the amino acid contents declined continuously across the six development stages as the fruit developed and reached their minimum values at the mature stage; this was the case for glutamic acid (Glu), glycine (Gly), alanine (Ala), tyrosine (Tyr), valine (Val), methionine (Met), lysine (Lys), isoleucine (Ile), leucine (Leu), proline (Pro), and phenylalanine (Phe). However, some amino acids showed a spike within an overall declining trend: aspartic acid (Asp) showed a spike at the S2 stage, threonine (Thr) at the S4 stage, and cysteine (Cys) at the S5 stage, whereas the contents of serine (Ser), histidine (His), and arginine (Arg) had maximum values at the S1 stage and showed a spike at the S5 stage. The total amino acid (TAA) content declined continuously as the fruit developed and had its minimum value at the mature stage (336.89 ± 11.98 mg/100 g). Similarly, the essential amino acids (EAAs) showed the same accumulation pattern as the TAAs. Notably, the proportion of EAAs, including Thr, Val, Met, Lys, Ile, Leu, and Phe, reached its maximum at the mature stage (38.94%). Discussion Stauntonia obovatifoliola subsp. urophylla is a novel edible and healthy fruit with tremendous potential for exploitation and utilization. In this study, the dynamic changes in fruit quality, sugar composition and content, antioxidant components and content, and amino acid content of S. obovatifoliola subsp. urophylla during fruit development were detected and analyzed. The results for the fruit's appearance quality showed that the fruit grew rapidly in the early stages of development. The single fruit weight of S. obovatifoliola subsp. urophylla at the S2, S3, S4, S5, and S6 stages increased by 495.60%, 59.49%, 44.09%, 19.64%, and 4.26%, respectively. For the fruit length, the growth rates at each stage were 76.27%, 14.43%, 15.79%, 2.32%, and 3.99%, respectively. The highest growth rate of the fruit diameter also occurred at the S2 stage (63.93%), and the lowest growth rate was found at the S4 stage (2.66%); thereafter, the growth rate showed a spike at the S5 stage (S5-S6: 10.44% and 5.52%, respectively). The results for the fruit shape index showed that the fruit shape index of S. obovatifoliola subsp. urophylla was basically stable between 1.8 and 2.2 during the fruit development period, and the differences between stages were not obvious. Notably, the values of fruit weight, fruit length, and fruit diameter at the mature stage were much higher than those of fruit grown in the wild [3]. Moreover, most wild S. obovatifoliola subsp. urophylla fruits are small and have a low edible ratio, low stress resistance, and poor appearance quality. Fortunately, the wide geographical distribution of S. obovatifoliola subsp. urophylla provides substantial genetic diversity and rich wild germplasm resources for breeders to select superior genotypes with excellent comprehensive characteristics through resource exploration and evaluation. Total soluble solids and titratable acid are main components of fruit flavor quality and nutritional composition and are also considered crucial parameters of fruit ripening [29]. Additionally, the importance of tracking the changes in TSS, TA, and TSS/TA has been demonstrated by many studies on different fruits, such as strawberry, sweet cherry, orange, and mulberry [30-32].
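The stage-to-stage percentages quoted above, together with the S4 weight of 192.72 g, fix the implied weights at every stage; the short sketch below back-calculates them as a consistency check on the reported numbers (it is not data from the paper's tables).

```python
# Back-calculate single fruit weights from the S4 value (192.72 g) and the
# quoted stage-to-stage growth rates; purely a consistency check.
rates = {"S2": 495.60, "S3": 59.49, "S4": 44.09, "S5": 19.64, "S6": 4.26}  # %

w = {"S4": 192.72}
w["S3"] = w["S4"] / (1 + rates["S4"] / 100)   # ~133.8 g
w["S2"] = w["S3"] / (1 + rates["S3"] / 100)   # ~83.9 g
w["S1"] = w["S2"] / (1 + rates["S2"] / 100)   # ~14.1 g
w["S5"] = w["S4"] * (1 + rates["S5"] / 100)   # ~230.6 g
w["S6"] = w["S5"] * (1 + rates["S6"] / 100)   # ~240.4 g

for stage in ("S1", "S2", "S3", "S4", "S5", "S6"):
    print(f"{stage}: {w[stage]:.1f} g")
```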
In this experiment, the TSS content of the S. obovatifoliola subsp. urophylla fruit showed an increasing trend over the whole fruit development period. Specifically, the TSS content remained at low levels during the early fruit development period (S1-S4), while it accumulated sharply during the S5-S6 stages, mainly owing to the breakdown of starch and the accumulation of sugars. Meanwhile, the TA content of the S. obovatifoliola subsp. urophylla fruit remained at low levels during the fruit ripening process, which is consistent with previous reports on S. obovatifoliola subsp. urophylla fruit [3]. Thus, the high ratio of TSS to TA results in the sweet flavor of S. obovatifoliola subsp. urophylla fruit. Moreover, the TSS content changed sharply as the fruit neared ripening, so it could be considered an alternative maturity indicator for S. obovatifoliola subsp. urophylla fruit. The dry matter content is not only a significant parameter for evaluating carbon incorporation at different development stages of fruits but also an important indicator of fruit flavor quality and texture [33,34]. The dry matter content of the S. obovatifoliola subsp. urophylla fruit continuously increased in the early stages and then declined significantly at the mature stage, mainly owing to the direct relation between the increase in fruit size and the degradation of starch. Fruit firmness usually has a significant impact on a fruit's market value, consumer acceptance, and shelf life [26,35]. The firmness declined sharply as the fruit of S. obovatifoliola subsp. urophylla neared maturity and softening. Unlike Akebia fruits (a genus related to Stauntonia), the S. obovatifoliola subsp. urophylla fruit did not crack when fully ripe; thus, it can be picked after full maturity (S6 stage), when it tastes better as a fresh fruit. However, considering long-distance transportation, we suggest that the best time to harvest the fruit may be at the S5 stage, when the fruit retains a hardness suitable for withstanding long-distance transportation. Sugars are not only important components of fruit flavor and nutritional composition but also act as important signaling factors during plant growth and are involved in regulating gene expression during plant development [36,37]. The major sugars identified in S. obovatifoliola subsp. urophylla fruit pulp are fructose, glucose, and maltose. The contents of fructose, glucose, and total sugars showed the same accumulation trend: they first increased slightly during the early fruit development periods (S1-S4), then increased sharply at the S5 stage, with no significant dynamic changes between the S5 and S6 stages. The changing trends of maltose and starch during fruit development and ripening were basically similar; both accumulated continuously during the early fruit development periods but declined sharply as the fruit approached ripening. This opposite trend of monosaccharides and polysaccharides is mainly due to the hydrolysis of carbohydrates as the fruit ripens, similar to strawberry, sweet cherry, bananas, etc. [30,38,39]. Vitamins, apart from being important nutrients in the daily diet, are also potent antioxidant components [40-42]. The results of this study showed that the fruit of S.
obovatifoliola subsp. urophylla is rich in vitamin B (B1, B2, B3, B6), vitamin C, phenolics, and flavonoids. In particular, vitamin B2 was the most abundant B vitamin at the mature stage, while the vitamin C content at the mature stage was in accordance with a previous report [6]. The vitamin content of S. obovatifoliola subsp. urophylla fruit showed an overall downward trend during fruit development, but with large fluctuations along the way; in particular, the vitamin C content decreased sharply as the fruit neared ripening (S5-S6). Phenolic and flavonoid compounds are important secondary metabolites of plants, which can help plants resist bacteria and protect cells from oxidative damage [43,44]. In this study, the phenolics content declined from the S1 to the S2 stage, followed by a subsequent fluctuating increase. The flavonoids content first declined from the S1 to the S2 stage and increased at the S3 stage, followed by a sustained decrease over further fruit development in S. obovatifoliola subsp. urophylla fruit. The dynamic change of antioxidant content was closely related to fruit development and ripening; the antioxidant content can also be influenced by structural genes, temperature, light intensity, etc. [45,46]. The general decrease in the contents of vitamins, phenolics, and flavonoids is mainly due to oxidation by oxidases during fruit maturation or to the dilution effect as the fruit increases in size [47,48]. Amino acids are important nutrient elements for most forms of life and also play key roles in various biological reactions, such as signaling pathways, ATP generation, redox balance, nucleotide synthesis, cellular immunity, etc. [49-53]. In this study, 17 common free amino acids and 7 essential amino acids were detected by HPLC. The amino acid content showed an overall downward trend during fruit development, which might be related to the conversion and expenditure of amino acids; another reason may be the dilution effect of increasing fruit size. The abundance of amino acids in the fruit of S. obovatifoliola subsp. urophylla documents its excellent medicinal and edible value. Compared with some common fruits, S. obovatifoliola subsp. urophylla has advantages in certain nutrients (Table 5). For example, its total soluble solids, total sugars, fructose, and glucose are higher than those of apple, peach, kiwifruit, and strawberry, indicating that the S. obovatifoliola subsp. urophylla fruit has a sweeter flavor than some common fruits. S. obovatifoliola subsp. urophylla fruit has more protein than apple and cherry but less than banana, grape, peach, kiwifruit, and strawberry. As for the total amino acid content, S. obovatifoliola subsp. urophylla is on a par with grape, kiwifruit, strawberry, and cherry, whereas it has less than apple, banana, and peach. Surprisingly, the vitamin B1, vitamin B2, and vitamin B3 contents of S. obovatifoliola subsp. urophylla are much higher than those of these common fruits, whereas it has less vitamin C. It should be pointed out that the determination of the nutritional composition of S. obovatifoliola subsp. urophylla in this paper is not comprehensive, but it can still prove that S. obovatifoliola subsp. urophylla is worthy of being exploited and utilized as a medicinal and edible fruit crop. Conclusions In conclusion, the dynamic changes in the fruit's appearance, physicochemical quality, and nutritional quality during the development of S.
obovatifoliola subsp. urophylla were detected and analyzed. This study suggests that the maturity stage had a significant effect on the nutritional properties and physicochemical parameters during S. obovatifoliola subsp. urophylla fruit ripening and softening. In particular, the values of TSS, firmness, dry matter, fructose, glucose, maltose, starch, vitamin B, vitamin C, total phenolics, and total flavonoids of S. obovatifoliola subsp. urophylla fruit showed significant changes during the transition to physiological maturity. In view of the nutrient content and pulp texture, fruit of S. obovatifoliola subsp. urophylla picked at the S5 maturity stage is more suitable for long-distance transportation. The results of this study lay a foundation for clarifying the growth and development characteristics of S. obovatifoliola subsp. urophylla fruit and for the further utilization of these excellent medicinal and edible germplasm resources. Figure 1. The geographic distribution of Stauntonia obovatifoliola subsp. urophylla. The distribution heat map was made based on the specimen data of S. obovatifoliola subsp. urophylla in the Chinese Virtual Herbarium. Table 1. The elution gradient of the mobile phase. Table 2. Dynamic changes in appearance quality during S. obovatifoliola subsp. urophylla fruit development. Table 3. Dynamic changes of antioxidant component and protein contents during S. obovatifoliola subsp. urophylla fruit development. Table 4. Dynamic changes of amino acid composition during S. obovatifoliola subsp. urophylla fruit development.
2022-12-29T16:11:24.447Z
2022-12-26T00:00:00.000
{ "year": 2022, "sha1": "159ab2c1fc1c2eb6bc3d4675d3c527c8a92af14a", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2311-7524/9/1/29/pdf?version=1673258738", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "1f4582bd65067976d405b80b61ec871e14dd223d", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [] }
2977724
pes2o/s2orc
v3-fos-license
The Odderon at RHIC and LHC The Odderon remains an elusive object, 33 years after its invention. The Odderon is now a fundamental object in QCD and CGC and it has to be found experimentally if QCD and CGC are right. In the present talk, we show how to find it at RHIC and LHC. The most spectacular signature of the Odderon is the predicted difference between the differential cross-sections for proton-proton and antiproton-proton at high s and moderate t. This experiment can be done by using the STAR detector at RHIC and by combining these future data with the already present UA4/2 data. The Odderon could also be found by the ATLAS experiment at LHC by performing a high-precision measurement of the real part of the hadron elastic scattering amplitude at small t. Introduction This contribution to EDS07 is based upon work done in collaboration with Regina F. Avila and Pierre Gauron [1]. The Odderon is defined as a singularity in the complex J-plane, located at J = 1 at t = 0, which contributes to the odd-under-crossing amplitude F−. The concept of the Odderon first emerged in 1973 in the context of asymptotic theorems [2]. Seven years later, it was possibly connected with 3-gluon exchanges in perturbative QCD [3-5], but it took 27 years to firmly rediscover it in the context of pQCD [6]. The Odderon was also rediscovered recently in the Color Glass Condensate (CGC) approach [7,8] and in the dipole picture [9]. One can therefore assert that the Odderon is a crucial test of QCD. On the experimental level, there is strong evidence for the non-perturbative Odderon: the discovery, in 1985, of a difference between (dσ/dt)_pp and (dσ/dt)_p̄p in the dip-shoulder region 1.1 < |t| < 1.5 GeV² at √s = 52.8 GeV [10,11]. Unfortunately, these data were obtained in one week, just before the ISR was closed, and therefore the evidence, even if it is strong (99.9% confidence level), is not totally convincing. The maximal Odderon [2,12] is a special case (tripole) corresponding to the maximal asymptotic (s → ∞) behavior allowed by the general principles of strong interactions: σ_T(s) ∝ ln² s (1) and ∆σ(s) ∝ ln s (2). Interestingly enough, an important stream of theoretical papers concerns precisely this maximal behavior [2], which was first discovered by Heisenberg in 1952 [13] and later proved in a more rigorous way by Froissart and Martin [14,15]. Half a century after the discovery of Heisenberg, this maximal behavior (1) was also proved in the context of the AdS/CFT dual string-gravity theory [16] and of the Color Glass Condensate approach [17]. It was also shown to provide the best description of the present experimental data on total cross-sections [18,19]. The maximal behavior Im F+(s, t = 0) ∝ ln² s is naturally associated with the maximal behavior Im F−(s, t = 0) ∝ ln s. In other words, strong interactions should be as strong as possible. Strategy In the present paper we will consider a very general form of the hadron amplitudes compatible with both the maximal behavior of strong interactions at asymptotic energies and with the well-established Regge behavior at moderate energies, i.e. at pre-ISR and ISR energies [20,21]. Our strategy is the following: 1. We will consider two cases: one in which the Odderon is absent and one in which the Odderon is present. 2. We will use the two respective forms in order to describe the 832 experimental points for pp and p̄p scattering, from PDG Tables, for σ_T(s), ρ(s) and dσ/dt(s,t), in the s-range 4.539 GeV ≤ √s ≤ 1800 GeV (3) and in the t-range 0 ≤ |t| ≤ 2.6 GeV² (4). The best form will be chosen. 3.
In order to make predictions at RHIC and LHC energies, we will insist on the best possible quantitative description of the data.
4. From the study of the interference between the F+(s, t) and F−(s, t) amplitudes we will conclude which experiments are best suited to detect the Odderon in a clear way.

The form of the amplitudes

The amplitudes F+(s, t) and F−(s, t) are the even- and odd-under-crossing combinations of the pp and p̄p amplitudes, normalized so that σ_T and dσ/dt are obtained from Im F(s, 0) and |F(s, t)|² in the standard way. The F+(s, t) amplitude is written as a sum of Regge poles and cuts in standard form [1] plus the Heisenberg component F^H+(s, t), representing the contribution of a 3/2-cut collapsing, at t = 0, to a triple pole located at J = 1 and satisfying the Auberson-Kinoshita-Martin asymptotic theorem [22]. In its turn, the F−(s, t) amplitude is written as a sum of Regge poles and cuts in standard form [1] plus F^MO−(s, t), representing the maximal Odderon contribution, resulting from two complex-conjugate poles collapsing, at t = 0, to a dipole located at J = 1 and also satisfying the Auberson-Kinoshita-Martin asymptotic theorem; the remaining parameters, including the constant K−, are free.

Numerical results

Let us first consider the case without the Odderon. In this case one has 23 free parameters. In spite of this quite impressive number of free parameters, the χ²-value is unacceptably bad. A closer examination of the results reveals, however, an interesting fact: the no-Odderon case describes the data nicely in the t-region 0 ≤ |t| ≤ 0.6 GeV², but totally fails to describe the data at higher t-values. This failure does not mean the failure of the Regge model, which is a basic ingredient of the approach presented in this paper. It simply means the need for the Odderon.

In the case with the Odderon, we have 12 supplementary free parameters. The total of 35 free parameters of our approach could be considered, at a superficial glance, as too big. However, one has to realize that the 23 free parameters associated with the dominant F+(s, t) amplitude and with the component of F−(s, t) responsible for describing the data for Δσ(s) (see eq. (2)) and Δρ(s, t = 0), where

Δρ(s, t = 0) ≡ ρ_p̄p(s, t = 0) − ρ_pp(s, t = 0),  (13)

are almost all of them well constrained. Moreover, the discrepancy between the no-Odderon model and the experimental data in the moderate-t region (especially at √s = 52.8 GeV and √s = 541 GeV) is so big that, in their turn, the 12 supplementary free parameters (at least most of them) are also well constrained.

Let us also note that the above-mentioned discrepancy in the region of t defined by

0.6 < |t| ≤ 2.6 GeV²  (14)

cannot come, as one might think, from contributions induced by perturbative QCD. The region (14) is fully within the domain of validity of the non-perturbative Regge-pole model, and the respective values of t are too small to justify pQCD calculations. The resulting value of χ² is excellent if we consider the fact that we did not take into account the systematic errors of the experimental data. The partial χ², corresponding only to the t = 0 data (σ_T and ρ), is acceptable (276 forward experimental points taken into account). Of course, better χ² values can be obtained by fitting only the t = 0 data, as is often done in phenomenological papers. However, it is obvious that in a global fit including non-forward data the corresponding t = 0 parameters will be modified, and a higher χ² value will therefore be obtained. The t = 0 and t ≠ 0 data are certainly independent, but the parameter values are obviously correlated in a global fit.
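The global-fit procedure described above can be made concrete with a small sketch. The following Python fragment is purely illustrative and is not the authors' code: the `model` function is a stand-in for the full 35-parameter Regge plus maximal-Odderon parametrization, and the data points, errors and starting values are invented for the example.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical dsigma/dt points: (sqrt_s [GeV], t [GeV^2], value, error).
data = np.array([
    (52.8,  -0.8, 1.2e-3, 1.0e-4),
    (52.8,  -1.2, 4.0e-4, 5.0e-5),
    (541.0, -0.9, 9.0e-4, 8.0e-5),
    (541.0, -1.3, 3.0e-4, 4.0e-5),
])

def model(params, sqrt_s, t):
    # Placeholder standing in for an |F_+(s,t) + F_-(s,t)|^2-type prediction;
    # a, b, c are dummy parameters, not the parameters of the paper.
    a, b, c = params
    return a * np.log(sqrt_s**2)**2 * np.exp(b * t) + c

def chi2(params):
    pred = model(params, data[:, 0], data[:, 1])
    return np.sum(((data[:, 2] - pred) / data[:, 3]) ** 2)

best = minimize(chi2, x0=[1e-5, 4.0, 0.0], method="Nelder-Mead")
print("parameters:", best.x, " chi^2:", chi2(best.x))
```

In the real analysis the same χ² sum runs over all 832 σ_T, ρ and dσ/dt points simultaneously, which is what correlates the t = 0 and t ≠ 0 parameter values.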
The most direct way of detecting the Odderon is to detect a non-zero difference between the pp and p̄p differential cross-sections at √s = 500 GeV, as described above. RHIC is an ideal place for discovering the Odderon and therefore for testing QCD and the CGC [24].

A ρ_pp measurement at the LHC would certainly be a very important test of the maximal Odderon, given the fact that our prediction is noticeably lower than what dispersion relations without Odderon contributions predict (ρ ≃ 0.12-0.14). There are several other proposals for detecting the Odderon, summarized in the nice review written by Ewerz [25].

Conclusions

There are very few cases in the history of physics in which a scientific and testable idea has been neither proved nor disproved 33 years after its invention. The Odderon remains an elusive object in spite of intensive searches for its experimental evidence. The main reason for this apparent puzzle is that most of the effort was concentrated on the study of pp and p̄p scattering, where the F−(s, t) amplitude is hidden by the overwhelming F+(s, t) amplitude. The most spectacular signature of the Odderon is the predicted difference between pp and p̄p scattering at high s and relatively small t. However, after the closure of the ISR, which offered the first strong hint of the existence of the Odderon, there is no place in the world where pp and p̄p scattering are or will be measured at the same energy. This is the main reason why the Odderon has not been observed until now.

We show that we can escape from this unpleasant situation by performing a high-precision measurement of dσ/dt at RHIC, at √s = 500 GeV, and by combining these future data with the already existing high-precision UA4/2 data at √s = 541 GeV. There is no doubt about the theoretical evidence for the Odderon, both in QCD and in the CGC. The Odderon is a fundamental object of these two approaches, and it has to be found at RHIC and the LHC if QCD and the CGC are right.

I dedicate this talk to the memory of Leszek Lukaszuk (1938-2007), who was not only a brilliant physicist but also an extraordinary human being and an incomparable friend.

Figure 2: Oscillations in the difference between the p̄p and pp differential cross-sections, Δ(dσ/dt)(s, t) ≡ (dσ/dt)_p̄p(s, t) − (dσ/dt)_pp(s, t).
2007-07-19T15:14:02.000Z
2007-07-19T00:00:00.000
{ "year": 2007, "sha1": "8db850610bf7de292c8839348c47d61d40eb517b", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "8db850610bf7de292c8839348c47d61d40eb517b", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
55963713
pes2o/s2orc
v3-fos-license
The Impact of Firm Specific Factors on the Stock Prices: Empirical Evidence from Belgrade Stock Exchange

This study examines the impact of selected determinants of stock prices on a developing capital market, with a special focus on companies in the Financial and Insurance Activities Sector whose shares are listed on the Belgrade Stock Exchange. The study uses data on individual companies from 2008 to 2014 and employs the ordinary least squares method. Generally speaking, the study finds that accounting information, in particular return on assets, book value per share, trust rate and company size measured by market capitalization, is relevant for explaining stock prices in Serbia. The study contributes to the current discussion of firm-specific factors affecting stock prices in emerging markets, with a special focus on the Belgrade Stock Exchange. Cluster analysis of companies according to the key determinants of share prices suggests to investors that portfolio diversification is possible. Reliability is ensured by including nearly 95% of the companies listed on the Belgrade Stock Exchange, i.e., the shares of companies classified in Sector K, Financial and Insurance Activities.

Introduction

The capital market in Serbia is at an early stage of development, but it represents an opportunity for individual investors to achieve high capital gains. Emerging markets are considered volatile and risky, but also lucrative. In developing countries such as Serbia, stock markets can stimulate economic growth by enabling companies to raise capital at lower cost. In addition, emerging capital markets depend primarily on financing from bank loans, which increases credit risk. There is therefore great importance in, and a need for, the study and development of capital markets in order to strengthen alternative sources of financing. The factors that affect stock prices are numerous. This research aims to examine the key internal determinants of the company and their impact on the prices of shares of companies listed on the Belgrade Stock Exchange. Such findings support the strategic analysis that helps analysts predict the future value of a company; investors will be able to make wiser investment decisions if they consider the determinants that emerge as significant contributors to the market price of shares in Serbia.

It is concluded that each sector should have its own model explaining the variation in stock prices. When investing in the stock market, it is important for investors to carefully consider and understand the sectors, as well as how these sectors react to changes in accounting indicators.

Literature review

Plenty of published studies deal with the relationship between financial statement data and the stock market. There are at least four reasons for the large demand for, and popularity of, capital markets research in accounting: (i) fundamental analysis and valuation; (ii) tests of capital market efficiency; (iii) the role of accounting in contracting and in the political process; and (iv) disclosure regulation (Kothari, 2001).
In their study, Aveh, Awunyo-Vitor and McMillan (2017) applied panel regression analysis to all companies listed on the Ghana Stock Exchange from 2008 to 2014. In general, the authors concluded that accounting data, specifically earnings per share, return on equity, book value and market capitalization, are applicable in describing stock prices in Ghana. Mirza, Rahat & Reddy (2016) examined the propositions on leverage pricing in stock returns by analyzing an extensive set of companies listed on the Karachi Stock Exchange (KSE) over a period of 13 years. They pointed out that although size, value and, more importantly, financial leverage are systematic in nature, the market risk premium cannot be considered a related factor. Issah and Ngmenipuo (2015) found a positive linear relationship between ROA, ROE, ROI and the market price of shares of banking financial institutions quoted on the Ghana Stock Exchange (GSE); furthermore, the positive signs obtained for the independent-variable coefficients are in line with the theoretical framework. Sharif, Purohit and Pillai (2015) analyzed the essential factors influencing share prices on the Bahrain financial market. The study involved a panel data set of 41 companies registered on the Bahrain stock market from 2006 to 2010. The results indicated that return on equity, book value per share, dividend per share, dividend yield, price-earnings ratio and firm size are important determinants of share prices on the Bahrain market.

Almumani (2014) identified the quantitative factors that influence share prices for the banks listed on the Amman Stock Exchange over the period 2005-2011. According to the empirical results, earnings per share, book value per share, price-earnings ratio and size are significant determinants of share prices for all the banks under consideration. The author thus shows that the study of financial factors is highly beneficial for investors in Jordan, as these factors bear strong explanatory power and can hence be used as authentic predictors of future stock prices.

Chughta, Azeem and Ali (2014) examined the relation between chosen company-specific factors and stock prices. According to their analysis, dividend per share and earnings per share have a positive and significant effect on the share prices of companies, while capital investment and retained earnings have no significant effect, indicating that stock prices do not react to changes in these two variables.

Numerous studies on different capital markets and over different observation periods give various results. Although there are key factors that affect stock prices in most countries, generalization of results is not possible due to differences in the business environment, business regulations, the political situation, and the number and type of investors. It can be concluded that the existing literature supports the belief that stock price changes are due to certain internal characteristics of the company.
Data

In order to examine the hypothesis, the research made use of secondary data. The sample was formed on the basis of companies listed on the Belgrade Stock Exchange: annual data for 19 companies from Sector K, Financial and Insurance Activities. The selected companies can serve as a representative research sample, and the results obtained for the observed sector can therefore be generalized. The research period was from 01.01.2008 to 31.12.2014. The internal, company-specific factors that were the subject of the research are size, financial indebtedness coefficients, profitability indicators, market ratio indicators and book value per share. Data for the internal factors were obtained as ratios of items from the balance sheets and income statements of the observed companies.

In accordance with previous studies examining the relationship between stock prices and internal company factors, a table of the variables used in the regression analysis was formed. It gives an overview of the company performance measures analyzed as potential determinants of stock prices; in the column titled Impact on the stock price, a positive or negative effect of the observed independent variables (predictors) on the dependent variable is assumed. The independent indicators were selected on the basis of previous research on this topic, conducted both on developed and on emerging capital markets. Descriptive statistics (Table 2) give the maximum, minimum and mean values of the observed variables. It is worth mentioning that the P/E ratio shows a wide variation between minimum and maximum values, which points to the fact that investors are willing to pay a high premium for well-performing businesses. The analysis of the earnings per share (EPS) statistics indicates that the median value is not a convincing figure for investors; for this reason a large number of companies, small and large, liquid, less liquid and illiquid, was included in the analysis. The low average value of earnings per share in the Financial and Insurance Activities Sector can also be attributed to the effect of the global financial crisis.

Regression analysis

Multiple regression represents a family of techniques by which the relationship between a dependent variable and multiple independent variables (predictors) can be investigated. There are solid theoretical and empirical reasons for applying this analysis. Preliminary analyses showed that the assumptions of normality, homogeneity, linearity and absence of multicollinearity are not violated.

Cluster analysis

Cluster analysis is a statistical technique for determining relatively homogeneous groups of observed units (companies). The companies were grouped by similarity according to the fundamental determinants of stock prices obtained from the regression analysis. This division should exhibit homogeneity within groups and heterogeneity between groups, or clusters. Since the period of analysis and observation was 2008-2014, the mean values of the observed company variables were used for the cluster analysis. Large ranges in the values of the observed variables indicate the need to group the companies by similarity of performance, in order to obtain homogeneous subgroups characterized by similar performance.
Such segmentation can be significant for investors in terms of diversifying their own portfolios, but also for the companies themselves in order to recognize their position on the market. The data were transformed into a standardized form ranging from −1 to +1; this normalization makes the input variables comparable. The Euclidean distance was used as the distance measure: the standard Euclidean distance of two objects X and Y is calculated as the square root of the sum of squared differences over the variables, d(X, Y) = √(Σᵢ (Xᵢ − Yᵢ)²). The smaller the Euclidean distance, the greater the similarity of the observed companies. Before conducting the cluster analysis, it is necessary to determine the level of correlation between the observed variables.

Results and discussion

Using non-stationary series in a regression model can create the effect of so-called "spurious regression" and lead to wrong, biased conclusions about the significance of the model. It is therefore necessary to check that the observed series meet the stationarity requirement. The Schwarz criterion was used to determine the optimal number of lags of the autoregression model. In Table 3, the results of the ADF test and PP test show that all tested variables are stationary at level I(0) at the 1% significance level, which justifies the use of multiple regression analysis. The mean VIF is 1.5931, well below 10, which confirms the absence of multicollinearity; the VIF values for the individual variables also do not exceed the maximum of 10, so there is no need to eliminate any variables from the regression analysis (Table 4).

The results indicate that a growth trend in these variables will lead to an increase in the market price of shares and that investors show a preference for such stocks. Size (measured by market capitalization) is found to be one of the most significant internal determinants of stock price movements, as the previous literature has reported (Duy and Phuoc, 2016; Gan et al., 2013).

In order to create a clearer picture of the key determinants of stock prices, the results of the multiple regression analysis by year, together with the residual diagnostic tests, are presented in Table 6. As shown there, the key determinants of stock prices in the Financial and Insurance Activities Sector for all observed years are size (+) and book value per share (+). Apart from these two determinants, the trust rate, P/E ratio, return on equity and return on assets have a significant positive impact, which indicates that investors prefer companies that earn more and invest funds in profitable projects. Earnings per share (EPS) has a statistically significant positive impact on the stock price in the post-crisis period, in 2011; EPS is found to be a significant internal determinant of stock price movements, as the previous literature has reported (Pushpa Bhatt and Sumangala, 2012).
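As a concrete illustration of the estimation steps just described (stationarity tests, VIF screening, OLS), here is a minimal Python sketch using statsmodels. It is not the authors' code: the variable names and the randomly generated data are placeholders, and `autolag="BIC"` mirrors the Schwarz criterion mentioned above.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(1)
# Hypothetical pooled firm-year observations (19 firms x 7 years = 133 rows).
df = pd.DataFrame(rng.normal(size=(133, 5)),
                  columns=["price", "size_mcap", "book_value_ps",
                           "roa", "trust_rate"])

# 1. Stationarity: ADF test per series, lags chosen by the Schwarz (BIC) criterion.
for col in df.columns:
    stat, pval = adfuller(df[col], autolag="BIC")[:2]
    print(f"{col}: ADF = {stat:.3f}, p = {pval:.4f}")

# 2. Multicollinearity: VIF per regressor (rule of thumb: VIF < 10).
X = sm.add_constant(df.drop(columns="price"))
for i, name in enumerate(X.columns):
    if name != "const":
        print(name, variance_inflation_factor(X.values, i))

# 3. OLS of the stock price on the firm-specific determinants.
print(sm.OLS(df["price"], X).fit().summary())
```

The same regression, run year by year instead of on the pooled sample, reproduces the structure of Table 6.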
The first step in the cluster analysis is the examination of correlations among the explanatory variables, i.e., the statistically significant accounting performance measures identified above. The Pearson correlation analysis (Table 7) shows that the correlation between any two variables does not exceed the threshold of 0.90, so all the variables could be included in the further analysis. Size (market capitalization), book value per share, return on assets and trust rate were chosen as the key performance measures for the cluster analysis. The dendrogram (Fig. 1) provides visual insight into cluster formation: objects (companies) are arranged along the vertical axis, while the horizontal axis indicates the distance at which objects are joined. The hierarchical cluster analysis points to the similarity of most of the observed companies, while the differences increase with company size.

Based on the descriptive statistics and the three clusters (set in advance), sixteen companies make up cluster 1, two companies represent cluster 2, and one company forms cluster 3. Analyzing the mean and minimum values of the observed variables per cluster, significant differences can be noticed (Table 8). An additional statistical test (ANOVA) answers the question of whether there is a statistically significant difference in performance between the three clusters; the results of the test are presented in Table 9.

Conclusions and recommendations

The research was conducted in order to test the influence of business performance on the share prices of the observed companies in the period from 2008 to 2014. Based on the empirical analysis, variables such as size (market capitalization), book value per share, return on assets and trust rate have a positive impact on stock prices in the Financial and Insurance Activities Sector. It is recommended that the directors of the firms listed on the Belgrade Stock Exchange introduce policies that would improve their financial performance and thereby positively influence their stock prices. The results of this study confirm the usefulness of analyzing firm-specific performance variables and can help investors make rational decisions with regard to their stock portfolios.

However, this research has certain limitations. For the selected companies from the Financial and Insurance Activities Sector, it takes into consideration only company-specific factors and ignores the impact of macroeconomic factors on stock prices. Also, the research was conducted on data obtained from financial statements, so the reliability and accuracy of those data affect the applicability of the results. Future research should take both macro- and micro-level factors into account, as well as companies from other sectors, in order to define the factors influencing stock prices.
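A minimal sketch of the clustering-and-ANOVA pipeline described above follows. The column names and the synthetic data are assumptions, and so is the linkage method ("average"), since the paper does not state which linkage was used.

```python
import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import f_oneway

cols = ["size_mcap", "book_value_ps", "roa", "trust_rate"]  # hypothetical names
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(19, 4)), columns=cols,
                  index=[f"company_{i}" for i in range(1, 20)])

# Rescale each variable to [-1, +1], as described in the paper.
x = 2 * (df - df.min()) / (df.max() - df.min()) - 1

# Agglomerative clustering on Euclidean distances, cut into 3 clusters.
z = linkage(x.values, method="average", metric="euclidean")
df["cluster"] = fcluster(z, t=3, criterion="maxclust")

# One-way ANOVA: does any variable differ significantly across clusters?
for col in cols:
    groups = [g[col].values for _, g in df.groupby("cluster")]
    f, p = f_oneway(*groups)
    print(f"{col}: F = {f:.3f}, p = {p:.4f}")
```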
Figure 1. Dendrogram of companies belonging to Sector K, Financial and Insurance Activities.

Table 1. Definition of variables used in the regression analysis.

Table 3. Unit root tests.

Notes on the regression model: F (Sig.) = 37.437 (.000); R Square = .7145, i.e., 71.45% of the variation in the market price of stocks can be explained by the variables included in this study. The p-value of the F-test is .000, indicating at the 1% level that the model is acceptable. The standardized coefficient (beta) indicates the number of standard deviations by which the stock price changes for a one-standard-deviation change in the predictor. The coefficients (p-values) are positive for book value, size (market capitalization), trust rate and return on assets: .679 (.000), .481 (.000), .181 (.002) and .156 (.047), respectively. The results indicate that the market value of a share rises with increases in book value, size, trust rate and return on assets.

Table 6. Regression analysis of stock prices of companies in the Financial and Insurance Activities Sector by year, for the period from 2008 to 2014. The model shows no serial correlation (LM test), which is desirable for its validity; it has a high coefficient of determination (R²), while the F-statistic is significant. The heteroskedasticity test indicates stable variance (based on the F-statistic and Breusch-Pagan-Godfrey), and the Durbin-Watson test indicates the absence of serial autocorrelation in the residuals. The residual diagnostic tests indicate that the multiple regression model is acceptable across the years.

Table 7. Correlation of the key variables of the cluster analysis, Financial and Insurance Activities Sector.

Table 8. Descriptive statistics for the variables included in the cluster analysis, Financial and Insurance Activities Sector. Based on the results in Table 9, it can be noticed that the companies ZIF Fima SEE Activist a.d., Beograd (cluster III), and Metalac a.d., Gornji Milanovac, and Energoprojekt garant a.d., Novi Beograd (cluster II), differ significantly from the remaining companies (cluster I) in return on assets and trust rate, while book value per share and size do not show a statistically significant difference between clusters.

Table 9. ANOVA: statistical significance of differences in the observed performances among clusters for the Financial and Insurance Activities model.
2018-12-07T08:31:36.107Z
2018-01-01T00:00:00.000
{ "year": 2018, "sha1": "270096c90a1c6a6ee62269b908f0e76f44cce685", "oa_license": "CCBYSA", "oa_url": "https://scindeks-clanci.ceon.rs/data/pdf/0350-0373/2018/0350-03731802007M.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "270096c90a1c6a6ee62269b908f0e76f44cce685", "s2fieldsofstudy": [ "Business", "Economics" ], "extfieldsofstudy": [ "Economics" ] }
229497065
pes2o/s2orc
v3-fos-license
Analysis of Ideological and Political Education in Colleges and Universities from the Perspective of New Media

With the development of educational science and technology, educational methods have diversified and educational outcomes have become increasingly significant. In the context of new media, ideological and political education in colleges and universities is taking on a brand-new appearance and moving in a better direction.

Introduction

With the development of science and technology, great changes have taken place in the media field. New media based on online media platforms have rapidly become a second media form keeping pace with traditional print media. The emergence of new media has changed the original rules by which news events occur and are reported, maximizing timeliness and authenticity. For political and ideological work in colleges and universities it has two effects. The positive one is that it allows the latest political thought to be released and transmitted in a timely way; the trickier one is that it creates new pressures and challenges for public opinion guidance at the present stage.

Highly interactive experience

A major difference between new media and traditional media lies in interactivity. With traditional media, a considerable time passes between the occurrence of a news event and its writing, publication and reporting; the audience therefore has no real-time access to the event, holds only the right to know about it, cannot effectively evaluate it, and still less participate in it. With new media, after a news event occurs it is often not the media that first transmits the information but the broad public: people post their opinions on media platforms through their mobile phones, and through these channels the event is amplified. In this process everyone participates in the evaluation at different depths, and people are able to take part in judging the truth and justice of the final outcome. The interactive difference between new media and traditional media therefore means that, in the new media era, the media no longer hold the entire right of speech about news events.

Diversified interaction of content and form

New media differ from traditional media in that they do not transmit only through print, radio and television, but report in more channels and in more dimensions. The audience can use a variety of media terminals, such as televisions, computers and smartphones, and receive information in the form of text, pictures and videos.

The current situation and problems facing ideological and political work in colleges and universities

Since the reform and opening up, China has been opening to the outside world in more and more fields and at an ever deeper level. In particular, the emergence of instant messaging software such as WeChat, QQ and Momo has had a great impact on the values of college students.
Challenges faced by ideological and political education in universities in the context of new media

New media, as a fast-updating media type, greatly facilitates people's access to information; the number of people receiving information and the speed at which it is updated keep growing. One resulting problem is that a significant number of people simply accept information blindly, without distinguishing good from bad. The most obvious example is that when people receive something they find particularly interesting, they will readily pass it on without reasoning, letting more people know about it. This habit of casual dissemination reflects a gap in people's ability to discriminate information in the new media era. University propaganda, however, still follows the traditional, rather formal pattern and cannot be changed arbitrarily; at present it is not yet adapted to the development trend of new media or to the emotional forms that trend favors. At the same time, new media has brought all kinds of ideas, and people's lifestyles, tastes and hobbies are developing in a diversified way, which poses further difficulties for the form of ideological and political education in colleges and universities.

Ways to realize ideological and political education in colleges and universities under the new media environment

At the present stage, most ideological and political education in colleges and universities is carried out through theoretical courses, such as "MAO Jie" and "Ma Zhe", the required political theory courses for college students. For such ideological and political theory courses it is important that the teacher's explanation gives students the desire to explore the underlying political thought, and that in further exploration they come to understand it and absorb it subtly. Educators should use platforms to understand students' thoughts and conscientiously solve the practical problems students encounter in their lives; what ideological work in colleges and universities must do is attract students by staying up to date, attending to their practical problems and answering them, for only then can it be truly helpful to students' thinking. School is the place where students' thought is cultivated at the present stage, and a necessary path for cultivating successors to socialism with Chinese characteristics.

Establish the concept of active intervention and build an ideological and political education platform

Because of its particularity and the characteristics of the new situation, the new media environment can directly exert a far-reaching influence on the thoughts of college students. Under the new situation, ideological and political work in colleges and universities therefore faces new challenges. Ideological education in colleges and universities should give full play to its talent advantages and use the tools of the new media era to enhance the pertinence and appeal of ideological and political education, so as to achieve comprehensive coverage and effective delivery of educational information.
Establish the concept of adjustment and adaptation, and update the content of ideological and political education for new media

Because of its virtual, networked nature, the new media environment easily forms public opinion fields in which students participate. College students tend to prefer acquiring and releasing information in these fields, whereas the dominant public opinion field generally consists of official news, whose communication effects and publicity channels are different. To meet these needs, ideological work in colleges must change its working ideas and adapt to the new situation of the new media environment: not only making corresponding changes in hardware facilities, but also strengthening the construction of Party and government education platforms on new media, ensuring that the information provided meets students' points of need and interest. Only in this way can ideological work become truly appealing and attractive, and only then will students take the initiative to accept changes in their thinking.

Conclusion

The arrival of the new media era is both an opportunity and a challenge for ideological and political education in colleges and universities. We should grasp the pulse of the development of the new media era and improve ideological and political work in colleges and universities by integrating it with the current situation.
2020-12-03T09:04:19.562Z
2020-11-10T00:00:00.000
{ "year": 2020, "sha1": "5ca832e9f78836df45e5b7af3743e90ba92b8bb0", "oa_license": "CCBYNC", "oa_url": "http://ojs.piscomed.com/index.php/L-E/article/download/1417/1295", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "b19924860ffa5d4a82e63355acfdae7525314341", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Political Science" ] }
115635497
pes2o/s2orc
v3-fos-license
Simple Design of VTOL Hexacopter for Simple Navigation

The aim of this research is to determine how to create a simple hexacopter, a flying robot using six propellers. Dynamic analysis is used to obtain the lifting force generated by each motor while the UAV performs simple maneuvers. Because every motor and propeller is of the same type, it is enough to test a single motor to determine the thrust and torque generated. The hexacopter is then flown through a predefined path, and the deviations are measured to determine the resulting error. As a result of the design, the hexacopter was successfully built from aluminum plate material and successfully flight-tested.

I. INTRODUCTION

Unmanned aerial vehicles (UAVs) have many benefits for humans. For example, a UAV can search for victims of natural disasters and perform mapping, aerial photography, reconnaissance, and delivery of goods. The hexacopter is a popular UAV because of its simple mechanism. It also has the ability to land and take off vertically (vertical take-off and landing, VTOL), to hover, and even to fly indoors easily [1], [3]. The hexacopter has advantages over conventional helicopters: it allows more payload placements that are easier to manage, and it carries a lower risk of crashing due to the failure of a single rotor [2].

II. MODELING OF HEXACOPTER

A hexacopter is a multirotor with six motors, all mounted on arms connected symmetrically to the center. At the end of each arm a propeller powered by an electric motor is mounted. All propeller blades have a fixed pitch, meaning the blade angle cannot be changed, unlike a classic helicopter whose movement is controlled by changing the pitch of the main rotor blades [3]. This makes the hexacopter mechanically simpler than a traditional helicopter.

To describe the position and orientation of the hexacopter, two coordinate frames are used: a fixed frame attached to the earth and a frame attached to the center of gravity of the hexacopter's body. The fixed (earth) frame is denoted by E = {x_E, y_E, z_E}, with x_E pointing north, y_E pointing east and z_E pointing up. The body frame is denoted by B = {x_B, y_B, z_B}, with x_B pointing forward, y_B pointing left and z_B pointing up [1]. The attitude of the hexacopter is the orientation of the body frame with respect to the earth frame. It is described by rotations about the x, y and z axes, comprising roll, pitch and yaw, and is controlled by changing the motor speeds.

Yaw is the rotation about the z-axis, denoted R(ψ, z). This rotation exploits the reaction moments generated by the propellers. For example, to yaw the hexacopter clockwise in a stable way, the three anticlockwise motors (motors 2, 4 and 6) are sped up while, at the same time, the three clockwise motors (motors 1, 3 and 5) are slowed down. The rotation matrix about the z-axis is

R(ψ, z) = [[cos ψ, −sin ψ, 0], [sin ψ, cos ψ, 0], [0, 0, 1]].  (1)

Pitch is the rotation about the y-axis, denoted R(θ, y). Pitch is obtained by decreasing/increasing the speed of motors 1 and 6 while increasing/decreasing the speed of motors 3 and 4 at the same time, which produces a torque about the y-axis. The rotation matrix about the y-axis is

R(θ, y) = [[cos θ, 0, sin θ], [0, 1, 0], [−sin θ, 0, cos θ]].  (2)

Roll is the rotation about the x-axis, denoted R(φ, x). Roll is obtained by decreasing/increasing the speed of motors 1, 2 and 3 while increasing/decreasing the speed of motors 6, 5 and 4 at the same time, which produces a moment about the x-axis. The rotation matrix about the x-axis is

R(φ, x) = [[1, 0, 0], [0, cos φ, −sin φ], [0, sin φ, cos φ]].  (3)
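The elementary rotations (1)-(3) and their composition are easy to implement directly. The following numpy sketch is an illustration, not the paper's code; the sign conventions follow the standard right-handed yaw-pitch-roll sequence assumed in the matrices above.

```python
import numpy as np

def Rz(psi):    # yaw, eq. (1)
    c, s = np.cos(psi), np.sin(psi)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def Ry(theta):  # pitch, eq. (2)
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def Rx(phi):    # roll, eq. (3)
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def R_body_to_earth(psi, theta, phi):
    # Composite transformation from the body frame to the earth frame.
    return Rz(psi) @ Ry(theta) @ Rx(phi)

R = R_body_to_earth(0.1, 0.05, 0.175)
# Orthogonality: the inverse (earth-to-body) transform is just the transpose.
assert np.allclose(R.T @ R, np.eye(3))
```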
The total matrix R_B, the transformation from the body frame to the earth frame, is obtained by multiplying R(ψ, z), R(θ, y) and R(φ, x), so that

R_B = R(ψ, z) R(θ, y) R(φ, x).  (5)

Because R_B is an orthogonal matrix, the transformation from the earth frame to the body frame is obtained from its transpose,

R_B⁻¹ = R_Bᵀ.  (6)

Using the Newton-Euler formulation, the dynamics equation can be written as

[m·I_3×3, 0_3×3; 0_3×3, I] [V̇; ω̇] + [ω × mV; ω × Iω] = [F; τ],  (7)

where τ is the moment on the hexacopter in the body frame (Nm), 0_3×3 is the 3×3 null matrix and I_3×3 is the 3×3 identity matrix. Because a cross product can be written as the product of a skew-symmetric matrix and a vector, and because the inertia tensor is a diagonal matrix, equation (7) can be expanded component-wise as in (7.a).

Gravity acts on the center of gravity of the hexacopter along the z-axis; in the body frame its contribution is the weight vector transformed by R_B⁻¹, with g the acceleration of gravity. The driving force is the total lift generated by all the propellers and is always directed along the body z-axis. In flight, this force is obtained from equation (9), T = bΩ², where b is the thrust constant (Ns²). There is also drag while the hexacopter flies, which affects the accelerations in x and y in the body frame; in flight this drag is obtained from equation (10), with μ a constant (kg/s). Air resistance is proportional to the square of the speed and to the size and shape of the object, in accordance with equation (11), where C is the (dimensionless) drag coefficient, A_i is the area exposed to the flow (m²) and ρ is the density of air (kg/m³).

A moment is a force multiplied by its distance from the axis of rotation (equations (12)). By reducing Ω₁, Ω₆ and adding Ω₃, Ω₄, a positive pitch can be obtained. Differences in the angular accelerations of the propellers generate a counter-torque about the yaw axis (yaw inertia). Combining the equations of motion yields the final equation of motion of the hexacopter.

III. DESIGN OF HEXACOPTER

The hexacopter frame consists of several parts: the center section, the arms, the legs and the battery holder. Six arms are connected symmetrically to the center. The battery holder is placed under the center section, separated by a spacer to make room for the ESC cables. The legs are made taller in order to keep the battery, motors and propellers away from objects such as grass, water, dust or other particles when taking off or landing. The center section is made of 5 mm acrylic plate, hexagonal in shape with a diameter of 190 mm, while the legs and arms are made of hollow aluminum 240 mm long with a 20×10 mm rectangular cross-section.

Because the hexacopter uses motors and propellers of a single type, it is enough to examine one motor to find the thrust force and torque generated. The thrust produced by the propeller and motor is aligned with the motor axis. To measure the thrust, the motor is placed on a digital scale with the propeller mounted upside down, so that the resulting thrust pushes the motor down. To measure the torque generated when the propeller rotates, the motor is mounted on an arm of known length; this arm presses on the digital scale when the motor rotates. The hexacopter is then flown to follow the path that has been made, and the deviation is measured as the error value.
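The static thrust/torque model implied by equation (9) and the counter-torque discussion can be sketched as follows. The constants b and d are back-computed here from the maxima reported in the conclusions (5.4 N and 0.16 Nm at 521.34 rad/s), so treat them as rough estimates; the 1.5 kg takeoff mass is a made-up figure for illustration only.

```python
import numpy as np

omega_max = 521.34                  # rad/s, maximum measured rotation speed
b = 5.4 / omega_max**2              # ~1.99e-5 N s^2 (thrust constant, eq. 9)
d = 0.16 / omega_max**2             # ~5.9e-7 N m s^2 (counter-torque factor)
# Spin directions per the text: motors 1, 3, 5 clockwise; 2, 4, 6 anticlockwise.
spin = np.array([+1, -1, +1, -1, +1, -1])

def forces(omega):
    thrust = b * np.sum(omega**2)             # total lift along z_B
    yaw_moment = d * np.sum(spin * omega**2)  # net reaction torque about z_B
    return thrust, yaw_moment

# Equal speeds: the reaction torques cancel and only lift remains.
print(forces(np.full(6, 400.0)))
# Hover speed for a hypothetical 1.5 kg frame: solve 6*b*Omega^2 = m*g.
print(np.sqrt(1.5 * 9.81 / (6 * b)), "rad/s to hover")
```

The hover speed comes out near 350 rad/s, comfortably below the measured maximum, which is consistent with the frame being flyable.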
IV. BASIC MOVEMENTS AND EXPERIMENTAL RESULTS

One way to control the hexacopter is through the propellers. Each propeller generates upward thrust by pushing air downward. Because the sources of thrust are located away from the center of mass, differential changes in lift can be used to rotate the hexacopter. The rotation of each motor also produces a reaction torque opposite to its direction of rotation; since half the propellers rotate in each direction, the sum of the moments is zero when all motors run at the same speed.

There are four basic movements: throttle, roll, pitch and yaw, all obtained by changing the propeller speeds. Linear motion (flying along the surface) is controlled through the roll and pitch angles: to fly forward, the hexacopter is tilted forward, which produces an acceleration toward the front. The main control is the throttle, used to control movement along the vertical axis of the body. Because the propeller pitch is fixed, the direction of the thrust is fixed, and the main task of the throttle is to oppose gravity. Adding or reducing throttle moves the hexacopter up or down, and if the hexacopter is tilted, the thrust moves it in the direction of the tilt.

The driving force of a multirotor comes from the lift generated by the propellers. Because the propeller pitch (angle of attack) is fixed, the lift is raised or lowered by raising or lowering the angular velocity of the propeller; and because the propellers have considerable size and mass, the motors require fairly high power (watts) to reach high angular speeds. The measurements were made with instruments of limited precision, since the tachometer used a contact system; this affects the accuracy of the measured thrust constant b (Ns²). The rotating motor also produces a moment directed opposite to the rotation of the propeller; in the hexacopter this moment is used to rotate the body about the z-axis, and the higher the motor speed, the greater the torque generated. As with the thrust measurement, the torque measurement also used a contact tachometer, so the propeller-speed readings are less accurate, which affects the measured drag factor d (Nms²).

A. Kinematics Calculation

The hexacopter flies along the trajectory shown in Figure 12: it takes off, moves up, moves forward and then moves to the right. It is assumed that the hexacopter moves slowly, with drag neglected. It flies to the right with a roll angle of 0.175 rad and an angular rate of 1.75 rad/s; the resulting linear and angular accelerations follow from the equations of motion above.

B. Flight Experiments

The hexacopter flies along the trajectory in Figure 12, with the flight height limited to 5-10 cm above floor level. It is flown from point A to point B, and after the flight the trajectory is measured and recorded (Figure 12). The RMS (root mean square) of the recorded errors is 2.35 cm, and the farthest deviation experienced is 6.30 cm. This value is quite large; it occurs because the hexacopter flies in a state that is not very stable and is difficult to control. Factors that can interfere with the stability of the hexacopter include unbalanced propellers and the non-symmetrical frame.
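The error metric quoted here is straightforward to reproduce. In the sketch below the per-sample deviations from the reference path are invented for illustration; only the metric itself follows the paper.

```python
import numpy as np

# Hypothetical lateral deviations (cm) of recorded trajectory samples
# from the reference A-to-B path.
deviation_cm = np.array([0.5, 1.8, 2.4, 3.1, 6.3, 2.2, 1.0])

rms = np.sqrt(np.mean(deviation_cm**2))
print(f"RMS error: {rms:.2f} cm, max deviation: {deviation_cm.max():.2f} cm")
```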
V. CONCLUSIONS

The hexacopter was successfully built in accordance with the design, with the specifications described earlier in this paper. The maximum lifting force that one motor can generate at the maximum rotation speed of 521.34 rad/s is 5.4 N, and the maximum moment that can be generated is 0.16 Nm. The RMS of the error produced by the hexacopter during flight can still be reduced.
2019-04-16T13:21:42.510Z
2017-03-30T00:00:00.000
{ "year": 2017, "sha1": "329b1b5452b5a5f10da11707ffa9c9162a91179d", "oa_license": "CCBY", "oa_url": "https://doi.org/10.20342/ijsmm.4.1.237", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "6585204417bdcd86aeec1fdc49d72e29faa29a2c", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Computer Science" ] }
15673177
pes2o/s2orc
v3-fos-license
Riemann-Hilbert problem and the discrete Bessel kernel

We use discrete analogs of Riemann-Hilbert problem methods to derive the discrete Bessel kernel, which describes the poissonized Plancherel measures for symmetric groups. To do this we define discrete analogs of a Riemann-Hilbert problem and of an integrable integral operator and show that computing the resolvent of a discrete integrable operator can be reduced to solving a corresponding discrete Riemann-Hilbert problem. We also give an example, explicitly solvable in terms of classical special functions, where a discrete Riemann-Hilbert problem converges in a certain scaling limit to a conventional one; the example originates from the representation theory of the infinite symmetric group.

Introduction

Let S(n) be the symmetric group of degree n. It is well known that its irreducible representations can be naturally parametrized by the partitions λ = (λ₁ ≥ λ₂ ≥ ⋯ ≥ λ_k) of the number n into natural summands: n = λ₁ + λ₂ + ⋯ + λ_k; see, e.g., [JK]. We shall denote the set of all partitions of n by P_n. For every partition λ, let dim λ be the dimension of the corresponding irreducible representation. The well-known Burnside formula for finite groups implies that

∑_{λ∈P_n} dim²λ = n!.

Thus, we can construct a probability distribution M_n on P_n by setting the weight of λ ∈ P_n equal to

M_n(λ) = dim²λ / n!.

The M_n's are called the Plancherel distributions. These distributions can be obtained from the uniform distributions on symmetric groups via the Robinson-Schensted correspondence, see, e.g., [Sch]. In particular, the distribution of the largest part λ₁ of a random partition of n under the Plancherel distribution coincides with that of the longest increasing subsequence of a uniformly distributed random permutation of degree n, see, e.g., [BDJ1, Appendix].

Let us organize a new probability distribution M_θ on the set P = ⊔_{n≥1} P_n of all partitions, depending on a positive parameter θ > 0, as follows. For λ ∈ P_n, set

M_θ(λ) = (θⁿ/n!) e^{−θ} · M_n(λ).

As was shown in [BOO], the correlation functions of a point process naturally attached to M_θ have determinantal form with a certain kernel expressed through the J-Bessel functions. This powerful fact allowed us to study asymptotic properties of the Plancherel distributions and, in particular, to prove the Baik-Deift-Johansson conjecture [BDJ1], [BDJ2], about the asymptotic behavior of the finitely many first parts of a large random partition. The same correlation kernel has also arisen in [J] in the asymptotics of orthogonal polynomial ensembles related to the Plancherel measures. See also [BOl3] for a discussion of connections between the approaches of [BOO] and [J].

The appearance of Bessel functions in [BOO] seemed rather mysterious. A nice representation-theoretic explanation was suggested in [Ok]: for a much wider class of measures on partitions it was shown that the correlation functions are given by determinantal formulas with a kernel which has a certain integral representation. In the particular case of the Plancherel distributions this approach leads to the integral representation of the Bessel functions, see [BOk, §4].

In this note we will obtain the Bessel functions in another way. We will show that the correlation kernel can be defined as a solution of a certain "discrete Riemann-Hilbert problem".
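The Burnside identity and the two families of weights above are easy to check numerically for small n. In the sketch below, dim λ is computed with the classical hook-length formula, a standard fact not stated in the text.

```python
from math import factorial, exp

def partitions(n, max_part=None):
    """Yield the partitions of n as weakly decreasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def dim(lam):
    """dim(lambda) = n! / (product of hook lengths)."""
    n = sum(lam)
    hooks = 1
    for i, row in enumerate(lam):
        for j in range(row):
            arm = row - j - 1
            leg = sum(1 for r in lam[i + 1:] if r > j)
            hooks *= arm + leg + 1
    return factorial(n) // hooks

n, theta = 5, 2.0
assert sum(dim(l)**2 for l in partitions(n)) == factorial(n)  # Burnside
M_n = {l: dim(l)**2 / factorial(n) for l in partitions(n)}    # Plancherel
M_theta = {l: exp(-theta) * theta**n / factorial(n) * M_n[l]
           for l in partitions(n)}                            # poissonized
print(M_n)
print(sum(M_theta.values()), "= e^{-theta} theta^n / n!, the mass on P_n")
```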
First, we will define discrete analogs of Riemann-Hilbert problems and integrable integral operators and show how the resolvent of a discrete integrable operator can be obtained from a solution of the corresponding discrete Riemann-Hilbert problem. The result is parallel to the known result in the continuous case, see [IIKS], [D2]. As was pointed out by P. Deift and A. Its, our setting of the discrete Riemann-Hilbert problems is similar to the pure soliton case in the inverse scattering method, see [BC], [BDT], [NMPZ, Ch. III]. Then, applying discrete analogs of standard methods from the conventional Riemann-Hilbert problem techniques, we will obtain a system of differential equations on the matrix elements of the solution of our concrete discrete Riemann-Hilbert problem, which will lead to the Bessel equation.

We will also demonstrate an explicitly solvable example of a discrete Riemann-Hilbert problem (more general than the one that arises from Plancherel measures) and a continuous Riemann-Hilbert problem such that the discrete problem converges to the continuous one in a certain scaling limit. The solutions of these problems are expressed through classical special functions. The example originates from a problem of harmonic analysis on the infinite symmetric group, see [KOV], [BOl1], [BOl2], [BOl3].

The paper is organized as follows. §1 is the introduction. In §2 we describe a problem whose solution provides the correlation kernel for the poissonized Plancherel measures (the discrete Bessel kernel). In §3 we review general facts about integrable integral operators and Riemann-Hilbert problems. In §4 we define discrete analogs of Riemann-Hilbert problems and integrable integral operators and show how computing the resolvent of a discrete integrable operator is reduced to solving a discrete Riemann-Hilbert problem. In §5 we apply the general approach discussed in §3 to the special case which is relevant for us. In §6 we do the same in the discrete situation. In §7 we solve the discrete Riemann-Hilbert problem attached to the problem of §2, using discrete analogs of the standard methods of continuous problems. In §8 we discuss the example of explicitly solvable discrete and continuous Riemann-Hilbert problems mentioned above.

In the preprint version of this paper there was no general setting for discrete Riemann-Hilbert problems and discrete integrable operators; only the special case of §6 was worked out. After the preprint appeared, Percy Deift suggested a general approach to the discrete situation, which we present in §4. I am very grateful to him for allowing me to reproduce his results in this text and for a number of valuable suggestions. I would like to thank Grigori Olshanski; without his constant support and stimulating discussions this work would never have been done. I would also like to thank Alexander Its for explaining the basics of Riemann-Hilbert problems to me and for helpful comments.

Setting of the problem

We refer the reader to [BOO] for a detailed exposition of the material of this section. Let us associate to any Young diagram λ a finite point configuration X(λ) in Z′ = Z + ½ as follows. Denote by (p₁, …, p_d | q₁, …, q_d) the Frobenius coordinates of λ and set

X(λ) = {p_i + ½ : i = 1, …, d} ∪ {−q_i − ½ : i = 1, …, d} ⊂ Z′.

We define the correlation functions ρ_k(x₁, …, x_k), k = 1, 2, …, of the measure M_θ by

ρ_k(x₁, …, x_k) = M_θ{λ : {x₁, …, x_k} ⊂ X(λ)}.

As was shown in [BOO], the correlation functions have determinantal form, with a certain kernel K(x, y) on Z′. This kernel can be defined by the formula K = L(1 + L)⁻¹, where the kernel L (also on Z′) has a particularly simple form, see [BOO, Proposition 2.3]. Namely, in the block form corresponding to the splitting of Z′ into the positive half-integers Z′₊ and the negative half-integers Z′₋, the diagonal blocks of L vanish and the off-diagonal blocks are given by an explicit elementary formula (2.1); in particular, L(x, y) = 0 if x and y are of the same sign. This fact is quite elementary. It is more difficult to obtain an explicit formula for K. In [BOO] we gave several such formulas; here is one of them (Theorem 1):

K(x, y) = √θ · (J_{x−½} J_{y+½} − J_{x+½} J_{y−½}) / (x − y),  x, y ∈ Z′,

where J_x = J_x(2√θ) is the Bessel function of order x and argument 2√θ, and the diagonal entries are determined by the L'Hospital rule. Then K = L(1 + L)⁻¹, where L is defined by (2.1). We call K(x, y) the discrete Bessel kernel.

In [BOO] we gave no explanation of why the Bessel functions appear in the picture; there we just verified that the relation K = L(1 + L)⁻¹ holds. Several such explanations exist by now. Originally, we obtained the discrete Bessel kernel as a limit of the hypergeometric kernel introduced in [BOl2], see also §8. The hypergeometric kernel describes a certain two-parameter family of measures on partitions, called z-measures, which can be degenerated to the Plancherel measures. K. Johansson independently obtained the restriction of the discrete Bessel kernel to Z′₊ as a limit of Christoffel-Darboux kernels for the Charlier polynomials [J]. A. Okounkov showed that the Plancherel measures, as well as the z-measures, fit into a much more general infinite-parameter family of measures on partitions, and he gave an integral representation for the corresponding correlation kernels [Ok]. In the particular case of the Plancherel measures one obtains the integral representation of the Bessel functions in this way, see [BOk, §4, Ex. 1].

In this note we suggest yet another way to obtain the discrete Bessel kernel. We will consider L and K as discrete analogs of integrable operators in the sense of [IIKS], see also [D2]. To the best of my knowledge, such objects have not been discussed in the literature before.
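The kernel of Theorem 1 is straightforward to evaluate numerically. The sketch below (not from the original paper) uses scipy's Bessel functions of real order; the diagonal values, which the theorem defines through the L'Hospital rule, are approximated here by a small offset in the second argument rather than by an exact derivative formula.

```python
import numpy as np
from scipy.special import jv  # Bessel function J_v(z) of real order v

def K(x, y, theta, eps=1e-6):
    """Discrete Bessel kernel of Theorem 1 at half-integer points x, y."""
    z = 2.0 * np.sqrt(theta)
    if abs(x - y) < eps:
        y = x + eps  # numerical stand-in for the L'Hospital limit
    return np.sqrt(theta) * (jv(x - 0.5, z) * jv(y + 0.5, z)
                             - jv(x + 0.5, z) * jv(y - 0.5, z)) / (x - y)

theta = 4.0
pts = np.array([0.5, 1.5, 2.5, 3.5])          # points of Z' = Z + 1/2
km = np.array([[K(a, b, theta) for b in pts] for a in pts])
print(np.round(km, 4))                         # the kernel is symmetric
print("1-point correlations:", np.round(np.diag(km), 4))
```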
It is known that if L is an integrable operator in the usual sense and (1 + L)⁻¹ exists, then the operator L(1 + L)⁻¹ is also integrable, and it can be expressed through a solution of a certain Riemann-Hilbert problem (RHP, for short) associated with L, see [IIKS], [D2], and §3. We will define a discrete analog of a Riemann-Hilbert problem, and we will show that solving a certain discrete Riemann-Hilbert problem (DRHP, for short) is equivalent to computing the resolvent of a discrete integrable operator. Furthermore, we will show how to obtain the discrete Bessel kernel by employing discrete analogs of standard methods used in handling conventional (continuous) Riemann-Hilbert problems.

Integrable operators and Riemann-Hilbert problems. General approach

This section gives a brief review of the formalism of integrable operators and the corresponding Riemann-Hilbert problems; we shall follow [D2] in our description of the material. Let Σ be an oriented contour in C. We call an operator L acting in L²(Σ, |dζ|) integrable if its kernel has the form

L(ζ, ζ′) = ∑_{j=1}^{N} f_j(ζ) g_j(ζ′) / (ζ − ζ′)

for some functions f_j, g_j, j = 1, …, N. We shall always assume that

∑_{j=1}^{N} f_j(ζ) g_j(ζ) = 0,

so that the kernel L(ζ, ζ′) is nonsingular (this assumption is not necessary for the general theory). We do not impose any restrictions on the functions f_i, g_i and on the contour Σ here; such restrictions depend on the particular problem. For example, one can demand that f_i, g_i ∈ L²(Σ, |dζ|) ∩ L^∞(Σ, |dζ|), and that the contour Σ be such that the Cauchy principal value operator H is L²-bounded.
These restrictions imply, in particular, that the operator L is a bounded operator in L²(Σ, |dζ|), see [D2]. The concept of an integrable operator was first introduced in [IIKS]. It turns out that for an integrable operator L such that (1 + L)⁻¹ exists, the operator K = L(1 + L)⁻¹ is also integrable.

Proposition 3.1 [IIKS]. Let L be an integrable operator as described above and K = L(1 + L)⁻¹. Then the kernel K(ζ, ζ′) has the form

K(ζ, ζ′) = ∑_{j=1}^{N} F_j(ζ) G_j(ζ′) / (ζ − ζ′).

A remarkable fact is that the F_j and G_j can be expressed through a solution of an associated Riemann-Hilbert problem. As we move along Σ in the positive direction, we agree that the (+)-side (respectively, (−)-side) lies to the left (respectively, right). Let v be a map from Σ to Mat(k, C), where k is a fixed integer. We shall say that a matrix function m, analytic in C∖Σ, is a solution of the RHP (Σ, v) if its boundary values satisfy m₊(ζ) = m₋(ζ)v(ζ) on Σ and m(ζ) → I as ζ → ∞.

Proposition 3.2 [IIKS]. Let L be an integrable operator as described above and let m(ζ) be a solution of the RHP (Σ, v) with jump matrix expressed through f and g (v(ζ) = I − 2πi f(ζ) g^t(ζ) in the normalization of [D2]). Then the kernel of the operator K = L(1 + L)⁻¹ has the form above with F(ζ) = m±(ζ) f(ζ) and G(ζ) = (m±^t(ζ))⁻¹ g(ζ).

Discrete integrable operators and discrete Riemann-Hilbert problems. General approach

In this section we prove discrete analogs of the two results stated in the previous section. The material of this section is due to Percy Deift. Let X be a discrete, locally finite subset of C. We call an operator L acting in ℓ²(X) integrable if its matrix has the form

L(x, x′) = ∑_{j=1}^{N} f_j(x) g_j(x′) / (x − x′),  x ≠ x′,  L(x, x) = 0,  (4.1)

for some functions f_j, g_j on X, j = 1, …, N, satisfying the relation

∑_{j=1}^{N} f_j(x) g_j(x) = 0.  (4.2)

We will assume that f_j, g_j ∈ ℓ²(X) for all j. We will also call an operator in ℓ²(X) integrable if its matrix has the form (4.1) for x ≠ x′ and has arbitrary (not necessarily zero) diagonal values. Set

f(x) = (f₁(x), …, f_N(x))^t,  g(x) = (g₁(x), …, g_N(x))^t;

then (4.2) can be rewritten as g^t(x) f(x) = 0. We will also assume that the operator

(Tu)(x) = ∑_{y∈X, y≠x} u(y) / (x − y)  (4.3)

is a bounded operator in ℓ²(X). These restrictions guarantee that L is a bounded operator in ℓ²(X); a proof of this fact is contained in the proof of Proposition 4.3 below. As in the continuous case, it turns out that if the operator (1 + L) is invertible, then K = L(1 + L)⁻¹ = 1 − (1 + L)⁻¹ is also an integrable operator, with possibly nonzero diagonal entries.

Proposition 4.1. Let L be an integrable operator as described above and K = L(1 + L)⁻¹. Then the matrix K(x, x′) has the form

K(x, x′) = ∑_{j=1}^{N} F_j(x) G_j(x′) / (x − x′),  x ≠ x′,

where F_j = (1 + L)⁻¹ f_j and G_j = (1 + Lᵗ)⁻¹ g_j, the diagonal values are determined by the relation K + LK = L, and F^t(x) G(x) = 0.

Proof. We will follow the lines of the proof of Proposition 3.1 from [KBI, XIV.1]. The defining relation K + LK = L reads

K(x, y) + ∑_{t∈X} L(x, t) K(t, y) = L(x, y).

Assume x ≠ y. Multiplying both sides by (x − y) and using the identity (x − y) = (x − t) + (t − y), we rearrange the sums; thanks to (4.2), the restriction t ≠ x in the second summation above can be removed, and we obtain

(x − y) K(x, y) = ∑_j [(1 + L)⁻¹ f_j](x) · [(1 + Lᵗ)⁻¹ g_j](y).

This proves the first claim of the proposition. Since the diagonal entries of L are zeros, the relation K + LK = L implies

K(x, x) = −∑_{t≠x} L(x, t) K(t, x),

and this proves the second claim of the proposition. The relation F^t(x) G(x) = 0 will be proved later, see (4.12).

Remark 4.2. It is not difficult to see that the claim extends to an integrable operator V with arbitrary diagonal entries bounded away from −1 and such that (1 + V) is invertible: denoting by D the diagonal part of V, one has 1 + V = (1 + D)(1 + (1 + D)⁻¹(V − D)), and it is easily seen that (1 + D)⁻¹(V − D) is again an integrable operator with zero diagonal.

Similarly to the continuous case, the F_j and G_j from Proposition 4.1 can be expressed through a solution of an associated discrete Riemann-Hilbert problem. Let w be a map from X to Mat(k, C), where k is a fixed integer. We shall say that a matrix function m : C∖X → Mat(k, C) with simple poles at the points x ∈ X is a solution of the DRHP (X, w) if the following conditions are satisfied:
• m(ζ) is analytic in C∖X;
• Res_{ζ=x} m(ζ) = lim_{ζ→x} (m(ζ) w(x)) for all x ∈ X;
• m(ζ) → I as ζ → ∞.
We will also call w(x) the jump matrix. If the set X is infinite, the last condition must be made more precise: indeed, a function with poles accumulating at infinity cannot have an asymptotics at infinity.
Similarly to the continuous case, F_j and G_j from Proposition 4.1 can be expressed through a solution of an associated discrete Riemann-Hilbert problem. Let w be a map from X to Mat(k, C), where k is a fixed integer. We shall say that a matrix function m : C \ X → Mat(k, C) with simple poles at the points x ∈ X is a solution of the DRHP (X, w) if the following conditions are satisfied:

• Res_{ζ=x} m(ζ) = lim_{ζ→x} (m(ζ) w(x)) for all x ∈ X;
• m(ζ) → I as ζ → ∞.

We will also call w(x) the jump matrix. If the set X is infinite, the last condition must be made more precise. Indeed, a function with poles accumulating at infinity cannot have asymptotics at infinity. One way to make the condition precise is to require the uniform asymptotics on a sequence of expanding contours, for example, on a sequence of circles |ζ| = a_k, a_k → +∞. In order to guarantee the uniqueness of solutions of the DRHPs considered below we will always assume that there exists a sequence of expanding contours such that the distance from these contours to the set X is bounded from zero, and we will require a solution m(ζ) to have the proper asymptotic behavior on these contours. The setting of the DRHP above is very similar to the pure soliton case in the inverse scattering method, see [BC], [BDT], [NMPZ, Ch. III].

Proposition 4.3. Let L be an integrable operator as described above and let m(ζ) be a solution of the DRHP (X, w) with

w(x) = −f(x) g^t(x).

Then the matrix K = L(1 + L)^{-1} has the form of Proposition 4.1 with

F(x) = m̂(x) f(x),   G(x) = (m̂^t(x))^{-1} g(x),

where m̂(x) denotes the value at ζ = x of the function m(ζ)(I + w(x)/(ζ − x))^{-1}, which is analytic near x by the following lemma.

Lemma 4.4. If m(ζ) is a solution of the DRHP (X, w), and w^2(x) = 0 for some x ∈ X, then the function m(ζ)(I + w(x)/(ζ − x))^{-1} is analytic in a neighborhood of x.

Proof. In a neighborhood of x we have (m(ζ) has a simple pole at x)

m(ζ) = A_x/(ζ − x) + B_x + O(ζ − x),

where A_x and B_x are constant matrices. The residue condition of the DRHP implies that A_x w(x) = 0 and A_x = B_x w(x). Since w^2(x) = 0, we have (I + w(x)/(ζ − x))^{-1} = I − w(x)/(ζ − x), and

m(ζ)(I − w(x)/(ζ − x)) = −A_x w(x)/(ζ − x)^2 + (A_x − B_x w(x))/(ζ − x) + O(1) = O(1),

which is analytic near x.

Note that the jump matrix w(x) = −f(x) g^t(x) satisfies the condition w^2(x) = 0 at every point x ∈ X because of (4.2).

Lemma 4.5. If m_1(ζ) and m_2(ζ) are solutions of DRHPs with the same jump matrix w(x), w^2(x) = 0 for all x ∈ X, and m_2(ζ) is invertible in C \ X, then m_1(ζ) m_2^{-1}(ζ) is analytic in the whole complex plane.

Proof. Since w^2(x) = 0, the determinant of the matrix I + w(x)/(ζ − x) is identically equal to 1. In particular, this matrix is invertible, and by Lemma 4.4 near every x ∈ X we can write m_i(ζ) = a_i(ζ)(I + w(x)/(ζ − x)) with a_i(ζ) analytic; hence m_1 m_2^{-1} = a_1 a_2^{-1} has no singularity at x.

Corollary 4.6. A DRHP with the jump matrix of the form w(x), w^2(x) = 0 for all x ∈ X, and arbitrary invertible asymptotics at infinity, any solution of which is invertible in C \ X, has at most one solution.

Lemma 4.7. If m(ζ) is a solution of a DRHP with the jump matrix w(x), w^2(x) = 0 for all x ∈ X, and asymptotics I at infinity, then det m(ζ) ≡ 1. Hence, such a DRHP has at most one solution. (Indeed, det m(ζ) = det(m(ζ)(I + w(x)/(ζ − x))^{-1}) is analytic near every x ∈ X by Lemma 4.4, hence entire, and it tends to 1 at infinity.)

Proof of Proposition 4.3. The proof is similar to that of Proposition 3.1 from [D2]. It is based on the following commutation formula: if B_1 and B_2 are Banach spaces, then for any bounded operators D : B_1 → B_2 and E : B_2 → B_1,

(DE + λ)^{-1} = λ^{-1} (1 − D (ED + λ)^{-1} E),

in the sense that if −λ ≠ 0 lies in the resolvent set of ED then −λ lies in the resolvent set of DE and the formula above holds, see e.g. [Sak]. This simple result turns out to have a large number of applications in mathematical physics, see [D1].

Let R_f denote the map of right multiplication by the column N-vector f taking row N-vector functions to scalar functions:

(R_f v)(x) = v(x) f(x),

and let R_{g^t} denote the map of right multiplication by the row N-vector g^t taking scalar functions to row N-vector functions:

(R_{g^t} u)(x) = u(x) g^t(x).

By applying R_f to an N × N matrix valued function we will mean the application of R_f to every row of the matrix, getting a column N-vector function as the result. Similarly, applying R_{g^t} to a column N-vector function means the application of R_{g^t} to every coordinate of the vector, getting an N × N matrix valued function as the result.

Now observe that the operator L is of the form L = DE, where D = R_f, E = T R_{g^t}, and T was introduced in (4.3). The operator D maps the space B_1 of row N-vector functions on X with coordinates from ℓ^2(X) to B_2 = ℓ^2(X), and E maps B_2 to B_1. This implies that L is a bounded operator in ℓ^2(X) if the functions f_j, g_j are bounded and the operator T is bounded. Note that for the boundedness of L, instead of the boundedness of T, we could require the boundedness of E = T R_{g^t}.
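The commutation formula quoted in the proof above is easy to verify numerically; the following sketch uses arbitrary random rectangular matrices for D and E and an arbitrary test value of λ.

```python
# Quick finite-dimensional check of (DE + λ)^{-1} = λ^{-1}(1 - D(ED + λ)^{-1}E).
import numpy as np

rng = np.random.default_rng(1)
D = rng.normal(size=(5, 3))   # D : B1 -> B2  (dim B1 = 3, dim B2 = 5)
E = rng.normal(size=(3, 5))   # E : B2 -> B1
lam = 0.7

lhs = np.linalg.inv(D @ E + lam * np.eye(5))
rhs = (np.eye(5) - D @ np.linalg.inv(E @ D + lam * np.eye(3)) @ E) / lam
print(np.allclose(lhs, rhs))  # True
```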
Integrable operators and Riemann-Hilbert problems. Special case

In this section we apply the general formalism of §3 to a much more special situation. Let Σ = Σ_I ∪ Σ_II be a union of two contours, and assume that the operator L in the block form corresponding to this splitting is as follows:

L = | 0                        h_I(x) h_II(y)/(x − y) |
    | h_II(x) h_I(y)/(x − y)   0                      |   (5.1)

for some functions h_I( · ) and h_II( · ) defined on Σ_I and Σ_II, respectively. Then the operator L is integrable with N = 2. Indeed, it suffices to take

f_1 = g_2 = (h_I on Σ_I, 0 on Σ_II),   f_2 = g_1 = (0 on Σ_I, h_II on Σ_II).

Then the jump matrix v(x) of the corresponding RHP has the form

v(x) = ( 1, −2πi h_I^2(x) ; 0, 1 ) for x ∈ Σ_I,   v(x) = ( 1, 0 ; −2πi h_II^2(x), 1 ) for x ∈ Σ_II.

It can be easily seen that the RHP in such a situation is equivalent to the following set of conditions:

• matrix elements m11 and m21 are holomorphic in C \ Σ_II;
• matrix elements m12 and m22 are holomorphic in C \ Σ_I;
• on Σ_I the following relations hold: (m12)_+ − (m12)_− = −2πi h_I^2 m11, (m22)_+ − (m22)_− = −2πi h_I^2 m21;
• on Σ_II the following relations hold: (m11)_+ − (m11)_− = −2πi h_II^2 m12, (m21)_+ − (m21)_− = −2πi h_II^2 m22;
• m(ζ) → I as ζ → ∞.

According to Proposition 3.2, the kernel K(x, y) in the block form corresponding to the splitting Σ = Σ_I ∪ Σ_II looks as follows:

K_{I,I}(x, y) = h_I(x) h_I(y) (m21(x) m11(y) − m11(x) m21(y))/(x − y),
K_{I,II}(x, y) = h_I(x) h_II(y) (m11(x) m22(y) − m21(x) m12(y))/(x − y),
K_{II,I}(x, y) = h_II(x) h_I(y) (m22(x) m11(y) − m12(x) m21(y))/(x − y),
K_{II,II}(x, y) = h_II(x) h_II(y) (m12(x) m22(y) − m22(x) m12(y))/(x − y).

Discrete integrable operators and discrete Riemann-Hilbert problems. Special case

Similarly to §5, we apply the general approach of §4 to a special case. Let X be a discrete locally finite subset of C and let X = X_I ⊔ X_II be its splitting into two disjoint parts. Let h_I( · ), h_II( · ) be two functions defined on X_I and X_II, respectively. We will assume that h_I ∈ ℓ^2(X_I), h_II ∈ ℓ^2(X_II). Consider a matrix L of size X × X which in the block form corresponding to the splitting X = X_I ⊔ X_II looks as follows, cf. (5.1):

L = | 0                        h_I(x) h_II(y)/(x − y) |
    | h_II(x) h_I(y)/(x − y)   0                      |.

This matrix defines a bounded operator in ℓ^2(X) if, for example, the operator T defined in (4.3) is ℓ^2-bounded. Let us assume that the operator (1 + L) is invertible. This is automatically true if h_I and h_II are real valued: then L^* = −L, and −1 cannot belong to the spectrum of L. As in §5, the operator L is integrable with N = 2, with f and g as above. The jump matrix w(x) = −f(x) g^t(x) of the corresponding DRHP has the form

w(x) = ( 0, −h_I^2(x) ; 0, 0 ) for x ∈ X_I,   w(x) = ( 0, 0 ; −h_II^2(x), 0 ) for x ∈ X_II.

It is readily seen that the DRHP is equivalent to the following conditions:

• m11 and m21 have simple poles at the points of X_II, and for x ∈ X_II: Res_{u=x} m11(u) = −h_II^2(x) m12(x), Res_{u=x} m21(u) = −h_II^2(x) m22(x);
• m12 and m22 have simple poles at the points of X_I, and for x ∈ X_I: Res_{u=x} m12(u) = −h_I^2(x) m11(x), Res_{u=x} m22(u) = −h_I^2(x) m21(x);
• m(u) ∼ I as u → ∞.

Let us indicate how this setting of the problem can be obtained from the continuous case. A continuous Riemann-Hilbert problem is equivalent to a system of integral equations. For the special type of Riemann-Hilbert problems described in §5 the system takes the form

m12(y) = − ∫_{Σ_I} h_I^2(x) m11(x) dx/(y − x),   m22(y) = 1 − ∫_{Σ_I} h_I^2(x) m21(x) dx/(y − x),
m11(y) = 1 − ∫_{Σ_II} h_II^2(x) m12(x) dx/(y − x),   m21(y) = − ∫_{Σ_II} h_II^2(x) m22(x) dx/(y − x).

For the first two equations y belongs to C \ Σ_I, and for the last two y belongs to C \ Σ_II. A natural discrete analog of this system is the following one, which is equivalent to our DRHP:

m12(u) = − Σ_{x ∈ X_I} h_I^2(x) m11(x)/(u − x),   m22(u) = 1 − Σ_{x ∈ X_I} h_I^2(x) m21(x)/(u − x),
m11(u) = 1 − Σ_{x ∈ X_II} h_II^2(x) m12(x)/(u − x),   m21(u) = − Σ_{x ∈ X_II} h_II^2(x) m22(x)/(u − x).

This analogy suggests that continuous RHPs can be obtained as limits of DRHPs when X "approximates" Σ, or, vice versa, DRHPs can be obtained as limits of RHPs when the contour Σ splits into increasingly small parts passing through the points of X (the last observation is due to P. Deift). In §8 we will provide an explicit example of a continuous RHP and a DRHP such that the discrete problem converges to the continuous one in a certain limit.

Proposition 4.3 in our special situation takes the following form, cf. §5.

Proposition 6.1. Let m be a solution of the DRHP stated above. Then the matrix K = L(1 + L)^{-1} has the form (with respect to the splitting X = X_I ⊔ X_II)

K_{I,I}(x, y) = h_I(x) h_I(y) (m21(x) m11(y) − m11(x) m21(y))/(x − y),
K_{I,II}(x, y) = h_I(x) h_II(y) (m11(x) m22(y) − m21(x) m12(y))/(x − y),
K_{II,I}(x, y) = h_II(x) h_I(y) (m22(x) m11(y) − m12(x) m21(y))/(x − y),
K_{II,II}(x, y) = h_II(x) h_II(y) (m12(x) m22(y) − m22(x) m12(y))/(x − y),

where the entries of m appearing in each formula are regular at the corresponding points, and the indeterminacies of type 0/0 on the diagonal are removed by the L'Hospital rule:

K(x, x) = h_I^2(x) (m21'(x) m11(x) − m11'(x) m21(x)),  x ∈ X_I,
K(x, x) = h_II^2(x) (m12'(x) m22(x) − m22'(x) m12(x)),  x ∈ X_II.

Proof. A direct application of Proposition 4.3. Note that det m(ζ) ≡ 1 by Lemma 4.7, and hence

(m^t)^{-1} = ( m22, −m21 ; −m12, m11 ).
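The invertibility claim for real valued h_I, h_II is easy to see in finite dimensions: the matrix above is then real antisymmetric, so its eigenvalues are purely imaginary and every singular value of 1 + L is at least 1. A minimal sketch with arbitrary test data:

```python
# With real h_I, h_II the special-case matrix is antisymmetric, so 1 + L is invertible.
import numpy as np

rng = np.random.default_rng(2)
xI, xII = np.arange(4) + 0.5, -(np.arange(4) + 0.5)   # a finite X_I, X_II
X = np.concatenate([xI, xII])
h = np.concatenate([rng.normal(size=4), rng.normal(size=4)])
part = np.array([1] * 4 + [-1] * 4)                    # block membership

L = np.zeros((8, 8))
for i in range(8):
    for j in range(8):
        if part[i] != part[j]:                         # off-diagonal blocks only
            L[i, j] = h[i] * h[j] / (X[i] - X[j])

print(np.allclose(L.T, -L))                                            # L* = -L
print(np.linalg.svd(np.eye(8) + L, compute_uv=False).min() > 1 - 1e-9) # 1+L invertible
```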
The discrete Bessel kernel

Now we return to the problem stated in §2. As was explained in the previous section, the kernel K = L(1 + L)^{-1}, where L is given by (2.1), can be expressed via the solution of the DRHP of the special form discussed above with

X = Z',  X_I = Z'_+,  X_II = Z'_-,   h_I(x) = θ^{x/2}/Γ(x + 1/2),   h_II(x) = θ^{-x/2}/Γ(−x + 1/2).

It will be more convenient for us to use the parameter η = √θ instead of θ; then

h_I(x) = η^x/Γ(x + 1/2),   h_II(x) = η^{-x}/Γ(−x + 1/2).   (7.1)

One way to extract the information about a solution of a RHP is to reduce the problem to a RHP with a constant (or not depending on a certain parameter) jump matrix. Then one can compare the solution with its derivatives with respect to the complex variable or with respect to a parameter, see, e.g., [KBI, Ch. XV]. We will employ this approach for our DRHP.

The asymptotic condition of the DRHP requires that m(u) ∼ I as u → ∞. Assume now that the next terms of the asymptotic expansion of the solution m(u) at infinity are given by the relation

m(u) = I + (1/u) ( α, β ; γ, δ ) + O(u^{-2}).   (7.2)

Here α, β, γ, δ are some functions of η. The asymptotics here and below is understood in the sense that |u| → ∞ so that dist(u, X) is bounded from zero. Note that the DRHP has an obvious symmetry: since X_I = −X_II and h_I(x) = h_II(−x), the change

m(u) ↦ σ m(−u) σ^{-1},   σ = ( 0, 1 ; −1, 0 ),

does not affect the problem (the minus signs in the off-diagonal blocks appeared because of the relation Res_{u=x} f(u) = −Res_{u=−x} f(−u)). This immediately implies that α = −δ, β = γ, so that

m(u) = I + (1/u) ( α, β ; β, −α ) + O(u^{-2}).   (7.3)

Consider a new matrix

n(u) = m(u) η^{u σ_3},   σ_3 = diag(1, −1).

Observe that n(u) is invertible for u ∈ C \ X, because m(u) is invertible by Lemma 4.7. If m(u) is the solution of the DRHP with h's given by (7.1) and asymptotic behavior given by (7.3), then n(u) is the solution of the DRHP with

Res_{u=x} n_{j2}(u) = −n_{j1}(x)/Γ(x + 1/2)^2, x ∈ X_I;   Res_{u=x} n_{j1}(u) = −n_{j2}(x)/Γ(−x + 1/2)^2, x ∈ X_II,   (7.4)

and asymptotics

n(u) = ( I + (1/u) ( α, β ; β, −α ) + O(u^{-2}) ) η^{u σ_3}.   (7.5)

Note now that the jump matrix data of the new problem given by (7.4) do not depend on η, so that the derivative ∂n/∂η = ∂_η n satisfies the same residue conditions with possibly different asymptotics. By Lemma 4.5, the matrix (∂_η n) n^{-1} has no singularities in C. The asymptotics of (∂_η n) n^{-1} can be easily computed from (7.5):

(∂_η n)(u) n^{-1}(u) = (u/η) σ_3 + (2β/η) ( 0, −1 ; 1, 0 ) + O(u^{-1}).

Note that α has disappeared from the asymptotics. By the Liouville theorem we conclude that

(∂_η n)(u) = [ (u/η) σ_3 + (2β/η) ( 0, −1 ; 1, 0 ) ] n(u).   (7.6)

This relation already provides certain information about n. Indeed, it easily implies that the matrix elements of n satisfy second order linear differential equations in η. However, the coefficients of this equation are still unknown; they are expressed in terms of the function β(η).

The second trick which is commonly used in conventional RHPs in such a situation is differentiation with respect to the variable, if the jump matrix does not depend on the variable. Unfortunately, we cannot reduce our DRHP to one with jump matrix not depending on u. However, this obstacle can be overcome in the following way. Introduce a new matrix

p(u) = n(u + 1) n^{-1}(u).   (7.7)

If m(u) satisfies the original DRHP, it can be easily seen that p(u) is holomorphic in C. The residue condition of the DRHP and the asymptotics (7.5), rewritten in terms of p(u), then control the behavior of p(u) at the points of X and at infinity. Note that the relation (7.7) implies that the matrix p(x) is degenerate at the points x ∈ X. Solving these conditions leads to an explicit formula for m(u) whose matrix elements are expressed through the Bessel functions J_ν(2η). It is easily verified that this matrix satisfies the original DRHP (the verification uses the symmetry relation J_{−n} = (−1)^n J_n, n ∈ Z, for the Bessel functions). Now it is immediately seen that the kernel K(x, y) of Proposition 6.1 constructed from the matrix m above coincides with the discrete Bessel kernel (2.2), and Theorem 2.1 is proved.

It is worth noting what happens if we choose β = η above. Then we get another explicit matrix, and this matrix does not satisfy the DRHP. However, it satisfies the problem up to the change of sign in the residue condition. This change of sign is equivalent to the multiplication of h_I and h_II by √−1, or to the change of sign of L. Thus, the kernel K constructed from the matrix m is connected with L by the relation K = L(L − 1)^{-1}.
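The Bessel function symmetry invoked in the verification above is a classical identity; the following one-liner confirms it numerically for an arbitrary value of η.

```python
# Check of the symmetry J_{-n}(z) = (-1)^n J_n(z), n in Z, used in the verification above.
from scipy.special import jv

z = 2 * 1.3   # argument 2*eta for an arbitrary eta
print(all(abs(jv(-n, z) - (-1) ** n * jv(n, z)) < 1e-12 for n in range(8)))
```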
Z-measures

It seems that the DRHP considered in §7 does not admit a scaling limit transition to a continuous RHP. The purpose of this section is to provide another DRHP for which such a limit exists. This DRHP and its scaling limit describe the so-called z-measures on partitions and the spectral decomposition of the generalized regular representations of the infinite symmetric group. We refer the reader to the papers [KOV], [BOl1], [BOl2], [BOl3], where this material is thoroughly explained.

We start with the description of the DRHP. As in §7, we take X = Z', X_I = Z'_+, X_II = Z'_-. The functions h_I, h_II (we add the superscript 'd' to the notation for the discrete problem and the superscript 'c' to the notation for the continuous problem) are built out of the Pochhammer symbols in z and z' and powers of ξ. Here z and z' are two complex parameters such that (z + k)(z' + k) > 0 for all k ∈ Z; ξ ∈ (0, 1) is also a parameter; (a)_k = Γ(a + k)/Γ(a) is the Pochhammer symbol. Note that the limit ξ → 0, z, z' → ∞, √(z z' ξ) → η brings us to the DRHP considered in §7.

The continuous RHP in the notation of §5 is posed on Σ_I = R_{>0} and Σ_II = R_{<0}, with functions h_I^c, h_II^c which are the natural continuous counterparts of h_I^d, h_II^d. It is not difficult to see that h^d, evaluated at the lattice point [y], approximates h^c((1 − ξ)[y]) as ξ → 1; here [y] denotes the integer which is closest to y. In such a situation it is natural to say that the DRHP approximates the continuous RHP as ξ → 1.

The solution for the DRHP was obtained in [BOl2]; its matrix elements are expressed through the Gauss hypergeometric function F(a, b; c; v). The solution for the continuous RHP was computed in a different language in [B1], [B2], see also [BOl1]; its matrix elements are expressed through the Whittaker function W_{κ,μ}(v). The correlation kernels K corresponding to m^d(u) and m^c(u) are called the hypergeometric kernel and the Whittaker kernel, respectively. The convergence of the DRHP to the continuous RHP is established immediately using the classical relation expressing the Whittaker function as a degeneration of the Gauss hypergeometric function. We have

lim_{ξ→1} m^d(u/(1 − ξ)) = m^c(u),

see [BOl2], [BOl3] for an explanation of the representation theoretic meaning of this limit transition.

It is worth noting that for the discrete problem described in this section the relation K = L(1 + L)^{-1} holds for all values of the parameters, and L and K are bounded operators in ℓ^2(Z'). However, in the continuous case the operator L becomes unbounded if |ℜ(z + z')| ≥ 1, and one should be careful to define (1 + L)^{-1}. The spectral analysis of the kernels L and K in the continuous case has been done in
Beyond epithelial damage: vascular and endothelial contributions to idiopathic pulmonary fibrosis

Idiopathic pulmonary fibrosis (IPF) is a progressive scarring disease of the lung with poor survival. The incidence and mortality of IPF are rising, but treatment remains limited. Currently, two drugs can slow the scarring process but often at the expense of intolerable side effects, and without substantially changing overall survival. A better understanding of mechanisms underlying IPF is likely to lead to improved therapies. The current paradigm proposes that repetitive alveolar epithelial injury from noxious stimuli in a genetically primed individual is followed by abnormal wound healing, including aberrant activity of extracellular matrix-secreting cells, with resultant tissue fibrosis and parenchymal damage. However, this may underplay the importance of the vascular contribution to fibrogenesis. The lungs receive 100% of the cardiac output, and vascular abnormalities in IPF include (a) heterogeneous vessel formation throughout fibrotic lung, including the development of abnormal dilated vessels and anastomoses; (b) abnormal spatially distributed populations of endothelial cells (ECs); (c) dysregulation of endothelial protective pathways such as prostacyclin signaling; and (d) an increased frequency of common vascular and metabolic comorbidities. Here, we propose that vascular and EC abnormalities are both causal and consequential in the pathobiology of IPF and that fuller evaluation of dysregulated pathways may lead to effective therapies and a cure for this devastating disease.

Introduction

Idiopathic pulmonary fibrosis (IPF) (1) is a progressive fibrotic lung disease with a poor prognosis (2). As fibrosis progresses there is worsening gas exchange with consequent increasing dyspnea, development of respiratory failure, and ultimately death. The adjusted incidence and prevalence of IPF globally are estimated to be between 0.09 and 1.30 and between 0.33 and 4.51 per 10,000 persons, respectively (3), and the incidence is rising (4). The poor prognosis of IPF equates to a median survival of just 3 to 4 years, although survival may improve with antifibrotic therapies (5,6). At present, there are only two licensed therapies, pirfenidone and nintedanib, that slow the rate of decline in forced vital capacity (FVC) and increase progression-free survival (7,8).
The pathogenesis of IPF is considered to result from injury to the alveolar epithelium by a range of insults including cigarette smoke, gastric acid, air pollution, and viruses in a genetically primed individual, driving aberrant activity of mesenchymal cell populations (9). Subsequent production of extracellular matrix (ECM) components including collagens and fibronectin results in interstitial fibrosis, alveolar collapse, and loss of effective lung tissue (10). Many of the genes that confer increased risk of disease development are associated with alveolar epithelial cells (11). Similarly, telomere shortening in alveolar epithelial cells is sufficient to promote the development of pulmonary fibrosis in mice (12). Furthermore, biomarkers that reflect epithelial injury are associated with poorer outcomes in patients with IPF (13). Because of the intimate anatomical proximity of the alveolar epithelium and endothelium, injury to the alveoli will also result in vascular damage. Indeed, Margaret Turner-Warwick highlighted the importance of the vasculature over 50 years ago and first described the development of digital clubbing, an important vascular phenomenon associated with IPF (14,15). Furthermore, one of the major targets of nintedanib is the receptor for VEGF, a potently angiogenic molecule. Consistently, recent trials have identified potential antifibrotic effects of treprostinil, which mimics the cardioprotective hormone prostacyclin, in IPF (16). Finally, the importance of vascular comorbidities and their treatment in pulmonary fibrosis is emerging (17-19). It is, therefore, likely that the pulmonary vasculature may have a prominent role in the pathogenesis of IPF and the systemic vasculature in cardiovascular comorbidities.

The vasculature in IPF

Vascular abnormalities in IPF include the development of vascular connections known as pulmonary-bronchial artery anastomoses and are accompanied by abnormal populations of endothelial cells (ECs) and vascular comorbidities such as pulmonary hypertension (PH) and coronary artery disease (20) (Figure 1). However, targeting vascular-relevant pathways has yielded conflicting results (1,21,22) (Table 1). Whether vascular abnormalities promote IPF or are a consequence of progressive fibrosis remains controversial. This Review summarizes the current understanding of the vasculature in IPF and provides support for the concept that vascular injury promotes fibrosis.

Cellular mechanisms

Vascular remodeling. Margaret Turner-Warwick described the expansion of the vasculature with numerous pulmonary-bronchial arterial anastomoses in diffuse pulmonary fibrosis as early as 1963 (23) (Figure 1). These abnormally dilated large vessels are found in the periphery of honeycomb cysts. However, there is marked heterogeneity in fibrotic tissue, and, in particular, areas of minimal fibrosis have been shown to have dense capillary networks whereas the most scarred tissue, rich in fibroblastic foci, almost completely lacks vasculature (Figure 1) (24). These vasculature changes correspond with a relevant growth factor gradient, with VEGF having almost undetectable levels in the regions of worst scarring (25) and showing higher concentrations in areas of relatively preserved lung tissue (26). In some animal models, inhibition of VEGF protects against bleomycin-induced lung injury (27). Conversely, and somewhat contradictorily, VEGF overexpression in transgenic mice attenuated the fibrotic effect of bleomycin, and in vitro application of VEGF to apoptotic ECs prevented an injurious signal to epithelial cells (28).
It is, therefore, unclear whether the VEGF-inhibiting effects of nintedanib are undermining or augmenting its antifibrotic effects. In advanced disease states, the overall effect is a net reduction in pulmonary vascular surface area, which, in part, explains the increased frequency of PH. This loss of pulmonary vascular surface area is associated with distinct cellular changes in ECs, vascular smooth muscle cells (VSMCs), and pericytes, all of which may directly contribute to the pathogenesis of IPF (Figure 2).

Endothelial cells. Alveolar injury is likely to directly injure ECs (Figure 2A). ECs share a basement membrane with the alveolar epithelial cells and are key components of the alveolus, comprising approximately 30% of pulmonary cells in normal lung (29). Bleomycin studies modeling lung injury in mice have demonstrated that ECs upregulate profibrotic molecules such as plasminogen activator inhibitor-1 (PAI-1), TGF-β, and PDGF and lose their ability to generate nitric oxide synthase (NOS) and prostacyclin (30). In IPF, there is evidence that epithelial cell injury results in release of active TGF-β, which can activate ECs, causing an imbalance of angiostatic and angiogenic mediators including VEGF, resulting in abnormal EC proliferation and apoptosis (31) (Figure 2).

In IPF, an abnormal, ectopic population of COL15A1-expressing ECs has been identified using single-cell RNA sequencing (32). These COL15A1 cells are normally confined to the peribronchial and subpleural regions of large airways but are never embedded within parenchyma in healthy states. However, they are found in larger numbers in more distal lung tissue and especially areas of fibrosis in patients with IPF. Antibodies targeting ECs that can induce microvascular injury and accelerate EC necrosis have also been identified in the sera of patients with IPF (33). Similarly, circulating ECs, serving as markers of EC damage, are found in higher concentrations in the sera of patients with IPF, while myeloid progenitor cells are found at lower concentration (34).

Numerous animal models have shed light on the role of the endothelium in pulmonary fibrosis. In one mouse model, pulmonary capillary ECs have been shown to support epithelial cell proliferation and alveolar regeneration through the secretion of factors including MMP-14, a process that is dependent on the activation of VEGF receptors present on ECs (35). Bleomycin-induced lung injury downregulates protective EC signals and recruits profibrotic macrophages that signal through ECs via the Wnt/β-catenin pathway to secrete profibrotic factors including Jag1 (36). These factors activate Notch signaling in neighboring fibroblasts and drive fibrogenesis (36). In similar models, EC autocrine signaling, through sphingosine-1-phosphate (S1P1) GPCRs, is critical in maintaining endothelial barrier integrity and tight junction formation. Disrupted barrier integrity results in increased vascular permeability, exuberant coagulation, and worsened fibrosis following bleomycin administration (37). The endothelial transcription factor ETS-related gene (ERG) is implicated in the generation of regenerative capillary ECs following injury. In bleomycin models, there is evidence that the dysregulated ERG pathway in aging cells is associated with accelerated fibrosis (38). Abnormal ERG signaling is additionally associated with endothelial paracrine signaling through the secretion of connective tissue growth factor (CTGF) that enhances the activity of neighboring fibroblasts that drive tissue remodeling.

ECs may also be directly involved in fibrogenesis through endothelial-mesenchymal transition (EndMT), whereby they develop a contractile, α-smooth muscle actin-expressing phenotype (39). This effect can be induced by the profibrotic mediator TGF-β and is exaggerated when the cells are exposed to hypoxic conditions with consequent induction of HIF-2α (40). EndMT has been further evaluated in mouse models of fibrosis. In a transgenic model that tracked cells following bleomycin injury, accumulated cells identified as having endothelial origin contributed to nearly 20% of the total cell population in areas of active fibrosis (41). A proportion of these cells, 16%, coexpressed collagen I, suggesting they were in a state of transition to a myofibroblast-like cell. Another study has shown that EndMT is dependent on the binding of sterol regulatory element-binding protein 2 (SREBP2), a key protein in cholesterol homeostasis, to specific promoter regions (42). Notably, SREBP2 is upregulated in ECs from bleomycin and IPF models. These data suggest that following lung injury, damaged ECs secrete profibrotic mediators and are capable of migrating to the parenchymal regions of lung and transdifferentiating into pathogenic myofibroblasts. This model implies that ECs play a fundamental role in pathogenesis, serving as much more than mere bystander cells.

Vascular smooth muscle cells. VSMCs are highly contractile cells that constitute the structural form of blood vessels. There is abnormal proliferation and distribution of smooth muscle cells in IPF tissue (43), and under the influence of the tissue microenvironment and growth factors including PDGF and TGF-β, VSMCs may develop a synthetic phenotype capable of producing multiple components of the blood vessel wall, including collagen (44). In vitro studies show upregulation of markers representing a switch from contractile to synthetic phenotype in response to TGF-β stimulation (45). In addition, TGF-β from damaged epithelium and PDGF released from apoptotic ECs stimulate the proliferation of VSMCs (as well as fibroblasts). The cell expansion thickens the intimal layer of the pulmonary artery and arterioles, generating resistance and subsequent PH. VSMC proliferation is additionally driven by other mediators, including CTGF, as demonstrated in bleomycin models (46).

VSMCs isolated from patients with IPF show a hyperproliferative state and produce more collagen I via induction of reactive oxygen species compared with cells from control donors. This effect can be blunted by the antifibrotic therapeutic pirfenidone (47). The proliferative, contractile, and synthetic properties of VSMCs are dysregulated in IPF and, therefore, have the potential to contribute to parenchymal fibrosis as well as associated PH. Adenoviral overexpression of TGF-β1 in the lungs of mice leads to signaling crosstalk between ECs, VSMCs, and fibroblasts, which not only results in increased rates of apoptosis of ECs but also induces activation and proliferation of VSMCs. Importantly, defective bone morphogenetic protein receptor 2 (BMPR2) signaling has been implicated in VSMC activation and proliferation (48). In particular, restoration of the BMPR2 pathway attenuates fibrosis and reduces VSMC activation and proliferation. Further, genetic defects in BMPR2 are well recognized in pulmonary arterial hypertension (PAH) and are present in IPF, supporting a link between the two disease processes.
Pericytes. Pericytes are mesenchymal cells, closely related to fibroblasts, that form extensive physical contacts with ECs (e.g., within capillaries) with which they share a basement membrane (49). Pericytes are activated by cyclical mechanical stretch and are fundamental to normal alveolar development (50). Lineage tracing in bleomycin lung injury models demonstrated that this cell population may contribute substantially to the myofibroblast pool in pulmonary fibrosis (51). In particular, there is evidence that signaling disruption in the Wnt pathway occurs simultaneously in ECs and pericytes (52). In IPF, the changes in pulmonary mechanobiology and the phenotypic shift of ECs and VSMCs may have a profound profibrotic effect on pericytes, leading to a rapid expansion of the potently fibrogenic myofibroblasts.

The alveolar capillary basement membrane. The alveolar epithelial-capillary basement membrane (BM) is a vital scaffold structure that supports normal repair of the alveolar parenchyma, and loss of BM integrity has been known for many years to be a fundamental pathogenic change in IPF (53). Loss of the BM permits direct alveolar epithelial-mesenchymal cell interaction and promotes TGF-β activation and myofibroblast activation (54). Collagen IV is a major component of the BM, and is the main collagen synthesized by ECs, promoting cell adhesion and migration, and serving as a cofactor in NO-dependent angiogenesis (55). Collagen IV is also synthesized by alveolar type 1 (AT1) epithelial cells, which are lost in IPF. Thus, the primary sources of collagen IV in IPF are ECs and pericytes, which are positioned to profoundly affect the anatomical location and function of the BM.

IPF fibroblasts can also synthesize collagen IV, and human fibroblasts exposed to TGF-β stimulation deposit abnormal α1 and α2 collagen IV chains. This change in BM composition limits myofibroblast migration and promotes their survival, which could contribute to myofibroblast persistence and sustained activity in fibroblastic foci (56-58). The persisting cells provide an additional source of abnormal collagen in patients with IPF. Abnormal collagen IV is associated with aberrant angiogenesis, likely due to disrupted EC binding via a compromised integrin-collagen IV interaction (55). The collagen IV-integrin interaction may, therefore, be a key process in the endothelial contribution to IPF. Indeed, the stiff matrix associated with IPF has been shown to lead to integrin-mediated mechanosensing that mediates MMP-2-dependent degradation of collagen IV, which subsequently promotes myofibroblast invasion of the alveolus (59).

It is clear that key components of the pulmonary vasculature become fundamentally altered during fibrogenesis and are likely to contribute to the ongoing progression of fibrosis in IPF. Therefore, understanding the molecular mechanisms that contribute to these cellular and structural changes may lead to the development of novel therapeutic targets.

Modifiable vascular signaling pathways

The key vascular signaling pathways that promote fibrogenesis involve NO or GPCR signaling via cAMP or cGMP to promote transcriptional events (Figure 3A) or inside-out signaling via the αvβ1 integrin (Figure 3B). Several therapeutics targeting various components of these signaling pathways have been assessed with varying degrees of success in IPF trials (Figure 3 and Table 1).

Endothelial NOS. NO is a free radical second messenger that is generated by three isoforms of synthase: neuronal, inducible, and endothelial. Endothelial NO synthase (eNOS; also known as NOSIII) is expressed in vascular ECs and produces NO via a calcium/calmodulin pathway. NO subsequently activates soluble guanylyl cyclase (sGC) to generate cGMP, which in turn activates protein kinase G (PKG), leading to modulation/reduction of intracellular calcium concentration, with consequent EC permeability, smooth muscle relaxation, and the inhibition of platelet aggregation (60). When eNOS is overexpressed in transgenic mice, the degree of bleomycin-induced subpleural fibrosis is attenuated (61). Furthermore, mice lacking all three NOS isoforms exhibit a worse fibrotic reaction to bleomycin. These fibrotic effects were attenuated in iNOS-null mice that possessed eNOS and in mice supplemented with an NO donor (62), suggesting that NO is protective against fibrosis. However, application of an NO-specific inhibitor attenuated bleomycin-induced fibrogenesis and inhibited angiogenesis by regulation of VEGF and inhibition of PAI-1, indicating that NO serves a complicated role in fibrosis (63). Resolution of bleomycin-induced fibrosis requires eNOS-dependent deactivation of myofibroblasts, and there is evidence of diminished eNOS in aged mice (64). Loss of eNOS may provide one mechanism through which aging promotes IPF.

The sGC stimulator riociguat targets the NO/cGMP pathway, is licensed for use specifically in chronic thromboembolic pulmonary hypertension, and has demonstrable antifibrotic effects in animal models (65,66). In the RISE-IIP phase II randomized controlled trial (RCT) of 147 patients, riociguat was compared with placebo in patients with PH and interstitial lung disease (ILD), of which IPF was the most common subtype, representing 74% of the treatment arm (67). The trial was terminated early due to earlier mortality and an increased risk of adverse events, including worsening of ILD in the treatment group, and no evidence of any therapeutic benefit.

Inhaled NO gas has recently been shown to improve exercise capacity (as assessed by ability to undertake moderate to vigorous activities) and was well tolerated in patients with ILD and PH (ILD-PH), where IPF represented the largest subgroup of ILD, in a phase IIb RCT (68). However, no data on functional markers of fibrosis (e.g., FVC) were measured in follow-up. A randomized trial assessing the effects of NO on dyspnea specifically in IPF patients is under way (ClinicalTrials.gov NCT05052229).
Endothelin. Endothelin (ET) is a potent vasoconstrictor with vascular remodeling properties; therefore, ET antagonists are used in the treatment of idiopathic PAH. ETA and ETB receptors are expressed on alveolar epithelial cells and fibroblasts, both of which, under certain disease conditions, are capable themselves of ET synthesis (69). ET can induce fibroblast differentiation, migration, and survival, as well as epithelial-mesenchymal transition (EMT), EndMT, and ECM production, and antagonism of ET can attenuate fibrosis in animal models (70,71). Similarly, VSMCs, when stimulated with interferon in combination with TNF-α, acquire the ability to synthesize ET (72-77). ET is increased in plasma and bronchoalveolar lavage (BAL) of patients with IPF and in bleomycin animal models, with evidence of increased ET expression in fibrotic tissue, particularly in ECs in areas of angiogenesis (78-80).

The dual ETA and ETB receptor antagonist bosentan was compared with placebo in 158 patients with IPF in the BUILD-1 study. Bosentan was well tolerated; however, the trial did not meet its primary endpoint of improvement in 6-minute walk distance (6MWD) (81). However, a trend favoring bosentan was noted toward reduced death and disease progression (although the trial was not appropriately powered to conclude either). A post hoc subgroup analysis suggested that this effect was more pronounced in patients with biopsy-proven IPF. Therefore, the BUILD-3 study was powered to meet these endpoints and enrolled 616 patients with IPF diagnosed by surgical lung biopsy. No significant difference was observed between the bosentan and placebo groups in stabilization of lung function or death (21). Macitentan, another dual receptor antagonist, also did not meet its primary endpoint of change in pulmonary function tests, disease progression, or death in a phase II RCT (22). Assessment of a potent ETA-selective receptor antagonist, ambrisentan, in IPF was terminated early due to worsening disease progression in the treatment (27%) versus control group (17%) (1). These trials demonstrate no beneficial effect of ET receptor antagonists in IPF and suggest that specific targeting of the ETA receptor is harmful. It is, therefore, possible that targeting the ETB receptor may have a beneficial effect in IPF, although specific inhibitors have not yet been developed.

Cyclic nucleotides and phosphodiesterases. cAMP is an intracellular messenger formed by adenylate cyclase. In the endothelium, cAMP functions to maintain barrier junction integrity and permeability (through a combined effect with Rho) and vascular smooth muscle tone (82). In human lung fibroblasts, cAMP activation can limit proliferation and ECM differentiation (83). cAMP is degraded by several intracellular phosphodiesterases (PDEs). Dual inhibition of PDE3 and PDE4 can inhibit migration of VSMCs in rats and reverse the vascular remodeling of PH (84), and specific PDE4 inhibition causes less histological fibrosis and collagen accumulation in murine models compared with controls (85,86).

The nonselective PDE4 inhibitor roflumilast is currently in use in chronic obstructive pulmonary disease and has proven effective in bleomycin mouse models in limiting fibrosis and vascular remodeling (87). However, it is limited by its side effect profile, notably substantial diarrhea. A recent phase II trial of a preferential PDE4B inhibitor, BI-1015550, in patients with IPF proved effective in stabilizing FVC after 12 weeks, and this result was independent of concurrent antifibrotic use (based on median difference in FVC between treatment and placebo groups of 62.4 mL and 88.4 mL in patients with and without concurrent antifibrotic use, respectively) (19). Thirteen percent of patients discontinued therapy in the treatment arm due to diarrhea. The proposed mechanisms of action of the PDE4B inhibitor include an inhibitory effect on fibroblast proliferation and ECM production and an anti-inflammatory component (88). These benefits have made BI-1015550 a promising agent, and it is being further assessed in large phase III studies (NCT05321069).

cGMP is a parallel intracellular second messenger to cAMP with similar functional effects in some systems. cGMP is catabolized by a number of PDE enzymes, with PDE5 being the most therapeutically valuable and a standard target in the treatment of PAH. Inhibition of PDE5 in in vitro models with sildenafil prevents TGF-β-induced EndMT and VSMC-mesenchymal transition through downstream signaling on ERK1/2 and SMAD, providing further evidence of a beneficial antifibrotic effect of these pathways (89).

In a small open-label study, beneficial effects of sildenafil on 6MWD observed in patients with IPF-PH suggested that enhancing NO/cGMP pathways has therapeutic potential in this condition (90). Subsequently, the STEP-IPF double-blind RCT of sildenafil compared with placebo in 180 patients with advanced IPF (defined by diffusing capacity of carbon monoxide [DLco] of less than 35% of predicted) did not meet its primary endpoint of improvement in 6MWD, although there were improvements in DLco, which was a key secondary endpoint (91). Some subjective secondary endpoints, including quality of life, also showed improvement. The INSTAGE trial, comparing nintedanib as the standard of care combined with sildenafil versus placebo, also did not meet its primary endpoint of change in quality of life as measured by the St. George's Respiratory Questionnaire (92). However, a reduction of the rate of decline in FVC of at least 5% was observed in the nintedanib plus sildenafil group versus the control arm (31.4% vs. 50.7% of patients; HR 0.56; 95% CI 0.38-0.82). Additionally, a recent cohort study identified a survival benefit in patients with ILD-PH who were treated with sildenafil (93).
Prostanoids. Prostanoids comprise a group of lipid mediators including thromboxane, prostaglandin E2 (PGE2), PGI2 (prostacyclin), and PGD2, all formed from arachidonic acid (94). Arachidonic acid is liberated from membrane phospholipid in multiple cells by the action of phospholipase A2 and converted to PGH2 by the action of cyclooxygenase 1 (COX-1) (constitutive) or COX-2 (inducible). PGH2 is converted to its respective prostanoid by the action of the site-specific synthase. In the vasculature, prostacyclin is the primary prostanoid produced, largely because of coexpression of COX-1 and prostacyclin synthase by ECs (95). As a counterbalance, platelets release primarily thromboxane, since they coexpress COX-1 and thromboxane synthase (96). Prostacyclin is a fundamental antithrombotic mediator and induces vasodilation in some, but not other, vascular beds. Inhibition of COX-2 with NSAIDs, including COX-2-selective medications, is associated with increased risk of cardiovascular mortality due to loss of prostacyclin (97). By contrast, inhibition of COX-1 in platelets (i.e., with low-dose aspirin) is an established preventative therapy for secondary cardiovascular events. Prostacyclin and its analogs have an established role in the therapy of PAH and in the treatment of peripheral vascular disease. Therapeutic formulations that target prostacyclin receptor pathways include iloprost, selexipag, and treprostinil (98).

Prostanoids have long been investigated as possible antifibrotic mediators. Reduced levels of PGE2 are found in the BAL fluid from patients with IPF (99). In gene knockout models of fibrosis, mice deficient in the prostacyclin receptor developed worse fibrosis in response to bleomycin as measured by hydroxyproline content and measures of lung mechanics (100). This effect was dependent on COX-2 expression. A prostacyclin receptor-specific agonist applied to human IPF fibroblasts demonstrated an antifibrotic effect, with inhibition of fibroblast proliferation, reduced ECM secretion, and, importantly, a reversal of the myofibroblast phenotype. These effects were mediated by cAMP, with proposed downstream mechanisms including hijacking of gene transcription from the TGF-β/SMAD canonical pathway, such as the inhibiting transcription factors YAP and TAZ, which are implicated in transcription of genes including the gene encoding connective tissue growth factor (CTGF) (101). cAMP also inhibits the MAPK pathway, which is implicated in fibrosis via a mitogenic effect of PDGF. Upregulation of PKA activity inhibits downstream effectors in this pathway, such as ERK, which is also implicated in fibrogenesis. Inhibition of the ERK pathway can be enhanced when there is sustained cAMP activity within the cell nucleus as opposed to cAMP activity within the cytosol alone, an effect seen with treprostinil (102). This synthetic prostacyclin analog can also upregulate inhibitors of ERK, notably DUSP1, and inhibit activity of microRNA clusters involved in regulation of this pathway (103). In VSMCs, prostacyclin can inhibit cell proliferation through a cAMP/EPAC/PKA-dependent mechanism and also inhibit vascular smooth muscle cell migration via a cAMP/EPAC/RhoA pathway, which prevents cytoskeletal reorganization (104). In addition, activation of a range of PPAR receptors has an inhibitory effect on TGF-β signaling and fibrogenesis in animal models of fibrosis (105).

PGE2 is a central component of the inflammatory response in humans. While a multitude of cells release the wider range of prostanoids, PGE2 is considered a critical regulator of inflammation, and inhibition of PGE2 at the site of inflammation explains much of the therapeutic benefit of NSAIDs. Animal models and in vitro experiments on human lung fibroblasts have demonstrated reduced production of COX-2-dependent PGE2, which may be explained by epigenetic changes in patients with IPF and the milieu of chemokines such as CCL2, which inhibits PGE2 release (106). There is also abnormal PGE2 receptor (EP1, 2, 3, and 4) expression in fibrotic tissue. PGE2 can inhibit fibroblast proliferation and ECM production through the EP2 and EP4 receptors in a cAMP/PKA-dependent manner; however, higher PGE2 concentrations can have a profibrotic effect via the EP1 receptor (through downregulation of cAMP) and the EP3 receptor (via increased intracellular calcium). PGE2 can also inhibit the effect of TGF-β and the SMAD pathway. Administration of exogenous PGE2 in mice has a protective effect against bleomycin-induced fibrosis (107). PGE2 deficiency is also important in the increased apoptotic phenotype of lung epithelial cells and apoptosis resistance in fibroblasts in IPF tissue, which contributes to disordered wound healing in the disease (108). There is also evidence that the prostanoid PGD2 has a protective effect in bleomycin-induced fibrosis in mice and reduces vascular permeability (109). Stimulation of the PGF2α receptor conversely promotes fibrogenesis (110).

Inhaled treprostinil was evaluated in patients with ILD-PH in INCREASE, a phase III RCT looking primarily at treatment of PH. The trial met its primary endpoint of improvement in 6MWD (16). Interestingly, a post hoc analysis of FVCs found that in the 163 patients in the treatment arm there was an overall improvement in FVC, and this was most pronounced in the subgroup of patients with IPF (17). A large phase III trial is currently under way investigating treprostinil specifically in IPF (111).
Rho/ROCK. RhoA is a GTPase that activates Rho-associated protein kinase (ROCK), leading to phosphorylation of myosin light chains to reorganize the actin cytoskeleton, promoting cell contraction, motility, and adhesion in inflammatory cells, smooth muscle cells (notably VSMCs, which are important for regulating vascular tone), and platelets (112,113). Two isoforms have been identified: ROCK1, expressed ubiquitously, and ROCK2, expressed predominantly in cardiac tissue, pulmonary tissue, and smooth muscle (114). The RhoA/ROCK pathway is implicated in the normal lung wound healing process, facilitating fibroblast and epithelial cell migration in response to receptor signaling by TGF-β, lysophosphatidic acid (LPA), and thrombin/PAR-1, and may also regulate profibrotic gene expression (115,116). The activity of RhoA/ROCK is enhanced in IPF tissue (117). In the endothelium, activation of RhoA/ROCK signaling by LPA is responsible for generation of vascular leak by generating cellular contraction and disrupting cell-cell and cell-matrix adhesion (118). Extravascular leak of profibrotic mediators, including thrombin (itself a ROCK activator in epithelium and fibroblasts), propagates the fibrotic process, thus driving fibrogenesis. In pulmonary ECs, in response to hypoxia, ROCK may also downregulate expression of eNOS, which, as described, is involved in the generation of both fibrosis and PH (119).

A mechanistic link has been demonstrated in shared pathways in RhoA activation and PDE4 via A-kinase anchoring protein 13 (AKAP13) (120). AKAP13 activates PDE4, reducing protein kinase activity, in addition to having a RhoGEF function whereby it can phosphorylate and activate RhoA, suggesting that AKAP13 may be a master regulator of fibrotic responses.

In animal models of fibrosis, inhibition of ROCK using fasudil (which has clinical applications in the management of subarachnoid hemorrhage due to its vasorelaxant properties) and the experimental compound Y-27632 attenuates fibrosis and vascular remodeling (112,121). In gene-deleted animal models, bleomycin-induced fibrosis is attenuated when either ROCK isoform is deleted, indicating that both enzymes are implicated in fibrogenesis (122). This is relevant as selective ROCK inhibition may be sufficient to inhibit fibrosis and avoid complications such as hypotension. Selective inhibition of ROCK2 with Slx-2119 downregulated profibrotic gene expression in a range of fibrotic effector cells including VSMCs in in vitro models (123). The selective ROCK2 inhibitor belumosudil has been evaluated in a phase I trial of patients with IPF, where it was well tolerated and slowed decline in lung function (124). Belumosudil is currently the subject of a phase II trial (NCT02688647); although initial results have not been published, the results posted on ClinicalTrials.gov suggest that FVC remains unchanged.

Coagulation cascade. There are a number of potential mechanisms through which fibrosis and abnormal clotting may occur. Tissue factor initiates the extrinsic coagulation pathway, is highly expressed by alveolar epithelial cells in patients with IPF, and generates lung-tissue fibrin deposits, which serve as a platform for inflammatory cells and profibrotic cytokines, enhancing their accumulation at sites of injury (125). This environment favors an imbalance in the system toward the pro-coagulation pathway. Activation of the coagulation cascade generates multiple proteases and thrombin, which has profibrotic actions in part due to its action on PAR receptors, particularly PAR-1. PAR receptors are present on a range of cells including ECs and fibroblasts, affecting EC barrier integrity, promoting release of PDGF and CTGF, and promoting fibroblast differentiation (126).

Similarly, elevated levels of both factor VIII, a marker of EC injury that is implicated in thrombosis, and fibrin degradation products, including D-dimers, are seen in patients with IPF, suggesting exuberant coagulation (127). Factor X expression is also increased in fibrotic human lung tissue and in mouse models of fibrosis (128). This protein is implicated in myofibroblast differentiation via activation of the PAR-1 receptor, which is highly expressed in fibroblastic foci. Activation of PAR-1 also increases RhoA activity, which can activate TGF-β from its latent complex via αvβ6 integrin (115).

Despite the evidence of disrupted clotting in IPF, the ACE-IPF study, comparing warfarin (an inhibitor of factors II, VII, IX, and X) with placebo, demonstrated harm, and this detriment was due to accelerated fibrosis rather than bleeding complications (129). Recent registry data support this finding, with warfarin being associated with reduced transplant-free survival; however, treatment with newer direct oral anticoagulants, which are direct factor Xa inhibitors, was not associated with a reduction in survival (130). These results suggest that selective inhibition of factor Xa may have antifibrotic potential. To provide further evidence for the benefits of targeting specific coagulation pathways, the profibrotic effects of thrombin acting via the PAR-1 receptor mediated through αvβ6 and TGF-β can be inhibited using the direct thrombin inhibitor dabigatran in a murine model (131).
Other pathways. Other EC-relevant pathways that have been investigated in PF include (a) autotaxin, the enzyme responsible for generation of the profibrotic lipid mediator LPA, which also has phosphodiesterase activity and is highly expressed by ECs; (b) the integrins, including αvβ1 and αvβ6, which are implicated in PF and have important effects on EC function (132); (c) CTGF, which is potently fibrogenic and contributes to the development of IPF-PH; and, finally, (d) the renin-angiotensin system, in particular the ATR2 receptor, agonism of which attenuates vascular remodeling in models of PH (133). Specific targeting of these pathways has been or is being trialed (Table 1 and Figure 3B).

Disease associations

It is evident from cell and molecular biology that the vasculature, and related signaling pathways, have an important role in the development of IPF. If the vasculature is playing a prominent role in the pathogenesis of IPF, one would hypothesize that there would be substantial systemic disease associated with IPF. While PH is a well-recognized complication of IPF, other comorbidities are gaining prominence. Indeed, systemic hypertension and diabetes are common comorbidities in IPF (134), while coronary artery disease and thromboembolic disease are common causes of death (135,136) (Table 2).

Conclusions

The epithelium, the primary site of initial insult in IPF, sits in close proximity to the endothelial layer in the alveolus, separated only by a thin basement membrane, which itself is abnormal in the condition. Injury to the former will undeniably affect the latter. There is compelling evidence from various in vivo, in vitro, and ex vivo models that aberrant endothelial responses occur, with resultant loss of integrity of the basement membrane, vascular remodeling, and the generation of vascular signals further driving ECM deposition, fibrosis, and parenchymal lung damage. Whether it is the epithelial signal to the endothelium that initiates this process in its entirety, or whether the endothelium is the primary culprit, remains an area for development in the field of fibrosis research. However, given the findings, one cannot ignore the potential role of the circulation in the pathobiology of IPF.

The numerous disrupted vascular-relevant signaling pathways present in IPF are now being explored in greater detail and mapped more clearly. The specific targeting of these pathways, especially with prostanoid agents and phosphodiesterase inhibitors, represents an expanding chapter in the treatment of this challenging disease. As closer associations between IPF and numerous more common cardiovascular and metabolic conditions are made, a deeper understanding of the disease and how to treat it is likely to ensue. Therefore, strategies that target endothelial repair by focusing on reprogramming abnormal metabolic responses may ultimately provide an opportunity to prevent the progression of, or potentially even reverse, the fibrotic response.

Address correspondence to: R. Gisli Jenkins, Guy Scadding Building, Royal Brompton Campus, London SW3 6LY, United Kingdom. Phone: 44.0.20.7589.5111; Email: gisli.jenkins@imperial.ac.uk.
Figure 1. Vascular abnormalities in IPF are reflected within cells, tissue, and the entire organism. (A) Pulmonary-bronchial anastomoses often develop in IPF and can be visualized by radiography. In healthy tissue, vessels communicate through the capillary bed, and there is an absence of these larger, tortuous communications (23). (B) Fibroblastic foci from fibrotic parenchyma lack ECs (as indicated by an absence of staining for CD34), confirming a lack of vascularity. In healthy tissue, ECs line the vessel walls and are distributed throughout lung tissue. Reproduced with permission from the American Thoracic Society (159). (C) Vascular comorbidities associate with IPF, suggesting that ECs contribute to and are affected by fibrosis.

Figure 2. ECs support healthy vasculature and undergo dramatic changes in IPF. (A) Damaged epithelium releases active TGF-β and other profibrotic mediators. The original injury also disrupts the BM and the neighboring endothelial layer, which responds to the profibrotic signal. ECs subsequently secrete similar profibrotic mediators and lose the ability to synthesize protective hormones such as eNOS and prostacyclin. This process can stimulate VEGF production, which drives EC proliferation, and ECs distributed throughout the lung propagate fibrosis. Compared with healthy lungs, IPF lungs have a higher proportion of apoptotic ECs, fibroblasts, pericytes, and VSMCs. Cellular proliferation and newly generated vessels expand affected lung tissue. With progressive vascular pathology there is ultimately advanced tissue destruction, and eventually vascular regression develops in the fibroblastic foci. (B) In IPF, the EC participates in several cell-cell interactions and cell transitions. Damaged ECs produce factors that signal to other ECs and promote damage or drive the transition to other cell types: (i) EC-fibroblast: ECs transition into a fibroblast-type cell via EndMT to contribute to the pool of profibrotic cells. (ii) EC-myofibroblast: damaged ECs also secrete TGF-β, PDGF, and Jag1 to enhance fibroblast-myofibroblast transition and ECM secretion. (iii) EC-EC: abnormal ECs secrete VEGF, which promotes EC proliferation and abnormal vessel formation, thus contributing to the pool of ECs that can propagate this process. Compromised tight junctions leak coagulation factors, driving fibrosis. (iv) EC-VSMC: EC production of TGF-β and ET1 promotes VSMC proliferation, contributing to PH and a switch to a synthetic phenotype. (v) EC-epithelial cell: downregulation of protective factors such as MMP-14 delays epithelial repair, allowing persistent epithelial-mesenchymal crosstalk. (vi) Pericyte-myofibroblast: disrupted Wnt signaling associated with ECs drives pericytes to transition into a myofibroblast-type cell.
Figure 3. Vascular signaling pathways regulate fibrosis via GPCR, NO, intracellular (PPAR) receptors, and surface integrins in IPF. (A) Drugs that may counter fibrosis can act through signaling pathways in a range of vascular cell types, including ECs, VSMCs, and fibroblasts. Fibrogenesis-promoting pathways involve GPCRs or NO and signal through cAMP or cGMP to induce fibrosis-related transcriptional events. Treprostinil (TP) acts on cell surface GPCRs to increase intracellular cAMP, which can affect transcription of actin-encoding genes that affect the cytoskeleton, cell motility, and adhesion. TP can also directly activate intracellular PPAR receptors to modulate gene expression. PDE inhibitors (BI-1015550 and sildenafil) prevent cAMP and cGMP breakdown. cGMP, generated following exposure to endogenous NO, activates PKG, which affects gene transcription, the cytoskeleton, and cell contraction. Stimulators, including riociguat, can also generate cGMP. The ET antagonists bosentan and ambrisentan block GPCRs to reduce intracellular Ca2+ concentrations and PKC activity, again modulating gene expression. (B) Therapeutics in IPF can signal through pathways affecting TGF-β signaling or other mechanisms promoting profibrotic gene expression. Ziritaxestat blocks autotaxin, from which LPA is generated. LPA induces various profibrotic effects via GPCRs, including increased RhoA activity and actin cytoskeleton rearrangements that promote altered cell motility in a range of cells relevant to fibrosis. Belumosudil preferentially blocks the ROCK2 isoform. The cytoskeleton can activate cell surface integrins, which are implicated in TGF-β activation. Integrins can be directly blocked by bexotegrast. CTGF, which has numerous profibrotic signaling effects, can be neutralized by the monoclonal antibody pamrevlumab. ATR2 agonists affect numerous intracellular phosphatases, which affect downstream profibrotic gene expression.

Table 1. Conflicting outcomes of trials targeting vascular pathways and mechanisms in patients with IPF.
• Randomized, phase II, open-label crossover study; 76 patients with IPF: no significant effect on change in FVC (results yet to be published but available online) (-, ref. 137)
• Ziritaxestat/ISABELA-1 and -2 (autotaxin inhibitor): RCT, phase III; 525 patients: no improvement in rate of decline of FVC (primary outcome) and higher mortality in treatment group (X, ref. 138)
• Bexotegrast (αvβ1/αvβ6 inhibitor)
SGRQ, St. George's Respiratory Questionnaire, used to assess the impact of obstructive airway disease on patient health and well-being. RHC, right heart catheter.

Table 2. Summary of common vascular comorbidities observed in IPF patients and important disease features in IPF.
Pulmonary hypertension (PH) (142)
• Diagnosed based on mPAP >20 mmHg at right heart catheterization (142)
• Physiological response to divert blood flow from poorly ventilated areas to avert hypoxemia
• Discordance between severity of lung disease and severity of PH; thus other mechanisms must account for pathology
• Two phenotypes: if severe (defined by mPAP >25 mmHg with a low cardiac index, i.e., <2 L/min), may respond to pulmonary vasodilator therapy (however, treatment must be individualized to avoid worsened shunting and hypoxemia, largely for symptomatic benefit) (143)
• Increased mortality (1-year survival 28% vs. 5% in those being assessed for transplant [ref. 144]; median survival 0.7 years if sPAP >50 mmHg by echocardiography [ref. 145])
• Potentially due to a common vasculopathy driving both conditions; however, confounded by other factors such as age, ease of diagnosis, and concurrent or prior corticosteroid use
• Shared genetic risk profile due to enhanced TGF-β pathway activity with shared genetic locus at MAD1L1 (mitotic spindle checkpoint associated with chromosomal instability) (134, 147)
Coronary artery disease (CAD) 3%-68% (148)
• Higher prevalence in IPF vs. COPD despite likely higher use of tobacco in the latter group, thus suggesting unifying pathobiology (148)
• Pathways common to CAD and IPF include overexpression of inflammatory mediators (IL-8, TNF-α), denudation of the coronary epithelium and alveolar epithelium, and dysregulated repair mechanisms; neovascularization observed in atheroma of CAD and digital clubbing of IPF (20)
Chronic kidney disease (CKD) 30% (149)
• Associated with hypertension
• Worse degrees of CKD are associated with worsened survival in IPF (149)
Venothromboembolism (VTE) 2%-3% (150, 151)
• Presence of VTE represents an increased risk of developing IPF (152)
• Conflicting data when this pathway is targeted: warfarin is harmful with acceleration of fibrosis
• Directly acting oral anticoagulants safer, including benefits of dabigatran
• Abnormal coagulation may represent a specific subset of IPF patients
Diabetes 10%-32% (134)
• Diabetes likely a risk factor for development of IPF and, through alterations in cell metabolism, can be profibrotic and associated with IPF progression (153, 154)
• Glycemic control associated with worse measures of FVC and gas exchange (155)
• Animal models confirm worse histological grades of fibrosis in diabetic mice with positive staining for advanced glycation end products (AGEs) in alveolar epithelial cells (156)
• Widely used antidiabetic drug metformin accelerates resolution of fibrosis in animal models (157)
• However, no benefit of metformin for disease progression in large-scale trials in humans (158)
mPAP, mean pulmonary artery pressure; sPAP, systolic pulmonary artery pressure.
2023-09-16T06:17:24.972Z
2023-09-15T00:00:00.000
{ "year": 2023, "sha1": "89e947ce5063ba432ca7da96d632c02dc8cfb062", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "5fe1609a0ac3b32e936d8e1c34f2c94ff6ab092a", "s2fieldsofstudy": [ "Medicine", "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
231886103
pes2o/s2orc
v3-fos-license
A Virtual, Randomized, Control Trial of a Digital Therapeutic for Speech, Language, and Cognitive Intervention in Post-stroke Persons With Aphasia Background: Post-stroke aphasia is a chronic condition that impacts people's daily functioning and communication for many years after a stroke. Even though these individuals require sustained rehabilitation, they face extra burdens in accessing care due to shortages of qualified clinicians, insurance limitations, and geographic barriers. There is a need to research alternative means of accessing intervention remotely, such as the digital therapeutic used in this study. Objective: To assess the feasibility and clinical efficacy of a virtual speech, language, and cognitive digital therapeutic for individuals with post-stroke aphasia relative to standard of care. Methods: Thirty-two participants completed the study (experimental: average age 59.8 years, 7 female, 10 male, average education 15.8 years, time post-stroke 53 months, 15 right-handed, 2 left-handed; control: average age 64.2 years, 7 female, 8 male, average education 15.3 years, time post-stroke 36.1 months, 14 right-handed, 1 left-handed). Patients in the experimental group received 10 weeks of treatment using a digital therapeutic, Constant Therapy-Research (CT-R), for speech, language, and cognitive therapy, which provides evidence-based, targeted therapy with immediate feedback for users and adjusts therapy difficulty based on their performance. Patients in the control group completed standard of care (SOC) speech-language pathology workbook pages. Results: This study provides Class II evidence that, with the baseline WAB-AQ score adjusted by −0.69 for every year of age and by 0.122 for every month since stroke, participants in the CT-R group had WAB-AQ scores 6.43 points higher than the workbook group at the end of treatment. Additionally, secondary outcome measures included the WAB-Language Quotient, WAB-Cognitive Quotient, Brief Test of Adult Cognition by Telephone (BTACT), and Stroke and Aphasia Quality of Life Scale 39 (SAQOL-39), with significant changes in the BTACT verbal fluency subtest and the SAQOL-39 communication and energy scores for both groups. Conclusions: Overall, this study demonstrates the feasibility of a fully virtual trial for patients with post-stroke aphasia, especially given the ongoing COVID-19 pandemic, as well as a safe, tolerable, and efficacious digital therapeutic for language/cognitive rehabilitation. Clinical Trial Registration: www.ClinicalTrials.gov, identifier NCT04488029.

INTRODUCTION
According to the Centers for Disease Control and Prevention (CDC), every year an estimated 795,000 Americans will have a stroke, and more than 180,000 will be left with communication disorders such as aphasia (1, 2). Aphasia can impact a person's ability to understand and follow instructions or read a prescription label. It can isolate a person from their family and friends, impacting their sense of self and bringing with it a myriad of other loneliness-related health risks (3). Aphasia is a chronic condition that requires ongoing rehabilitation (4). It was once thought that recovery only occurred in the first year after a stroke; however, a growing body of evidence shows that people with aphasia (PWA) can continue to improve with ongoing rehabilitation even many years after their injury (4, 5).
A recent Cochrane review suggests that functional communication significantly improves when one receives speech-language therapy at a high intensity, across several sessions, or over a long period of time (6). Despite the evidence that supports the need for ongoing therapy, there are not enough therapists who can treat post-stroke aphasia. The expectation for therapists to provide therapy five times per week during the chronic phase of care is simply not feasible. In addition to limited access to therapists, other barriers that patients experience include limited insurance coverage, lack of transportation, distant geography, schedule constraints, and fatigue (7). As a result, rehabilitation for aphasia patients is quite fragmented (8) or insufficient, especially for stroke survivors living in the community but not in active therapy (4), which ultimately leads to worse patient outcomes, especially when they could benefit from ongoing therapy post-discharge. Since the COVID-19 pandemic began, individuals with aphasia have faced even greater hurdles in accessing the care they need due to safety restrictions exacerbating disparities in healthcare for these individuals (9). Teletherapy, or technology-assisted/delivered therapy, provides an alternative to the brick-and-mortar approach of delivering rehabilitation services (10)(11)(12)(13)(14). In such an approach, therapy is delivered via a computer and over the internet asynchronously but follows the same basic principles of traditional person-to-person rehabilitation. A clinician can also supervise teletherapy sessions remotely. Early indications illustrate that such technology would afford PWA greater opportunity for consistent and intensive practice, especially when coming into the clinic is not feasible (15, 16). Further, teletherapy may also allow long-term continued rehabilitation to be more accessible for PWA. While some aphasia research highlights the limitations of using technology with this population (17, 18), other research demonstrates positive outcomes in improving language skills with technology (19)(20)(21)(22). Recent systematic reviews have examined different technology-based rehabilitation delivery options for both cognitive deficits (23, 24) and language deficits (25)(26)(27)(28). Further, a recent RCT specifically compared treatment outcomes for PWA receiving self-managed computerized speech therapy relative to other control treatments (20). In this study, 278 PWA were assigned to either daily self-managed computerized speech-language therapy plus usual care (experimental, CSLT group), usual care (usual care group), or attention control plus usual care (attention control group). Treatment was completed for 6 months, and results showed that the experimental group receiving computerized therapy (CSLT) demonstrated significantly higher gains in trained word finding relative to the two control groups; however, there was no evidence of generalization to untrained words. Further, there were no differences in functional communication or participants' perception of their own communication or participation across the three intervention groups. Nonetheless, these results add to the emerging premise that remote or home-based computerized therapy can be a valid approach to deliver rehabilitation to individuals with post-stroke aphasia. In our prior work with teletherapy, we have examined the feasibility and clinical efficacy of Constant Therapy-Research (CT-R), a digital therapeutic software program accessible through a tablet (29)(30)(31).
CT-R is a prototype based on the commercially available Constant Therapy product. In a previous study, 51 subjects (42 experimental, 9 control) utilized the Constant Therapy software platform under systematic monitoring and guidance from their clinician during weekly in-clinic sessions (29). The experimental group had access to Constant Therapy both at home and during in-clinic sessions, while the control group only utilized the application during in-clinic sessions. After 10 weeks of intervention, experimental participants were significantly more engaged in their therapy and practiced an additional 4 h per week on average compared to the in-clinic therapy sessions, where participants received an average of 40 min per week. In addition, experimental participants showed significantly more improvements on Constant Therapy tasks and on standardized language and cognitive tests than control participants. Separately, in a retrospective analysis of Constant Therapy home users vs. clinic users (31), both home and clinic users required roughly the same amount of practice to successfully complete cognitive and language tasks, but users who had on-demand access to therapy on their tablet mastered tasks in a median of 6 days, while those with only in-clinic access mastered tasks in a median of 12 days. Further, users who had access to digital therapy at home practiced at least every 2 days, while clinic users practiced in the clinic just once every 5 days. These findings suggest that Constant Therapy users were able to practice structured therapy at home, which provided them with greater practice and greater intensity of therapy than patients receiving the same therapy in clinic. These studies also highlighted the potential of a home-based therapy program for patients who are unable to receive consistent in-clinic therapy. The primary objective of this study was to examine the efficacy of CT-R practiced under the remote guidance of study personnel when compared to an active control group that practiced aphasia therapy workbooks. Our rationale was that self-management of home-based therapy under remote guidance with an individualized therapy protocol would lead to increased adherence to and compliance with home practice, and ultimately improved language and cognitive skills. We conducted a Phase II, randomized, decentralized (virtual) trial, in which 36 participants (stroke survivors with aphasia) received either language therapy at home delivered through CT-R or practiced aphasia therapy workbooks at home. Both groups received baseline and follow-up assessments, as well as periodic therapy check-in sessions, through video conference sessions. The primary outcome of the study was change in the Western Aphasia Battery-R Aphasia Quotient (WAB-R AQ) (32). The primary hypothesis was that self-managed, digital therapy under remote supervision would result in systematic and structured reinforcement-based practice of impairment-based therapy, which would ultimately lead to greater language outcomes, as compared to the control group that did not receive this systematic structured practice. Additionally, to the best of our knowledge, this is the first fully virtual language therapy study for individuals with aphasia.

Recruitment
As this was a completely virtual study, participants were recruited from the United States and Canada from March 2019 to November 2019.
The following were sources of participant recruitment: (a) consumers who had downloaded the commercially available Constant Therapy app but not signed up for an account, (b) social media groups focused on recovery from aphasia, and (c) referrals from SLPs who had discharged clients from their service. Recruitment was conducted via email, video advertising, flyers, and social media posts.

Participants
Inclusion criteria included (a) diagnosis of stroke involving a hemorrhage or ischemic event, resulting in speech, language, and/or cognitive deficits as confirmed by medical records; (b) time post-stroke of at least 4 months prior to enrollment; (c) having been discharged from the hospital or rehabilitation center; (d) being aged 18 years or older at the time of consent; (e) being a fluent English speaker prior to stroke; (f) having confirmed aphasia based on the Western Aphasia Battery, Revised (WAB-R) (32) Aphasia Quotient with a score of 90 or lower (normal cutoff score is 93.8); and (g) the presence of a family member or caregiver willing and able to provide assistance during the duration of the study period. Exclusionary criteria included (a) comorbid neurological conditions that could impair study performance in the opinion of research staff (either a certified Speech-Language Pathologist or a trained Research Assistant), (b) requiring inpatient care or acute care at the time of the study, (c) concurrently undergoing one-on-one individual therapy at a hospital or rehabilitation facility, university, or at home, (d) presence of severe apraxia of speech or severe dysarthria based on clinical screening, (e) comorbid psychiatric conditions that could impair study participation in the opinion of study staff, and (f) uncorrected vision or hearing loss impairing study participation. A pre-screening phone call was conducted by the research staff with the participant and caregiver to discuss the details of the study and participant characteristics. Then, each participant was mailed materials that included an iPad tablet, WAB-R assessment items, informed consent and medical release forms, and a pre-addressed and stamped envelope to return consent and release forms. Following informed consent, participants were evaluated utilizing Part 1 of the WAB-R following procedures for videoconference assessment (33). If all eligibility criteria were met, the participant was enrolled in the study. Of the 58 participants who were screened against eligibility criteria, 36 were enrolled and 32 completed the study (see Figure 1). Of those who completed the study, the mean age of participants was 61 years (SD = 10), 18 participants were male, the average time post-stroke was 46 months (SD = 47), and the mean education was 15 years (SD = 2.6). As noted above, all participants completed all parts of the study from their homes.

Primary and Secondary Outcome Measures
The primary outcome measure utilized was the Western Aphasia Battery, Revised (WAB-R) Aphasia Quotient (WAB-AQ) (32). The WAB-R is a standardized tool that assesses language and cognitive skills and provides scores quantifying the impact of a stroke on those skills. The WAB-AQ from the WAB-R includes segments from Part 1 of the assessment, evaluating fluency and information content within spontaneous speech, auditory comprehension, naming, and repetition. The Language and Cortical Quotients obtained from the WAB-R (WAB-LQ and WAB-CQ) Parts 1 and 2 were utilized as secondary outcome measures.
Part 2 of the WAB-R includes reading, writing, apraxia, constructional, visuospatial, and calculation sections. Additionally, secondary measures included scores on the Brief Test of Adult Cognition by Telephone (BTACT) (34, 35), a brief, remote cognitive assessment that evaluates memory for and judgments about words and numbers (including recall tasks, both immediate and short term, category fluency, and number reasoning and manipulation tasks), and the Stroke and Aphasia Quality of Life Scale 39 (SAQOL-39) (36, 37). The SAQOL-39 is a structured quality of life questionnaire administered to either a patient or a caregiver to assess the impact of a stroke on daily activities, communication, emotions, and family and social life by asking patients or caregivers to complete a 5-point rating scale in response to specific questions focusing on the past week alone. All the above measures were chosen based on prior evidence of having been administered remotely, either by videoconference or by phone (33, 35, 38, 39).

Assessments
Following informed consent and the administration of the WAB-R, if the participant met the eligibility criterion of an Aphasia Quotient of 90 or below, the remainder of the assessments were completed. For participants who were identified with potential dysarthria or apraxia, the Screen for Dysarthria and Apraxia of Speech was then completed to exclude any participant who received a "severe" score on the three features of diadochokinesis, word length, and oral apraxia. Subsequently, assessment continued with the second portion of the WAB-R, the BTACT (34), and the SAQOL-39 (39). When needed, the SAQOL-39 proxy form was provided to the caregiver to complete on behalf of the participant. As the clinician was remote, a caregiver was present with the participant during the virtual assessment to facilitate videoconferencing setup and test administration. At the start of the assessment, a brief training was provided to the participant and caregiver on the videoconferencing technology. Instruction was provided to the caregiver to refrain from providing cues or hints to test items. At the conclusion of the assessment, a follow-up phone call was scheduled with the participant and caregiver within the same week to discuss next steps for participation in the study. See Table 1 for demographic information on study participants and Table 2 for pre-treatment assessment data.

Study Design
Given the preliminary nature of the treatment protocol in this study, one of the purposes of this study was to generate effect sizes for future definitive clinical trials. Hence, this paper does not report a priori sample size estimates. Study participation lasted ∼14 weeks, which included recruitment and baseline assessment (−2 to 0 weeks), treatment period (0-10 weeks), biweekly check-ins (weeks 2, 4, 6, and 8), and follow-up assessment (10-12 weeks). As noted, the entire study, including recruitment, enrollment, and study interventions, was conducted remotely (i.e., at participant homes). The primary and secondary outcomes (WAB-R, BTACT, and SAQOL-39) were remotely administered at baseline (Week 0) and post-intervention (Weeks 10-12). After pre-assessment, stratified randomization was applied to assign participants into one of two groups (experimental or control) to balance for overall aphasia severity (WAB-AQ). Thus, the design was a parallel 1:1 allocation ratio with an initial random-numbers table to generate an allocation sequence that was then balanced for aphasia severity during assignments.
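As a concrete illustration of the allocation scheme just described, the sketch below implements a stratified 1:1 randomization on baseline WAB-AQ severity. It is a minimal illustration under stated assumptions, not the study's actual allocation code: the severity cut points and all function names are assumptions introduced for this example.

```python
import random

def stratify_by_severity(wab_aq):
    """Map a baseline WAB-AQ score to a severity stratum.
    Cut points here are illustrative assumptions, not the study's actual bands."""
    if wab_aq < 26:
        return "very severe"
    elif wab_aq < 51:
        return "severe"
    elif wab_aq < 76:
        return "moderate"
    return "mild"

def randomize(participants, seed=42):
    """Assign participants 1:1 to CT-R vs. workbook within each severity stratum."""
    rng = random.Random(seed)
    strata = {}
    for pid, wab_aq in participants:
        strata.setdefault(stratify_by_severity(wab_aq), []).append(pid)
    allocation = {}
    for stratum, pids in strata.items():
        rng.shuffle(pids)
        for i, pid in enumerate(pids):
            # Alternate arms within the shuffled stratum to keep a 1:1 balance.
            allocation[pid] = "CT-R" if i % 2 == 0 else "workbook"
    return allocation

print(randomize([("P01", 42.0), ("P02", 88.5), ("P03", 61.2), ("P04", 55.0)]))
```

Stratifying before the coin flip is what keeps the arms balanced on severity even with a small sample, which is the property the authors needed here.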
Given the nature of the two interventions and the bi-weekly check-ins, which depended on the type of intervention, no attempt was made to blind participants or experimenters in the study. However, pre-treatment and post-treatment assessments were administered by a team of study staff randomly assigned to participants from either group. Further, fidelity and reliability in test administration were assessed, as described below. To encourage participation and retention, tablets were supplied with active cellular data plans, and training in how to use the tablet and app was provided to the participant and caregiver as needed.

Experimental Group (CT-R)
Participants were instructed to use a provisioned tablet with the app pre-installed. Constant Therapy (www.constanttherapy.com) provides systematic and structured therapy analogous to what is typically provided by a speech-language pathologist (SLP) and can be accessed by the patient from any location using a supported device. The NeuroPerformance Engine (NPE), a patented technology, enables the product to optimize therapeutic delivery (i.e., progress across tasks or reduce the level of difficulty) based on a patient's individual performance. An initial homework schedule was created and assigned by the study team according to each individual's WAB-R performance, with guidelines that were standardized based on score cut-offs across participants. From that point, the individual was advanced via the NPE algorithm using the library of therapy exercises within the CT-R app. Across exercises, there are over 100,000 stimuli within 350+ levels of difficulty spanning 9 different cognitive, speech, and language domains (see Figure 2). Participants were instructed to use CT-R for at least 30 min a day and at least 5 days a week. CT-R tracked usage of the program so that research staff could access automated reporting of participant use to monitor participant adherence to the treatment program (29, 31).

Control Group (Workbooks)
Participants were provided with a regime of standard paper workbooks (40-44) used for homework practice, a substantial modification from the workbooks used in the usual care control group of the BIG CACTUS study (20), which used crossword puzzles. The progression of homework went from the Workbook for Aphasia (40) to the Speech Therapy Aphasia Rehabilitation Workbooks (41)(42)(43) or the Workbook of Activities for Language and Cognition (WALC 1) (44) based on feedback about difficulty. Control participants were instructed to complete at least 1 exercise within the workbook at least 5 days a week. On a bi-weekly basis from Weeks 2 through 8, the experimental and control group participants completed a video conference check-in with a member of the research staff. During these check-ins, participants were asked to report how often they logged into CT-R to complete their exercises (experimental group) or how many workbook pages had been completed that week (control group). In addition, they were asked if they found any exercises or items too challenging or too simple. For the experimental group, as needed, the research staff modified the homework program and documented changes. For the control group, if a participant reported that their workbook was too easy or too difficult, a correspondingly different workbook was sent to them. Details of the two interventions are provided in Table 3.

Data Entry
All assessments were scored utilizing hard copies of the WAB-R, BTACT, and SAQOL at the time of administration.
Study personnel then checked and entered these scores into a shared spreadsheet and filed hardcopies into secure participant folders.

Data Reliability
All assessments were entered and checked for accuracy by study personnel. Two randomly selected raters from a group of four raters checked administration and scoring of the WAB-R (AQ, LQ, and CQ) on 11% of the total pre- and post-WAB assessments. Inter-rater reliability was high (Cronbach's Alpha = 0.997), with AQ scores differing by 1.84 points, CQ scores by 1.57 points, and LQ scores by 1.52 points. Further, sections of the WAB-R, including the Spontaneous Speech fluency and content rating scales and the Sequential Commands subtest, were discussed at length among study personnel to create standardized interpretations and scoring of participant responses. Consensus scoring across three raters was utilized for both of the Spontaneous Speech rating scales for all participants.

Statistical Analysis
Given unequal sample sizes, a linear mixed effects model was conducted on the primary and secondary outcomes. In all analyses, the score on the specific test (WAB-AQ, LQ, etc.) was the dependent variable; group (CT-R vs. workbook) and time point (pre-treatment and post-treatment) were the fixed factors; age and time post-stroke were entered as covariates (unless otherwise noted); and participants were entered as random factors. As follow-up analyses, RANOVAs were performed to further examine treatment-related effects in the two groups.

Data Availability
All individual anonymized participant data are provided in Supplementary Table 1.

Standard Protocol Approvals, Registrations, and Patient Consents
The study was reviewed, monitored, and approved by Pearl IRB 19-LNCO-102. All participants provided informed consent for this study following procedures described above. This project is registered in the ClinicalTrials.gov registry (NCT04488029).

Table 1 provides baseline demographic and assessment measures (means and standard deviations in parentheses), indicating that there were no pre-existing differences between the experimental (N = 17) and control (N = 15) groups. Further, Figure 3 provides histogram profiles of specific language and cognitive domain scores from the WAB-R, indicating that both groups were similar prior to the beginning of treatment. Additionally, Kruskal-Wallis H-tests, used due to unequal sample sizes, showed no difference between the groups on specific variables (age, p = 0.14; time since stroke, p = 0.60; baseline WAB-AQ, p = 0.77).

Primary Endpoint
The primary endpoint in the study was the average change on the WAB-AQ. The CT-R group showed a higher mean point change on the WAB-AQ (M = 6.75) than the workbook group (M = 0.38). Using a linear mixed effects model, this change was significant at the 1% level. The significant group-by-time interaction indicated that, on average, participants in the CT-R group had WAB-AQ scores 6.36 points higher than the control group at follow-up relative to pre-treatment (p < 0.01; see Tables 2, 4 and Figure 4A).

Primary Endpoint With Covariates
Even though there were no significant pre-treatment differences between the two groups in terms of age, time since stroke, and baseline WAB-AQ, controlling for these factors in a linear mixed effects model showed that being in the CT-R group was associated with a 6.43-point increase in WAB-AQ score relative to the workbook group at follow-up compared with pre-treatment.
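For readers who want to reproduce this style of analysis, the sketch below fits the model structure described in the Statistical Analysis section (fixed effects for group, time, and their interaction, with age and months post-stroke as covariates and a random intercept per participant) using statsmodels. The column names and toy data frame are illustrative assumptions; this is not the authors' analysis script.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Toy long-format data: one row per participant per time point (values illustrative only).
rows = []
for pid, grp, age, months, pre, post in [
    ("P01", "CT-R",     59, 53, 60.1, 68.0),
    ("P02", "workbook", 64, 36, 62.4, 62.9),
    ("P03", "CT-R",     55, 48, 55.0, 60.5),
    ("P04", "workbook", 70, 24, 58.2, 58.0),
    ("P05", "CT-R",     66, 60, 71.3, 78.2),
    ("P06", "workbook", 61, 40, 69.8, 70.4),
]:
    rows.append({"participant": pid, "group": grp, "time": "pre",
                 "age": age, "months_post": months, "wab_aq": pre})
    rows.append({"participant": pid, "group": grp, "time": "post",
                 "age": age, "months_post": months, "wab_aq": post})
df = pd.DataFrame(rows)

# Mixed model: fixed effects for group, time, and their interaction, plus the
# age and time-post-stroke covariates; random intercept per participant.
model = smf.mixedlm(
    "wab_aq ~ C(group) * C(time, Treatment('pre')) + age + months_post",
    data=df, groups=df["participant"],
)
result = model.fit()
print(result.summary())  # the group-by-time coefficient estimates the treatment effect
```

The group-by-time interaction, rather than the group main effect, is the quantity of interest because it isolates how much more the CT-R group changed from pre to post than the workbook group did.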
Specifically, Table 4 illustrates that the baseline WAB-AQ intercept was 105.7, adjusted by −0.69 for every year of age and by 0.122 for every month since stroke; participants in the CT-R group had WAB-AQ scores 6.43 points higher than the workbook group at the end of treatment. It is worth noting that the mean differences as a function of treatment for the subscores that comprise the WAB-AQ were consistently higher for the experimental group than the control group (see Tables 2, 4).

Table 3. Details of the two interventions.
1. WHY: CT-R group: self-management of home-based therapy under remote guidance could result in an individualized therapy protocol, and increased adherence to and compliance with home practice will improve language skills. Workbook group: self-management of home-based therapy under remote guidance without the structured feedback and regimen would result in limited gains.
2. WHAT (materials): Constant Therapy-Research was used as a tailored home treatment program for each participant (CT-R group). Aphasia therapy workbooks were used for home practice (workbook group).
WHAT (procedures): For each trial in the Constant Therapy-Research software, the participant can select the answer and choose whether to use cues. Once the participant selects the response, immediate feedback is provided regarding accuracy and the participant can proceed to the next trial.

Secondary Endpoints
An additional secondary endpoint was the average change on the WAB-LQ. The CT-R group showed a higher mean change (M = 4.51 points) than the workbook group (M = 0.57 points). Table 4 shows that the effects of group, age, time from stroke, and post-treatment (vs. baseline) were not significant. The interaction of post-treatment relative to baseline by group, controlling for other variables, was significant, indicating that, on average, participants in the CT-R group had WAB-LQ scores 3.97 points higher than the workbook group at post-treatment (Figure 4B). Again, mean differences as a function of treatment for the reading subscores were higher for the experimental group (4.00) than the control group (1.20); however, writing scores worsened for both groups (see Table 2). For the WAB-CQ, the CT-R group showed a higher mean change (M = 4.69) than the workbook group (M = 0.77). Again, only the interaction of post-treatment relative to baseline by group (controlling for other factors) was significant; participants in the CT-R group had WAB-CQ scores on average 4.01 points higher than the workbook group at the end of treatment (Figure 4C). Mean differences as a function of treatment for the apraxia subscores were higher for the experimental group (1.76) than the control group (−0.33); however, mean differences for the constructional, visuospatial, and calculation subscores were higher for the control group (3.40) than the CT-R group (1.55) (Table 2). Finally, in addition to changes on specific subscores of the WAB, Figure 5 shows that there were qualitative changes in the aphasia subtypes (as calculated by the WAB) as a function of treatment. Specifically, in the CT-R group, while there was a range of aphasia types prior to treatment, after treatment all participants fell into one of four categories (Anomic, Broca's, Conduction, and Within Normal Limits). In contrast, the workbook group showed more subtle qualitative changes, and none of them were classified as being within normal limits. Given the small sample sizes of the subcategories, no statistical analyses were computed.
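Restating the Table 4 coefficients reported at the start of this section as a single fitted prediction may help: the numbers below are taken directly from the text, and the bracketed indicator is 1 for the CT-R group and 0 for the workbook group.

```latex
\widehat{\mathrm{WAB\text{-}AQ}}_{\mathrm{post}}
  = 105.7 \;-\; 0.69 \times \mathrm{Age~(years)}
          \;+\; 0.122 \times \mathrm{Months~post\text{-}stroke}
          \;+\; 6.43 \times \mathbb{1}[\mathrm{CT\text{-}R}]
```

For example, for a 60-year-old participant 48 months post-stroke, the model predicts a post-treatment WAB-AQ about 6.43 points higher if that participant is in the CT-R arm than in the workbook arm, all else equal.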
In addition to the WAB-R, the BTACT and the SAQOL-39 were also examined (see Figures 6, 7). The mixed effects models for these two measures were not significant for either the main effects or the interaction effects. Therefore, follow-up repeated measures ANOVAs were conducted with scores on each of the subtests as the dependent variable and time (pre-treatment, post-treatment), group (CT-R vs. workbook), and the interaction between time and group as factors. Table 4 reflects all the analyses, the F-ratios, and the p-values. On the BTACT, only the verbal fluency subtest showed a significant effect of time, with no significant effect of group or interaction between group and time. On the SAQOL, the overall mean showed a significant improvement as a function of time, while the main effect of group and the interaction between group and time were not significant. Similar results were observed for the SAQOL_communication and SAQOL_energy subscores, indicating that both groups showed improvements as a function of treatment. The remaining contrasts were not significant. Finally, to examine the potential influence of demographic variables on the primary outcome measure, bivariate correlations revealed a significant moderate negative relation between age and the post-pre difference in WAB-AQ score (r = −0.45, p < 0.01), but no significant relation between time since stroke in months and the post-pre difference in WAB-AQ score (r = −0.07, p > 0.05), or between education in years and the post-pre difference in WAB-AQ score (r = −0.09, p > 0.05).

DISCUSSION
Currently, standard of care (SOC) for speech therapy involves a stepped approach to rehabilitation in the days, weeks, months, and years following stroke. In general, at each phase following a stroke, there are different SOCs (45). These phases can be described as "acute" (typically the first 24-48 h after a stroke, where the priority is saving a life), "in-patient" (when the patient is recovering, often with medical monitoring and intense multidisciplinary care), "out-patient" (when living at home but receiving periodic care from healthcare professionals), and "post-discharge" (when no longer under the care of clinical teams). It is in the post-discharge phase that SOC dictates that patients undergo self-directed maintenance. Self-directed maintenance may include the application of learned strategies to daily functional communication exchanges and/or identification of activities or exercises that will allow for practice of the skill area. As noted in the introduction, the state of today's SOC results in the overwhelming majority of patients not receiving the benefit of consistent one-on-one therapy after the first month following their stroke, due to structural barriers that preclude extension of traditional one-on-one therapy at a frequency and duration likely associated with optimal outcomes. The present study was the first virtual language/cognitive rehabilitation trial for individuals with post-stroke aphasia. Further, this study joins other recent trials (20) that provide evidence for digitally based language therapy for post-stroke patients. This Phase II trial showed that individuals who practiced CT-R at home with biweekly check-ins achieved an average of 6.43 points greater change on WAB-AQ scores at the end of treatment relative to a control group that practiced workbooks at home and also received biweekly check-ins, even after controlling for age and time post-stroke for participants in the two groups.
Importantly, the CT-R group showed a mean improvement of 6.75 points on the WAB-AQ, a change that is above the 5-point threshold for clinically meaningful improvement in speech-language ability (46)(47)(48), compared to 0.38 for the conventional workbook intervention. Notably, the CT-R group outperformed the control group at the end of the treatment program on the WAB-LQ (4.51 points for the CT-R group) and the WAB-CQ (4.69 points for the CT-R group). Changes on the subscores of the WAB subtests were consistently higher for the CT-R group than the workbook group, including spontaneous speech, auditory comprehension, repetition, naming, reading, and apraxia. Interestingly, writing scores worsened slightly for both groups, and constructional, visuospatial, and calculation scores increased for the workbook group more than for the experimental group. Decreases in the writing subtests for both groups may reflect the reality that CT-R writing practice is done on a tablet and is different from handwriting, and the workbook group may not have practiced writing consistently. It is not completely clear why the workbook group improved more on the constructional, visuospatial, and calculation subtests, but further inspection of participant data suggests that the workbook group improved more on the calculation sections of the WAB. Interestingly, when examining changes in aphasia subtypes as a function of treatment, results showed that the CT-R group made more discernible shifts in aphasia subtypes, consequent to improved WAB subscores, than the workbook group. Notably, post-treatment, two participants in the CT-R group were classified as being within normal limits as per the WAB; a similar shift was not observed in the workbook group. Participants in the CT-R group logged into the software program at least 5 days per week, practiced a prescribed number of therapy exercises, and received instant feedback on accuracy for each item. In contrast, the workbook group received physical workbooks to practice, were instructed to practice multiple pages, and, importantly, instant feedback was not provided. Therefore, it is possible that the impairment-based drill therapy with feedback targeted in the CT-R software facilitated transfer of performance to the domains of language and cognitive function tested by the WAB. Another observation is the difference in the treatment approaches between the experimental and control groups. CT-R was designed to progress the participant through targeted therapy tasks based on their performance. For example, if a participant passed an exercise easily, they would automatically be given a harder task targeting the same skill or domain in the next session. Alternatively, if they appeared to struggle with an exercise, then upon the next login, CT-R would present an easier task targeting the same skill. This automatic calibration of task delivery was designed as part of the software's algorithm, with optional oversight from study staff. The experimental group, using the CT-R program, also had the added benefit of study staff being able to manually modify or update their homework program based on participant feedback. The control group, while using the workbooks, could provide feedback regarding the exercises, but the study staff could not modify, update, or change the homework tasks remotely.
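The performance-contingent progression just described behaves like a difficulty staircase. The sketch below is a minimal illustration of that idea, not the patented NeuroPerformance Engine itself: the accuracy thresholds (advance above 90%, step down below 60%) and all names are assumptions introduced for this example.

```python
def next_level(current_level, accuracy, max_level=350,
               advance_at=0.90, step_down_at=0.60):
    """Pick the next difficulty level from performance on the last session.

    Thresholds are illustrative assumptions; the actual NPE algorithm is
    proprietary and not described in detail in the paper.
    """
    if accuracy >= advance_at and current_level < max_level:
        return current_level + 1   # passed easily: harder task, same domain
    if accuracy <= step_down_at and current_level > 1:
        return current_level - 1   # struggled: easier task, same domain
    return current_level           # in between: keep practicing this level

# Example: a participant scores 95%, then 50%, then 75% on successive sessions.
level = 10
for acc in (0.95, 0.50, 0.75):
    level = next_level(level, acc)
    print(f"accuracy={acc:.0%} -> level {level}")
```

A staircase like this keeps each participant practicing near the edge of their ability, which is the rationale the authors give for CT-R's advantage over static workbook pages.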
These inherent differences in how the treatment program was tailored for each individual participant in the experimental group relative to the control group may have also contributed to differences in the primary outcomes for the two groups. In contrast to the WAB, the critical group-by-time interaction was not significant for the SAQOL-39 or BTACT in the linear mixed effects models. Instead, repeated measures ANOVAs that compared the two groups as a function of time showed that BTACT_verbal fluency and specific SAQOL measures (i.e., SAQOL_mean, SAQOL_communication, and SAQOL_energy) improved for both groups as a function of treatment, indicating that participation in the 10-week remote intervention, independent of the type of treatment, resulted in gains in verbal fluency on the BTACT and quality of life perception on the SAQOL. Apart from the main difference in the mode of therapy exercises practiced, the bi-weekly check-ins with the study staff and the level of flexibility in therapy session practice were identical between the two groups. Therefore, it is possible that the frequent interaction with the study staff, who provided feedback about therapy progress, and the consequent accountability may have had the same facilitatory effects for both groups. Relatedly, compliance with attendance at bi-weekly check-ins was high across both groups, reinforcing findings that telerehabilitation access decreases missed-appointment rates (49). By decreasing barriers due to transportation, commute time, and time out of work, teletherapy provides patients with a more flexible option that ultimately improves engagement with the therapy process. It is important to note that these check-ins were completed entirely virtually over videoconference; both as we handle the challenges of COVID-19 and as we look to the future of telepractice, this is encouraging data suggesting that virtual interaction continues to be motivating and engaging for patients. Nonetheless, the lack of a greater improvement on the secondary outcome measures in the CT-R group vs. the workbook group requires further discussion. It should be noted that the mean differences in the SAQOL-39 ratings for the submeasures ranged from 0.24 to 0.60 (SAQOL_Mean and SAQOL_energy, respectively) for the CT-R group relative to −0.05 to −0.36 (SAQOL_communication and SAQOL_psychosocial, respectively) for the workbook group. These differences for the CT-R group are comparable to the 0.33 difference in a study examining the effect of phonomotor treatment on word retrieval (50), hence contextualizing the gains on this measure in the CT-R group. The BTACT was selected due to its remote administrability; however, there are no studies that report the BTACT as an outcome measure for treatment, thus limiting any points of comparison. Additionally, the BTACT requires auditory comprehension and verbal expression, thereby limiting its sensitivity to detect isolated improvements in cognitive function. This hypothesis is further supported by the evidence that participants in the CT-R group increased on the WAB-CQ (a more non-linguistic measure of cognitive function) by 4.97 points more than the control group at follow-up, indicating that improvement in cognitive function was observed on a more non-linguistic measure. Another interesting but secondary finding of this trial is evidence that PWA can make gains in their language and cognitive skills even in the chronic phase of rehabilitation.
While most recovery is expected to occur in the first few months after the stroke (5, 51), this study demonstrates that it is possible to improve language skills in this population even multiple years post-stroke. The average time post-stroke for the participants in the experimental group of this study was 46 months. Yet, there was no significant correlation between time post-stroke and the degree of gains made by patients, indicating that recovery can continue for many years post-stroke. There was a moderate negative correlation between age and improvement on the WAB-R AQ scores, which does indicate that older patients tended to make fewer gains. Conversely, while some participants were well into their 80s, they were still able to access and manipulate the provided technology, given instruction and support from study personnel, dispelling a common myth that older adults are less able to utilize technology. While the results from this study are encouraging regarding the implementation of virtual trials, teletherapy as a service delivery model, and the use of digital therapeutics like CT-R, there were some limitations to the study. Thirty-two participants is a modest sample size for a study of this patient population, and it is unclear whether these results generalize beyond this study to other similar studies, as well as to other implementations of teletherapy and digital therapeutics. Additionally, there were some practical constraints and barriers to conducting the study. First, as the target population ranged from mild to severe/profound language impairment, it was both critical and necessary for all participants to have a caregiver present during the initial onboarding into the study and pre/post assessments. Nonetheless, even participants with a severe language impairment were able to initiate and complete their homework programs once education and training were provided. Additionally, logistical considerations, such as shipping and tracking of materials and troubleshooting technology, required ongoing time and attention from the study team throughout the trial. Recruitment practices also had to be adjusted to better fit a virtual trial; instead of the traditional recruitment through a clinical setting, social media and targeted advertising were implemented to educate potential participants and recruit them into the study. While more studies are needed, these results provide encouraging data supporting the efficacy of digitally based therapeutics, teletherapy, and virtual trial administration. Given that this is the first completely virtual, digital therapeutic treatment study with both assessments and therapy provided remotely, several conclusions can be drawn. First, completely virtual randomized control trials can be performed with checks and balances in place, such as weekly check-ins with patients. Second, all the chosen assessments were verified in previous studies for remote administration and were implementable in a clinical trial. Third, the feasibility of such a trial indicates a novel approach to conducting telerehabilitation studies in an asynchronous format (i.e., participants practice their therapy when it is convenient for them and without the presence of a clinician) with successful outcomes. Finally, this trial provides evidence that remote assessment and intervention for post-stroke aphasia is both effective and aligned with the ever-shifting needs of how people access care.
Participants in this study were located across the United States and Canada and completed the study without issue, suggesting that telehealth services such as these can reduce the geographic challenges that many patients with aphasia face when seeking therapy.

DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
2021-02-12T14:19:04.893Z
2021-02-12T00:00:00.000
{ "year": 2021, "sha1": "463fba9f6b045d2500c10bf3e6bba3a0b3eacf6f", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fneur.2021.626780/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "463fba9f6b045d2500c10bf3e6bba3a0b3eacf6f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
233924645
pes2o/s2orc
v3-fos-license
Coupled Hydrologic-Mechanical-Damage Analysis and Its Application to Diversion Tunnels of Hydropower Station
Since the traditional model cannot sufficiently reflect the multifield coupling problem, this paper establishes an elastoplastic stress-seepage-damage analysis model considering the seepage field, stress field, and damage field. At the same time, the elastoplastic damage model involves many parameters that are difficult to determine. An inverse analysis program is therefore compiled based on the differential evolution algorithm, and the surrounding rock damage parameters are inverted. Finally, the elastoplastic stress-seepage-damage coupling program and the damage parameter displacement back analysis program are compiled in C++. Then, the program is used to calculate the coupling problem of tunnel elastoplastic stress-seepage-damage. The results show that the proposed elastoplastic damage constitutive model can well describe the mechanical behavior of rock. The computational procedure can also simulate practical engineering problems, which can provide specific guidance for site construction.

Introduction
During tunnel construction on the steep slope of the dam abutment of a large-scale water conservancy and hydropower project in Southwest China, the stress redistribution in the shallow and deep rock mass caused by excavation usually leads to damage to the surrounding rock. The degree of damage is related to various factors, such as excavation methods, the physical and mechanical properties of the rock mass, the initial geostress field, and the natural fracture distribution. During the tunnel excavation process, the rock mass permeability changes significantly with the initiation, expansion, and penetration of rock mass fissures. The rock mass has low permeability before failure, and the seepage-stress coupling effect is not obvious. However, with the initiation and expansion of the fissures, the evolution process and interaction of the stress field, seepage field, and damage field inside the rock mass become very significant [1]. The deformation and failure of rock mass under stress-seepage coupling is not only a frontier and hot issue in the development of basic science but also a key scientific problem to be solved in applied research [2]. There is a lot of research on the seepage characteristics of the surrounding rock of tunnels and underground caverns during construction [3][4][5][6][7]. Because the stress-seepage coupling problem in hydropower projects and flood-rich tunnels during construction can lead to disasters with great losses, many scholars have conducted research on tunnel seepage problems in water-rich region projects. Liu et al. [8, 9] proposed a coupled seepage-erosion water inrush model based on classical theories of solute transport and fluid dynamics in porous media to investigate the characteristics of seepage-erosion properties. Wu et al. [10, 11] studied the characteristics of water flow and the optimization of escape routes after water inrush in a post-open karst tunnel. Zhang et al. [12] studied the effect of Longsheng Reservoir on the seepage of the adjacent Kuzi Village Tunnel (located in Ulanchap, Inner Mongolia, North China). Because of the uncertainty of the parameters in the fluid-solid coupling calculation, the back analysis method has been studied by researchers. Based on the Levenberg-Marquardt method with the complex-variable differentiation method, Liu et al.
[13] established a multiparameter inversion method for the seepage-stress coupling problem with displacement information as the known quantity. Based on the fully coupled analysis method for solving the coupling problem between the stable seepage field and the elastic displacement field, Wang Yuan [14] proposed a parameter inversion method for the fully coupled static seepage-stress analysis of a fractured rock mass. With the deepening of underground engineering activities, research on the hydro-mechanical coupling of rock mass is more and more influenced by damage evolution and pore fluid flow in the rock mass [15][16][17]. Rich achievements have been made in existing research on fluid-solid coupling calculation. However, there are also some problems to be solved. (1) There are many studies on the coupling of rock elastic brittle damage and seepage, but models of the coupling between rock plastic damage and seepage are rare. (2) Existing models generally use a constant permeability coefficient, and the permeability coefficient changes little with damage. (3) The inversion of coupling parameters for the coupled damage-seepage model involves complex optimization problems; general inverse analysis optimization algorithms are easily trapped in local optima and do not easily converge to the globally optimal solution. Therefore, in this paper, a damage model with full coupling of the seepage field and the stress field is used, based on the DP criterion and a variable permeability coefficient method. The global optimization algorithm of differential evolution is introduced to back-analyze the damage parameters. The step iteration, numerical solving methods, and constitutive integration algorithm are studied. The elastoplastic stress-seepage-damage coupling program and the intelligent back analysis program are compiled in C++. The programs are used to calculate the diversion tunnel of the Shuibuya Hydropower Station. The distribution of the stress field, seepage field, and damage field of the tunnel surrounding rock and the distribution curve of the surrounding rock permeability coefficient are analyzed.

Rock Permeability. The permeability of a rock refers to the ease with which a gas, liquid, or ion passes through the rock. Due to the inherent porosity of the rock, liquid or gas migrates from high-pressure to low-pressure regions. The permeability of the rock mainly depends on the pore structure of the rock and the performance of the aggregate. Rock materials contain pores and cracks of various sizes, so porosity is one of the main factors affecting permeability.

Seepage-Stress Coupling. Under the action of water pressure, the seepage of water acts on the rock through effective stress, which affects the stress state of the rock and its cracks. At the same time, changes in the rock stress field often lead to fissure closure or expansion, affecting the permeability of the fissures. Consequently, the seepage field is redistributed as the permeability of the fractures changes. This interaction is defined as seepage-stress coupling.

Seepage-Damage Coupling. With a deepening understanding of the seepage coupling problem, it has gradually been realized that damage and crack propagation have a significant effect on seepage-stress coupling, mainly as follows: the impact of damage on the seepage process, the weakening effect of water, and the damage induced by osmotic (seepage) stress. This is the seepage-damage coupling problem of rock during the rupture process.
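Since the differential evolution back-analysis introduced above drives the parameter inversion used later in the paper, a compact sketch of that loop may be useful here. This is a minimal DE/rand/1/bin iteration with a toy misfit function; the bounds, parameter names, and surrogate model are illustrative assumptions, and the paper's actual program is compiled in C++ and evaluates the full coupled finite element model at each candidate.

```python
import numpy as np

def differential_evolution(objective, bounds, pop_size=20, F=0.5, CR=0.9, gens=100, seed=0):
    """Minimal DE/rand/1/bin loop for parameter back-analysis."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(bounds)))
    cost = np.array([objective(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            idx = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(idx, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)      # mutation, kept inside bounds
            cross = rng.random(len(bounds)) < CR           # binomial crossover mask
            cross[rng.integers(len(bounds))] = True        # keep at least one mutant gene
            trial = np.where(cross, mutant, pop[i])
            trial_cost = objective(trial)
            if trial_cost < cost[i]:                       # greedy selection
                pop[i], cost[i] = trial, trial_cost
    best = int(np.argmin(cost))
    return pop[best], cost[best]

# Placeholder misfit: in the real program this step would run the coupled FEM model
# and compare computed tunnel-wall displacements with monitored displacements.
measured = np.array([2.1, 3.4])

def misfit(params):
    kappa, k0 = params
    predicted = np.array([2.0 * kappa, 1.5 * kappa + 10.0 * k0])  # toy surrogate model
    return float(np.sum((predicted - measured) ** 2))

print(differential_evolution(misfit, bounds=[(0.1, 5.0), (0.01, 1.0)]))
```

Because DE evolves a whole population and accepts a trial only if it improves on its parent, it is far less prone to the local-optimum trapping the authors cite as the weakness of conventional inverse-analysis optimizers.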
The study of seepage-stress coupling focuses on coupling methods for establishing different pore structure systems and describes their applicable conditions. Seepage-damage coupling analysis combines the above models with commonly used numerical calculation software: fracture and damage judgment criteria for the medium are introduced, characterization equations of the seepage-damage coupling of the medium are embedded for the damaged and expanding zone, and the seepage-damage coupling behavior in engineering is investigated.

Stress-Seepage-Damage Coupling Model for Rock
Based on experimental results on the evolution of the seepage law during rock failure, this section introduces the elastoplastic damage constitutive relation, the damage variable, the permeability coefficient evolution equation, and effective stress, based on elastoplastic theory, damage mechanics, and classical seepage mechanics. Then, a coupled model describing rock stress-seepage-damage is established. The model can effectively solve the stress-seepage-damage coupling problem of the tunnel surrounding rock. The assumptions in this paper are as follows. (1) The skeleton of the saturated body studied is an ideal elastoplastic-damage isotropic body and satisfies the assumption of small deformation. (2) The seepage is laminar, following Darcy's law. (3) The seepage fluid is incompressible, and the effects of temperature changes are negligible.

Rock Mass Mechanics Field Equation. A unit body is taken at any point in the object, and each stress component on the unit body should satisfy the static balance condition, which establishes the equilibrium differential equation (1) in a three-dimensional Cartesian coordinate system. The boundary value problem is shown in Figure 1 and consists of equation (1) in the domain together with the force and displacement boundary conditions, where f_j is the body force, σ_ij is the total stress tensor, Ω is the problem-solving domain, n_i is the direction cosine of the boundary normal, t_j is the surface force acting on the known boundary surface, Γ1 is the known force boundary, u_m is the known displacement on the boundary, and Γ2 is the known displacement boundary. The relationship between stress and strain is different for different constitutive models; the constitutive equation is abbreviated as equation (3). The geometric equation is equation (4), where ε is the strain tensor and u is the displacement vector. The boundary value problem is thus transformed into the problem of solving the displacement u while satisfying the boundary condition constraints. Once the displacement u is solved, the strain and stress states can be obtained from the geometric equation and the constitutive equation. The principle of effective stress in porous media is given by equation (5), where σ′_ij is the effective stress tensor (pressure is positive and tension is negative), p is the pore water pressure, α is the equivalent pore pressure coefficient, and δ_ij is the Kronecker symbol.

Seepage Equation. Assuming that water is incompressible, according to Darcy's law, the continuity equation of seepage under source-free unsteady conditions [18] is equation (6). Substituting h = p/γ + z into equation (6) gives equation (7), where h is the water head, x, y, and z are the spatial coordinates, t is the time coordinate, k_x, k_y, and k_z are the permeability coefficients with the x, y, and z axes as the principal directions, S_s is the specific storage, γ is the unit weight of water, and p is the pore water pressure.
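The typeset equations (1) and (5)-(7) did not survive extraction. For reference, their conventional forms, consistent with the variable definitions given above, are as follows; these are standard expressions reconstructed from the surrounding text rather than copied from the paper.

```latex
% Equilibrium (1) in the domain, and the effective stress principle (5):
\sigma_{ij,j} + f_j = 0 \quad \text{in } \Omega, \qquad
\sigma'_{ij} = \sigma_{ij} - \alpha\, p\, \delta_{ij}
% Continuity of seepage in head form (7), obtained from (6) with h = p/\gamma + z:
\frac{\partial}{\partial x}\!\left(k_x \frac{\partial h}{\partial x}\right)
+ \frac{\partial}{\partial y}\!\left(k_y \frac{\partial h}{\partial y}\right)
+ \frac{\partial}{\partial z}\!\left(k_z \frac{\partial h}{\partial z}\right)
= S_s \frac{\partial h}{\partial t}
```

Substituting (5) into (1) is what produces the effective-stress equilibrium equation that carries the seepage-to-stress coupling described in the next passage.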
For unconfined unsteady seepage with a free surface, the initial condition (8) is h(x, y, z, 0) = h_0(x, y, z). The boundary conditions (9), i.e., the head boundary and the flow boundary, are h|_M1 = f_1 and k_n(∂h/∂n)|_M2 = f_2, where M_1 is the known head boundary, f_1 is the known head boundary value, M_2 is the known flow boundary, and f_2 is the known flow boundary value. According to the effective stress principle (equation (5)) and the equilibrium condition (equation (1)), the equilibrium differential equation based on the principle of effective stress can be obtained. The equilibrium differential equation under the action of the seepage field embodies the dynamic coupling effect of stress and seepage.

Damage Variable and the Damage Evolution Equation. This study takes the equivalent plastic strain ε_p as the variable driving the evolution of the rock damage variable [15]. Experimental research shows that the damage gradually worsens as ε_p increases. The damage variable D is a nonlinear function of ε_p and can be expressed as an exponential function of the equivalent plastic strain. The equivalent plastic strain (10) is calculated as ε_p = sqrt((2/3)(ε_p1² + ε_p2² + ε_p3²)), where ε_p1, ε_p2, and ε_p3 are the three principal plastic strains. The evolution equation of the corresponding damage variable D is given by equation (11), in which the equivalent plastic strain threshold is ε_p^0 = 0, meaning that any equivalent plastic strain produces damage evolution, and κ is a positive constant obtained from tests. Equation (11) shows that, with increasing cumulative plastic strain, the damage evolution eventually stabilizes. Combining the stress field equation (equation (2)), the seepage differential equation (equation (6)), the damage evolution equation (equation (11)), and the corresponding initial and boundary conditions (equations (8) and (9)), the coupled model of the stress, damage, and seepage fields can be established.

Permeability Characteristics in the Process of Rock Damage. In the porous continuum model and the equivalent continuum model of rock mass seepage, the rock mass is regarded as a uniform medium composed of skeleton particles and pores (fractures). Because of this structure, loading or disturbing the rock medium changes its microscopic geometry and rearranges the skeleton particles, which alters the porosity and permeability of the rock mass. The rock mass permeability coefficient should therefore be treated as a variable in a fully coupled analysis, usually as a function of porosity, strain, or stress. Guang-Ting [19] systematically summarized the three research approaches to the coupled seepage-stress behavior of rock and reviewed the rationality and applicability of the various results. The permeability coefficient-strain (or stress) equation is an indispensable governing equation for the numerical analysis of seepage-stress coupling. In this study, according to the stress state of the rock mass, the permeability coefficient is defined as a function of stress and damage, capturing the evolution of the rock mass permeability coefficient in both the elastic and plastic stages. Under actual conditions, once the rock mass material yields and breaks, its permeability coefficient increases sharply with the expansion and interconnection of the original cracks and the generation of many new cracks.
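The damage variable D that drives this sharp permeability increase is governed by equation (11), for which the text gives only qualitative properties: D is an exponential function of ε_p with threshold ε_p^0 = 0 that saturates as plastic strain accumulates. One common form with exactly these properties, stated here as an assumption rather than the paper's exact expression, is

$$ D = 1 - \exp\!\left(-\kappa\,\varepsilon_p\right), $$

which vanishes at ε_p = 0, grows monotonically for κ > 0, and stabilizes toward 1 at large cumulative plastic strain. This same D reappears below as the weight in the plastic-stage permeability relation.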
Therefore, some scholars have introduced the concept of an area reduction rate or a sudden-jump coefficient to simulate the permeability change in the damage-yield stage of rock, but this does not directly capture the effect of the damage process on the change in rock mass permeability. Compared with current research, this study more realistically reflects the change of the permeability coefficient in the elastoplastic stress-seepage-damage coupling analysis of rock masses. In the elastic stage, the relation between the permeability coefficient of the rock mass and the volumetric strain can be obtained from the Kozeny-Carman equation [20] (equation (12)), where n_0 is the initial porosity, K_0 is the initial permeability of the rock, and ε_v is the volumetric strain. The permeability coefficient of the rock in the plastic phase is given by equation (13) [21], where K_M and K_D are the permeability coefficients of the undamaged rock and the fractured rock, respectively, and ε_v^pF = Dε_v^p is the plastic volumetric strain attributed to the defect (damaged) phase.

Numerical Solving and Step Iteration Method for the Coupling Model. The solution of the elastic-plastic-damage-seepage coupling model of rock is a complex nonlinear problem. The difficulty lies mainly in the interplay of elastoplasticity, damage, and seepage calculations and in the stress-damage-seepage interaction of the rock mass. It is very difficult to solve a problem involving so many nonlinear factors in a single iteration. This study therefore solves these factors iteratively in a fixed order, so that the many nonlinear subproblems are handled sequentially and the coupled model is ultimately realized. Based on solid and seepage finite element theory, elastoplastic constitutive integration theory, and a step-by-step iterative coupling method, the elastoplastic stress-seepage-damage program is developed in this study.

Elastoplastic Damage Finite Element and Constitutive Integration Algorithm. Discretizing equations (2)-(5) yields a finite element equation with displacement as the unknown; in incremental form it reads [K_m]{Δu} = {f_m} (14), where [K_m] is the global stiffness matrix of the stress field, {u} is the nodal displacement column vector, Δu is the displacement increment, and {f_m} is the nodal force vector, which includes body forces, surface forces, and the equivalent load of the pore pressure. The global stiffness matrix K is assembled from the element stiffness matrices. The element tangent stiffness K_T^(e) is a function of the consistent tangent modulus C, K_T^(e) = ∫ B^T C B dV (15), where B is the strain matrix, i.e., the matrix that computes strain from displacement, and B^T is the transpose of B. For rock, plasticity mainly refers to frictional sliding between internal cracks or joint surfaces, and damage refers to the initiation and expansion of internal cracks. Plastic-damage coupling has two meanings: (1) the two mechanisms interact through their potential functions (and loading functions); (2) they interact through their consistency conditions, i.e., the evolutions of the two internal variables of plasticity and damage influence each other [21]. The Drucker-Prager model is widely used for rock materials to describe the plastic stress-deformation characteristics.
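For orientation, the classical (undamaged) Drucker-Prager yield function takes the standard form

$$ F(\sigma) = \sqrt{J_2} + \eta\,p(\sigma) - \xi\,c, $$

with p(σ) the mean stress, J_2 the second deviatoric stress invariant, c the cohesion, and η and ξ material parameters matched to the Mohr-Coulomb criterion. The damage-extended yield function used in this paper, introduced next, softens the cohesion term by the factor (1 − D).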
The plastic yield function considering the damage effect is F = sqrt(J_2) + ηp(σ) − ξ(1 − D)c(ε_p) (16), where p(σ) is the mean normal stress, p(σ) = (1/3)tr[σ]; tr[σ] is the trace of the stress tensor (i.e., the algebraic sum of the principal stresses); J_2 is the second invariant of the deviatoric stress, J_2 = (1/2)s : s; s(σ) is the deviatoric stress tensor, s(σ) = σ − p(σ)I; I is the second-order identity tensor, I = δ_ij e_i ⊗ e_j, i, j = 1, 2, 3; δ_ij is the Kronecker delta; c(ε_p) is the cohesion under damage; D is the damage variable; and η and ξ are material parameters. The influence of damage on the internal friction angle is very small, so only the effect of damage on the cohesion c is considered. As damage accumulates, the plastic strain increases and the cohesion gradually decreases; this can be described by a power function (equation (17)) [22], where c(ε_p)' is the softened cohesion, c_r is the residual cohesion of the rock when it is significantly damaged, and ζ is a material parameter with a value between 0 and 1.

Constitutive Integration Algorithm. The displacement increment is obtained from equation (14), and the strain increment is calculated from the displacements. In the nonlinear finite element algorithm, each iteration step computes the stress increment from the given strain increment, and the stress solution depends on the chosen constitutive model; this study uses the Drucker-Prager model (equation (16)). However, the updated values of the stress and internal variables during the solution are prone to violating the yield function, which produces inaccurate results. This paper therefore adopts the return mapping algorithm proposed by Simo [23], which comprises an elastic prediction and a plastic correction, as shown in Figure 2. In the elastic prediction, the trial stress state σ^trial_(n+1) is computed from the total strain. If this state lies outside the yield surface, a plastic correction driven by the incremental plastic multiplier returns the elastic trial stress to the yield surface, so that plastic consistency is re-established in the updated state and σ_(n+1) is obtained (a minimal code sketch of this two-step scheme is given at the end of this section).

Consistent Tangent Modulus. When the return is to the smooth conical surface, the consistent tangent modulus C^ep_d is derived as in equations (18) and (19), where G(D_n) and K(D_n) are the damaged shear modulus and damaged bulk modulus, respectively; I_d is the volumetric tensor; T is the second-order unit tensor parallel to the elastic trial strain; a, b, c, and d are coefficients; ε^et_(d,n+1) is the elastic trial strain at time t_(n+1); D_n is the damage variable at time t_n; and η and ζ are material parameters. When the return is to the sharp point (apex), the consistent tangent modulus is given by equation (20), where H is the hardening modulus. This study adopts the associated flow rule; α and β are parameters related to the internal friction angle and are chosen according to the required approximation to the Mohr-Coulomb criterion.

Seepage Finite Element Method. The seepage field is discretized into a finite number of elements. Discretizing equation (6) gives the basic finite element equations for the hydraulic head function h: [K_s]{h} = {f_s} (21), where [K_s] is the seepage matrix, {h} is the hydraulic head column vector, and {f_s} is the free-term column vector.
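As anticipated above, the elastic-prediction/plastic-correction scheme of Figure 2 can be illustrated with a minimal C++ sketch. For brevity it assumes perfect plasticity with a fixed cohesion c; in the paper's model the damage-softened cohesion c(ε_p)(1 - D) and the hardening of equations (17)-(20) would take its place.

#include <array>
#include <cmath>

// Stress split into deviatoric part s (Voigt order: 11,22,33,12,13,23)
// and mean stress p.
struct StressState { std::array<double,6> s; double p; };

// One return-mapping update for the Drucker-Prager model:
// G, K = elastic shear/bulk moduli; eta, xi = DP parameters; c = cohesion.
void returnMapDP(StressState& st, double G, double K,
                 double eta, double xi, double c) {
    // Elastic prediction: st holds the trial stress from the total strain.
    double J2 = 0.0;
    for (int i = 0; i < 3; ++i) J2 += 0.5 * st.s[i] * st.s[i];
    for (int i = 3; i < 6; ++i) J2 += st.s[i] * st.s[i];
    double sqrtJ2 = std::sqrt(J2);

    double F = sqrtJ2 + eta * st.p - xi * c;  // yield check, cf. (16)
    if (F <= 0.0) return;                     // inside yield surface: done

    // Plastic correction on the smooth cone (associated flow):
    double dgamma = F / (G + K * eta * eta);  // plastic multiplier
    if (sqrtJ2 - G * dgamma >= 0.0) {
        double scale = 1.0 - G * dgamma / sqrtJ2;  // shrink the deviator
        for (double& si : st.s) si *= scale;
        st.p -= K * eta * dgamma;
    } else {
        // Deviator cannot absorb the return: project to the cone apex.
        for (double& si : st.s) si = 0.0;
        st.p = xi * c / eta;
    }
}

The two branches correspond to the "smooth conical surface" and "sharp point" returns for which the consistent tangent moduli of equations (18)-(20) are derived above.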
Thus, the solution of the original partial differential equation is replaced by the solution of algebraic equations. Based on the variational principle, the finite element solution program for seepage is compiled. In preprocessing, the mesh is generated with ANSYS and converted into the input file of the seepage finite element program. For postprocessing, the results of the seepage finite element program are converted into a format that the Tecplot software can display.

Elastic-Plastic Stress-Seepage-Damage Coupling Iteration. The coupling mechanism between the groundwater seepage field and the stress field in a rock mass is a relatively complex dynamic process. The interaction between the stress field and the seepage field is linked by changes in the permeability of the rock mass. When the rock mass is disturbed and its permeability changes, the two mechanisms governing the stress field and the seepage field act on each other repeatedly until a dynamically stable state is reached. Much research on numerical methods for rock stress-seepage coupling has produced different coupling schemes, of which there are two main types: step-by-step iteration and one-time (simultaneous) coupling. In this paper, the step-by-step iterative method is used to achieve elastoplastic stress-damage-seepage coupling. Under the initial stress state of the rock mass, the stress, deformation, and damage fields of the elastoplastic damaged rock mass are obtained by the incremental iterative calculation of Section 5.1. The volumetric strain of each element is obtained from the deformation field. Then, under the updated stress state, the permeability coefficient matrix of each element is calculated from the obtained volumetric strain (equations (12) and (13)), and the updated permeability coefficient matrix is passed to the seepage finite element calculation of Section 5.2 to obtain the seepage field. Finally, the pore water pressure at the nodes is calculated from the hydraulic head of Section 5.2 and fed back into the mechanical field calculation of Section 5.1 through the principle of effective stress. This reciprocating process continues until two successive solutions of the stress, damage, and seepage fields satisfy the convergence criterion. The elastoplastic stress-seepage-damage coupling procedure is shown in Figure 3.

Damage Parameters' Inversion Based on the Differential Evolution Algorithm. The damage parameter inverse problem is turned into a constrained optimization problem [24] (equation (22)) with bound constraints, where Y_i^0 is the observed displacement, Y_i is the value calculated by the seepage-stress-damage coupling finite element model, m is the number of observations, x_i is a damage parameter, and x_i^l and x_i^u are the lower and upper bounds on x_i. The differential evolution (DE) algorithm is a global optimization algorithm proposed by Rainer Storn and Kenneth Price [24]. It is a newer algorithm than the genetic, ant colony, and particle swarm algorithms. It offers great advantages in search success rate and computational efficiency, requires no initial guess, uses few control variables, converges quickly, adapts well, suits complex multivariable optimization, and needs no encoding or decoding operations. The DE algorithm comprises initial population generation, mutation and crossover operations, and a selection operation.
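In standard least-squares form (a reconstruction from the symbols just defined), the constrained optimization problem of equation (22) can be written as

$$ \min_{x}\; f(x) = \sum_{i=1}^{m}\left(Y_i^0 - Y_i(x)\right)^2, \qquad \text{s.t.}\quad x_i^l \le x_i \le x_i^u, $$

with Y_i(x) supplied by the coupled finite element model for a candidate damage parameter vector x. This misfit is the fitness function that the DE operations described next are designed to minimize.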
The specific principle is as follows, where G denotes the generation of the evolving population and i denotes the position of an individual in the population. The initial population is produced randomly with a uniform distribution over the solution space: x_(j,i,1) = x_j^L + rand·(x_j^U − x_j^L), where rand ∈ [0, 1] is a random number, x_(i,1) is a first-generation solution vector, j = 1, 2, ..., R, R is the number of dimensions of the solution vector, and x_j^U and x_j^L are the upper and lower bounds of the jth component, respectively.

Mutation Operation. The mutation operation uses a differential strategy: the difference vector between individuals in the population perturbs an individual to achieve variation. The magnitude of the difference vector adjusts automatically to the distribution of individuals within the population, giving good adaptability. For each target vector x_(i,G) in the Gth generation, where each individual contains R components, the mutant vector is v_(i,G+1) = x_(r1,G) + F·(x_(r2,G) − x_(r3,G)), where r1, r2, r3 ∈ {1, 2, ..., NP} are mutually distinct random integers, none equal to i, and F ∈ [0, 1] is the mutation factor, one of the main control parameters of the algorithm, which scales the step magnitude of the difference vector.

Crossover Operation. The crossover operation increases the diversity of the population. The new trial vector u_(i,G+1) is calculated by hybridizing the target vector x_(i,G) and the mutant vector v_(i,G+1) according to the rule u_(j,i,G+1) = v_(j,i,G+1) if (randb(j) ≤ CR or j = rnbr(i)), and u_(j,i,G+1) = x_(j,i,G) otherwise, where randb(j) ∈ [0, 1] is a random decimal drawn for the jth component, j = 1, 2, ..., R; CR ∈ [0, 1] is the crossover factor, the other main control parameter; and rnbr(i) is a random integer in {1, 2, ..., R} drawn for the ith vector.

Selection Operation. The greedy search method decides whether the trial vector u_(i,G+1) becomes the target vector of the (G+1)th generation: the trial vector u_(i,G+1) is compared with the target vector x_(i,G), and u_(i,G+1) is chosen if it yields the smaller objective value; otherwise, x_(i,G) is retained. The new population x_(i,G+1) is generated after this selection (a compact code sketch of these four operations is given at the end of this subsection). In this paper, the DE algorithm uses real coding and takes equation (22) as the fitness function. The back-analysis calculation process is shown in Figure 4. In this study, the self-developed elastoplastic stress-seepage-damage finite element solution program, named RMAST, is embedded in the intelligent displacement back-analysis program; it calculates the displacements Y_i of equation (22) and completes the inversion of the damage parameters.

General State of the Shuibuya Hydropower Station. The Shuibuya Water Conservancy Project is located in the middle reach of the Qingjiang River in Badong County, Hubei Province, and is one of the important power stations in the development of the Qingjiang River Basin, as shown in Figure 5. The engineering geological conditions of the surrounding rock of the underground caverns are very complicated, and the tailwater tunnel section is particularly critical, passing through a variety of alternating soft and hard rock formations. Therefore, it is very important to analyze the stability of the surrounding rock of the tailwater tunnel during construction. The right-bank diversion underground power station of the Shuibuya Water Conservancy Project is located on the right bank of the dam site, oriented NE30°.
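Returning to the algorithm described above, the four DE operations translate directly into code. The following self-contained C++ sketch implements the DE/rand/1/bin strategy with greedy selection; the fitness callable stands in for the displacement misfit of equation (22), and the fixed seed and the requirement NP >= 4 are illustrative implementation choices, not the paper's.

#include <functional>
#include <random>
#include <vector>

using Vec = std::vector<double>;

// DE/rand/1/bin: lo/hi are parameter bounds, NP the population size
// (NP >= 4), F the mutation factor, CR the crossover factor.
Vec differentialEvolution(const std::function<double(const Vec&)>& fitness,
                          const Vec& lo, const Vec& hi,
                          int NP, int generations, double F, double CR) {
    std::mt19937 rng(42);                         // fixed seed for the sketch
    std::uniform_real_distribution<double> u01(0.0, 1.0);
    const int R = static_cast<int>(lo.size());    // dimension of a solution

    // Initial population: uniform random in the box [lo, hi].
    std::vector<Vec> pop(NP, Vec(R));
    std::vector<double> f(NP);
    for (int i = 0; i < NP; ++i) {
        for (int j = 0; j < R; ++j)
            pop[i][j] = lo[j] + u01(rng) * (hi[j] - lo[j]);
        f[i] = fitness(pop[i]);
    }

    std::uniform_int_distribution<int> pick(0, NP - 1), pickDim(0, R - 1);
    for (int g = 0; g < generations; ++g) {
        for (int i = 0; i < NP; ++i) {
            int r1, r2, r3;                       // mutually distinct, != i
            do { r1 = pick(rng); } while (r1 == i);
            do { r2 = pick(rng); } while (r2 == i || r2 == r1);
            do { r3 = pick(rng); } while (r3 == i || r3 == r1 || r3 == r2);

            Vec u = pop[i];
            int jrand = pickDim(rng);             // forces >= 1 mutant gene
            for (int j = 0; j < R; ++j)
                if (u01(rng) <= CR || j == jrand) // binomial crossover
                    u[j] = pop[r1][j] + F * (pop[r2][j] - pop[r3][j]);

            double fu = fitness(u);               // greedy selection
            if (fu < f[i]) { pop[i] = u; f[i] = fu; }
        }
    }
    int best = 0;
    for (int i = 1; i < NP; ++i) if (f[i] < f[best]) best = i;
    return pop[best];
}

In the back analysis of Figure 4 applied to the case study below, fitness would invoke the RMAST coupled solver for each candidate damage parameter vector and return its squared misfit against the monitored displacements.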
Four generating units are installed, each with a capacity of 400 MW, for a total installed capacity of 1600 MW. The power station buildings include diversion canals, water inlets, diversion tunnels, main powerhouses, installation sites, busbars, tailwater tunnels, tailwater platforms, tailraces, 500 kV substations, traffic tunnels, ventilation and pipeline tunnels, and off-site drainage holes.

The Damage Parameters' Back Analysis. The section at the dam toe plate is selected as the computational section and is analyzed with a plane strain model. The calculation domain is centered on the diversion tunnels and is 321.5 m wide and 186.6 m high. The X-axis points toward the mountain side, and the Y-axis points vertically upward. The tunnel diameter is 8.5 m, and the tunnel centers lie on the level line y = 50 m. The tunnels are numbered 1#, 2#, 3#, and 4# from left to right. The bottom edge of the calculation domain is fixed, with normal constraints on both sides. The simplified computational profile is shown in Figure 6. The model is divided into 459 nodes and 817 elements. The initial hydraulic head is set at y = 80 m, and the left and right sides are subjected to a hydraulic head pressure that varies in a gradient along the direction of gravity. The two sides of the model and the perimeter of the tunnels are permeable boundaries; the bottom of the model is an impervious boundary.

Next, the elastoplastic mechanical parameters of the surrounding rock are determined. Because the damage constitutive model has many parameters and the damage zone of the surrounding rock induced by tunnel excavation is limited, this paper applies the DP damage model only to the argillaceous limestone formation and uses the traditional ideal elastoplastic DP model for the limestone formation. Since the damaged surrounding rock lies only in the argillaceous limestone formation, only the damage parameters of that stratum are inverted. According to preliminary surveys and laboratory tests, the limestone has unit weight γ = 25 kN/m³, elastic modulus E = 1.9 GPa, Poisson's ratio μ = 0.23, cohesion c = 0.25 MPa, internal friction angle Φ = 25°, initial porosity e = 0.038, and initial permeability k_x = k_y = 9.71 × 10⁻³ m/d. The argillaceous limestone has unit weight γ = 27 kN/m³, elastic modulus E = 2.7 GPa, Poisson's ratio μ = 0.25, cohesion c = 0.46 MPa, internal friction angle Φ = 22°, initial porosity e = 0.033, and initial permeability k_x = k_y = 6.48 × 10⁻³ m/d. Then, the crown displacements and horizontal convergence displacements of the four tailwater tunnels from field monitoring and from the numerical calculation are substituted into the fitness function (equation (22)).

Two important parameters of the DE algorithm, the mutation factor F and the crossover factor CR, are studied. The DE/rand/1/bin difference strategy is selected; with the mutation factor fixed at F = 0.9, the crossover factor is varied over CR = 0.5-0.8, and with the crossover factor fixed at CR = 0.9, the mutation factor is varied over F = 0.5-0.8. The iterative curves for F = 0.5-0.8 are shown in Figure 7. It can be seen from the figure that, for the fixed difference strategy, the choices of the crossover factor CR and the mutation factor F have some influence on the accuracy and convergence speed of the inversion, but all settings converge quickly to the optimal solution. The ranges of the surrounding rock damage parameters are as follows.
The damaged (residual) cohesion c_r ranges from 0 to 0.46 MPa, i.e., it does not exceed the cohesion c. The parameter ζ ranges from 0 to 1, and the parameter κ, obtained from experiments, generally lies between 1 and 5000. The results of the displacement-based parameter inversion are shown in Table 1. In order to fully demonstrate the computational capability and applicability of the program, the calculation is carried out for two cases: (1) the elastoplastic damage mechanical field is computed separately, without considering the seepage effect; (2) the stress-seepage-damage constitutive model established above is used to carry out the fluid-solid coupling calculation. In addition, to represent the actual backwater conditions, three water levels are considered: 50 m, 80 m, and 100 m.

Analysis of Calculation Results With and Without Seepage. The specific calculations and results are as follows. (1) The elastoplastic damage mechanical field of the surrounding rock and the lining after tunnel excavation is computed without considering seepage. The stress contour plots after excavation, for the unlined and lined tunnels, are shown in Figures 8 and 9, respectively. It can be seen that the x- and y-direction stresses around the tunnel after lining are significantly higher than for the unlined excavation, and stress concentration occurs. Figure 10 shows the plastic zone around the opening after tunnel excavation; the extent of the plastic zone is significantly reduced by the tunnel support. Figure 11 shows the damage contours for the unlined excavation. The damaged area of the rock mass caused by excavation is mainly distributed on the left and right sides of the cavern. The damage at the left and right edges of the cavern is particularly severe, with the damage variable D reaching 0.9, which significantly reduces the bearing capacity of the rock in those areas. This study selects four monitoring points above the four tunnel vaults to compare the settlement of the unlined and lined excavations. The settlement curves of the monitoring points are shown in Figure 12. The surface settlement after lining is smaller than that after the unlined excavation, and the settlement differs among the tunnels: the greater the thickness of the soil above a tunnel, the larger its settlement. (2) When calculating the stress-damage-seepage behavior of the lined tunnel, the key factor governing the coupling between the seepage field and the stress field is the permeability coefficient of the surrounding rock. When the surrounding rock is of poor quality, with numerous joints and fissures or large pores in the rock and soil, its permeability coefficient tends to be relatively large and the coupling between the seepage field and the stress field is stronger. In particular, groundwater contributes substantially to the deformation of the strata overlying the tunnel. Therefore, if the coupling between the seepage field and the stress field is not considered, the calculation results contain a large error. Figure 13 compares the surface settlement computed with and without stress-seepage-damage coupling.
It can be seen that the surface settlement computed with the stress-seepage-damage coupling effect is larger than that computed without it, indicating that seepage cannot be ignored when assessing the deformation and failure of the rock mass.

Analysis of Calculation Results for Different Hydraulic Heads. Tunnel excavation destroys the aquifer structure of the surrounding rock and exposes part of the groundwater channels, causing sharp changes in the hydrodynamic conditions and in the mechanical equilibrium of the surrounding rock. Groundwater, together with hydraulically connected water bodies and their stored energy, turns from a relatively static state to a flowing state under the altered hydraulic conditions. Groundwater flows into the tunnel through the seepage channels. The groundwater level decreases, and the pore water pressure decreases correspondingly, forming a drawdown zone adjacent to the tunnel, with a slowly varying zone farther from the tunnel, as shown in Figure 14(a). After tunnel lining construction, because the lining concrete is highly impermeable, its permeability coefficient is much smaller than that of the stratum; this blocks the drainage of groundwater into the tunnel, and the water inflow decreases, as shown in Figure 14(b). Since the permeability coefficient of the lining is much smaller than that of the surrounding rock, the water-blocking effect is pronounced, and the water inflow after lining construction is much smaller. After tunnel excavation, the pore water pressure of the surrounding rock is continuously dissipated as groundwater seeps into the opening, changing the seepage field; eventually, a seepage field distribution resembling a seepage funnel centered on the excavation area is formed. Figure 15 shows the pore water pressure distribution around the opening at different hydraulic head heights. The influence of the lining on the pore water pressure of the surrounding rock is not obvious. The plastic zone and the damage zone under different hydraulic heads are shown in Figures 16 and 17, respectively. As the hydraulic head Hs increases, the plastic zone and the damage zone of the tunnel gradually enlarge. In the rock mass stress-seepage-damage coupling model, the cause of the damage evolution is the dynamic change of the seepage water pressure and of the rock mass stress. In this study, the rock mass stress is considered coupled to the dynamic hydraulic gradient, which disturbs the stress distribution of the rock mass and drives its damage evolution. Based on the elastoplastic stress-seepage-damage constitutive model established above and considering the influence of seepage, the stress field and seepage field of the tunnel can be calculated. The tunnels are numbered 1#, 2#, 3#, and 4# from left to right. To examine the change of pore water pressure caused by different soil thicknesses, the pore water pressure at the monitoring points above the four tunnels is extracted, as shown in Figure 18. Owing to the location of each tunnel and the thickness of the overlying soil, the pore water pressures of the tunnels clearly differ. Moreover, after the lining is applied, a large pore water pressure remains around the tunnel, so the greater the thickness of the overlying soil, the larger the pore water pressure.
Figure 19 shows the distribution curves of the surrounding rock permeability coefficient along the horizontal and vertical directions after tunnel excavation. Figure 19(a) is a schematic diagram of the tunnel monitoring points. Based on the measuring points in Figure 19(a), the relationship between the permeability coefficient of the surrounding rock and the distance from the tunnel can be analyzed, as shown in Figures 19(b) and 19(c). The stress redistribution caused by tunnel excavation has a non-negligible influence on the permeability coefficient of the surrounding rock. From the permeability coefficients along lines AB and IJ in Figure 19(b), it can be seen that the farther from the tunnel, the smaller the horizontal permeability coefficient. Along survey lines CD, EF, and GH, the permeability coefficient of the surrounding rock first increases and then decreases. Moreover, in the vertical direction, the permeability coefficient gradually decreases with increasing distance from the tunnel circumference (Figure 19(c)). Therefore, the closer to the tunnel, the larger the permeability coefficient; and as the burial depth increases, the permeability coefficient around the tunnel also increases. The permeability coefficient of the surrounding rock is clearly affected by rock mass damage, which shows that the mechanical-hydraulic-damage (MHD) coupling model used here reflects not only the evolution of damage but also the damage-permeability relationship. It is therefore more reasonable than a mechanical-hydraulic (MH) model with a constant permeability coefficient. In summary, excavation causes serious damage to the left and right edge areas of the cavern: the bearing capacity of the rock mass decreases, the permeability coefficient increases rapidly, and the seepage flow increases. Under high water pressure, these areas are therefore the most prone to water inrush, and corresponding reinforcement measures should be adopted. In an actual project, the variation of the rock mass stress field and the distribution of the damage zone can be used to delineate the zone of permeability change, providing a basis for the seepage-control design. Conversely, when the permeability of the rock mass changes significantly, the structure must have been damaged, and the damage extent and failure mode of the rock mass can be inferred.

Conclusions. Through the research in this paper, the following conclusions are obtained: (1) This paper establishes an elastoplastic damage constitutive model based on the Drucker-Prager yield criterion and uses the equivalent plastic strain to characterize the evolution of the rock damage variable. With the dynamic evolution formula of the permeability coefficient for the rock in the elastoplastic state, the elastoplastic stress-seepage-damage model of rock is established. The numerical solution of the coupled model is given, and the corresponding program is written in C++. The established elastoplastic stress-seepage-damage coupling model is applied to the tunnel simulation. The results show that the presence of a seepage field exacerbates the damage and failure of the surrounding rock, while the damage evolution driven by the stress field in turn affects the change of the permeability coefficient.
There is a clear gap between the results of the elastoplastic damage model without seepage and of the coupled model with seepage. Therefore, the deformation, damage, and failure characteristics induced by seepage must be considered in tunnel stability analysis. (2) The engineering application shows that the coupled model can realistically reproduce the complex macroscopic failure of rock materials through the interaction of stress, seepage, and damage. The compiled program can simulate the coupling of the groundwater seepage, stress, and damage fields and provides an analysis method for engineering construction with a serious impact on groundwater. The algorithm of this paper has advantages over an elastoplastic damage analysis of the stress field alone or an ideal elastoplastic fluid-solid coupling algorithm. (3) Based on the principle of the differential evolution algorithm, an intelligent back-analysis method is established for the coupled model, whose parameters are numerous and difficult to measure. The return mapping algorithm is accurate and stable, and the Newton-Raphson method used in the iteration achieves an approximately quadratic convergence rate. Moreover, the differential evolution algorithm requires no initial guess, has a high search success rate, converges quickly, and needs no encoding or decoding operations, which makes it convenient for practical application. Combining the advantages of these two algorithms, the damage parameters in the coupled model are inverted. The inversion results show that the program has good accuracy and stability. (4) The algorithm in this paper is based on a continuum mechanical model and a linear strength criterion, the DP criterion. It is not suitable for fractured, jointed rock masses, in particular those that obey nonlinear strength criteria. For jointed rock mass engineering, it is necessary to explore a Hoek-Brown damage constitutive model or a discrete element numerical algorithm based on a nonlinear strength criterion. This is a direction for further research in the future.

Data Availability. The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest. The authors declare that there are no conflicts of interest regarding the publication of this paper.
2021-05-08T00:03:01.740Z
2021-02-24T00:00:00.000
{ "year": 2021, "sha1": "133cdb111db75c3fe90081d06111b43d2f0dabc3", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/ace/2021/8341528.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "3f9d517fd7b8649226dcfb873d86949dc80a03e9", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "extfieldsofstudy": [ "Geology" ] }
212707330
pes2o/s2orc
v3-fos-license
Stricter Adherence to Dietary Approaches to Stop Hypertension (DASH) and Its Association with Lower Blood Pressure, Visceral Fat, and Waist Circumference in University Students

How diet affects blood pressure (BP) in young adults has not been studied in sufficient depth. For this purpose, we analyzed adherence to the Dietary Approaches to Stop Hypertension (DASH) dietary pattern and BP in Spanish university students. The sample population of our cross-sectional study consisted of 244 subjects (18–31 years old), who were in good health. Measurements were taken of their systolic and diastolic BP. A food frequency questionnaire and 72 h food record were used to assess their dietary intake in the previous year. The resulting DASH score was based on foodstuffs that were emphasized or minimized in the DASH diet. Analysis of covariance adjusted for potential confounding factors showed that the mean values for systolic BP, visceral fat rating, and waist circumference (WC) of the subjects in the upper third of the DASH score were significantly lower than those of the subjects in the lower third (for systolic BP: mean difference −4.36 mmHg, p = 0.004; for visceral fat rating: mean difference −0.4, p = 0.024; for waist circumference: mean difference −3.2, p = 0.019). Stricter adherence to the DASH dietary pattern led to lower BP, visceral fat rating, and WC values in these university students. Nevertheless, further prospective studies are needed to confirm these results.

Introduction. Cardiovascular diseases (CVDs) are the leading cause of early death in countries throughout the world [1]. With a global prevalence of approximately 26.4%, high blood pressure (BP) in young adults is regarded as a major public health problem [2]. Spain is hardly an exception, as national studies have reported a prehypertension prevalence of 24% among university students [3]. Although elevated BP at a young age does not usually cause CVD, the development of hypertension is associated with the early onset of left ventricular hypertrophy, carotid wall thickening, and retinopathy [2]. In addition, young adults with abnormal BP are more predisposed to be hypertensive during midlife (40-65 years old) [4], with hypertension being the leading cause of premature death [5]. However, epidemiological studies indicate that a healthy lifestyle based on healthy eating habits is associated with lower BP and lower abdominal and visceral fat deposits [6], and thus lower cardiovascular risk [7]. Studies in university populations reflect the pervasiveness of unhealthy behaviors and lifestyles [8,9], especially unhealthy eating patterns [10]. The university stage is therefore critical for the establishment of nutritional behaviors, which may eventually become entrenched habits in the same way that high BP may become a life-long condition [11]. Accordingly, the promotion of healthy habits among university students may lead to important long-term health benefits [11]. Currently, the effect of diet on the BP of young adults is not well understood [12]. Studies on the connection between diet and BP often focus on analyses of individualized nutrient intake, which does not clarify the biological mechanisms involved in this relationship [13]. Of the hypothesis-oriented methods for the identification of dietary patterns, those related to Dietary Approaches to Stop Hypertension (DASH) have been the focus of considerable research [14]. Accordingly, an umbrella review of meta-analyses by Dinu et al. [15] concluded that the DASH diet improves BP.
Along the same lines, a cross-sectional study of adults by Phillips et al. [16] found that strict adherence to DASH was linked to lower systolic BP. Moreover, the results of a randomized controlled trial of prehypertensive patients showed a significant reduction of systolic BP in patients following the DASH diet [17]. Nevertheless, up until now, most studies on the association between the DASH diet and BP have targeted adolescent or adult populations. To the best of our knowledge, there have been no studies of this association in young university students. For this reason, our study focused on adherence to the DASH dietary pattern and BP, visceral fat rating, and waist circumference (WC) in a sample of healthy Spanish university students.

Study Design and Subjects. This cross-sectional study was performed during the 2013-2014 academic year, and 244 of a total of 1188 university students participated in the study. The participants had a mean age of 22.4 ± 4.76 years. Their selection was the result of a random sampling of students at the university campus of Melilla, a Spanish city situated on the northwest African coastline, opposite the provinces of Granada and Almeria in Spain. Melilla is a modern western city, characterized by great cultural richness stemming from the centuries-long coexistence of different ethnic groups and cultures.

Data Collection. The Melilla campus is composed of three university centers: (i) the Faculty of Education and Sport Sciences, (ii) the Faculty of Social Sciences and Law, and (iii) the Faculty of Health Sciences. To participate in the study, students had to be enrolled in a degree program offered by one of these three faculties. They were also required to give their informed consent. Students with a prior medical history of endocrine or metabolic diseases, as well as those who did not wish to sign the consent form, were excluded from the study. Figure 1 summarizes the recruitment process. Information meetings were scheduled and held for all students (n = 1188) during September 2013 at the university campus of Melilla. Of the 1188 students, only 888 attended all meetings. At the meetings, participants learned about the different evaluations and questionnaires that they would have to complete to participate in the study. An informed consent form with a description of the study was given to all students attending the meetings. After applying the previously mentioned inclusion criteria, 300 students were selected for the study. However, 56 were subsequently excluded for one of the following reasons: (i) previous diagnosis of an endocrine pathology (n = 13); (ii) incomplete anthropometric, dietary, or demographic data (n = 30); (iii) age ≥32 years (n = 13).
Accordingly, 244 students who complied with all of the inclusion criteria were selected as participants. In October 2013, each participant was given an anthropometric evaluation. Their body composition was also analyzed, and their dietary habits were assessed. The study received the approval of the Ministry of Education and Youth of the Government of Melilla. Furthermore, the Ethics Committee of the University of Granada (Code 841) also approved the study as well as the informed consent form. All of the participants signed the informed consent document, and the confidentiality of their personal information was guaranteed by coding the data. This research was carried out in strict compliance with the international code of medical ethics established by the World Medical Association and the Declaration of Helsinki.

Blood Pressure. The BP of the participants was measured with a previously calibrated aneroid sphygmomanometer and a Littmann® stethoscope. In this regard, the study followed the recommendations for BP measurement of the Subcommittee of Professional and Public Education of the American Heart Association Council on High Blood Pressure Research [18]. Each participant was requested not to eat, drink alcohol or caffeine, smoke, exercise, or bathe for at least 30 min before his/her BP measurement. During the BP measurement, the subject was seated on a chair for 5 min with his/her back supported, feet flat on the floor, and wrist relaxed at heart level. The results were interpreted according to the Korotkoff sounds: phase I, systolic BP; phase V, diastolic BP [18]. The coefficient of variation (CV%) of systolic BP was 10.03%, while diastolic BP had a CV% of 13.89%.

Dietary Intake. Each participant filled out a comprehensive food frequency questionnaire (FFQ) consisting of 168 food items in order to assess his/her typical dietary intake during the previous year [19]. The objective was for them to record the frequency of consumption of each food item per day, week, and month. Furthermore, a 72 h food record (i.e., Thursday, Friday, and Saturday) was completed in order to capture weekly variations between weekdays and the weekend. As confirmed in the literature, a 72 h food record can be used to assess nutrient intake because the record collects data for the typical or average diet [20]. Trained investigators filled in the 72 h food record in a face-to-face interview, during which individuals were asked to recall the food ingested in the preceding 72 h, including nutritional supplements and beverages. During the interviews, standard household measures and pictorial food models were employed to define amounts. Nutritional information was analyzed with Diet Source® version 3.0, a nutritional computer application. Using the method in Fung et al. [21], the resulting DASH score was based on food items emphasized or minimized in the DASH diet. The focus was on the following eight components: high intake of fruits, vegetables, nuts and legumes, whole grains, and low-fat dairy products, and low intake of sodium, sweets, and red or processed meats [22]. The participants were classified based on the energy-adjusted quintile categories of their ingestion of these eight components.
For sodium, sweets, and red or processed meats, scores of 5, 4, 3, 2, and 1 were assigned to those in the first (lowest), second, third, fourth, and fifth (highest) quintiles, respectively. In contrast, for fruits, vegetables, nuts and legumes, low-fat dairy products, and whole grains, the opposite scoring system was applied. The eight component scores were then added up to yield the total DASH score for each subject, which ranged from 8 to 40. A higher overall DASH score corresponds to greater adherence to the DASH dietary pattern [21,23] (a schematic sketch of this scoring procedure is given at the end of this section).

Anthropometric Measurements and Physical Activity. Anthropometric parameters, including height, weight, body mass index (BMI), hip circumference, waist-to-hip ratio (WHR), and waist circumference (WC), were assessed according to the guidelines of the International Society for the Advancement of Kinanthropometry [24]. The weight of the subjects was measured with a self-calibrating Seca® 861 class (III) digital floor scale, with a precision of up to 100 g. Their height was measured with a Seca® 214 portable stadiometer. Participants were asked to stand in an upright position with their back and heels against the stadiometer and their head oriented on the Frankfurt plane, and the horizontal headpiece was placed on the top of their head. The BMI was calculated by dividing weight by the square of height (kg/m²). The WC was measured at the horizontal plane midway between the lowest rib and the upper border of the iliac crest at the end of normal inspiration/expiration. Hip circumference was measured at the maximum width of the buttocks as viewed from the right side. A Seca® automatic roll-up measuring tape with an accuracy of 1 mm was used for the WC and hip circumference measurements while the subjects remained in a standing position with their arms hanging at their sides at rest. WC had a CV% of 13.06%. The WHR was calculated as WC divided by hip circumference. A body composition analyzer (TANITA Model BC-418 MA®, Tokyo, Japan) was used to estimate fat mass and visceral fat rating by measuring the bioimpedance of all participants. The CV% values of % fat mass and visceral fat rating were 37.87% and 86.25%, respectively. Visceral fat rating ranged from a minimum of 1 to a maximum of 59; a score of 1-12 indicates a healthy level of visceral fat, whereas a score of 13-59 indicates an excessive level [25]. The measurements were performed by the same trained researcher. Physical activity was assessed by means of the Physical Activity Questionnaire for Older Children (PAQ-C), a seven-day recall questionnaire with high validity and moderate reliability [26]. The questionnaire consists of nine items, each scored on a 5-point scale. A value from 1 to 5 was obtained for each of the nine items used in the physical activity composite score, and the mean of these nine items was the final PAQ activity score. Scores of 1, 2-4, and 5 indicated low, moderate, and high physical activity, respectively.

Other Variables. The other variables studied included the presence or absence of parental obesity, which was determined by asking all participants to submit a medical certificate with this information. A variable related to religion was also included in our study, and each student self-identified the religion that he/she practiced (Islam or Christianity). This variable was measured by the Religious Attitude Questionnaire (Cuestionario de Actitudes Religiosas), developed and validated by Elzo [27].
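As anticipated above, the quintile-based scoring can be sketched in a few lines of C++. This is a minimal rendering assuming that the energy-adjusted component intakes have already been computed for a non-empty sample; it is not the authors' actual analysis code.

#include <algorithm>
#include <vector>

// Quintile (1..5) of `value` within the sample distribution `sample`.
int quintile(double value, std::vector<double> sample) {
    std::sort(sample.begin(), sample.end());
    int rank = 0;
    for (double s : sample) if (s <= value) ++rank;  // position in sample
    int q = 1 + (rank - 1) * 5 / static_cast<int>(sample.size());
    return std::max(1, std::min(5, q));
}

// healthy: quintiles for fruits, vegetables, nuts/legumes, whole grains,
// and low-fat dairy (higher quintile = higher score). limited: quintiles
// for sodium, sweets, and red/processed meats (scoring reversed).
int dashScore(const std::vector<int>& healthy, const std::vector<int>& limited) {
    int score = 0;
    for (int q : healthy) score += q;        // scores 1..5 as assigned
    for (int q : limited) score += 6 - q;    // scores 5..1, reversed
    return score;                            // eight components: 8..40
}

With five emphasized and three minimized components, the total runs from 8 (lowest adherence in every component) to 40 (highest adherence), matching the range stated above.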
Statistical Analysis. The participants were classified in three groups based on DASH score tertiles (n = 73, n = 102, and n = 69 in tertiles 1, 2, and 3, respectively). Continuous and categorical variables were compared across the tertiles by means of a one-way analysis of variance and the chi-squared test, respectively. In the two models, the multivariable-adjusted means of BP in the DASH score tertiles were compared by performing an analysis of covariance (ANCOVA). In model 1, we adjusted for the effects of sex and age as potential confounders, and in model 2, we controlled for the effects of socioeconomic status (SES), parental obesity, PAQ-C summary score, WC, BMI Z-score, and energy intake. Pairwise differences in the mean BPs between the highest (T3) and lowest (T1) tertiles of the DASH score were examined by the Bonferroni post hoc test to adjust for multiple comparisons. All of the analyses were performed with version 24 of the SPSS software package (IBM, Armonk, NY, USA). A two-sided p-value <0.05 was considered statistically significant.

Results. The characteristics of the participants in the DASH score tertiles are listed in Table 1. Significant differences in religion, visceral fat rating, WC, and systolic BP were found across the tertiles (all p < 0.05). The mean systolic BP and the mean diastolic BP were 115.7 and 67.7 mmHg, respectively, which yielded an overall mean BP of 91.7 mmHg. The coefficients of variation of % fat mass, WC, systolic BP, and diastolic BP were low, indicating that the arithmetic means are representative of the data set and that the data are homogeneous, while visceral fat rating presented a high dispersion. Table 2 shows the dietary intake of the participants by tertile of DASH score. Except for omega-3 and omega-6 fatty acids, significant differences were observed across the DASH score tertiles in all dietary variables measured (all p < 0.05). Participants with lower adherence (T1) to the DASH dietary pattern showed a higher intake of total fat, saturated fatty acids (SFAs), cholesterol, and sodium. In contrast, subjects with a stricter degree of adherence (T3) showed a dietary pattern characterized by a higher intake of potassium, magnesium, and calcium, accompanied by a higher ingestion of fiber, fruits, vegetables, legumes, nuts, low-fat dairy products, and whole grains. Table 3 lists the multivariable-adjusted means of BP, visceral fat rating, and WC by DASH score tertile. In model 1, the multivariate analysis, adjusted for parental obesity, physical activity, and energy intake, showed that the mean systolic BP was lower as adherence to the DASH dietary pattern increased. This trend was also observed for visceral fat rating and WC. In model 2, adjusted for the confounding factors in model 1 as well as for sex and religion, the data reflected a very similar trend, though slightly better estimates and significance levels were found for systolic BP (p = 0.005) and WC (p = 0.003) with increasing adherence to the DASH dietary pattern among the university students. Data are presented as the mean (95% confidence interval). Multivariable-adjusted means of BP, visceral fat rating, and WC were compared between DASH score tertiles using analysis of covariance (ANCOVA) models. Model 1 was adjusted for parental obesity, the physical activity questionnaire summary score, and energy intake. Model 2 was adjusted for the confounders in model 1 as well as for sex and religion.
Pairwise differences in the means of BP, visceral fat rating, and WC between the upper (T3) and lower (T1) thirds of the DASH score were analyzed with the Bonferroni post hoc test. DASH, Dietary Approaches to Stop Hypertension; BP, blood pressure; WC, waist circumference.

Discussion. To the best of our knowledge, this is the first study that evaluates the direct relationship between the DASH diet and variables such as BP, visceral fat rating, and WC in a sample population of Spanish university students. As reflected in our results, stricter adherence to the DASH dietary pattern was associated with lower BP, visceral fat rating, and WC values. These results agree with Chiavaroli et al. [28], whose systematic review and meta-analysis found that, in controlled trials, the DASH diet was directly linked to a reduction in systolic BP (mean difference −5.2 mmHg (95% confidence interval (CI) −7.0 to −3.4)) and diastolic BP (−2.60 mmHg (−3.50 to −1.70)) as well as to an improved lipid profile, total cholesterol (−0.20 mmol/L (−0.31 to −0.10)), and low-density lipoprotein (LDL) cholesterol (−0.10 mmol/L (−0.20 to −0.01)). Our findings are also consistent with those of previous studies of young people, in which stricter adherence to the DASH diet was associated with a lower incidence of metabolic syndrome (MetS) and its components, such as elevated systolic BP and a predominantly abdominal and visceral fat distribution [29]. This association between the DASH diet and BP levels can be explained by the low sodium content of the DASH diet, which decreases BP in this group [30]. Correspondingly, the meta-analysis performed by He et al. [31] demonstrated that dietary sodium intake is a modifiable risk factor for hypertension at any stage of life. Furthermore, lower sodium intake may lead to improved BP levels through different mechanisms [32]. In another study, of type 2 diabetic adults who followed the DASH diet for eight weeks, greater adherence to this pattern led to a significant reduction in body weight, WC, systolic BP, and diastolic BP [33]. These results suggest that the DASH diet may be a powerful tool to prevent the development of cardiovascular risk factors, such as high BP and truncal obesity. Another interesting finding of this study was that the students who strictly followed the DASH dietary pattern ingested more potassium, magnesium, fiber, and calcium, accompanied by a higher consumption of fruits, vegetables, legumes, nuts, low-fat dairy products, and whole grains. Quite possibly, this dietary pattern, characterized by an abundant intake of fruits and vegetables, especially leafy vegetables, helps to reduce BP levels through the nonenzymatic generation of nitric oxide from inorganic nitrate [34]. Likewise, according to Streppel et al. [35], the high content of potassium and magnesium, one of the characteristics of the DASH diet, may explain the diet's beneficial effects on metabolism in general and on the lipid profile. In addition, according to Penton et al. [36], a high intake of potassium and magnesium may have antihypertensive effects derived from the ability of both minerals to induce vasodilation, reduce the release of renin at the kidney level, and establish a negative balance with sodium. On the other hand, according to Bucher et al. [37], a high intake of calcium in subjects with high adherence to the DASH diet may act as a modulating factor of BP levels, specifically decreasing systolic BP, although this remains controversial.
In their review of randomized clinical trials, Cormick et al. [38] indicated that abundant calcium intake slightly reduced systolic and diastolic BP, especially in normotensive young adults. On the other hand, a higher intake of fruits, vegetables, and legumes, together with less SFA and more monounsaturated fatty acid consumption, may explain the beneficial effects of the DASH diet on the parameters of abdominal fat and visceral fat accumulation. In addition, according to Hall [39], a high intake of mainly SFAs and nonesterified fatty acids may activate proinflammatory pathways, increasing oxidative stress and thus favoring endothelial dysfunction in such subjects. Based on these results, the DASH dietary pattern most likely exerts its beneficial effects through a combination of all these dietary factors. The fruits, vegetables, and other food items emphasized by the DASH diet contain numerous flavonoids and antioxidants, which can help to significantly reduce biomarkers of oxidative stress and inflammation [40,41], improve endothelial function, and thereby decrease BP levels [42]. However, the mechanisms through which the DASH diet acts on metabolic health are not fully understood and should be investigated in greater depth. One limitation of this study is that its cross-sectional design does not allow an inference of causality between compliance with the DASH dietary pattern and BP. The small sample size may also act as a limiting factor in the detection of a possible association between adherence to the DASH dietary pattern and diastolic BP. However, this study has remarkable strengths, such as the use of standardized and validated instruments and methodological procedures and an appropriate sampling method.

Conclusions. Stricter adherence to the DASH dietary pattern was found to be associated with lower BP, visceral fat rating, and WC values in a sample population of young university students. These results suggest that the DASH dietary pattern may be a useful tool in daily clinical practice to prevent and identify cardiovascular risk factors, such as high BP and predominantly truncal obesity. As the mechanisms through which the DASH diet acts on metabolic health are not yet fully understood, prospective studies should be carried out to further confirm these findings.
Flying Together: Drosophila as a Tool to Understand the Genetics of Human Alcoholism
Alcohol use disorder (AUD) exacts an immense toll on individuals, families, and society. Genetic factors determine up to 60% of an individual's risk of developing problematic alcohol habits. Effective AUD prevention and treatment requires knowledge of the genes that predispose people to alcoholism, play a role in alcohol responses, and/or contribute to the development of addiction. As a highly tractable and translatable genetic and behavioral model organism, Drosophila melanogaster has proven valuable to uncover important genes and mechanistic pathways that have obvious orthologs in humans and that help explain the complexities of addiction. Vinegar flies exhibit remarkably strong face and mechanistic validity as a model for AUDs, permitting many advancements in the quest to understand human genetic involvement in this disease. These advancements occur via approaches that essentially fall into one of two categories: (1) discovering candidate genes via human genome-wide association studies (GWAS), transcriptomics on post-mortem tissue from AUD patients, or relevant physiological connections, then using reverse genetics in flies to validate candidate genes' roles and investigate their molecular function in the context of alcohol. (2) Utilizing flies to discover candidate genes through unbiased screens, GWAS, quantitative trait locus analyses, transcriptomics, or single-gene studies, then validating their translational role in human genetic surveys. In this review, we highlight the utility of Drosophila as a model for alcoholism by surveying recent advances in our understanding of human AUDs that resulted from these various approaches. We summarize the genes that are conserved in alcohol-related function between humans and flies. We also provide insight into some advantages and limitations of these approaches. Overall, this review demonstrates how Drosophila have been and can be used to answer important genetic questions about alcohol addiction.
Introduction
Alcohol use disorder (AUD) frequently causes harmful domestic and societal consequences. Alcohol is the most commonly abused drug, and alcohol misuse and abuse are leading causes of preventable death [1,2], underlying ~5.9% of global deaths in 2012 [3]. Additionally, alcohol abuse cost the U.S. ~$249 billion in 2010 [4]. In the U.S. alone, ~18 million people (~7%) have some form of AUD [1], which is defined by the Diagnostic and Statistical Manual of Mental Disorders (DSM-V) as problematic alcohol consumption and use involving craving, dependence, tolerance, withdrawal, relapse, poor decision making, and/or continued consumption despite negative consequences [5].
Gene product function | Human ortholog(s): alcohol phenotype(s) | Fly ortholog: alcohol phenotype(s)
NMDA-type glutamate receptor subunit | — | Nmdar1: Sed Rec [59]
Post-synaptic adaptor/regulator of glutamatergic synapses | HOMER1: AC; AC [60,61]. HOMER2: AC, alcohol-related problems; reward-related learning and memory [60,62] | homer: exposure-induced expression, SS, Rapid Tol (SS) [63]
Insulin-like growth factor receptor | IGF1R: LR [64] | InR: MET [65]; SS [66]
BK-type Ca2+-activated K+ channel | KCNMA1: AD [34,68] | slo: Rapid Tol (Sed Rec) [69]; Rapid Tol (SS), exposure-induced expression [70]; AW seizure susceptibility; AW seizure susceptibility [71,72]
Voltage-gated K+ channel | KCNQ5: AD [68] | KCNQ: SS, Rapid Tol [73]; correlation b/n expression and EtOH preference or intake [52]
Neuropeptide Y receptor | NPY2R: AD, AW, comorbid alcohol and cocaine dependence [94] | NPFR: SS [93]; alcohol preference [95]; correlation b/n expression and EtOH preference or intake [52]
Transcriptional repressor involved in circadian rhythm | PER2: AC with sleep problems [96]. PER3: AA/AD [97] | per: Rapid Tol (Time to Sed) [37]; circadian modulation of SS [98]
Guanine exchange factor (GEF) | PSD3: AD, AC, adolescent binge drinking [99] | Efa6: alcohol preference, SS, Rapid Tol (SS) [99,100]
Vesicular monoamine transporter | —: AD; AD [105,106] | Vmat: correlation b/n expression and EtOH preference or intake [52]
Norepinephrine transporter | SLC6A2: AD [107] | DAT: Act [108]
Nuclear zinc-finger protein | ZNF699: AD, post-mortem expression [109] | hang: Rapid Tol (MET) [110]; Rapid Tol (eRING) [42]
Columns show a brief description of the function of the gene product, the human or fly orthologs, and human or fly alcohol phenotypes associated with the gene variation, expression, or manipulation, with results from different studies separated by semi-colons and in respective order (Human: AA-alcohol abuse; AC-alcohol consumption (by volume or frequency); AD-alcohol dependence; AW-alcohol withdrawal; AUD-Alcohol use disorder diagnosis based on DSM IV criteria; LR-level of response to alcohol; Max drinks-most drinks consumed within a specified time period; post-mortem expression-transcript levels quantified from post-mortem tissue of alcoholics versus non-alcoholics; SRE-Self-Rating of the Effects of alcohol. Fly: Act-locomotor activity in the presence of alcohol; Alcohol preference-alcohol drinking/eating preference; eRING-ethanol Rapid Iterative Negative Geotaxis assay, measuring EtOH-induced reduction of negative geotaxis; exposure-induced expression-transcript levels quantified after exposure to EtOH versus mock exposure; MET-mean elution time from inebriometer; Olfactory preference-fraction of flies captured in a trap with alcohol odor vapor; Rapid Tol-rapid tolerance to the behavioral measure indicated in parentheses; Sed Rec-time required for flies to recover from sedation; SS-sensitivity to alcohol-induced sedation); and relevant citations.
The well-established Drosophila research community has generated a myriad of easily obtainable genetic resources, comprising the largest collection of readily available transgenes and other genetic tools. In addition to RNAi lines for gene knockdown, easily obtainable mutant strains exist for the majority of fly genes, whether created by CRISPR/Cas9, homologous recombination, or more classic methods [111]. These tools enable efficient hypothesis testing and complex, precise genetic manipulations that are important for validation and elucidation of genes implicated in unbiased studies. For example, Morozova et al. selected 37 candidate gene mutations from a transcriptional comparison of ethanol-sensitive versus -resistant fly strains and showed altered sensitivity to sedation in 32 of them [80].
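To put a validation rate like 32 of 37 in rough perspective, a minimal binomial sketch in Python may help; note that the 10% null rate below is an assumed placeholder for how often a randomly chosen mutation might alter sedation sensitivity, not a value from the study.

```python
from math import comb

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Morozova et al. validated 32 of 37 candidate genes (altered EtOH sedation).
n_candidates, n_validated = 37, 32

# Hypothetical null rate: assume a randomly chosen fly mutation had a 10%
# chance of altering sedation sensitivity. This baseline is an assumption
# for illustration, not a number taken from the study.
null_rate = 0.10
p_value = binom_sf(n_validated, n_candidates, null_rate)
print(f"P(>= {n_validated}/{n_candidates} hits by chance) = {p_value:.2e}")
```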
Among the most important genetic tools is the Gal4/UAS system, which permits complex, cell-specific genetic manipulations such as genetic labeling, overexpression, RNAi-mediated transcript knockdown, gene rescue, and diverse CRISPR/Cas9-mediated gene editing [112] (Figure 1). Each of these outcomes can be limited to specific developmental time points, cell populations, or both. Moreover, temperature-sensitive or light-regulated effector genes can silence or activate neurons expressing a gene of interest in a temporally restricted manner. Massive collaborative projects have resulted in Gal4/UAS tools becoming available for most known fly genes. RNAi transgenes are available for almost any gene of interest, and thousands of Gal4 drivers are available that drive expression in distinct subsets of neurons [113]. Understanding the anatomical specificity of addiction genes is important, given that the same genetic manipulation can cause differing results depending on the precise locus of action. For example, global expression of a protein kinase A (PKA) inhibitor causes sensitivity to alcohol sedation [114], while anatomically limited inhibition causes resistance or sensitivity, depending on the neuroanatomical locus [65,114]. Many studies demonstrate the utility of these Drosophila genetic tools to establish causal roles of various genes in alcohol phenotypes, including many linked to specific cell populations [31,66,101,[115][116][117][118][119][120][121]. The split-Gal4 system permits even further refinement by limiting manipulations to subsets defined by two criteria (e.g., GABAergic neurons in the ellipsoid body) [122], thus allowing investigation into the contribution of neuronal subpopulations or even individual neurons to the development of alcohol abuse disorders (Figure 1). Using such tools, specific neuronal populations are easily targeted in flies via straightforward crosses, rather than relatively difficult virally mediated targeting in mammals. These neuronal subsets may include neurotransmitter systems, which are highly conserved, and/or specific brain regions, which, while not structurally homologous between humans and flies, are often analogous in function. The Gal4/UAS system also allows expression of fluorescent proteins or tagged proteins limited to specific cell types of interest. This advantage permits cell-type-specific visualization, sorting, and transcript analyses using assays such as isolation of nuclei tagged in specific cell types (INTACT) [123,124], translating ribosome affinity purification (TRAP) [125], and chromatin affinity purification (CAST-ChIP) [126] (see also Reference [127] for review). Genome-wide transcription analyses (transcriptomics), which are already readily performed in flies, can become even more refined using these tools. As additional omics methods, assay for transposase-accessible chromatin-sequencing (ATAC-seq) and chromatin immunoprecipitation sequencing (ChIP-seq) can be performed in flies. These assays represent effective ways to investigate the genome-wide effects of alcohol exposure on chromatin remodeling and DNA binding of proteins such as transcription factors and epigenetic enzymes, respectively. Performing these tests with human tissue is rare and impossible to perform after controlled alcohol exposure or to restrict to specific cell types, though isolating specific brain regions is feasible.
In flies, but not mammals, genes identified from ATAC-seq or other omics methods can be easily integrated into transgenes and functionally tested [128]. Similarly, important human SNPs or human orthologs of genes of interest can be engineered in flies to explore their biochemical or behavioral roles [129][130][131]. Finally, despite their relatively simple nervous systems, flies retain a fairly complex behavioral repertoire that mirrors many behavioral paradigms found in vertebrate models, again demonstrating their usefulness in AUD research [111].
Figure 1. The Gal4-UAS system allows precise control of transgene expression. In this binary system, the yeast transcription factor Gal4 is placed under the control of a specific gene promoter, which limits Gal4 expression to select cell types expressing the driver gene. This transgenic construct is combined with a second transgene that places a desired effector gene downstream of the Gal4-binding upstream activation sequence (UAS). Thus, the expression of the effector gene is under spatial and temporal control of a specific gene promoter. The split-Gal4 system uses an intersectional approach to refine Gal4 expression. The Gal4 activation domain (AD) and DNA-binding domain (DBD) are placed downstream of two different promoters. In cells that express both promoters, the AD and DBD combine to form a functional Gal4 protein, which then binds the UAS and drives transgene expression in a more spatially restricted manner. For example, in brain areas where AD (green region) and DBD (purple region) expression overlap (white neurons), the UAS is expressed.
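As a conceptual sketch of the intersectional logic just described, the following toy Python model treats each driver as the set of cells in which its promoter is active; the promoter names and cell IDs are invented for illustration, since real drivers are defined by enhancer/promoter fragments rather than lists.

```python
# Toy model of Gal4/UAS and split-Gal4 logic using cell-ID sets.
promoter_cells = {
    "promoter_A": {1, 2, 3, 4, 5},  # cells where promoter A is active
    "promoter_B": {4, 5, 6, 7},     # cells where promoter B is active
}

def gal4_uas(driver):
    """Classic Gal4/UAS: the effector is expressed wherever the driver is active."""
    return promoter_cells[driver]

def split_gal4(ad_driver, dbd_driver):
    """Split-Gal4: AD and DBD reconstitute functional Gal4 only in cells where
    BOTH promoters are active, so expression is the set intersection."""
    return promoter_cells[ad_driver] & promoter_cells[dbd_driver]

print("Gal4/UAS (promoter_A):", sorted(gal4_uas("promoter_A")))                      # [1, 2, 3, 4, 5]
print("split-Gal4 (A-AD x B-DBD):", sorted(split_gal4("promoter_A", "promoter_B")))  # [4, 5]
```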
Drosophila Alcohol Assays Establish Flies as an Effective AUD Model System
Since addiction is a complex combination of various behaviors, researchers generally break down AUDs into discrete aspects of addiction represented by specific behavioral responses (i.e., endophenotypes), such as naïve ethanol (EtOH) sensitivity, functional tolerance (brain-mediated decreases in response resulting from repeated exposures), or alcohol consumption. Many of these distinct behaviors can act as metrics to indicate human propensity for developing AUDs. Specifically, AUD risk is augmented in individuals exhibiting reduced alcohol sensitivity, greater tolerance, increased consumption, greater stress, and greater EtOH dependence [132][133][134][135][136]. Drosophila are useful for uncovering the genetic underpinnings of these endophenotypes because many of these important response metrics can be modeled and reproducibly quantitated in fly behavioral assays. In fact, the validity of this model system has been established in parallel with development of various innovative assays that permit research into Drosophila EtOH responses and addiction. Partly due to the substantial similarities between human and fly AUD phenotypes (i.e., strong face validity), it is now widely accepted that flies are a powerful model for alcohol abuse [99,101,111,[137][138][139]. Like humans, flies become hyperactive and disinhibited upon exposure to low doses of EtOH, uncoordinated at moderate doses, and sedated at high doses [114,[140][141][142]. The original test to quantify EtOH sedation was the fly "inebriometer" [143] (Figure 2). More recent assays, such as the "Booze-o-mat," determine flies' naïve sensitivity to alcohol's effects by providing measurements of hyperactivity, postural control, sedation, and time to recovery after EtOH cessation [111]. These tests also show that flies develop rapid tolerance (i.e., they require longer to sedate upon second exposure after all EtOH from the initial exposure has been completely metabolized) [46,144]. Rapid tolerance forms in as little as two hours and can persist for 24 h [46] or for weeks, depending on methodology and genotype [145]. Importantly, fly tolerance studies consistently find that EtOH absorption and metabolism do not change between first and subsequent alcohol exposures, nor between treatment groups with differing sensitivity or tolerance [46,69,144]. Thus, observed differences in sedation upon repeat exposure result from functional tolerance (mediated by the nervous system), not metabolic tolerance (mediated by altered activity of enzymes that metabolize EtOH). Given that alcohol addiction in humans largely depends on the development of functional tolerance, this fact again demonstrates the usefulness of adult flies to study AUDs.
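To make concrete how such sedation time courses are commonly reduced to a single tolerance number, here is a minimal sketch that interpolates the time at which half the flies are sedated (ST50) for each exposure and reports rapid tolerance as the percent increase; the time courses are invented for illustration.

```python
import numpy as np

def st50(time_min, frac_sedated):
    """Time at which 50% of flies are sedated, by linear interpolation.
    Assumes frac_sedated increases monotonically through 0.5."""
    return float(np.interp(0.5, frac_sedated, time_min))

# Invented fraction-sedated time courses for a first EtOH exposure and a
# second exposure given after the first dose has been fully metabolized.
t = np.array([0, 5, 10, 15, 20, 25, 30], dtype=float)
first = np.array([0.00, 0.05, 0.30, 0.60, 0.85, 0.95, 1.00])
second = np.array([0.00, 0.02, 0.10, 0.35, 0.55, 0.80, 0.95])

st50_1, st50_2 = st50(t, first), st50(t, second)
tolerance_pct = 100 * (st50_2 - st50_1) / st50_1
print(f"ST50 first = {st50_1:.1f} min, second = {st50_2:.1f} min; "
      f"rapid tolerance = {tolerance_pct:.0f}%")
```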
Figure 2. Assays used to test alcohol-related behaviors in Drosophila. The inebriometer measures sensitivity as a function of loss of postural control by determining the amount of time required for EtOH-exposed flies to "elute" out of a column with interspaced oblique baffles. The "Booze-o-mat" assay employs video tracking of fly postural control and/or movement during vaporized EtOH exposure to determine flies' naïve alcohol sensitivity. Consumption assays such as the capillary feeder (CAFÉ) and the fluorometric reading assay of preference primed by ethanol (FRAPPÉ) determine flies' preference for EtOH-containing food compared to control solutions. Different consumption assays permit different temporal resolution.
Flies also develop symptoms of alcohol dependence and withdrawal. For instance, similar to humans, larvae experience neuronal hyperexcitability resulting from EtOH withdrawal, a finding that holds true for adult flies [70,71,146]. Further, fly larvae exhibit decreased learning ability during withdrawal compared to unexposed and re-inebriated flies, indicating cognitive dependence [147]. Last, various preference assays have been utilized to discover important similarities between fly and human EtOH preference responses that help to establish face validity of this model organism. Kaun and colleagues found robust preference learning by employing an odor-pairing Y-maze, demonstrating that, as in humans, alcohol acts as a behavioral reinforcer in flies, similar to analogous findings in rodents using conditioned place preference tests [148].
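Two-choice preference data of this kind are typically summarized as a preference index on a -1 to +1 scale; below is a minimal sketch with invented CAFÉ-style consumption volumes.

```python
def preference_index(etoh_consumed, control_consumed):
    """Two-choice preference index: +1 = only EtOH food consumed,
    -1 = only control food consumed, 0 = indifference."""
    total = etoh_consumed + control_consumed
    if total == 0:
        raise ValueError("no consumption recorded")
    return (etoh_consumed - control_consumed) / total

# Invented volumes (uL per fly per day) before and after prior EtOH
# experience; the numbers are illustrative only.
naive_pi = preference_index(etoh_consumed=0.9, control_consumed=1.1)
experienced_pi = preference_index(etoh_consumed=1.6, control_consumed=0.8)
print(f"naive PI = {naive_pi:+.2f}, experienced PI = {experienced_pi:+.2f}")
```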
Consumption studies using assays such as the capillary feeder (CAFÉ), the fluorometric reading assay of preference primed by ethanol (FRAPPÉ), and the proboscis extension reflex (PER) reveal that, like humans, naïve flies are initially indifferent toward or avoidant of alcohol, depending on exposure method and parameters [149,150] (Figure 2). However, after prior alcohol experience, they develop EtOH preference, which increases over time and rebounds strongly after a period of abstinence [149], reminiscent of increasing intake and relapse in human AUD patients. In conjunction with aversive stimuli such as quinine (bitter taste) or electrical shock, these assays also show that flies will overcome negative stimuli in order to self-administer [148,149]. It should be noted that consumption tests often involve prior starvation, which may cause confounding effects via activation of stress pathways and genetic networks unrelated to alcohol responses [151]. Nonetheless, these and similar assays are frequently used to effectively quantify alcohol preference in flies. Overall, these various translatable assays enable robust and rapid functional testing of genes nominally implicated in AUDs.
From Mammalian Gene Discovery to Fly Functional Testing
Human and rodent studies have successfully utilized various approaches to identify many genes associated with AUDs. These approaches are discussed below, including GWAS, post-mortem transcriptomics, QTL analyses, and investigation of genes with known physiological connections. Functional testing to verify the roles of these genes is important. Therapeutic targeting of suspected AUD genes is more likely to be safe and effective if their mechanistic underpinnings are well understood. Some genes consistently associated with alcoholism in humans have clear mechanisms, such as the EtOH metabolism gene aldehyde dehydrogenase 2 (ALDH2). ALDH enzymes break down acetaldehyde, a molecule that causes nausea, facial flushing, and tachycardia. People with less efficient ALDH2 alleles experience more severe reactions, are more likely to be deterred by this aversive reaction, and are thus less likely to develop AUDs [152,153]. In contrast, some genes such as AUTS2 (discussed in detail below) are frequently implicated in human studies, yet have poorly understood function and no established physiological links to AUD phenotypes [38][39][40]. Thus, functional and mechanistic studies in model organisms are crucial. Building on human gene discovery, Drosophila can be used to reveal the roles of functional protein states such as expression levels, localization, post-translational modifications, binding to other proteins or nucleic acids, etc. Given the wealth of available tools and assays, Drosophila represent an efficient and effective way to test roles and mechanisms of potential genes contributing to AUD risk, formation, and maintenance. The candidate genes that fuel such functional studies in flies arise from several different approaches. Below, we discuss each approach, including advantages, limitations, and examples of studies that have applied it to uncover candidate genes that were later successfully validated in flies.
Human Genome-Wide Association Studies (GWAS)
GWAS have revealed a substantial number of candidate AUD genes. This approach finds associations between inherent DNA sequence polymorphisms (or sets of polymorphisms) and AUD phenotypes measured by alcohol consumption, dependence, maximum drinks over a given time span, etc.
Currently, most individuals at high risk for AUD discover their heightened genetic risk factors only after they develop a problem, if ever. In contrast, combined with the ever-increasing ease of full-genome sequencing, genetic players found using GWA analyses can potentially reveal high inherited susceptibility for AUD before the disease happens. Given that genetic propensity for AUD is extremely heterogeneous [19], treatment considerations for extant pathologies may also be guided in the future by understanding individuals' particular genetic backgrounds. Many important discoveries have been made using the GWAS approach. One gene implicated in multiple GWAS is AUTS2, a nuclear protein that interacts with polycomb repressor complexes, which play a role in gene regulation via chromatin remodeling [154]. Schumann et al. identified this locus via GWA meta-analyses using alcohol consumption as the dependent variable [40]. They then found increased AUTS2 expression in the human prefrontal cortex from carriers of a minor AUTS2 allele, as well as altered expression between various high alcohol-preferring and low-alcohol-preferring mouse lines. The human importance of AUTS2 is further supported by another GWAS of alcohol consumption, a GWA meta-analysis of the maximum number of drinks consumed in 24 h, and a biased haplotype analysis [38,39,155]. Lastly, Schumann et al. showed that reduced expression of the fly ortholog tay reduces EtOH sensitivity. tay negatively regulates the epidermal growth factor receptor (EGFR) pathway [156]. EGFR signaling plays diverse roles in flies [157], especially during development, is responsive to promising, FDA-approved drugs in humans, and is frequently implicated in fly alcohol behavior [51,80,82,[158][159][160]. For instance, EGFR suppresses EtOH-induced locomotion [159]. Thus, tay and AUTS2 may affect alcohol behaviors through this pathway. As another example of genes elucidated using GWAS, Schmitt and colleagues performed a meta-analysis of GWAS data on the endophenotype known as SRE (Self-Rating of the Effects of alcohol), yielding 37 hits, including the transcription factor MEF2B [75]. Follow-up validation of Drosophila orthologs revealed that loss-of-function mutations of the Mef2 transcription factor decrease EtOH sedation sensitivity but not rapid tolerance. Another group found that Mef2 reduction in neurons, or more specifically in mushroom body α/β neurons, reduces tolerance, corroborating the importance of this gene in alcohol responses [78]. The dissimilar fly tolerance results between these two groups may be an example of global gene manipulations causing different effects than manipulations limited to particular neuronal populations. A recent CGAS by Muench et al., and a human GWAS meta-analysis by Evangelou et al., further corroborate the role of Mef2 by implicating the human ortholog MEF2C [76,77]. Though the exact mechanisms of action remain unclear, mammalian Mef2A and Mef2D regulate dendrite differentiation and synapse number [161,162]. Further, signaling pathways affected by EtOH control Mef2 expression and activity, as do pathways linked to neural activity (e.g., intracellular calcium) [161,163,164]. Indeed, Sivachenko and colleagues showed a role for Mef2 in fly neuronal plasticity, including temporal cycling of neuronal cytoskeleton structure, suggesting intriguing connections to adaptive neuronal processes involved in addiction [165]. 
Supporting these hypotheses, other work shows that Mef2 suppresses cocaine-induced increases in dendritic spine density [166]. Finally, Adkins et al. found that the RYR3 gene, encoding a ryanodine receptor regulating intracellular calcium levels, has a "suggestive association" with human alcohol dependence [102]. This finding was not significant in replication; however, loss of the fly ortholog RYR notably reduced rapid tolerance to EtOH-induced sedation, highlighting the importance of functional validation of findings that may appear inconsistent in human studies due to limited sample sizes and low statistical power. A brief discussion of caveats to GWAS studies is warranted. For instance, given that increasing evidence supports a role of epigenetics in mediating EtOH responses, it is important to note that there are potential disconnects between the genomic sequences studied in GWAS and the true transcriptional states that contribute to EtOH phenotypes and AUD susceptibility. Additionally, genes implicated in GWAS may not in and of themselves produce acute EtOH responses or the adaptations that lead to addiction. Candidate genes could be upstream regulators of the actual effector genes, including regulators involved in distinct but relevant processes such as executive function, motivation, and decision-making. Historically, GWAS can also suffer from selection bias, environmental confounds, poor reproducibility, and weak statistical power, largely due to vast heterogeneity between subjects and studies. Although some GWAS studies yield very few or no genetic variants that reach genome-wide significance, these shortcomings are increasingly ameliorated by increased sample sizes, pooled meta-analyses, and improved "post-GWAS" methods [24,25]. For instance, Evangelou et al. performed a meta-analysis of GWAS data of alcohol consumption from almost 500,000 people, which had enough power to identify 46 putative AUD genes [76]. Indeed, with this success comes the problem of functionally validating so many potential hits, given that they found no overrepresentation of cohesive gene families, pathways, or ontologies. This challenge becomes manageable by turning to high-throughput models like Drosophila for gene validation. Similarly, for gene discovery, many concerns of human GWAS are diminished in fly GWAS and other fly studies, which are amenable to higher sample number and greater statistical power (discussed below).
Transcriptomics on Post-Mortem Human Tissue
One approach to connect gene expression to psychiatric disease is the application of transcriptomics to brain tissue from deceased AUD patients versus healthy controls [10,12,20,23]. These approaches uncover associations between the severity of AUD phenotypes and global or region-specific gene expression. Moreover, they often use network analyses to distinguish genes that are expressly altered by AUDs from genes that may be dysregulated merely as part of co-regulated EtOH-responsive networks. Transcriptome profiling is generally accomplished with microarrays or RNA-sequencing (RNA-seq). Whole-genome profiling using these methods demonstrates the effects of chronic alcohol use on gene expression in various brain regions known to play a role in AUDs, including the prefrontal cortex, nucleus accumbens, and hippocampus [12,20,[167][168][169].
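As a schematic of the statistics behind such profiling (not a reproduction of any cited study's pipeline), the sketch below runs a per-gene two-sample test with Benjamini-Hochberg false-discovery-rate control on fully synthetic case/control expression data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Fully synthetic expression matrix: 1000 genes x (8 cases, 8 controls),
# with a true shift planted in the first 50 genes.
n_genes, n_per_group = 1000, 8
cases = rng.normal(0.0, 1.0, (n_genes, n_per_group))
controls = rng.normal(0.0, 1.0, (n_genes, n_per_group))
cases[:50] += 2.0  # simulated differential expression

t_stat, p = stats.ttest_ind(cases, controls, axis=1)

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg adjusted p-values (q-values)."""
    n = len(pvals)
    order = np.argsort(pvals)
    scaled = pvals[order] * n / np.arange(1, n + 1)
    scaled = np.minimum.accumulate(scaled[::-1])[::-1]  # enforce monotonicity
    q = np.empty(n)
    q[order] = np.clip(scaled, 0.0, 1.0)
    return q

q = benjamini_hochberg(p)
print(f"{int((q < 0.05).sum())} genes called differentially expressed at FDR 5%")
```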
Building upon existing human transcriptome data from microarrays [41], one group performed a gene set overlap analysis of these data, transcriptomics on mice exposed to EtOH, and transcriptomics comparing isogenic mice bred to be alcohol-preferring or non-preferring [42]. Using this combined approach, they identified the most highly ranked hit, a chloride intracellular channel known as Clic4, as a potential AUD gene. Subsequent validation in flies (mutants), C. elegans (mutants), and mice (virally mediated overexpression) showed significantly altered alcohol sensitivity. EtOH sensitivity in loss-of-function Drosophila mutants was consistently decreased across studies, despite dissimilar assay methods [42,43], and also in flies with neuron-specific RNAi knockdown of Clic4 [43]. One weakness of transcriptomic approaches is that they establish only correlational links between genetic state and disease phenotypes. Additional weaknesses include the challenges of RNA degradation, heterogeneity between individuals, environmental confounds, and highly dynamic transcriptional adaptation in response to unpredictable stimuli. Hence, highly controlled functional testing in flies is all the more critical. As a notable alternative to typical transcriptomics, two groups performed ChIP-seq on post-mortem samples to show that, like gene expression, histone methylation is altered in the brains of alcoholics [10,169]. These studies supported later fly research that revealed a role of various histone demethylases in alcohol responses [121]. Given that covalent epigenetic markers are more stable than mRNA, there is great potential for epigenome studies in this context, though these approaches are still in their infancy [10]. Nonetheless, these omics methods provide in-depth genetic profiles separable by brain region and remain powerful tools to directly study AUDs in humans.
Rodent GWAS, QTL Analyses, and Transcriptomics
As an alternative to the human approaches already discussed, important gene discovery can also be accomplished with rodent models. These approaches may include similar post-mortem transcriptomics and GWAS-style analyses, with the additional possibility of performing QTL analysis on rodent lines with purposefully limited genetic variation [170]. QTL studies identify genomic regions whose genetic variation or expression correlate with quantification of phenotypes of interest. Investigation of rodent gene expression profiles after EtOH exposure can also yield useful information, similar to flies (see below). Methods to study AUD genetics in rodents have been reviewed extensively and will not be discussed in depth here [171,172]. However, one noteworthy example is a study by Mulligan et al. This group demonstrated the effectiveness of meta-analysis combining rodent genetics (using congenic strains) and transcriptomics (using microarray after alcohol exposure) [173]. In their results, they highlighted EGFR signaling and cytoskeleton regulation as some of the most overrepresented pathways differentially expressed between mouse strains exhibiting differential alcohol consumption, converging with the aforementioned AUTS2/EGFR studies and with findings implicating cytoskeleton dynamics using forward genetics in flies (see below).
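Gene set overlap analyses of this kind usually ask whether two hit lists share more genes than expected by chance, commonly via a hypergeometric tail probability. The sketch below implements that test from first principles; the list sizes and overlap are invented for illustration.

```python
from math import comb

def overlap_pvalue(overlap, size_a, size_b, universe):
    """P(overlap >= k) when two gene lists of sizes size_a and size_b are
    drawn independently from `universe` genes (hypergeometric tail)."""
    return sum(
        comb(size_a, k) * comb(universe - size_a, size_b - k)
        for k in range(overlap, min(size_a, size_b) + 1)
    ) / comb(universe, size_b)

# Illustrative numbers only: two studies each report 400 EtOH-responsive
# genes out of ~14,000 fly genes, sharing 60 genes.
p = overlap_pvalue(overlap=60, size_a=400, size_b=400, universe=14000)
print(f"P(overlap >= 60 by chance) = {p:.2e}")
```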
Additionally, the neuropeptide NPY (fly ortholog: NPF) and its receptors are notable examples of many effective rodent methods, which were subsequently applied to yield corresponding fly and human discoveries that revealed the mechanistic conservation of this gene in AUDs (for review, see Reference [174]). NPY/NPF controls both hunger and stress levels. This gene was initially implicated by a QTL analysis and comparisons of expression levels between alcohol-preferring versus non-preferring rats, and by measuring NPY transcript levels in wild-type rats with or without EtOH exposure [170,175]. Thiele and colleagues also found that NPY deficiency in mice increases EtOH consumption and resistance, while overexpression reduces these phenotypes [176]. In flies, NPF modulates reward states [95], confirmed recently in a study that used optogenetics to allow flies to self-administer by moving to the appropriate area, then tested the flies' conditioned place preference [177]. Similar to rodents, reduction of NPF (or its receptor, NPFR) increases EtOH resistance, while overexpression has the opposite effect [93]. Sekhon et al. also tested inbred fly lines to associate NPF and NPFR with altered EtOH preference [52]. Completing the picture, NPY and NPY receptors have been implicated in numerous human studies [36,[88][89][90][91][92]94]. Work on NPF/NPY exemplifies a primary strength of rodent gene discovery: greater cross-species validation strengthens confidence that the gene is causally involved in AUDs. If a gene discovered in rodents can be successfully validated and mechanistically explored in flies and demonstrated to associate with AUD in humans, such a conserved role despite vast evolutionary distances strongly suggests a role for the gene in AUDs and great promise as a potential therapeutic target.
Targeting Genes with Established Physiological Relevance
In addition to the approaches discussed so far, researchers also perform functional testing of genes in flies in response to prior association with relevant gene networks or physiological processes known to play a role in rodent or human AUDs, independent of any large-scale omics or GWAS studies. Genes investigated for this reason include various synthesis enzymes, transporters, receptors, and degradation enzymes for neurotransmitters such as dopamine, serotonin, GABA, glutamate, and octopamine [6,111,138,148,178]. Octopamine is the functional equivalent of norepinephrine in Drosophila [179]. Further examples include CREB, CREB binding protein (CBP), and the BK-type Ca2+-activated K+ channel, slo [6,111,138,178,180]. Given the vast collection of literature supporting roles for these genes in AUD, only one will be discussed here. slo was first investigated in the context of alcoholism because it undergoes homeostatic regulation after sedation by organic solvents and plays a role in tolerance to benzyl alcohol [138]. Loss of slo globally or in neurons eliminates EtOH tolerance [69], while slo induction is sufficient to produce a tolerance-like phenotype [70]. Further, EtOH sedation increases slo expression in neurons but not in non-neuronal tissue, which is concomitant with tolerance formation [70]. In flies, neuronal hyperexcitability resulting from EtOH withdrawal is at least partially dependent on persistent slo upregulation [71,72]. These types of ion channels may play a role in maladaptive brain plasticity leading to AUDs in humans, supporting the mechanistic validity of fly models [181].
Finally, two separate GWAS studies associated the human ortholog KCNMA1 (potassium calcium-activated channel subfamily M alpha 1) with alcohol dependence [34,68]. Thus, established physiological relevance laid the foundation for mechanistic AUD hypotheses and important discoveries of the role of slo in flies and humans.
Summary of Human-to-Fly Approaches
Various approaches permit effective gene discovery in mammalian systems. Though the resulting candidates are easily translatable, it is often difficult to assess their causative role in observed AUD phenotypes. The examples cited above demonstrate the usefulness of Drosophila to accomplish this purpose. Notably, some genes remain implicated in multiple human studies that, to our knowledge, have not yet been examined in Drosophila. For instance, β-Klotho (gene name: KLB), a transmembrane protein that complexes with fibroblast growth factor receptors (FGFR), was implicated in a human GWAS and a separate GWAS meta-analysis investigating alcohol consumption [38,182]. The latter study by Schumann et al. also found that KLB knockout mice have increased alcohol preference. Although King et al. showed that mutations in the fly FGFR gene htl reduce EtOH-induced locomotion [159], further validation of the role of KLB in AUD phenotypes is still needed, as is greater mechanistic understanding. Investigation of the mostly uncharacterized fly ortholog, CG9701, has the potential to address these important gaps. Other interesting examples are various genes involved in serotonergic neurotransmission, which have been implicated in both biased and unbiased human genetic studies but have not yet been directly tested in flies [183][184][185]. Serotonin signaling clearly plays a role in alcohol responses, but much mechanistic insight could be gained by using flies to determine the effects of manipulating these various genes in specific neural populations and/or at specific developmental timepoints.
From Fly Gene Discovery to Human Association
Complementing the approaches already discussed, research can proceed in the opposite direction, wherein AUD gene discovery begins in Drosophila and moves to human validation. This overall approach is advantageous because flies are a more efficient and genetically tractable animal model, so gene discovery occurs faster in flies than in mammals. Human validation takes the form of candidate gene association studies (CGAS), which use reverse genetics to test associations between phenotypes of interest and small numbers of genes hypothesized to be important. Compared to GWAS, CGAS represent a more effective method of investigating specific disease questions. Critically, limiting the pool of candidate genes also limits the problem of multiple comparisons, creating more power for discovery of relevant polymorphisms despite low frequencies, subtle effects, or smaller sample sizes. Overall, since gene discovery in flies is generally accompanied by mechanistic and functional tests, approaching questions in this way combines the Drosophila strengths of breadth and depth with the mammalian strength of high translational value.
Behavioral Screens in Drosophila
Gene discovery in flies often begins with large-scale forward screens, which remain practical due to the ease of generating random or deliberate mutations and the ease of quickly generating and testing thousands of flies in high-throughput assays.
Unbiased forward screens begin with genetic mutagenesis accomplished with chemical agents, radiation, CRISPR/Cas9 [186,187], or transposable elements to establish hundreds of different fly strains. These strains are each scored for a given behavioral readout to detect aberrant phenotypes. Subsequent genetic mapping, DNA-sequencing, and rescue experiments then confirm the identities and causative roles of disrupted genes so that researchers can draw conclusions about their involvement in the phenotypes of interest. Single gene discoveries made in flies using one method easily expand into elucidation of entire pathways found gene-by-gene using a variety of complementary approaches. Such was the case after Rothenfluh and colleagues performed a transposable P-element screen of ~1200 fly strains, examining EtOH-induced phenotypes [188]. They identified mutations in RhoGAP18B, a GTPase-activating protein (GAP). RhoGAP18B binds and inactivates actin-regulating Rho-family GTPases such as Rac1 and Rho1. Accordingly, loss-of-function mutations of RhoGAP18B and hyperactive Rac1 or Rho1 cause resistance to EtOH sedation [188,189]. Independently, loss-of-function mutations in Rsu1, another Rac1 inhibitor, were also found to cause resistance to alcohol sedation [101]. Hypothesis-driven CGAS in the same study found associations between human RSU1 polymorphisms and alcohol consumption in two independent cohorts. These initial human findings suggest that this pathway plays a conserved role in alcohol responses and demonstrate the utility of fly gene discovery followed by human hypothesis testing. Reverse genetics testing of related genes has further expanded the pathway to include upstream and downstream players such as the integrin cell-adhesion molecule and cofilin, an actin-severing protein, respectively [101,189]. Cofilin modulates actin cytoskeleton dynamics, suggesting a mechanism through which these genes could affect neuroplasticity and alcoholism [115]. To identify additional participants in the pathway, a subset of 300 randomly selected mutants was screened for effects on semi-lethality, a distinct pleiotropic phenotype of the strongest RhoGAP18B allele [100]. EtOH responses were tested in mutant lines implicated by the first screen. This iterative method identified Efa6, a guanine exchange factor (GEF) and activator for the small GTPase Arf6 [99]. Further hypothesis testing of Efa6 by Gonzalez et al. and Peru et al. found that Arf6 and Efa6 mutant flies exhibit increased sedation sensitivity and decreased tolerance [99,100]. Gonzalez et al. further showed that a SNP in one of four human Efa6 orthologs, PSD3, and a haplotype containing this SNP were associated with adolescent binge drinking and frequency of consumption. Moreover, the haplotype was linked to increased dependence in an independent sample. These human studies revealed that PSD3 expression is mostly restricted to the brain and is especially high in the prefrontal cortex. Of the four human orthologs, PSD3 exhibits the most limited expression patterns, suggesting less pleiotropy and higher potential for drug targeting. Finally, reverse genetics hypothesis testing elucidated the identity and relative order of various genes connected to Arf6 that form a pathway parallel to that of RhoGAP18B, including insulin receptor (InR) upstream and mTor and S6 kinase (S6K) downstream [66]. Inhibition of the mammalian ortholog mTORC1 with the FDA-approved drug rapamycin reduces alcohol seeking and drinking in mice [190][191][192].
Overall, this process of gene detection and testing demonstrates how screens and hypothesis-driven testing in flies and humans can work together to discover novel pathways with high potential for targeted drug therapy. Forward screens were also used by Scholz et al. to find decreased tolerance in hangover (hang) mutant flies, later confirmed in another study examining tolerance to EtOH-induced reduction of negative geotaxis [67,110]. hang encodes a nuclear zinc-finger protein that plays a role in cellular stress pathways, supporting the hypothesis that stress contributes to addiction phenotypes. Indeed, flies exposed to heat shock prior to naïve EtOH exposure display resistance to alcohol's effects, indicating heat/EtOH cross-tolerance. In hang mutants, however, this cross-tolerance is largely abolished, suggesting that tolerance is mediated in part by hang-dependent cell stress pathways. Furthermore, mutation of either hang or dunce (dnc), a cAMP-degrading phosphodiesterase, produces similar tolerance deficits and reduced cellular stress responses [193]. The same group found that hang binds dnc mRNA, while dnc regulates hang function during tolerance formation. Thus, the effects of hang on EtOH tolerance may occur through cAMP-signaling-dependent stress response pathways. Based on initial findings with hang, Riley and colleagues performed a CGAS that revealed a significant association of the human ortholog ZNF699 and alcohol dependence [109]. Human relevance was further shown by the finding of decreased ZNF699 mRNA expression in the dorsolateral prefrontal cortex of postmortem tissue from individuals with an associated risk haplotype. Related to these pathways, Li et al. first investigated jwa (also known as addicsin; ARL6IP5 in mammals) because of a similar association with stress responses [35]. Indeed, RNAi-mediated knockdown and overexpression in flies decreased and increased rapid EtOH tolerance, respectively. This gene exemplifies how, in contrast to unbiased screens, suspected AUD genes are often selected for further investigation because of known connections with previously implicated pathways or physiological processes in a one-gene-at-a-time approach. These higher-powered experiments increase the chances of finding moderate and small effect sizes, and their appeal as investigative or therapeutic targets is often bolstered by preexisting mechanistic hypotheses. Human studies then confirm translatability. In this case, Edenberg and colleagues independently performed human GWAS that supported an association between ARL6IP5 and alcohol dependence, though no SNP reached genome-wide significance [34]. Forward screens have also been utilized to demonstrate that genes affecting responses to one drug of abuse are likely to affect other drug responses. Tsai et al. performed an unbiased screen for mutations affecting Drosophila cocaine sensitivity, which implicated the transcriptional repressor dLmo (Bx) [194]. Subsequent functional testing showed that dLmo loss increased EtOH sedation sensitivity, while overexpression decreased it [74]. Corroborating results from Sekhon and colleagues using the Drosophila Genetic Reference Panel (DGRP) found an association between dLmo and EtOH preference [52], and Kapoor et al. implicated the human ortholog LMO1 in a GWAS looking at maximum drinks ever consumed within 24 h [39]. In mice, loss of orthologs Lmo3 or Lmo4 alters behavioral responses to cocaine, yet only Lmo3 affects alcohol responses [74,195]. 
dLmo plays a role in both drug responses in flies, suggesting that evolutionary divergence has resulted in different mammalian homologs functioning in different pathways that are still integrated in flies (see also Reference [99]). Thus, translation of fly genetic discoveries into mammalian systems could benefit from accounting for this possibility by examining all mammalian orthologs of implicated fly genes. As another example of AUD gene discovery through testing of genes connected in pathways, Lasek and colleagues investigated anaplastic lymphoma kinase (dAlk) after microarray expression analyses revealed it to be negatively regulated by dLmo in flies [32]. ALK is involved in Erk signaling and other pathways [196]. Lasek et al. also found that dAlk fly mutants show increased resistance to EtOH sedation. A follow-up CGAS in the same study identified four human ALK polymorphisms linked to reduced EtOH responses. This gene was further validated in humans by a GWAS meta-analysis [33]. Overall, the initial screen of cocaine sensitivity by Tsai et al. facilitated discovery of various important AUD genes and biological principles, showing the promising potential of investigations into genes implicated in other substance use disorders. Unbiased screens can become labor-intensive, so an alternative approach is to reduce screens to particular sets of candidate genes whose network or molecular roles have been previously implicated. Pinzon et al. used this approach to test effects of global histone demethylase (HDM) knockout on fly EtOH sedation sensitivity and tolerance [121]. Increasing evidence supports a role in AUDs of enzymes that modulate histone methylation and chromatin remodeling [180]. Since six out of seven phylogenetic families of human Jumonji C (JmjC) domain containing HDMs are represented by fly orthologs, each of the 13 known fly HDMs was knocked out and systematically tested for alcohol phenotypes. This study revealed effects of KDM3, lid, NO66, and HSPBAP1, the first three of which have orthologs that are upregulated in whole brains from alcohol-preferring mice [173]. Direct human evidence is lacking thus far, though the human ortholog of NO66, RIOX1, is downregulated in the amygdala of alcoholics [12]. The HDM study is exemplary for its success at performing a systematic screen of all genes within a family, which would be difficult to perform in higher model organisms. Nonetheless, an even more saturated screen of genes within the same pathways would be helpful for greater understanding of epistatic interactions [138]. In contrast to the structured gene discovery processes discussed thus far, AUD gene discovery and testing can also occur after independent convergence of results from multiple model systems. For instance, forward genetic transposon screens were the first to suggest a role of cAMP signaling in EtOH responses: Moore et al. found a sensitive mutant called cheapdate that was in fact an allele of amnesiac (amn), a known learning and memory gene thought to modulate adenylate cyclase [197]. Years later, Sekhon et al. independently implicated amn [52]. Tests of similar learning and memory genes revealed additional notable alcohol phenotypes caused by manipulations of rutabaga (rut) [11,142,197,198], encoding fly adenylyl cyclase, and dnc [193,199,200]. 
Separate from these pathways, other studies have suggested alcohol-related roles for additional genes in the network, including the cAMP-dependent protein kinase A (PKA) [114,197,201], protein kinase C (PKC) [79,[202][203][204], CREB [205][206][207], and CREB binding protein (CBP) [117,208], consistently suggesting a causal role of cAMP signaling pathways in alcohol abuse. The K+ channel KCNQ is another example of the one-gene-at-a-time approach and the phenomenon of fly and human studies autonomously arriving at corroborating conclusions. KCNQ was examined because EtOH inhibits the non-inactivating K+ M-current mediated by the channel, which normally reduces neural excitability [73]. KCNQ loss in flies augments sensitivity and tolerance to the sedating effects of ethanol [73]. This gene was again implicated in flies by GWAS and extreme QTL analyses using the DGRP resource and by RNAi knockdown [51]. Kendler et al. completed the picture by implicating human KCNQ5 in a GWAS examining alcohol dependence, though again, no SNP achieved genome-wide significance [68]. Overall, whether as motivation or corroborating evidence for human investigations, biased and unbiased forward screens in Drosophila have uncovered and will continue to uncover many important genetic contributors to AUD.
Fly GWAS and QTL Analyses
With almost five million known SNPs in the fly genome [9], sufficient genetic variation exists within Drosophila to allow effective gene discovery through GWAS and QTL studies. These relatively rare fly studies are valuable for their atypical yet comprehensive forward genetic approach. Although fly GWAS retain most of the advantages and disadvantages already discussed for human and rodent GWAS, they alleviate some problems by reducing environmental confounds and permitting quantification of phenotypic variability between individuals. Vast numbers of isogenic flies allow effective mapping of this variability to the genome, unlike in humans, where isogenic sample size is limited to sets of twins [209,210]. Further, linkage disequilibrium diminishes rapidly in flies compared to mammals, increasing the chances that SNPs associated with AUDs represent causal, rather than merely linked, variants [129,211]. Additionally, genetic tools available in Drosophila support GWAS and QTL success. The DGRP is a readily available stock collection comprising over 200 lines created by extensive inbreeding of wild-caught females [129]. Each line has a sequenced genome, and many include transcriptome data [129,130]. Studies employing the DGRP can enhance results by advanced intercross mating schemes meant to amplify power and reveal effects of lower frequency alleles, as done by Fochler and colleagues [82]. Similar techniques were also employed to create the Drosophila Synthetic Population Resource (DSPR), including over 1600 recombinant inbred lines useful for mapping causative genetic variation [131]. Sekhon et al. used the DGRP to identify 507 genes associated with EtOH preference and 384 genes associated with both food and EtOH consumption [52]. Several fascinating studies by Morozova et al. have employed the DGRP for GWAS and extreme QTL analyses to corroborate AUD roles of genes like Men (see below), dLmo, and rut (see above) [11,51,79]. Using these techniques and transcriptomic approaches (discussed below), these studies also implicate whole gene networks, including those involved in dopamine synthesis and cAMP signaling, again showing high translatability.
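Schematically, a DGRP-style association scan tests each variant against line-mean phenotypes and corrects for the number of tests; the sketch below uses entirely synthetic data (invented line counts, variant counts, and effect size) and a per-variant two-sample t-test, exploiting the fact that inbred lines are homozygous at most sites.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic DGRP-style data: 200 inbred lines, a line-mean EtOH phenotype,
# and a biallelic homozygous genotype (0 or 2) at each of 5000 variants.
n_lines, n_snps = 200, 5000
genotypes = rng.integers(0, 2, size=(n_lines, n_snps)) * 2
phenotype = rng.normal(0.0, 1.0, n_lines)
phenotype = phenotype + 0.8 * (genotypes[:, 0] / 2)  # one planted causal variant

pvals = np.empty(n_snps)
for j in range(n_snps):
    g = genotypes[:, j]
    # inbred lines yield two genotype classes -> two-sample t-test per variant
    pvals[j] = stats.ttest_ind(phenotype[g == 0], phenotype[g == 2]).pvalue

bonferroni = 0.05 / n_snps
hits = np.flatnonzero(pvals < bonferroni)
print(f"Bonferroni threshold = {bonferroni:.1e}; significant variants: {hits.tolist()}")
```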
Finally, inbred fly lines are also useful for drawing associations between EtOH phenotypes, genetic variants, and/or expression profiles. For instance, Morozova et al. used microarrays to study the transcriptomes (discussed below) of fly lines bred for 35 generations for resistance or sensitivity [80]. After functional validation, they found that mutations in 32 out of 37 candidate genes indeed altered EtOH sensitivity. This high confirmation rate suggests that this method is effective for discovery of important genes mediating alcohol addiction.
Drosophila Transcriptomics
As an effective means of uncovering genes directly linked to alcohol intake, transcriptomics can be performed on flies that have been exposed to alcohol once or multiple times versus those that have not. As with human transcriptomics, these assays center on microarray and RNA-seq analyses. These approaches assume that genes differentially expressed in response to EtOH may be the same genes that contribute to AUD propensity and formation. Partially circumventing this assumption, researchers can enhance analysis with transcriptional comparison of controls and mutants known to affect alcohol responses. Genes found to display genotype × exposure interactions are especially likely to be involved in aberrant mutant phenotypes and possibly in wild-type responses. Thus, this suite of methods allows investigation into potential genetic mechanisms of both mutant phenotypes and AUD responses. For instance, three independent studies performed similar microarray tests after EtOH exposure [63,81,160]. Synthesis of these results found that 14% of 1669 significantly dysregulated transcripts were identified in at least two of these studies, with 2% in all three [160]. These commonalities were relatively few in number, possibly due to different study designs or fly genetic backgrounds. However, their direction of change was remarkably consistent between studies, together suggesting highly robust gene associations that represent promising targets for future investigation. Indeed, many single genes were discovered and later functionally validated, and gene ontology analysis further revealed consistently altered genetic networks. These networks included many already implicated in mammals, such as those involved in metabolism, olfaction, epigenetics, and immunity. A notable gene identified in one of these microarray studies was homer, which is involved in post-synaptic regulation, especially of excitatory glutamatergic signaling [63]. homer transcripts decreased in response to EtOH exposure, and functional validation showed that homer is required for normal naïve EtOH sedation and tolerance. Although one CGAS found no association between human orthologs of homer and alcohol dependence [212], two unbiased studies suggested a role in human AUDs [60,61], and a large-scale GWAS implicated human HOMER2 in reward-related learning and memory [62]. Given the connection between homer and NMDA receptors [63], these findings support the larger hypothesis of glutamatergic signaling being important in alcohol addiction. As a further illustration of the effectiveness of transcriptomic approaches, Morozova et al. performed three unique studies testing fly transcriptomics in response to one or two EtOH exposures [11,80,81]. In an integrated approach, the 2011 study used unbiased screens to identify 139 unique mutations affecting EtOH sensitivity and tolerance [11].
Combining these hits with transcriptome data identified correlated transcriptional networks centered around nine genes whose mutation caused EtOH sensitivity and 12 whose mutation caused resistance. A separate study in 2009 by the same group measured similar outputs but investigated associations between naïve EtOH sedation and mRNA profiles prior to sedation [79]. Many implicated genes in these studies were functionally validated in flies. Remarkably, all four studies identified malic enzyme (Men) as an important player in EtOH responses. Malic enzyme links glycolysis, the TCA cycle, and fatty acid synthesis. Alcoholics exhibit alcohol-induced fatty acid synthesis [213]. Using a CGAS to circumvent the multiple-testing problem of GWAS, the 2009 study found a significant association between human malic enzyme (ME1) and cocktail drinking, confirming the translatability of their Drosophila findings and demonstrating the effectiveness of using fly gene discovery to inform hypothesis-driven human association studies. Since then, Sekhon et al. corroborated the role of Men in flies by finding an association with EtOH preference [52], while Fochler et al. supported these findings with extreme QTL mapping and functional validation measuring fly alcohol consumption [82]. This Men narrative illustrates how a variety of transcriptomic approaches can converge to elucidate the genetic underpinnings of AUDs. One additional example of genes implicated using fly transcriptomics is worth noting: Ghezzi and colleagues measured expression of microRNAs (miRNAs) after EtOH exposure due to prior clues from rodents and flies of miRNA relevance in AUD [84]. miRNAs act as gene expression regulators by targeting specific mRNAs for degradation. Within 30 min of exposure, 14 miRNAs had altered expression. Of these, two out of seven tested were functionally validated: miR-6 and miR-310. Many of the putative targets of these miRNAs are established alcohol-related genes [138]. Human miR-92 is the sequence-related homolog of fly miR-310 and was shown to be upregulated in the prefrontal cortex of human alcoholics [83]. Thus, the usefulness of transcriptomics extends beyond protein-coding mRNAs. Summary of Fly-to-Human Studies Overall, Drosophila represent an effective and efficient model system, not just for gene validation and mechanistic investigations, but also for initial gene discovery. Behavioral screens, GWAS and QTL analyses, transcriptomics, and single-gene approaches each contribute unique insights into how genes and gene networks react to EtOH or prime the organism for altered responses that are potentially deleterious and predictive of AUD formation. Future Directions Flies have proven indispensable to the discovery and/or validation of numerous AUD-related genes. However, a quick scan of the literature summarizing these advancements, such as Park et al. [138], reveals that most confirmed AUD genes have simply been shown to affect alcohol responses, and much work remains to be done to uncover their mechanistic foundations. To date, most implicated genes have been studied only on a global level or sometimes on a neuronal level. This fact becomes problematic given that the roles of genes in AUD likely vary by brain region, cell population, and neuronal circuit (e.g., References [114] and [65], discussed above). Indeed, Ojelade et al.
tested Rsu1, a known regulator of Rac1, and found that global Rsu1 loss leads to high naïve EtOH preference, while Rsu1 reduction in specific brain regions causes normal naïve preference but decreased learned preference [101]. Butts and colleagues similarly found an anatomy-specific role for Rac1 and cofilin [115]. In rodents, Rac1 in the dorsal versus ventral striatum plays opposite roles in cocaine-induced reward and spine maturation [214,215]. Furthermore, Scaplen et al. recently showed that population-level dopaminergic activation encodes alcohol rewards, whereas specific microcircuits encode cued activation of alcohol memories [216]. Ideally, mechanistic studies will employ fly genetic tools and increasing insight provided by the completed fly connectome [217] to parse out the specific cell populations or circuits in which genes play a given role. Moving forward, important sub-groups may include glial populations, which are generally under-researched. Additionally, further investigation is warranted into the temporal underpinnings of these genetic effects. Alcohol-related genes that cause developmental changes are useful for understanding AUD predisposition, but may not directly contribute to EtOH responses or AUD formation. Methods to manipulate genes only in adult Drosophila can rule out developmental influences, thereby more accurately modeling drinking problems and potential solutions for adult humans. Gene manipulation at different stages of alcohol exposure or stages of addiction is also warranted. Furthermore, most gene expression analyses only measure mRNA levels, which can show poor correlation with functional protein levels. Thus, proteomics should corroborate and supplement transcriptomic studies [218]. This approach has been utilized with flies and with human post-mortem tissue [219,220], but is generally not well explored. Moreover, there is great potential in using flies to screen candidate drugs to treat AUDs, which has proven effective in other contexts [221,222]. Finally, many genes and pathways have been implicated in flies that, to our knowledge, lack studies examining their human correlates, despite obvious homology, probable shared pathways, and evidence of links to AUDs in mammals. These include rut (discussed above) [11,142,197,198], the dopamine/ecdysteroid receptor gene DopEcR [223,224], the deacetylase gene Sirt1/Sir2 [78,81,160,225], various PKA genes [114,197,201], and happyhour, encoding a kinase that is a negative regulator of the (druggable) EGFR pathway mentioned above [158]. Further work is required to firmly establish the importance of some of these genes in Drosophila ethanol responses, whereas others are strongly implicated in flies but lack hypothesis-driven investigation in humans to advance the translational process. Conclusions Understanding the genetic bases of alcohol addiction is crucial for effective prevention and treatment. We have explained many effective methods to identify AUD-related genes using Drosophila as a starting or end point. For gene discovery, no single method far surpasses the others, so a variety of approaches should be used to maximize the chances of identifying critical genetic players and networks. Intersecting genes and pathways found using different methods and studies are strong candidates for further investigation and potential molecular targeting.
In contrast, for qualitative filtering of potential AUD genes and for mechanistic understanding, Drosophila is an unparalleled model, given flies' robust behavioral repertoire and convenient genetic toolkit. Indeed, conserved genetic targets that similarly influence alcohol responses despite the evolutionary distance separating these organisms are more likely to represent core elements of AUD propensity and development that have high therapeutic potential. Overall, Drosophila represents a powerful model to understand and mitigate human AUDs. Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
Comparison of Venous Thromboembolism Risks Between COVID-19 Pneumonia and Community-Acquired Pneumonia Patients Objective: The objectives were to investigate and compare the risks and incidences of venous thromboembolism (VTE) between the 2 groups of patients with coronavirus disease 2019 (COVID-19) pneumonia and community-acquired pneumonia (CAP). Approach and Results: Medical records of 616 pneumonia patients who were admitted to the Yichang Central People's Hospital in Hubei, China, from January 1 to March 23, 2020, were retrospectively reviewed. The patients with COVID-19 pneumonia were treated in the dedicated COVID-19 units, and the patients with CAP were admitted to the regular hospital campus. Risks of VTE were assessed using the Padua prediction score. All the patients received pharmaceutical or mechanical VTE prophylaxis. VTE was diagnosed using duplex ultrasound or computed tomography pulmonary angiogram. Differences between the COVID-19 and CAP groups were compared statistically. All statistical tests were 2-sided, and P<0.05 was considered statistically significant. All data management and analyses were performed with IBM SPSS, version 24, software (SPSS, Inc, Chicago, IL). Of the 616 patients, 256 had COVID-19 pneumonia and 360 patients had CAP. The overall rate of VTE was 2% in the COVID-19 pneumonia group and 3.6% in the CAP group (P=0.229). In these two groups, 15.6% of the COVID-19 pneumonia patients and 10% of the CAP patients were categorized as high risk for VTE (Padua score, >4), proportions that were significantly different (P=0.036). In those high-risk patients, the incidence of VTE was 12.5% in the COVID-19 pneumonia group and 16.7% in the CAP group (P=0.606). Subgroup analysis of the critically ill patients showed that the VTE rate was 6.7% in the COVID-19 group versus 13% in the CAP group (P=0.484). In-hospital mortality of COVID-19 and CAP was 6.3% and 3.9%, respectively (P=0.180). Conclusions: Our study suggested that COVID-19 pneumonia was associated with a hypercoagulable state. However, the rate of VTE in COVID-19 pneumonia patients was not significantly higher than that in CAP patients. Coronavirus disease 2019, caused by a novel severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2), has spread around the planet. The clinical presentations of COVID-19 vary from asymptomatic to severe acute respiratory syndrome. It has been suggested that coronavirus, like some other viruses, may also have significant impacts on the hematopoietic and hemostatic systems, resulting in thrombotic and bleeding complications. [1][2][3][4][5] Recent publications reported that SARS-CoV-2 infection might increase the risks of venous thromboembolism (VTE), especially in hospitalized patients with severe symptoms such as COVID-19 pneumonia. [5][6][7] However, the real incidence of VTE in COVID-19 inpatients has not been well documented, and the VTE risks in COVID-19 have not been compared with those in community-acquired pneumonia (CAP) patients. This study compares the risks and incidences of VTE between the 2 groups of patients with COVID-19 pneumonia and CAP. APPROACH The authors declare that all supporting data are available within this article. A total of 616 pneumonia patients were admitted to the Yichang Central People's Hospital, a tertiary regional medical center in Hubei, China, from January 1 to March 23, 2020.
All the patients presenting with fever or respiratory symptoms had COVID-19 screening tests, and the diagnosis of COVID-19 pneumonia was made according to the following published criteria: (1) symptomatic patients with bilateral pulmonary infiltrates and multifocal ground-glass opacities consistent with atypical pneumonia on chest computed tomography scan, (2) rhinopharyngeal specimen reverse transcription polymerase chain reaction (RT-PCR) test positive for SARS-CoV-2, or (3) SARS-CoV-2 gene assay positive. 8 Our institution opened a fever clinic for COVID-19 patients on January 1, 2020, but RT-PCR testing was not started until the end of January. The suspected COVID-19 patients were initially diagnosed based on travel/contact history, clinical manifestations, and chest computed tomography findings. We excluded the patients who were discharged before the RT-PCR test became available. All the COVID-19 patients in this study were positive on 1 or 2 rounds of RT-PCR tests during hospitalization. The patients who presented with diagnosed COVID-19 pneumonia were directly admitted to the dedicated units. The patients with clinical manifestations and imaging evidence of pneumonia, but negative for the first COVID-19 test, were admitted to transitional isolated units until repeated tests were performed. Those patients who were confirmed to have COVID-19 infection by the repeated test were then transferred to the dedicated units. The patients who were negative per repeated COVID-19 test were then transferred to regular hospital wards from the transitional units. Medical records of the 616 patients were reviewed retrospectively. This study was approved by the Institutional Review Board of the Yichang Central People's Hospital. Informed written consent was not required for this deidentified retrospective study. Patients' demographic data were collected and analyzed statistically. The risks of VTE were assessed using the Padua prediction score. 9 All patients received VTE prophylaxis following standard protocols with low-molecular-weight heparin or unfractionated heparin, or with a mechanical intermittent pneumatic compression device if anticoagulants were contraindicated. High-risk (Padua score, >4) patients were screened using duplex ultrasound or computed tomography pulmonary angiogram to rule out VTE. Descriptive analyses were reported as relative frequencies for discrete variables. Continuous variables were reported as mean±SD or median and interquartile range for normally and non-normally distributed variables, respectively. To determine the differences in observational parameters between the COVID-19 pneumonia and CAP groups, the χ2 test, Fisher exact test, t test, or Mann-Whitney U test was performed. All statistical tests were 2-sided, and P<0.05 was considered statistically significant. All data management and analyses were performed using IBM SPSS, version 24, software (SPSS, Inc, Chicago, IL). Patients' characteristics are summarized in Table 1. In general, COVID-19 pneumonia patients were younger, with fewer underlying diseases, whereas more CAP patients had chronic comorbidities, including coronary artery disease, cerebrovascular disease, hypertension, diabetes mellitus, and history of malignancy. However, acute liver and renal dysfunction occurred more commonly in COVID-19 pneumonia patients, especially in critically ill patients (Table 1).
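For readers unfamiliar with the Padua prediction score used here, the sketch below encodes the commonly published item weights (active cancer, previous VTE, reduced mobility, and known thrombophilia at 3 points each; recent trauma/surgery at 2; the remaining items at 1). The field names are illustrative, and the >4 high-risk cutoff follows the paper's wording.

```python
# Sketch of Padua prediction scoring with the commonly published item weights.
# Field names are illustrative, not from the paper or any specific EHR schema.
PADUA_WEIGHTS = {
    "active_cancer": 3, "previous_vte": 3, "reduced_mobility": 3,
    "known_thrombophilia": 3, "recent_trauma_or_surgery": 2,
    "age_70_or_older": 1, "heart_or_respiratory_failure": 1,
    "acute_mi_or_ischemic_stroke": 1, "acute_infection_or_rheumatologic": 1,
    "obesity_bmi_30_plus": 1, "ongoing_hormonal_treatment": 1,
}

def padua_score(patient: dict) -> int:
    """Sum the weights of all risk factors flagged True for this patient."""
    return sum(w for item, w in PADUA_WEIGHTS.items() if patient.get(item))

# Example: a pneumonia patient with reduced mobility, respiratory failure,
# and acute infection scores 3 + 1 + 1 = 5, i.e., high risk per the >4 cutoff
patient = {"reduced_mobility": True, "heart_or_respiratory_failure": True,
           "acute_infection_or_rheumatologic": True}
score = padua_score(patient)
print(score, "high risk" if score > 4 else "low risk")
```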
RESULTS The overall rate of VTE was 2% in the COVID-19 pneumonia group and 3.6% in the CAP group, with no significant difference (P=0.229). Subgroup analysis showed that in the high-risk (Padua score, >4) patients, the incidences of VTE were 12.5% (5 of 40) in the COVID-19 pneumonia group and 16.7% (6 of 36) in the CAP group, with P=0.606 (Table 4). The incidence of VTE in the critically ill patients who required ventilator support was 6.7% (3 of 45) in the COVID-19 group and 13% (7 of 54) in the CAP group (Table 4). Vertical comparison within each group showed that the incidence of VTE in the patients who required ventilator support was higher than that in the patients without ventilator support (Table 5). However, cross-comparison demonstrated no statistical difference between the COVID-19 pneumonia and CAP groups (P=0.484). The length of hospital stay of the COVID-19 pneumonia patients was significantly longer than that of the CAP patients (28 versus 9 days; P<0.001). The in-hospital mortality rate was 6.3% in the COVID-19 group versus 3.9% in the CAP group, but statistical significance was not reached (P=0.138; Table 1). In the COVID-19 pneumonia group, one patient on ventilator support developed acute arterial ischemia with gangrenous toes and fingers. DISCUSSION Previous publications documented that coronaviruses may have strong influences on the hematopoietic and hemostatic systems and are associated with thrombotic complications. 1-6,10-12 The mechanisms are potentially due to endothelial cell dysfunction secondary to infection, 1,13 venous stasis because of immobilization during hospitalization, especially in critically ill patients, and hypoxia stimulation. 14 Antiphospholipid, anticardiolipin, and anti-β2-glycoprotein I antibodies were detected in 3 patients with complicated severe COVID-19 infection, 15 but data in large series of patients were not available. The effects of this novel SARS-CoV-2 on the thrombotic system have not been clearly understood. Recent cohort studies and case reports suggested that the VTE risks were increased in COVID-19 patients. [2][3][4][5][6][7]16,17 Current interim guidelines recommend assessing VTE risk in all COVID-19 patients admitted to hospital and providing pharmacological prophylaxis to all high-risk patients. 18,19 It was also suggested that anticoagulant treatment was associated with decreased mortality in severe COVID-19 patients. 6 However, no previous study has compared the VTE risks between the 2 groups of COVID-19 pneumonia patients and CAP patients. We conducted a retrospective chart review study to investigate the difference in VTE risks between the 2 groups of COVID-19 pneumonia and CAP patients, who were treated during the same time period at the same hospital, with 2600 regular beds and 500 dedicated COVID-19 beds, located in Hubei province, the epidemic center in China. Patients in the COVID-19 group were younger and healthier, whereas CAP patients had more chronic underlying diseases, such as coronary artery disease, cerebrovascular disease, hypertension, and diabetes mellitus. This reflected the characteristics of COVID-19 spreading among populations who were socially more active (Table 1). Notably, rates of acute kidney and liver dysfunction were significantly higher in COVID-19 pneumonia patients than in CAP patients (5.9% versus 1.4%, P=0.002, and 4.7% versus 0.3%, P<0.001, respectively). Cytokine storm might be one of the mechanisms causing multisystem organ dysfunction. 1,3,5
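The high-risk subgroup comparison above can be checked directly from the reported counts (5 of 40 versus 6 of 36). A two-sided chi-square test without continuity correction reproduces a P value close to the reported 0.606; Fisher's exact test, often preferred with expected cell counts this small, is shown alongside for comparison.

```python
# Verifying the reported high-risk subgroup comparison from Table 4 counts.
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

#                  VTE  no VTE
table = np.array([[5, 35],     # high-risk COVID-19 pneumonia (n = 40)
                  [6, 30]])    # high-risk CAP (n = 36)

# correction=False gives the plain Pearson chi-square, which matches ~0.606
chi2, p_chi2, dof, expected = chi2_contingency(table, correction=False)
odds, p_fisher = fisher_exact(table)
print(f"chi-square P = {p_chi2:.3f} (reported: 0.606), Fisher exact P = {p_fisher:.3f}")
```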
In-hospital mortality was as high as 6.3% in the COVID-19 pneumonia group versus 3.9% in the CAP group, but the difference was not statistically significant (P=0.18). The length of hospital stay of the COVID-19 pneumonia patients was significantly longer than that of the CAP patients. The main reason was that COVID-19 patients had to wait until they became asymptomatic and tested negative on 2 separate SARS-CoV-2 tests. It has been reported that SARS-CoV-2 increases the risk for VTE in COVID-19 patients. [2][3][4][5][6][7] Our data showed similar findings, as the average Padua score was significantly higher in the COVID-19 group than in the CAP group. However, the rates of VTE were not statistically different between the 2 groups, 2% versus 3.6% (P=0.229). The VTE rates in our groups were similar to a previously reported rate of 2.9% in COVID-19 patients. 5 In this study, the percentage of high-VTE-risk patients with a Padua score >4 was 15.6% in the COVID-19 pneumonia group, which was significantly higher than the rate of 10% in the CAP group (P=0.036). Longitudinal subgroup analysis demonstrated that the incidence of VTE increased from 2% in the COVID-19 group as a whole to 12.5% in the high-VTE-risk COVID-19 pneumonia patients with a Padua score >4. In the CAP group, the incidence of VTE increased from 3.6% overall to 16.7% in high-VTE-risk patients (Padua score, >4). Cross-group comparison demonstrated that the differences in VTE between the COVID-19 and CAP groups were not statistically significant (P=0.606). These findings suggested that SARS-CoV-2 infection was associated with a hypercoagulable state, especially in high-risk patients. However, the actual VTE incidence in the COVID-19 pneumonia group was not significantly higher than that in CAP patients (2% versus 3.6%; P=0.229). It was also reported that the severity of SARS-CoV-2 infection was associated with increased VTE risk. [5][6][7] However, direct comparison of the VTE rates in critically ill patients has not been performed. Our analysis compared the incidences of VTE in the patients who required ventilator support with those in the patients with mild-to-moderate symptoms. Longitudinal comparison in the COVID-19 group showed that the rate of VTE increased to 6.7% in ventilator-supported patients from 0.9% in the patients without respiratory distress, although statistical significance was not reached (P=0.054). In the CAP group, the incidence of VTE was significantly higher in the patients who required ventilator support than in the patients without ventilator support, 13% versus 2% (P<0.001). Our study revealed that elevated Padua prediction scores and increased VTE risks, which reflected a hypercoagulable state, were associated with the severity of COVID-19 pneumonia and CAP. However, the incidences of VTE were not statistically different between these 2 groups based on cross-group analyses. These real-world data suggested that SARS-CoV-2 infection might, similar to CAP, increase the VTE risk but did not further result in higher actual VTE events when routine deep venous thrombosis prophylaxis was given. We could imagine that complex mechanisms are involved in the pathophysiological processes of VTE development in the patients with SARS-CoV-2 infection. In interpreting our finding that VTE incidence was not higher in the COVID-19 pneumonia group than in the CAP group, one possible explanation might be that COVID-19 patients were younger, with fewer underlying diseases. Additionally, higher rates of acute liver and renal dysfunction in COVID-19 patients might cause some degree of coagulopathy. 20
In contrast, chronic comorbidities and history of malignancy were more common in the CAP group (Table 1), which could result in higher VTE risks. There were several limitations in this report. First, this was a retrospective chart review study with patient selection bias. Second, the small numbers of VTE events in the subgroup analyses might generate type II statistical errors. Third, the underlying chronic comorbidities and different treatment regimens might also play an important role in VTE development. Finally, it has been reported that the sensitivity of the RT-PCR test was 83.3% for the first round and 91.7% after repeat testing. 21 Theoretically, the false-negative tests could result in some crossover diagnoses between the COVID-19 and CAP groups. Fortunately, we did not have any later-diagnosed COVID-19 pneumonia cases in the CAP group. In summary, our study suggested that COVID-19 pneumonia was associated with a hypercoagulable state according to the Padua prediction scores, especially in critically ill patients. However, the incidence of VTE in the COVID-19 group was not significantly higher than that in CAP patients. We agree that attention should be paid to VTE prophylaxis in patients with COVID-19 infection. However, further investigations should be conducted to evaluate whether extra or stronger VTE prophylaxis, compared with CAP, is necessary for patients with COVID-19 pneumonia.
Modification and validation of the COVID-19 stigma instrument in nurses: A cross-sectional survey Background Nurses taking care of patients with infectious diseases have suffered from noticeable societal stigma; however, currently there is no validated scale to measure such stigma. This study aimed to revise and validate the COVID-19 Stigma Instrument-Nurse-Version 3 (CSI-N-3) by using item response theory (IRT) as well as classical test theory analysis. Methods In phase I, the Chinese CSI-N-3 was modified from the English version of the HIV/AIDS Stigma Instrument-Nurse based on standard cross-cultural procedures, including modifications, translation/back-translation, pilot testing, and psychometric testing with classical test theory and Rasch analysis. In phase II, a cross-sectional study using cluster sampling was conducted among 249 eligible nurses who worked in a COVID-19-designated hospital in Shanghai, China. The influencing factors of COVID-19-associated stigma were analyzed through regression analysis. Results In phase I, the two-factor structure was verified by confirmatory factor analysis, which indicated a good model fit. The 15-item CSI-N-3 achieved a Cronbach's α of 0.71-0.84 and a composite reliability of 0.83-0.91. Concurrent validity was established by significant associations with self-reported physical, psychological, and social support levels (r = −0.18, −0.20, and −0.21, p < 0.01). In the IRT analysis, the CSI-N-3 had ordered response thresholds, with an Item Reliability of 0.95 and an Item Separation Index of 4.15, and a Person Reliability of 0.20 and a Person Separation Index of 0.50. The infit and outfit mean squares for each item ranged from 0.39 to 1.57. In phase II, the mean score for the CSI-N-3 in Chinese nurses was 2.80 ± 3.73. Regression analysis showed that social support was the only factor affecting nurses' COVID-19-associated stigma (standardized coefficient β = −0.21, 95% confidence interval: −0.73 ~ −0.19). Conclusion The CSI-N-3 instrument has rigorous psychometric properties and can be used to measure COVID-19-associated stigma among nurses during and after the COVID-19 pandemic. The use of this instrument may facilitate the evaluation of tailored stigma-reduction interventions. Introduction The coronavirus disease 2019 (COVID-19) global pandemic (World Health Organization, 2020) has placed frontline healthcare workers (HCWs) under extraordinary stress related to the high risk of infection, and resultant understaffing, uncertainty, and psychological distress (e.g., anxiety, depression, or insomnia) related to the illness (Cai et al., 2020; Liu et al., 2020). This is especially true for nurses, who make up the largest group of HCWs and who spend long periods of time providing care to and monitoring COVID-19 patients (Chen et al., 2019). Nurses are often directly exposed to the virus and therefore are at high risk of developing the disease (Chen et al., 2019; Fernandez et al., 2020; Mo et al., 2020).
Compared with HIV, hepatitis, influenza A, H1N1, severe acute respiratory syndrome (SARS), and Middle East respiratory syndrome (MERS; Park et al., 2018; Nyblade et al., 2019; Chersich et al., 2020), COVID-19 is highly communicable and has higher mortality rates, with stigma figuring prominently among nurses working with COVID-19 patients. Studies estimate that approximately 20-49% of nurses in Taiwan and Singapore experienced social stigmatization during the SARS outbreak (Bai et al., 2004; Koh et al., 2005), with one such example being a nurse who was scolded by fellow passengers for making trains "dirty" (Chersich et al., 2020). During the 2015 MERS outbreak, while caring for MERS patients, Korean nurses were discriminated against by family members, friends, and neighbors as well as by community members in the schools that their children attended (Jung and Cho, 2015). During the recent COVID-19 pandemic, several studies have reported that stigma has been experienced by HCWs. In one study by Simeone et al., Italian nurses experienced "stigma in the working environment" and "stigma in everyday life" (Simeone et al., 2021). This was echoed in another study, in which Egyptian physicians experienced stigma while taking care of confirmed COVID-19 cases (Mostafa et al., 2020). Similarly, healthcare providers in Iran have also been impacted by COVID-related stigma (Kalateh Sadati et al., 2021). One study on perceptions of HCWs during the COVID-19 pandemic, conducted with non-healthcare-worker adults, showed that study participants feared and avoided interactions with healthcare workers. This is a widespread and under-recognized issue during the COVID-19 pandemic (Taylor et al., 2020). These reports provide evidence that during the COVID-19 outbreaks, nurses have suffered from COVID-19-associated stigma due to the contagious nature and serious and potentially deadly outcomes of the disease (Bruns et al., 2020). COVID-19-related stigmatization among HCWs has been reported globally. For example, 17.3-91.0% of HCWs in Egypt experienced COVID-19-related stigma (Brooks et al., 2020). In addition, HCWs complained about their personal experiences of discrimination and later burned out from caring for COVID-positive patients (Shiu et al., 2022). However, there is a dearth of empirical data on the measurement of COVID-related stigma experienced specifically by nurses. The lack of research regarding COVID-19-associated stigma is due to the unavailability of validated measures of such stigma. Most measures used to explore the stigma of HCWs during SARS (Ho et al., 2005), influenza A, and H1N1 (Kisely et al., 2020) were informal assessments that were not evaluated for reliability and validity. Several existing instruments are currently used to measure HIV/AIDS-related stigma in people living with HIV and the general population, and HCWs' perceived stigma while taking care of HIV-infected individuals (Holzemer et al., 2009; Uys et al., 2009). For this study, we adopted the HIV Dynamic Model of Holzemer et al.
(2007) as the theoretical framework to guide our understanding of the stigma process, and adapted the COVID-19 Stigma Instrument-Nurse (CSI-N) as a tool to measure levels of perceived stigma in nurses. The model comprises three components: the healthcare system; the environment (culture, economics, politics, law, and policies); and the agents (person, family, workplace, and community). The stigma process includes stigma triggers (testing, diagnosis, disease, disclosure, and suspicion), stigmatizing behaviors (blame, insult, avoidance, and accusation), types of stigma (received, internal, and associated), and stigma outcomes (poorer health, decreased quality of life, denied access to care, violence, and poorer quality of work life; Holzemer et al., 2007). The 19-item HIV/AIDS Stigma Instrument-Nurse (HASI-N) scale was the first reliable and valid scale used to measure HIV/AIDS-related stigma that is perpetrated and experienced by nurses; it includes two domains: nurses stigmatizing patients and nurses being stigmatized (Brooks et al., 2020). As the authors noted, the HASI-N scale could be modified to address infectious diseases other than HIV/AIDS, and considering that similar stigma conditions may be experienced by HCWs who provide care to individuals with COVID-19, we felt it appropriate to modify the HASI-N for use in COVID-19. The COVID-19 Stigma Instrument-Nurse (CSI-N) scale was designed to measure COVID-19-related stigma among nurses. High perceived stigma is directly associated with worse mental health among HCWs caring for HIV patients in Africa (Uys et al., 2009), MERS patients in Korea (Park et al., 2018), and SARS patients in Singapore (Verma et al., 2004), but findings regarding the linkage of stigma to HCWs' physical health outcomes are mixed (Uys et al., 2009; Park et al., 2018; Logie and Turan, 2020). Perceived stigma may impair nurses' job satisfaction and decrease their ability to provide effective care, and may therefore undermine the quality of care they provide (Chang and Cataldo, 2014; Nyblade et al., 2019). However, the stigma experienced by nurses during the COVID-19 pandemic and its influencing factors are still unknown. Limited studies have shown that COVID-19-infected individuals presenting with anxiety, higher levels of education, perceived risks, and familiarity with quarantine policy have a high likelihood of perceived stigma (Duan et al., 2020). Thus, a validated, trustworthy, and effective method was required to measure stigma associated with COVID-19 and its effects, and to assess both the levels of stigma experienced by nurses during the pandemic and its influencing factors. In this paper, we present how we (1) modified the HASI-N into the CSI-N, (2) validated the CSI-N with both classical test theory (CTT) and item response theory (IRT), and (3) assessed the COVID-19-associated stigma experienced by frontline nurses and its influencing factors. Participants and setting A convenience sampling method was used in Shanghai, China, to recruit 400 Chinese registered nurses working in a COVID-19-designated facility. Two hundred and forty-nine eligible Chinese registered nurses participated. Nurses were eligible to participate if they rotated through COVID wards, understood the purpose of the survey, and were willing to complete the survey. The sample size was determined using a subject-to-variable ratio of 5-10 to 1 (Streiner and Norman, 2003); the study survey included a total of 11 variables. All participants were reimbursed after completing the survey.
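As a concrete illustration of how an instrument in this family is scored, the sketch below sums 0-3 Likert responses into the two subscales named above. The 8/7 item split is inferred from the subscale score ranges reported later in the Results (0-24 and 0-21); the actual item-to-subscale mapping lives in the published key, so the assignment here is hypothetical.

```python
# Minimal scoring sketch for a CSI-N-style instrument: 15 items rated
# 0 = "never" ... 3 = "most of the time", summed into two subscales.
# The 8/7 split is inferred from the reported ranges; the mapping is illustrative.
from typing import Sequence

STIGMATIZING_ITEMS = range(0, 8)    # hypothetical: "nurses stigmatizing patients"
STIGMATIZED_ITEMS = range(8, 15)    # hypothetical: "nurses being stigmatized"

def score_csi_n(responses: Sequence[int]) -> dict:
    if len(responses) != 15 or any(r not in (0, 1, 2, 3) for r in responses):
        raise ValueError("expected 15 responses coded 0-3")
    sub1 = sum(responses[i] for i in STIGMATIZING_ITEMS)
    sub2 = sum(responses[i] for i in STIGMATIZED_ITEMS)
    return {"stigmatizing": sub1, "stigmatized": sub2, "total": sub1 + sub2}

print(score_csi_n([0] * 13 + [1, 2]))  # {'stigmatizing': 0, 'stigmatized': 3, 'total': 3}
```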
Design and procedure After the approval of the study by the relevant institutional ethical review boards, our study took a two-stage approach that included a Stage I instrument modification and validation and a Stage II cross-sectional survey. Stage I: Instrument modification and validation In this stage, we adhered to the COnsensus-based Standards for the selection of health status Measurement INstruments (COSMIN) checklist (Mokkink et al., 2010a,b). The HASI-N (Uys et al., 2009) comprises 19 questions/items and two factors: nurses stigmatizing patients (e.g., "A nurse provided poorer quality care to an HIV/AIDS patient than to other patients") and nurses being stigmatized (e.g., "People said nurses who provide HIV/AIDS care are HIV-positive"). A four-point Likert scale ranging from 0 = "never" to 3 = "most of the time" was used to measure these questions/items. The HASI-N scale had a Cronbach's alpha reliability of 0.90, indicating good reliability. A negative association between job satisfaction and stigma significantly reinforced the HASI-N construct validity (Uys et al., 2009). After obtaining permission from the original author of the HASI-N, we revised the HASI-N into the CSI-N using four steps (see Figure 1: the cross-cultural adaptation process from the HIV/AIDS Stigma Instrument-Nurse (HASI-N) to the COVID-19 Stigma Instrument-Nurse (CSI-N); CTT, classical test theory; IRT, item response theory). Step 2: Translation The translation model was followed during the trans-cultural interpretation of the HASI-N with a sequence of (1) translation, (2) back-translation, (3) comparison, and (4) linguistic adaptation (Brislin, 1970; Jones et al., 2001). First, the 19-item HASI-N was translated by a bilingual nursing researcher (English to Chinese). The back-translation of the Chinese version into English was then done by another bilingual researcher, followed by a third member who compared the back-translated English version with the original English instrument. One question was revised to bring the two versions (translated and back-translated) close to the original version. Specifically, the Chinese sentence "直呼护士的名字" ("Someone called a nurse names") in item 15 was replaced with "耻笑护士" ("scorn nurse"). The process resulted in the first version of the Chinese CSI-N (CCSI-N-1) for pilot testing. Step 3: Pilot test To ensure fluency, readability, and comprehensibility of the new scale, one-on-one interviews were conducted by phone with 17 nurses, using a structured interview guide to understand how nurses interpreted the items of the CCSI-N-1. Probes included: "Tell me, what is this question asking?"; "What answer would you give to this question?"; and "What does the [survey concept] mean to you?" All interviews were digitally recorded and transcribed verbatim for later analysis. The nurses indicated that the description of item 7 (A nurse made a COVID-19 patient wait until the last for care) was not suitable considering the centralized treatment and care for COVID-19 patients; therefore, we deleted that item. The 18-item Chinese version 2 of the CSI-N (CCSI-N-2) was ready for the validation steps. Step 4: Psychometric test CTT and IRT were used to evaluate the psychometric properties of the scale. After item analysis, three items/questions (I5, I12, and I14) were discarded; therefore, the final 15-item CCSI-N-3 was generated (see Supplementary material A).
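The alpha-based retention criterion used in Step 4 can be sketched as follows: Cronbach's alpha is recomputed with each item removed, and an item whose removal raises alpha is a deletion candidate. The data below are simulated; only the formula itself is standard.

```python
# Cronbach's alpha and leave-one-item-out retention screening on simulated
# 0-3 Likert data; the alpha function is the textbook formula.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of Likert responses."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(2)
latent = rng.normal(size=(249, 1))                    # one common factor
data = np.clip(np.round(1.5 + latent + rng.normal(scale=0.8, size=(249, 18))),
               0, 3)                                  # 18-item toy scale

alpha_full = cronbach_alpha(data)
for j in range(data.shape[1]):
    alpha_without = cronbach_alpha(np.delete(data, j, axis=1))
    if alpha_without > alpha_full:                    # deletion candidate
        print(f"item {j}: alpha rises {alpha_full:.2f} -> {alpha_without:.2f}")
```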
Stage II: Cross-sectional survey In the cross-sectional survey, we adhered to the Strengthening the Reporting of Observational Studies in Epidemiology statement (von Elm et al., 2014). From April 18 to May 23, 2020, data were collected using Questionnaire Star (QS or Wenjuanxing), an online survey program in China similar to the US-based SurveyMonkey. We posted study recruitment information during the monthly nurse meetings at the COVID-19 facility. If eligible nurses were interested in participating and able to provide informed consent for the study online, the QR code or URL for the CCSI-N-3 was shared via the online messaging/calling system WeChat. Eligible nurses self-administered the 15-min online survey, which included standardized measures to collect demographic data, self-reported health status, and social support, as well as the CCSI-N-3. The sociodemographic variables included participants' age, gender, marital status, ethnicity, highest educational level, professional title, and years as a nurse. The self-reported physical health, psychological health, and social support levels were measured by three questions; each of these factors was rated on a 10-point Likert scale from 1 = "very bad" to 10 = "very well." In this study, the social support construct assessed nurses' support from family, colleagues, or the hospital they worked for. Data analysis SPSS 23.0 (IBM, Chicago, IL, United States) and Mplus 6.1 (Muthén & Muthén, Los Angeles, CA, United States) were used for data analyses. In addition, the IRT analysis used WINSTEPS 3.75.0 (Chicago, IL, United States) for the final report; p < 0.05 was considered significant. Item analysis An item was deleted when it met the following criteria of the CTT and IRT analyses: (1) factor loading <0.4 or substantial cross-loading; (2) infit and outfit mean squares outside the range of 0.6-1.4; and (3) the alpha coefficient for the overall scale increased after the item was deleted (Nguyen et al., 2014; Huang et al., 2017). Structural validity The structural validity of the scale was assessed by CTT and IRT analyses combined with confirmatory factor analysis (CFA). In the CFA, the best-fitting model of the scale was examined using the method of maximum likelihood. The model's goodness of fit was evaluated with the normed χ2 (χ2/df) between 1.0 and 3.0, the Root Mean Square Error of Approximation (RMSEA), the Comparative Fit Index (CFI), the Tucker-Lewis Index (TLI), and the Normed Fit Index (NFI; Johnson et al., 2011; Huang et al., 2017). In the IRT analysis, we examined the unidimensionality assumption by principal component analysis (PCA). Assuming nurses may interpret the scale's items differently, we used the partial credit model to assess item and person separation reliability, item and person separation indices, category probability curves, infit and outfit mean squares, the test information function (TIF; Baker, 2001), and differential item functioning (DIF; Campbell and Fiske, 1959; Wolfe and Smith, 2007). Convergent validity The convergent validity of the CSI-N-3 was estimated by computing Pearson's correlations between the CSI-N-3 and self-reported physical health, psychological health, and social support levels.
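For reference, the fit indices named above have simple closed forms given the model and baseline (null) chi-square values that CFA software reports. The sketch below implements RMSEA, CFI, and TLI; the numbers plugged in are placeholders, not values from this study.

```python
# Closed-form CFA fit indices computed from model and null-model chi-squares.
import math

def rmsea(chi2: float, df: int, n: int) -> float:
    return math.sqrt(max(chi2 - df, 0) / (df * (n - 1)))

def cfi(chi2: float, df: int, chi2_null: float, df_null: int) -> float:
    d, d0 = max(chi2 - df, 0), max(chi2_null - df_null, 0)
    return 1 - d / max(d, d0) if max(d, d0) > 0 else 1.0

def tli(chi2: float, df: int, chi2_null: float, df_null: int) -> float:
    return ((chi2_null / df_null) - (chi2 / df)) / ((chi2_null / df_null) - 1)

# Placeholder example: chi2/df = 2.0 falls in the 1.0-3.0 band cited above
print(rmsea(chi2=178.0, df=89, n=249),
      cfi(178.0, 89, chi2_null=1200.0, df_null=105),
      tli(178.0, 89, 1200.0, 105))
```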
Stage II: Statistical analysis The one-sample Kolmogorov-Smirnov tests were not statistically significant, indicating that the data fitted the assumptions of normality. Continuous variables were presented as means and standard deviations (SDs). Categorical variables were presented as frequencies and percentages. One-way ANOVA and independent t-tests were then used to identify differences in the nurses' COVID-19-associated stigma scores. In addition, we examined the associations among age, years of working as a nurse, self-reported physical health, psychological health, and social support levels, and the score of the COVID-19-associated stigma by Pearson's correlation analyses. We also explored the factors influencing nurses' perceived COVID-19-associated stigma by multiple linear stepwise regression. Multicollinearity was assessed with the variance inflation factor. Sample characteristics A total of 249 nurses participated in the survey, 96% (239/249) of whom were female. The mean age of participants was 31 years (SD = 5.52), and 64% (164/249) of them were married. More than half of the participants (53%) had 5-10 years of work experience as a nurse. Other socio-demographic characteristics of study participants are presented in Table 1. Psychometric properties of the CSI-N-3 Item retention As shown in Table 2, according to the criteria of item retention, three items (I5, I12, and I14) were removed due to cross-loading (I12, I14), factor loading <0.4 (I5), and infit and outfit mean squares outside the range of 0.6-1.4 (I5, I12, and I14). After deleting I5, Cronbach's alpha coefficient for the overall scale increased from 0.81 to 0.84. In the IRT analysis, the two subscales' unidimensionality assumptions were supported by PCA; that is, the measures explained 58.9 and 55.1% (>50%) of the raw variance, whereas the unexplained variance in the first contrast was 1.7 and 2 (<3.0) eigenvalue units. As shown in Table 3, the item difficulty for each item ranged from −1.77 to 1.33, and the infit and outfit mean squares for each item ranged from 0.32 to 1.40. No evidence of disordered thresholds was found in the category probability curves, as the category calibration increased in an orderly way (see Figures 3A,B). During the analysis, we found item reliability of 0.95 and 0.96, item separation indices of 4.15 and 5.12, person reliability of 0.94 and 0.94, and person separation indices of 3.92 and 3.94. DIF was not found when evaluated by professional title and working place (Wolfe and Smith, 2007). Regarding the TIFs, the subscales of nurses stigmatizing patients and nurses being stigmatized gathered information most precisely when θ ranged from −1.0 to 1.0 and −2.0 to 2.0, respectively (see Supplementary material B). Convergent validity The total CCSI-N-3 score was negatively and significantly correlated with self-reported physical health, psychological health, and social support levels (r = −0.18, −0.20, and −0.21, p < 0.01), as confirmed by Pearson's correlation analysis. COVID-19 stigma scores of the participants The total mean score for the CSI-N-3 in Chinese nurses was 2.80 ± 3.73 (range 0-45) overall, with a mean score of 1.42 ± 2.13 (range 0-24) for the nurses stigmatizing patients factor and a mean score of 1.38 ± 2.46 (range 0-21) for the nurses being stigmatized factor. Supplementary material C presents the mean score for each item.
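A hedged sketch of the Stage II regression step follows: the stigma total is regressed on the three single-item ratings, and the variance inflation factor screens for multicollinearity. The data are simulated stand-ins for the survey variables, and plain OLS is used rather than SPSS's stepwise procedure.

```python
# OLS regression of stigma on the three 10-point self-ratings, with VIF checks.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "physical": rng.integers(1, 11, 249),
    "psychological": rng.integers(1, 11, 249),
    "social_support": rng.integers(1, 11, 249),
})
# Simulated outcome with a negative social-support effect, echoing the paper
df["stigma"] = 6 - 0.45 * df["social_support"] + rng.normal(scale=3, size=249)

X = sm.add_constant(df[["physical", "psychological", "social_support"]])
fit = sm.OLS(df["stigma"], X).fit()
print(fit.params, fit.conf_int(), sep="\n")      # coefficients and 95% CIs

# VIF for each non-constant column; values near 1 indicate no collinearity
for i, col in enumerate(X.columns[1:], start=1):
    print(col, variance_inflation_factor(X.values, i))
```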
Factors correlated with COVID-19 stigma of nurses Self-reported physical health, psychological health, and social support levels were significantly correlated with the COVID-19-associated stigma score (r = −0.18, −0.20, and −0.21, p < 0.05), as confirmed by Pearson's analysis. Age and years of working as a nurse were not correlated with the COVID-19-associated stigma score (r = −0.23 and 0.01, p > 0.05). (Table 3 notes: item difficulty is measured in logits; items with positive logits are relatively hard, while items with negative logits are relatively easy. MNSQ, mean square. DIF contrasts by professional title: naïve nurses compared with experienced nurses, and naïve nurses compared with charge nurses. DIF contrasts by working place: severe COVID cases compared with mild/moderate COVID cases, and severe COVID cases compared with all case severities.) As shown in Table 1, other sociodemographic variables were not statistically significant (p > 0.05). The total COVID-19-associated stigma score was taken as the dependent variable, and the statistically significant (p < 0.05) self-reported physical health, psychological health, and social support levels were selected as independent variables in the regression analysis. Regression analysis showed that self-reported social support (standardized coefficient β = −0.206, t = −3.32, 95% confidence interval: −0.72 ~ −0.18) was the only factor influencing nurses' stigma related to COVID-19, explaining 4.70% of the total variance (F = 5.05, p < 0.001). The variance inflation factor for self-reported social support was one, which is below the criterion value of 2.1. Discussion The modification and validation of the CSI-N with both CTT and IRT This is a pioneering study that modified and validated the CCSI-N-3 via a thorough, multi-phase process. Psychometric evaluation based on CTT and IRT demonstrated that the 15-item CSI-N-3, with a two-factor solution, is a trustworthy and effective self-report instrument for evaluating nurses' COVID-19-associated stigma. The factor analytic methods used in CTT reproduced the factor structure of the original HASI-N (Uys et al., 2009), including the subscales of nurses being stigmatized and nurses stigmatizing patients. In addition to the construct validity of the CSI-N, as supported by CFA, the convergent validity of the scale was also supported, as there were significant negative correlations with self-reported physical health, psychological health, and social support levels. As with other communicable diseases such as HIV, SARS, and MERS (Park et al., 2018; Logie and Turan, 2020), our study showed that COVID-19-associated stigma adversely affects the physical and mental health of frontline nurses, although the r values were low. The low r values simply show that these constructs were significantly correlated yet distinct (Campbell and Fiske, 1959). Importantly, the Cronbach's α reliability was more than 0.6, indicating that the CCSI-N-3 presented acceptable internal consistency and reliability (Johnson et al., 2011).
Using IRT analysis, we have provided information about the items in the CSI-N-3 that expands on traditional CTT methods (Adnan et al., 2018; Ahorsu et al., 2020). Our data support ordered thresholds in the category probability curves, which means that the category rating scale of the CSI-N-3 worked well and that nurses could use the scale to differentiate the four levels of item difficulty (Johnson et al., 2011; Adnan et al., 2018). The combination of a good person-separation index (>2) and person reliability (>0.8) suggests that the CSI-N-3 has acceptable measurement precision and is sensitive in distinguishing both high and low levels of COVID-19-associated stigma among frontline nurses (Johnson et al., 2011). When represented graphically, high TIF values correlated with low standard measurement errors and therefore assure the instrument's accuracy (Hambleton et al., 1991). The most precise information provided by the TIF on the CSI-N-3 supports precise and reliable measurement in the low to middle levels of the CSI-N-3. Furthermore, IRT measures also allow for the estimation of the equivalence of item calibrations across different samples and contexts (Johnson et al., 2011). In our study, we examined how the 15 items may have been used differently, based on the nurses' professional titles and the severity of cases at the workplace (mild, moderate, and/or severe COVID cases). The DIF findings showed there were no professional title or workplace differences in item difficulty, which further supports the stability and validity of the CSI-N-3 (Johnson et al., 2011). The COVID-19-associated stigma experienced by frontline nurses and its influencing factors The score of the CSI-N-3 reflects the level of COVID-19-associated stigma perpetrated or experienced by nurses; however, we found that the mean score of the CSI-N-3 (2.80 ± 3.73) appears to suggest a major floor effect; that is, the level of nurses stigmatizing patients or being stigmatized was not as high as the stigma level of nurses who worked with people living with HIV [8.74 ± 9.31; (2318)] and MERS-CoV (Park et al., 2018). This finding might be explained by the cultural differences between China, where the CSI-N-3 was developed and tested, and South Africa, where the HASI-N was developed and tested. In addition, since the original HASI-N study was conducted in 2008, before the implementation of effective interventions to decrease stigma in healthcare institutions and nursing education, the external stigma surrounding infectious diseases may have since decreased. Under the influence of Confucian culture, most Chinese nurses have manifested a greater sense of work responsibility, dedication to patient care, personal sacrifice, and professional collegiality during the pandemic (Fernandez et al., 2020; Liu et al., 2020).
During the study, milder forms of COVID-19-associated stigma were mainly noted in terms of nurses being stigmatized and gossiped about, such as being labeled as COVID-19 positive and contagious. A possible explanation is that the general population, especially neighbors, routinely viewed nurses as a threat to the safety of others and as "disease carriers" (Hambleton et al., 1991), and thus nurses faced avoidance by community members due to this fear (Maben and Bridges, 2020). Furthermore, item 3 (A nurse kept her distance when talking to a COVID-19 patient) received the highest score on the instrument, i.e., was most often endorsed. During the early months of the COVID-19 pandemic, personal protective equipment (PPE) for nurses was in short supply, and nurses knew the main perceived infection routes of COVID-19 to be droplet, contact, and aerosol transmission; therefore, they avoided close contact with patients as much as possible to protect themselves. On the other hand, even with sufficient PPE, nurses showed a certain degree of fear of and stigma toward COVID-19 patients. Nevertheless, the total level of COVID-19-associated stigma was low among surveyed nurses, and nurses were unaware that their physical distancing behaviors may have biased their provision of care (Nyblade et al., 2019) and exacerbated avoidance, mistreatment, and stigma toward COVID-19 patients (Logie and Turan, 2020). Coinciding with similar studies (Mao et al., 2018; Arshi et al., 2020), this study found that social support was negatively associated with COVID-19-associated stigma among nurses. This result suggests that social support is an effective coping strategy that can alleviate stigma. As Gardner and Moallef (2015) suggest, support for nurses from the media and community, framed as "stalwart heroism and sacrifice," contributed to their positive experiences and lower perceived stigma. As Liu et al. (2020) noted, effective support systems including hospitals, colleagues, families, friends, and society can help frontline nurses minimize the stigma associated with caring for COVID-19 patients. With logistical support from their hospital, peer support, and encouragement among colleagues (e.g., the sharing of workplace experiences), frontline nurses had a sense of safety and felt less stigma (Liu et al., 2020). However, in light of the relatively small explained variance in the regression model, further exploration of other factors is encouraged, given the complexity of factors that affect COVID-19 stigma for nurses.
There are several limitations to this study. Firstly, since this study was conducted at one of the major infectious disease hospitals in Shanghai, China, it may not be representative of other Chinese-speaking areas. Secondly, the low-magnitude correlations between stigma and physical health, psychological health, and social support might be due to the three single-item physical health, psychological health, and social support measures used in this study not adequately assessing these constructs. Thus, valid and reliable scales that are available in Chinese to assess nurses' physical health, psychological health, and social support are needed to further assess the construct validity of the scale. Thirdly, using all types of social and mass media, the Chinese government has been publicly encouraging all healthcare providers actively engaged in COVID-19 care. Since we recruited within an infectious disease institution in Shanghai, nurses may not have been willing to share their "true" feelings, as the survey link came from their workplace. A longitudinal study is recommended to see if nurses will be more forthcoming in their answers and to compare their current and future answers to see if the passage of time and the fading of national attention on COVID-19 will affect their responses. Furthermore, the non-significant relationship between physical and psychological health and nurses' reported stigma may be related to measurement issues. Some CCSI-N-3 psychometric characteristics, such as test-retest reliability and the responsivity or sensitivity of the CCSI-N-3, should be evaluated further and would benefit from experimental or longitudinal studies in the future. Lastly, the sample size for IRT analysis was relatively small, despite the lack of consensus on the optimal sample size. Further refinement of the scale based on testing a larger representative sample may produce more stable parameter estimates and robust results. Conclusion The preliminary psychometric properties presented in this paper support the use of the 15-item CSI-N-3, which can be used to measure the internal and external COVID-19-associated stigma experienced by nurses who care for COVID-19 patients during and after the COVID-19 pandemic. Although low levels of stigma in nurses were found in this study's sample, the adverse effects of stigma during a pandemic should not be neglected. This instrument may facilitate the cross-cultural comparison of COVID-19-associated stigma experienced by nurses among different countries and expedite the development of additional tailored interventions for stigma reduction. Future studies should explore how to actively mobilize nurses' social support resources to decrease the stigma associated with COVID-19 and to improve nurses' quality of patient care and overall job satisfaction. (Figure 3. (A) Category probability curves for the subscale of nurses stigmatizing patients. (B) Category probability curves for the subscale of nurses being stigmatized. The four curves from left to right represent four response categories: 0 = never; 1 = once or twice; 2 = several times; 3 = most of the time.) (Table 2. Item analysis of the scale.)
Psychogenic non-epileptic seizures in children and adolescents: Part I – Diagnostic formulations Psychogenic non-epileptic seizures (PNES) are a nonspecific, umbrella category that is used to collect together a range of atypical neurophysiological responses to emotional distress, physiological stressors and danger. Because PNES mimic epileptic seizures, children and adolescents with PNES usually present to neurologists or to epilepsy monitoring units. After a comprehensive neurological evaluation and a diagnosis of PNES, the patient is referred to mental health services for treatment. This study documents the diagnostic formulations – the clinical formulations about the probable neurophysiological mechanisms – that were constructed for 60 consecutive children and adolescents with PNES who were referred to our Mind-Body Rehabilitation Programme for treatment. As a heuristic framework, we used a contemporary reworking of Janet's dissociation model: PNES occur in the context of a destabilized neural system and reflect a release of prewired motor programmes following a functional failure in cognitive-emotional executive control circuitry. Using this framework, we clustered the 60 patients into six different subgroups: (1) dissociative PNES (23/60; 38%), (2) dissociative PNES triggered by hyperventilation (32/60; 53%), (3) innate defence responses presenting as PNES (6/60; 10%), (4) PNES triggered by vocal cord adduction (1/60; 2%), (5) PNES triggered by activation of the Valsalva manoeuvre (1/60; 1.5%) and (6) PNES triggered by reflex activation of the vagus (2/60; 3%). As described in the companion article, these diagnostic formulations were used, in turn, both to inform the explanations of PNES that we gave to families and to design clinical interventions for helping the children and adolescents gain control of their PNES. Introduction Psychogenic non-epileptic seizures (PNES) are time-limited disturbances of motor-sensory control accompanied by an alteration in consciousness, but without ictal activity on electroencephalogram (EEG). PNES commonly present with rhythmic tremor or rigor-like movements; violent thrashing; complex movements such as flexion and extension; myoclonic-like movements; episodes of unresponsiveness; episodes of collapse/swooning; non-epileptic auras; and shuddering, staring and tonic posturing (Morgan & Buchhalter, 2015). Because PNES mimic epileptic seizures, children and adolescents with PNES usually present to neurologists or to epilepsy monitoring units; after a comprehensive neurological evaluation and a diagnosis of PNES, the patient is referred to mental health services for ongoing treatment. This study, Part I of a two-part article, presents the diagnostic formulations - the clinical formulations about the probable neurophysiological mechanisms, known or hypothesized - that were constructed for 60 consecutive children and adolescents with PNES who were referred to our Mind-Body Rehabilitation Programme for treatment. In Part II, we use the formulations presented here to frame discussions with patients and families, and to identify what treatments are most likely to help patients diagnosed with particular subtypes of PNES. More than a century ago, Janet (1889) proposed a dissociation model in which danger, severe stress, illness or fatigue could destabilize the neural system, disrupt the mental synthesis between ideas, acts, and sensory and motor functions, and cause PNES.
Despite this early interest in PNES, the technological advances that allow contemporary researchers to study brain function had yet to be developed. Interest in PNES waned; research came to a halt. Notwithstanding the lack of progress in understanding the neurobiological mechanisms underlying PNES, up to a third of all patients presenting to specialist epilepsy centres continued to be diagnosed with PNES (Uldall, Alving, Hansen, Kibaek, & Buchholt, 2006) and to be referred to mental health services for treatment. When this study was being established, neurologists and psychiatrists hypothesized that PNES involved a heterogeneous range of neurobiological mechanisms that varied from one patient to another (Baslet, 2011; Goldstein & Mellers, 2012). Figuring prominently in this context was Baslet's 'Psychogenic Non-epileptic Seizures: A Model of Their Pathogenic Mechanism', a contemporary reworking of Janet's dissociation model of PNES (Baslet, 2011). His model provided an overarching framework for conceptualizing our study, talking about PNES with patients and families, and interpreting our findings. Using the language of contemporary neuroscience, Baslet (2011) suggested that PNES reflect a release or activation of prewired motor programmes secondary to a functional failure in 'cognitive-emotional executive control circuitry' (p. 9) in the prefrontal cortex (PFC) and that sudden shifts in arousal (both increases and decreases) play a central role in PNES pathophysiology. Under his model, multiple different mechanisms could result in PFC dysfunction and subsequent destabilization of the neural system, with each mechanism representing a different PNES subgroup.

In parallel, scientists and clinicians working in areas nominally unrelated to PNES identified a broad range of animal and human responses to fear, physiological stressors and danger. These responses resembled, in terms of presentation, what our own team was encountering with PNES patients: they could be understood as explaining some of the presentations that we were seeing clinically. In the remainder of this section, we provide a brief overview of the broad range of brain-body responses to fear, physiological stressors and danger. Following that, we describe how these brain-body responses can be used to understand the clusters of clinical presentations we encountered in our cohort of 60 consecutive child and adolescent patients with PNES.

Innate defence responses to danger or to memories of danger

The study of innate defence responses began in the 1800s when Darwin used the terms feigning death to describe tonic immobility in fireflies, lizards and spiders (C. Darwin, 1839), and flight and utter prostration to describe the flight and the collapsed immobility responses in humans (C. Darwin, 1872). Subsequent research with animals and humans identified a continuum of defence responses - freezing, flight, fight, tonic immobility, collapsed immobility and quiescent immobility - termed the defence cascade. Two of these defence responses - tonic immobility and collapsed immobility - are discussed in the 'Introduction' section of this study because of their relevance to children/adolescents presenting with PNES.
Several lines of published research pointed to the clinical relevance of tonic and collapsed immobility in understanding PNES: case reports of soldiers and war veterans who went into states of collapse in response to fear during military action or in response to memories of military action (Kardiner, 1941; Mosso, 1896); reports of rape victims who experience what is referred to as rape-induced paralysis (Galliano, Noble, Travis, & Puechl, 1993; Moller, Sondergaard, & Helstrom, 2017); Stefan Bracha's (2004) work on fainting in response to fear; Bruce Perry's accounts of long periods of unresponsiveness in maltreated children (Perry & Szalavitz, 2006); and Stephen Porges' (2011) work on the role of the defensive vagus in shut-down states. Based on the clinical descriptions contained in this literature, we realized that in our tertiary care hospital, some of the children/adolescents that neurologists had referred to us with a diagnosis of PNES were actually experiencing tonic immobility or collapsed immobility in response to some sort of threat. In our effort to understand the neurobiology of tonic immobility and collapsed immobility, we collaborated with the tenth author (P.C.), a neuroscientist, to develop a neurobiological model of the innate defence responses (Kozlowska, Walker, McLean, & Carrive, 2015). According to that model, innate defence responses are hard-wired, automatically activated motor, autonomic and sensory responses mediated by subcortical neural circuits. Each defence response has a signature neural pattern - a somatomotor component (which involves either activation or loss of tone of skeletal muscle), an autonomic/visceromotor component (which involves sympathetic, defensive parasympathetic (vagal), or mixed activation of the viscera) and a pain-processing component (opioid system). High states of arousal are necessary for innate defence responses to be activated. Children/adolescents who activate tonic immobility will present as immobile, with closed eyes or an unfocused gaze, with or without tremors in the extremities, and will be unresponsive to external stimuli, including pain stimuli. Children/adolescents who activate collapsed immobility will present with sudden collapse (fainting) that involves a loss of consciousness, a loss of muscle tone and sometimes a loss of continence. Because the neural signatures of tonic and collapsed immobility include activation of the opioid system, some clinicians have utilized naloxone - which blocks opioid receptors - to disrupt the neural pattern and to terminate the tonic/collapsed immobility response (Bruce Perry, personal communication, June 2016).

Hyperventilation in response to stress or danger

The autonomic system (which mediates changes in arousal) is tightly coupled with the skeletomotor system, which mediates changes in muscle tone and muscle activity, including increased activity of the muscles responsible for respiration (Dum, Levinthal, & Strick, 2016). Because of this coupling, when a threat is perceived, increases in arousal are accompanied by increases in respiration: the body prepares itself for action. When ventilation exceeds metabolic demand, hyperventilation (HV) occurs, and in susceptible individuals, HV can change brain neurophysiology in powerful ways. In our clinical work with children/adolescents with PNES, we had observed that our patients typically presented in a state of high arousal.
This clinical observation was confirmed in our research with children/adolescents with functional neurological symptoms, which identified increases in physiological arousal (Kozlowska, Palmer, et al., 2015), increases in cortical arousal (Kozlowska, Melkonian, Spooner, Scher, & Meares, 2017) and a state of motor readiness to emotional signals (Kozlowska, Brown, Palmer, & Williams, 2013). We had also observed that many of our patients with PNES hyperventilated in and around the time of their PNES, and that HV appeared to trigger their PNES. We formally tested this hypothesis in the scientific arm of this study - the PNES Hyperventilation Study (Kozlowska, Rampersad, et al., 2017). We found that nearly half of the children/adolescents with PNES (26 of 60) had difficulty in regulating CO2 during a HV-challenge, and that over half of those with PNES (32 out of 60) appeared to trigger their events with HV. The PNES Hyperventilation Study indicated that HV was one of the mechanisms by which PFC function could be disrupted - triggering, in turn, PNES. HV can disrupt brain function and trigger PNES through two potential processes (see Kozlowska, Rampersad, et al., 2017 for review). In the first process, the cortical arousal phase of HV causes increased excitability in widely distributed networks and can therefore, via the arousal mechanisms, contribute to the functional failure of executive control circuitry (i.e. loss of horizontal integration of brain function). In the second process, the hypoxic phase of prolonged HV causes cerebral hypoxia due to constriction of cerebral arteries and can contribute to a functional disconnect between the cortex and lower brain structures (i.e. loss of vertical integration in brain function); hypoxia disrupts the signals from the brain stem that ordinarily maintain both consciousness and muscle tone. Whether the two above processes should be conceptualized as an example of dissociation, hypoxia or a mixed dissociation-hypoxia process is an open question. What is clear, however, is that in vulnerable individuals, HV appears to disrupt PFC function in significant ways and can result in a release of subcortical motor programmes: PNES (for a clinical example see Chandra et al., 2017).

Non-hyperventilation-related hypoxia in response to threat

From the established medical literature, we were aware that in addition to HV-induced hypoxia, non-epileptic seizures could occur as a function of hypoxia secondary to disruptions of the breathing cycle or by reflex activations of the defensive vagus (Gastaut, 1974; Stephenson, 1990). Concretely, as part of an atypical response to stress, some individuals will experience hypoxia-induced non-epileptic seizures because they occlude their airway (via vocal cord adduction, holding the breath or the valsalva manoeuvre, all in response to distress) or because activation of the defensive vagus results in decreased blood flow to the brain (via reflex activations of the defensive vagus in response to fear, pain or exposure to blood). We were also aware that this group of non-epileptic seizures involved an unacknowledged conceptual overlap between what neurologists typically conceptualized as 'physiologic' non-epileptic seizures (those caused by known physiological mechanisms) and 'psychogenic' non-epileptic seizures (those caused by distress or underlying psychological conflicts or stressors; Engelsen, Gramstad, Lillebø, & Karlsen, 2013).
That is, the neurophysiological mechanisms that caused hypoxia-induced non-epileptic seizures could also be triggered by distress, fear, panic, sudden fright or pain.

Other dissociative brain processes in the face of threat

Finally, four additional bodies of research contribute to our understanding of other stress-induced, dissociative brain processes at work in patients who present with PNES. This additional research involves (1) methodological advances in analysing EEG and brain-imaging data, specifically in studies with patients with PNES, (2) the emerging literature about changes that occur in the brain as a function of cortical arousal, (3) the literature on dissociation and (4) arousal-related priming, activation, and proliferation of glial cells, which increases the individual's sensitivity to stress. Taken together, these four bodies of work identify brain processes that are likely to contribute to dissociation - a loss of connectivity between brain areas that typically work together - which cannot be understood or conceptualized by any of the mechanisms described earlier in this section. Neurophysiological studies with adult patients with PNES have been reviewed exhaustively by Perez and colleagues (Perez et al., 2015; Perez & LaFrance, 2016). In a nutshell, neurophysiological studies suggest that functional failures of executive control circuitry reflect alterations in connectivity in resting-state brain networks involved in the following: emotion regulation and arousal, cognitive control, self-referential processing, and motor planning and coordination. Studies in adults and adolescents show that functional failures also include changes in EEG synchrony, both within cortical brain systems and between cortical and subcortical brain systems (Barzegaran, Carmeli, Rossetti, Frackowiak, & Knyazeva, 2016; Umesh, Tikka, Goyal, Sinha, & Nizamie, 2017). In parallel, an emerging body of work has examined how changes in cortical arousal facilitate shifts in network organization - weakening cortical networks and strengthening subcortical ones - as part of the brain's response to threat (Arnsten, 2015; de Kloet, Joels, & Holsboer, 2005; Hermans et al., 2011). Exposure to acute, uncontrollable stress causes catecholamine release in the PFC and impairs both PFC function and connectivity within cortical networks (Arnsten, 2015), causing a disruption (dissociation) of horizontal integration of brain function. Research on dissociation suggests that, on the molecular level, cortical arousal also involves secretion of endogenous opioids, endogenous cannabinoids and other anaesthetic neurochemicals (Lanius, 2014) that can likewise impair function in frontal areas - the cingulate cortex, orbitofrontal cortex and insula cortex, all of which have high levels of opioid receptors (Lanius, 2014). Anaesthetic neurochemicals may also disrupt the vertical integration of brain function - the normal relationship between the cortex and subcortical brain systems - thereby interfering with signals from the brain stem that ordinarily maintain consciousness, and leading to changes in the individual's level of consciousness (Lanius, 2014). Finally, recent advances have found that glial cells - the cells that surround neurones and that support and interact with them - are involved in the brain's response to stress (Ji, Chamessian, & Zhang, 2016; von Bernhardi, Eugenin-von Bernhardi, Flores, & Eugenin Leon, 2016; Wu, Dissing-Olesen, MacVicar, & Stevens, 2015).
In restorative mode, glial cells stabilize and regulate neural networks, suppress inflammation and promote healing. In response to stress, they switch into defensive mode. In defensive mode, they proliferate and secrete pro-inflammatory neurochemicals that excite neurones and that disrupt brain function by interfering with the homeostatic regulation of synapses. In this way, glial cells play a major role in priming the brain's sensitivity to stress and in stress-related changes in network organization. Glial cells also induce stress-related neuroplastic changes that maintain chronic pain (Ji et al., 2016). Similar, stress-induced, glial-mediated neuroplastic changes are implicated in patients whose PNES (and other functional neurological symptoms) become chronic. Taken together, the above bodies of work provide us with a basic understanding of the broad range of dissociative brain processes that are triggered in the brain in response to increases in cortical arousal, and that disrupt brain function and connectivity. In daily life, a broad range of stressors - illness, injury, emotional distress secondary to adverse life events, or psychological trauma - can activate cortical arousal mechanisms (catecholamine release, secretion of anaesthetic neurochemicals and network reorganization) and, in susceptible individuals, shift brain organization into a defensive state. In this defensive state, the brain switches from reflective voluntary control of behaviour to reflexive modes of behaviour. Salient emotional signals are prioritized, and motor control is modulated by emotion-processing regions (Arnsten, 2015; Blakemore, Sinanaj, Galli, Aybek, & Vuilleumier, 2016; Hermans et al., 2011). An unwanted by-product of this process may be the emergence of functional neurological motor symptoms, including PNES. Whereas functional motor symptoms, both positive and negative (abnormal gait, functional tremor, functional tics, motor weakness and limb paresis), appear to reflect a relatively stable reorganization of neural networks in response to stress, PNES appear to reflect transient disruptions of neural networks - disruptions that affect the vertical integration of brain function and that cause a disconnect between cortical and subcortical systems (Barzegaran et al., 2016). The result is a temporary 'glitch' in top-down executive control over the motor regions and a time-limited release of motor programmes in the basal ganglia, midbrain and brain stem (PNES).

Aims of the study

As we have seen above, PNES is an umbrella category that incorporates a range of atypical neurophysiological responses to emotional distress, physiological stressors and danger. In the sections below, our goal is to determine the extent to which these mechanisms can be clinically incorporated, on a case-by-case basis, into diagnostic formulations that identify distinct subgroups of patients with PNES. We use the expression diagnostic formulations to refer to working hypotheses that take into account and synthesize all available information about the child/adolescent's presentation, including information obtained from the child/adolescent, family and neurologist, complemented by the team's clinical knowledge and its own study of the scientific and clinical literature. The diagnostic formulation provides both a shared understanding of the problem and a roadmap for the journey of treatment (Gordon, Riess, & Waldinger, 2005; Kozlowska, 2013).

Participants

The study was approved by the Sydney Children's Hospital Network Ethics Committee.
Participants and their legal guardians provided written informed consent in accordance with the Australian National Health and Medical Research Council guidelines. The participants of the study consisted of 60 consecutive children and adolescents - 42 girls and 18 boys, aged 8-17.67 years (mean = 13.45; standard deviation (SD) = 2.61) - who were referred to Psychological Medicine for treatment of PNES after assessment in the Department of Neurology during a 5-year period (April 2011-March 2016). The time from onset of PNES ranged from 1 day to 48 months (median = 2 months). In 28 cases (47%), the PNES presented alongside other functional neurological symptoms; in 10 cases (17%), the PNES presented alongside a chronic pain presentation; and in 22 cases (36.7%), the PNES were the primary presenting symptom. All families reported antecedent stressors (range = 1-12; mean = 4.63; median = 4; see Table 1). Comorbid symptoms and diagnoses are documented in Table 1. Other clinical characteristics, intelligence quotient, comorbid neurological conditions and the semiology of PNES are documented in Tables 2 and 3. All children in the study participated in the PNES Hyperventilation Study, which provided neurophysiological data and HV-challenge profiles (Kozlowska, Rampersad, et al., 2017). Eight children/adolescents were excluded from the previous study because partial pressure of carbon dioxide (PCO2) data were not collected, and four because PCO2 data were inadequate (technical difficulties or the child's lack of cooperation during the HV challenge).

Procedure

All patients with PNES completed a comprehensive neurology assessment, were diagnosed with PNES and were referred to Psychological Medicine for treatment. The Psychological Medicine assessment involved a comprehensive family assessment (Kozlowska, English, & Savage, 2013). The team made its diagnostic formulation, based on all available information, at the completion of the family assessment and provided an explanation about the child/adolescent's PNES - the lay version of the diagnostic formulation - to the child/adolescent and family. Baseline respiratory rates were recorded at the beginning of the child's individual assessment, which included a determination whether the child/adolescent was capable of using a biofeedback tool called MyCalmBeat. The formulation/explanation was updated if new information came to light during the inpatient treatment admission.

Data analysis

The data analysis was qualitative. The diagnostic formulations - our clinical formulations about the probable neurophysiological mechanisms underlying particular presentations - are clustered below into subgroups of similar patients (PNES subgroups, see Figure 1). Normal reference ranges were used to evaluate elevated baseline respiratory rate and heart rate (Fleming et al., 2011). To make the qualitative data clinically relevant to mental health clinicians, we provide a clinical vignette for each PNES subgroup. With the consent of the patients and parents, Vignettes 1, 2, 5 and 6 describe individual patients in particular PNES subgroups. Vignettes 3 and 4 are amalgams put together from similar cases.
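For readers who want to see the form of the 'above the 75th percentile' checks reported in the subgroup descriptions below, here is a minimal Python sketch. The centile cut-offs in the code are hypothetical placeholders for illustration only; the study used the age-specific reference ranges published by Fleming et al. (2011).

    # A minimal sketch of the rate-screening step. The 75th-centile values
    # below are HYPOTHETICAL placeholders; the study used the age-specific
    # reference ranges published by Fleming et al. (2011).
    HEART_RATE_P75 = {10: 98, 14: 90}   # beats/min, illustrative only
    RESP_RATE_P75 = {10: 21, 14: 19}    # breaths/min, illustrative only

    def elevated(rate: float, age: int, table: dict) -> bool:
        """Return True if the measured baseline rate exceeds the 75th
        centile for the nearest tabulated age."""
        nearest_age = min(table, key=lambda a: abs(a - age))
        return rate > table[nearest_age]

    # Example: a 14-year-old with a baseline heart rate of 104 beats/min
    print(elevated(104, 14, HEART_RATE_P75))  # True
    print(elevated(18, 10, RESP_RATE_P75))    # False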
PNES subgroup 1: dissociative PNES

Diagnostic formulation. Over one-third of patients (23/60; 38%) were clustered into subgroup 1: dissociative PNES (see Figure 1). As discussed in the introduction, our diagnostic formulation in this scenario is that a broad range of stressors have activated cortical arousal mechanisms and have shifted brain organization into a defensive state. An unwanted by-product of this process is the emergence of functional neurological motor symptoms, including PNES. In this dissociative PNES subgroup (n = 23), 14 patients had PNES comorbid with other functional neurological symptoms (see Figure 2), 7 had PNES alone and 2 had PNES in the context of a chronic pain presentation. On clinical measures of arousal and motor readiness, 39% of patients (9/23) had baseline heart rates above the 75th percentile, and 58% (11/19) had baseline respiratory rates above the 75th percentile. A handful of patients (4/23; 17%) had skewed HV profiles. On clinical assessment, patients typically reported that their PNES occurred suddenly, without warning. If warning signs were present, they included motor agitation (e.g. jiggling legs), sudden headache and a sense of 'spacing' or 'vagueing' out. Although we did not observe these patients to hyperventilate before the PNES - or to precipitate their PNES via HV - some patients were sometimes, but not always, observed to hyperventilate during the PNES.

Vignette 1: dissociative PNES. Fiona, a 14-year-old girl, experienced a painful, twitching left foot following a tumble turn in a swimming competition, after which she presented to hospital and had a neurological assessment. After 3 weeks, new symptoms emerged. Fiona experienced PNES - tonic- and clonic-like movements lasting up to an hour - followed by an inability to speak or to move her limbs. After each PNES, for periods lasting up to a week, she also did not remember recent events and did not recognize family and friends. In addition to the pain and twitching, she developed pins and needles in her left foot and fluctuating dystonia of the third, fourth and fifth left toes. She experienced intermittent dystonia or twitching of other body parts (head, neck and shoulders). Fiona and her older brother lived at home with their estranged parents. For many years, Fiona had witnessed high levels of conflict between her brother and her mother, including episodes of physical violence towards her mother. Although Fiona and her father resided in the same house, she had not had a conversation with her father for 6 years. Fiona and her family did not want to engage with the Psychological Medicine team. They attended the family assessment during Fiona's second admission to the neurology ward, when Fiona had begun to suffer from PNES and when she could no longer recognize her parents. The family accepted, though somewhat reluctantly, the explanation pertaining to the PNES and functional neurological symptoms. Fiona's mother and brother became very upset that the stress in the family could have affected Fiona's body in such a severe way. From that point on, Fiona's brother ceased his angry outbursts at home. Treatment in hospital included pharmacotherapy for down-regulating arousal, stabilizing sleep and terminating excessively long PNES (melatonin 6 mg and clonidine 50 mcg at bedtime; fluoxetine 20 mg and olanzapine 5 mg if the PNES lasted more than an hour). It also included individual work with Fiona to help her track body state, to identify warning signs (sudden headache, feeling hot, sweating), and to use progressive muscle relaxation, guided-imagery recordings or slow breathing to avert PNES. As part of a family intervention, the unresolved issues within the family system were discussed explicitly; the family decided that repair of the estrangement between them was not possible.
Fiona engaged in 18 months of outpatient treatment, during which she began to explore her anxiety in relation to school work and her home life. Her PNES now occurred very intermittently - once every 3 months - when she was sleep deprived or stressed.

PNES subgroup 2: dissociative PNES triggered by hyperventilation

Diagnostic formulation. In total, 32 patients (32/60; 53%) were clustered into PNES subgroup 2 (see Figure 1). Clinically, this subgroup was indistinguishable from subgroup 1 except that the children/adolescents' PNES were typically triggered by HV (Kozlowska, Rampersad, et al., 2017). As discussed in the introduction, our working formulation for this patient cluster was that when these patients became stressed, they activated their respiratory motor system (alongside the autonomic nervous system and cortical arousal systems) and inadvertently hyperventilated, thereby disrupting brain function and triggering PNES. In this HV-induced subgroup (n = 32), 13 patients had PNES comorbid with other functional neurological symptoms (see Figure 3), 12 had PNES alone and seven had PNES in the context of a chronic pain presentation. In this last group, four also experienced transient functional motor-sensory symptoms. On clinical measures of arousal and motor readiness, 53% of patients (17/32) had baseline heart rates above the 75th percentile; 75% (21/28) had baseline respiratory rates above the 75th percentile; and 72% (23/32) had skewed HV profiles (see black line in Figure 4). On clinical assessment, many patients reported that their PNES were typically preceded by warning signs, including 'breathing too fast', 'heart beating', sweatiness, nausea, feeling dizzy, blurry vision, visual blackout, sudden headache, a tight band around the head, wobbly legs and a feeling of fogginess and being unable to think clearly. The visual and cognitive symptoms described by these patients are prototypical symptoms of HV. Paroxysmal increases in ventilation - probable HV - occurred immediately prior to PNES episodes and were observed in all 32 cases during the assessment or treatment admission. Further sequencing work with these patients and their families suggested that HV events (and subsequent PNES) were triggered by psychological distress in 26 cases, by pain in four cases and by exercise in two cases. The sources of psychological distress were very broad, ranging from anticipatory anxiety about commonplace daily stressors, such as scholastic expectations at school, to adverse life circumstances such as illness worries, family conflict, loss events or bullying, to intrusive memories of past sexual abuse by a parent or grandparent. For all four patients whose PNES were triggered by pain, the pain occurred in the context of chronic pain conditions. For both patients whose PNES were triggered by exercise, the relevant sports activities took place during a period of increased stress at school, with the consequence that the children/adolescents were unable to down-regulate following exercise (resulting in HV and then PNES).

[Figure 4. HV-challenge profiles (Kozlowska, Rampersad, et al., 2017). The shaded blue area depicts the homeostatic range for arterial CO2. The top blue line depicts controls. Controls showed a clear pattern of PCO2 changes during the HV task: a baseline PCO2 within the homeostatic range, a steep drop in PCO2 during HV, and a prompt return to homeostasis during recovery. The middle red line depicts the 60 children and adolescents with PNES who participated in the study (and in the current study). Children and adolescents with PNES showed a downwardly skewed HV-challenge profile, suggesting difficulties with PCO2 regulation. The bottom black line depicts the subgroup of 32 children and adolescents whose PNES were typically preceded by - 'triggered by' - HV.]

Nine children/adolescents were diagnosed with HV-induced PNES but had HV profiles indistinguishable from controls. This phenomenon brought to our attention that the manner in which a particular patient hyperventilates during the HV-challenge may be different from the manner in which that same patient hyperventilates during real-life scenarios. In this context, a normal - or almost normal - HV-challenge profile did not necessarily exclude the possibility of HV-induced PNES. What characterized these nine patients was the paroxysmal nature of their HV in real-life situations: severe HV occurred in response to specific triggers. Two patients who had been sexually abused by close family members (father and grandfather, respectively) demonstrated extreme HV only when experiencing vivid intrusive memories of the past abuse. The other seven patients hyperventilated intermittently in the context of daily stressors, recent adverse life events or memories of past bullying. At other times, they did not manifest any symptoms associated with increases in ventilation.

Vignette 2: HV-triggered PNES. Danae was a 14-year-old adolescent girl with left cerebral atrophy of unknown origin (unchanging over time) and a history of absence seizures well controlled with medication. She presented with a new type of seizure event - twitching and tonic- and clonic-like movements - for which there was no electrical correlate on video EEG (vEEG). PCO2 readings during the HV component of the vEEG showed that Danae was hypocapnic (34 mmHg) prior to formal HV, that her PCO2 level dropped to 20 mmHg with HV, and that it failed to recover (32 mmHg at 15 minutes post-HV). During the family assessment in Psychological Medicine, and as various school stressors were being explored (the key source of distress for Danae), her breathing rate reached 40 breaths per minute (= HV), precipitating what was, for her, a typical PNES. Danae was unable to slow down her breathing (to a normal rate of <20 breaths per minute), and her PNES continued, on and off, over 30 minutes. The explanation to the family included an explanation of HV as the underlying mechanism and a clear expectation that Danae would be able to control her episodes with breath training and treatment of her anxiety. The intervention included breath training using a biofeedback tool (MyCalmBeat), treatment of anxiety with a selective serotonin reuptake inhibitor and quetiapine (37.5 mg at night) and cognitive-behavioural therapy. Roughly a year later (after being well for some time), at a time that school examinations were overwhelming her with fear and anxiety, Danae re-presented with a different type of PNES - namely, sudden fainting accompanied by incontinence (of urine; see the section on innate defence behaviours below). This presentation was followed by a family intervention that helped Danae's parents to modify their expectations of academic achievement and to support a choice of career in which Danae could flourish free of PNES.
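The HV-challenge profiles described above lend themselves to a simple rule-based summary. Below is a minimal Python sketch that classifies a (baseline, during-HV, recovery) PCO2 triple as downwardly skewed or control-like, using Danae's readings from Vignette 2 as the worked example. The 35-45 mmHg homeostatic band is an assumption based on the commonly cited normal range for arterial CO2; the study itself reported its criteria only qualitatively.

    # Rule-of-thumb classifier for HV-challenge PCO2 profiles (mmHg).
    # The 35-45 mmHg homeostatic band is an ASSUMPTION (the commonly
    # cited normal range); the study stated its criteria qualitatively.
    HOMEOSTATIC = (35.0, 45.0)

    def classify_profile(baseline: float, during_hv: float, recovery: float) -> str:
        low, high = HOMEOSTATIC
        starts_low = baseline < low        # hypocapnic before the task
        fails_to_recover = recovery < low  # no prompt return to homeostasis
        # Note: a steep drop during HV occurs in controls as well, so
        # during_hv does not by itself distinguish the groups.
        if starts_low or fails_to_recover:
            return "downwardly skewed"
        return "control-like"

    # Danae (Vignette 2): hypocapnic at baseline, 20 mmHg during HV,
    # still below the homeostatic range 15 minutes after the task.
    print(classify_profile(34.0, 20.0, 32.0))  # downwardly skewed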
PNES subgroup 3: innate defence responses presenting as PNES

Diagnostic formulation. Six patients (6/60; 10%) - four boys and two girls - were clustered into PNES subgroup 3 because we assessed their PNES as reflecting activation of innate defence responses (tonic immobility or collapsed immobility) (see Figure 1). Triggers for the episodes included highly arousing imaged memories or thoughts pertaining to any of the following: sexual abuse by a parent, emotional and physical abuse by a parent, exposure to domestic violence between parents, memories of a deceased father who had suddenly died from cancer, war trauma, and, in a neurologically compromised adolescent girl, extreme fear of school examinations (see end of Vignette 2). Two patients presented with episodes of collapsed immobility alone. Of these, one had baseline heart and respiratory rates above the 75th percentile, and both had normal HV-challenge profiles. Three patients presented with collapsed immobility episodes and also PNES triggered by HV, and one with collapsed immobility episodes, tonic immobility episodes and PNES triggered by HV. Of these four, three had baseline heart rates above the 75th percentile; four had baseline respiratory rates above the 75th percentile; and four had skewed HV-challenge profiles. These four patients were also included in the data reported in subgroup 2 (see Figure 1).

Vignette 3: innate defence responses presenting as PNES. Jasmine was an 8-year-old girl who was attending therapy with her adoptive mother for unpredictable shifts in mood and behaviour. Jasmine had been subjected to extreme physical abuse prior to her adoption; for example, once when she was angry, her biological mother had tried to cut off one of Jasmine's toes. In therapy, when the therapist was discussing examples of what made Jasmine angry or distressed, Jasmine was unable to manage the conversation. At first she seemed not to be hearing the therapist, and she had a blank look on her face. Then she went pale and limp, and was unresponsive to the therapist's voice and touch. The collapsed state lasted for 40 minutes. Jasmine's adoptive mother mentioned that this happened often at home. Jasmine was hypersensitive to changes in tone of voice, and if her adoptive mother raised her voice in any way, Jasmine would become nonresponsive, sometimes going pale and limp. Stories about how the opossum responds to threat - by becoming limp and unresponsive - helped reframe Jasmine's behaviour as reflecting an innate stress response.

PNES subgroup 4: PNES associated with syncope triggered by vocal cord adduction in the context of distress

Diagnostic formulation. One patient (1/60; 2%) was clustered into PNES subgroup 4 (see Figure 1). The diagnosis of vocal cord adduction was confirmed by the respiratory team's direct visualization of the vocal cords while the patient was having symptoms that led to a non-epileptic seizure. On clinical measures of arousal and motor readiness, the patient had baseline heart and respiratory rates >75th percentile. The HV-challenge profile was normal. Whereas vocal cord adduction in anxious children with chronic asthma - or misdiagnosed as chronic asthma - is documented in the literature (Ibrahim, Gheriani, Almohamed, & Raza, 2007; Silberg, 2001), non-epileptic seizures following vocal cord adduction have not previously been documented. Despite signs of marked respiratory distress, hypoxia (measured by pulse oximetry) during vocal cord adduction is rare (Brugman, Howell, Rosenberg, Blager, & Lack, 1994).
By the same token, non-epileptic seizures associated with vocal cord adduction triggered by distress are also rare.

Vignette 4: vocal cord adduction. Mika was a 9-year-old child with a history of chronic treatment-resistant asthma and weekly presentations to hospital. He was referred to neurology for vEEG after a seizure-like event. Subsequently, another event was witnessed during lung-function testing. Mika became anxious and began to cough intermittently and to take in huge, noisy gulps of air. Suddenly, the noisy breathing stopped. Mika's eyes rolled back, and he slumped to the side and was incontinent of urine. A blue tinge around his lips signalled a hypoxic state. Following the event, he did not remember what had happened. Similar events occurred while Mika was still in hospital. Some were followed by tonic- and clonic-like movements. The speech therapy component of the intervention involved Mika being taught to use the sounds shh, sss and fff to open his vocal cords when he felt tightness in his chest. The psychological intervention addressed anxiety and involved a range of relaxation, visualization, self-talk and vocalization techniques. The shh, sss and fff sounds were embedded into Mika's visualization and relaxation exercises.

Subgroup 5: non-epileptic seizures associated with syncope triggered by activation of the valsalva manoeuvre in the context of distress

Diagnostic formulation. One patient (1/60; 2%) was clustered into PNES subgroup 5. Subsequent to presentation, this patient also developed non-epileptic seizures triggered by HV (see Figure 1). On clinical measures of arousal and motor readiness, the patient had baseline heart and respiratory rates >75th percentile and a skewed HV-challenge profile. The valsalva manoeuvre involves forced expiration against a closed airway, either by closing one's mouth and pinching one's nose shut, or by exhaling against a closed glottis. The manoeuvre increases intrathoracic pressure, which leads to decreased cardiac output and decreased cerebral circulation even as the available oxygen itself decreases. Because respiration is driven primarily by the level of carbon dioxide, decreasing that level by hyperventilating prior to breath-holding enables individuals to hold their breath for longer periods of time. A loss of consciousness associated with the valsalva manoeuvre is well documented in adolescents and young men, who use it as a means of group entertainment (Howard, Leathart, Dornhorst, & Sharpey-Schafer, 1951), and in divers, where it is associated with high rates of mortality (Kumar & Ng, 2010). In children, however - and especially in children with developmental delay - the valsalva manoeuvre may be used habitually as a means of eliciting pleasant sensations (Gastaut, Zifkin, & Rufo, 1987; Lai & Ziegler, 1983) or of managing feelings of distress. We - the authors of this study - have also seen this presentation in children/adolescents who were maltreated in infancy. Like all hypoxic events, the loss of consciousness caused by the valsalva manoeuvre can involve hypoxia-related movements that can look like a seizure.

Vignette 5: non-epileptic seizure associated with the valsalva manoeuvre. Lizzy was a 9-year-old girl of average intelligence with a 1-month history of collapse episodes. She had a history of exposure to drugs in utero and of severe neglect and abuse from birth to 4 years of age. Since that time she had been looked after by her grandparents, who became her primary attachment figures.
Following the death of her grandfather, Lizzy began to experience episodes of collapse. EEG telemetry over a 24-hour period captured a number of events (including one collapse), all of which were associated with EEG slowing and no changes in heart rate. Lizzy would become distressed, take a breath and grimace as she held the breath against a closed mouth. The key treatment intervention was a slow breathing exercise that, with the help of her grandmother, Lizzy implemented when distressed. Individual work with a psychologist helped soften Lizzy's grief.

Subgroup 6: non-epileptic seizures associated with syncope triggered by reflex activation of the vagus

Diagnostic formulation. Two patients (2/60; 3%) were clustered into PNES subgroup 6 (see Figure 1). On clinical measures of arousal and motor readiness, one patient had a baseline heart rate >75th percentile and both had a baseline respiratory rate >75th percentile. The HV-challenge profile was skewed in one patient and normal in the other. Syncope triggered by reflex activation of the vagus is common across the lifespan and is well described in the literature (Pavri, 2014). The vagus can be activated by pain and other noxious stimuli, the sight of blood, orthostatic stress (and activation of heart mechanoceptors) or C-fibre mechanoreceptors located in the lungs, oesophagus, bladder and rectum, associated with coughing, swallowing or vomiting, micturition or defaecation, respectively. In all of these scenarios, reflex activation of the defensive vagus leads to bradycardia or asystole, which causes cerebral hypoxia. As in all cases of cerebral hypoxia, hypoxia-induced loss of consciousness can involve hypoxia-related movements that can look like an epileptic seizure. In one patient, a 16-year-old girl, non-epileptic events appeared to be triggered by acute pain flare-ups (see Vignette 6). In the other patient, a 14-year-old girl with chronic anxiety (and HV) and established orthostatic syncope, the collapse events typically occurred after meals and appeared to reflect an unusual presentation of postprandial syncope.

Vignette 6: non-epileptic seizures triggered by reflex activation of the vagus. Siew was a 16-year-old girl with a 6-month history of complex regional pain syndrome following a sprain injury. A more recent injury had aggravated the pain, causing Siew to experience sudden sharp spikes of pain. Siew began to present to accident and emergency weekly with episodes of fainting, or of fainting followed by tonic- and clonic-like movements, which looked just like epileptic seizures. During one such event, Siew sustained significant bruising to her head. After multiple neurology reviews and EEG/vEEG studies, all of which were normal, Siew was diagnosed with PNES. Unable to leave home on her own, Siew became increasingly anxious, physically weak and illness focused, and developed a broad range of other nonspecific somatic symptoms (insomnia, nausea, abdominal pain and loss of appetite). Management included an explanation of the probable underlying mechanism - a reflex activation of the vagus by pain. Incremental physical exercise (while Siew wore a protective helmet) enabled Siew to regain her natural level of fitness. Training in mind-body strategies enabled her to better manage her anxiety, work through pain flare-ups and manage her other somatic symptoms. Her chronic pain symptoms also continued to improve.
Limitations

The scientific arm of this study - the PNES Hyperventilation Study (Kozlowska, Rampersad, et al., 2017) - included a thorough discussion of the study limitations. An additional limitation in the clinical arm (this study) is that we did not have funding to pursue further ambulant electrocardiography (ECG) and EEG monitoring (to document heart rate changes or hypoxia-associated EEG slowing) in patients clustered into PNES subgroups 3 and 6.

Conclusion

In conclusion, PNES is a nonspecific, umbrella category that is used to collect together a range of atypical neurophysiological responses to emotional distress, physiological stressors and danger. Recent advances in neuroscience, neurophysiology and the field of dissociation provide us with a richer framework for thinking about PNES. In this study, we used our review of brain-body responses to fear, physiological stressors and danger as the basis for clustering our child and adolescent patients under distinct diagnostic formulations - clinical formulations about the probable neurophysiological mechanisms - that explained their PNES. In Part II - the companion study - we describe how we used the formulations presented here to frame the explanations that we gave to patients and families, and to inform the treatment interventions (within each subgroup) that we used to help our patients gain control of their PNES. As the knowledge about the neurobiology of PNES expands, and as new diagnostic tools become available, the framework offered in this study will need to be updated, expanded and revised to keep abreast of developments in the field. This clinical study highlights the complex interplay among neural, physiological and emotional phenomena; it challenges dualistic thinking and practice; and it emphasizes an integrated mind-body approach, one that links brain, psyche and soma.

Catherine Chudleigh is a Clinical Psychologist and a member of the multidisciplinary team. Alongside other clinicians in the team, she has developed a range of child-friendly interventions that help children and adolescents with functional somatic symptoms to become aware of body states and to manage states of high arousal. Catherine Cruz is a Clinical Nurse Consultant and a member of the multidisciplinary team. She engages in ongoing support and education of nursing staff and new graduates, ensuring that nursing staff who work in the Mind-Body Programme are well educated about PNES (and other functional somatic symptoms), that they manage these children/adolescents with calm competence and that they relate to patients and their families with understanding and empathy. Melissa Lim is a Clinical Psychologist - and a member of the multidisciplinary team - who has particular skills in using mind-body strategies when working with children with non-epileptic seizures and their families. Melissa uses Somatic Experiencing in her work. Georgia McClure is a Clinical Psychologist - and a member of the multidisciplinary team - whose ideas and buoyant enthusiasm serve to maintain team morale. Georgia uses EMDR in her work. Blanche Savage is a Clinical Psychologist - and a member of the multidisciplinary team - whose grounded steadfastness serves to maintain team stability. Blanche uses hypnosis in her work. Ubaid Shah is a Paediatric Neurologist. Ubaid's training included a rotation in Psychological Medicine, where he was involved in the treatment of many children and adolescents with functional neurological symptoms.
Ubaid is now disseminating his knowledge and skills about functional neurological symptoms and their treatment at the Lady Cilento Children's Hospital in Queensland, Australia. Averil Cook is a Clinical Psychologist and a previous member of the multidisciplinary team. Averil has extended the team's family therapy skills and strengthened the team's systemic perspective. Averil now runs a community child and adolescent service in which she is helping community mental health clinicians accept children and adolescents with functional somatic symptoms as part of their clinical brief. Stephen Scher has degrees in philosophy and law, and has an ongoing appointment in psychiatry at Harvard Medical School. He has particular interests in clinical ethics, health policy, and philosophical dimensions of medicine. Stephen has supported the clinician team involved in the current project by helping them articulate objections to traditional, but counterproductive, medical terminology (e.g., the term psychogenic) and to distinctions, such as the mind-body split, that undercut efforts to understand and explain PNES and other functional somatic symptoms. He has also supported the current team in their efforts to develop and maintain an ethical, collaborative stance in working with families, and to disseminate their results through publication. Pascal Carrive is a Neuroscientist who works with animals in a basic science setting. Pascal has done cutting-edge research on the brain stem systems that are involved in fear responses. He has mentored the clinical team to ensure that their understanding of patient physiology and neuroanatomy is informed by basic science research. Deepak Gill is a Paediatric Neurologist who runs the epilepsy service at The Children's Hospital at Westmead, NSW, Australia. Deepak's training also included a rotation in Psychological Medicine. He has promoted close collaboration between the Neurology and Psychiatry Departments.
The impact of operation of elastomeric track chains on the selected properties of the steel cord wires

Dominika Grygier

The track running systems enable movement of heavy vehicles on unpaved and rough terrain, snow-covered, marshy or swampy surfaces, as well as overcoming natural or artificial barriers. The important structural component of the elastomeric tracks is a steel cord sunk in the elastomer creating the tread, with the purpose of stiffening the structure, maintaining its proper deflection and giving the adequate resistance to tensile forces. The results of the studies presented in the work have shown that operation of the elastomeric track chains in conditions where they are continuously exposed to contact with the foundation, frequent braking and bumping against roughness leads to damage of the steel cord material and a change in its mechanical properties.

Introduction

The track running systems enable movement of heavy vehicles on unpaved and rough terrain, snow-covered, marshy or swampy surfaces, as well as overcoming natural or artificial barriers [5, 7, 28]. This is possible by distributing the vehicle mass over a greater surface, which causes a significant drop in unit pressure, an increase in the adherence of a vehicle and the achievement of a greater driving force. The systems also improve the quality of operation and maneuvering of a vehicle in difficult terrain conditions by reducing the rolling resistance and the tendency of a vehicle to sink.
The track is a closed band girding the wheels and rolls of the track running system, on the circumference of which four zones can be distinguished: the upper one rolled over the tension rollers; the retaining one cooperating with the ground, since it determines the size of the resulting driving force necessary for the motion of the vehicle; and the two inclined ones contained between one of the support wheels and the drive wheel, and between the carrier wheel and the directional wheel [5, 28]. The track chain bears all the forces, vertical, longitudinal and transverse, appearing in the contact of a vehicle with the foundation. Due to their construction, the metal, rubber-metal and elastomeric tracks can be distinguished.

The elastomeric tracks are designed on the principle of an inner chain created by links responsible for carrying the drive from the driving wheel and preventing sliding of the track off [5, 7, 28]. Additionally, the track is reinforced with steel cord sunk in the elastomer creating the tread pattern, aimed at stiffening the structure, maintaining its proper deflection and giving the proper resistance to tensile forces.

The cord is a structure composed of strands created by several intertwined individual wires. The cord may also be the strand itself, made of several wires [3, 11, 16, 19, 20]. Individual wires have diameters from 0.15 to 0.38 mm, are produced as brass- or zinc-plated wires and have the following properties [16, 18, 19, 20]:
– very high dynamic modulus,
– high stiffness,
– high strength,
– low creep capacity,
– high compressive modulus,
– dimensional stability,
– high resonance frequencies.

Wires designated for production of the steel cord reinforcing tracks are manufactured from unalloyed pearlitic steel. The pearlitic steels containing from about 0.70 to 0.95% C belong to the group of unalloyed steels of the quality class designated for cold drawing or rolling [1-3, 8, 11, 18, 20]. Their chemical composition and mechanical properties are in compliance with the PN-EN 10323:2006 (U) standard [12]. As standard, the steel wires for cord are characterised by a tensile strength within the range of 2573 to even 4116 MPa [11, 16].

One of the problems widely discussed in the subject literature is cracking of the pearlitic steel subjected to plastic working and during its operation [4, 18, 10, 14, 16]. The problem is the more serious in that it concerns many industrial branches where the steel cord is applied, and therefore it has been the subject of years of research conducted in numerous scientific centres all over the world [14, 25, 26, 31].
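As a rough sanity check on these figures, the breaking force of a single cord wire can be estimated from its cross-section and the quoted strength range. The short Python sketch below performs this arithmetic for a 0.3 mm wire (the diameter of the wires tested later in this work); the numbers are illustrative estimates, not measured values.

    import math

    def breaking_force(diameter_mm: float, rm_mpa: float) -> float:
        """Estimate the breaking force (N) of a wire: F = Rm * S0,
        where S0 = pi * d^2 / 4 is the initial cross-section (mm^2)
        and Rm is the tensile strength (MPa = N/mm^2)."""
        s0 = math.pi * diameter_mm ** 2 / 4.0
        return rm_mpa * s0

    d = 0.3  # wire diameter in mm, as used for the tested cord
    for rm in (2573.0, 4116.0):  # tensile strength range quoted for cord wires [11, 16]
        print(f"Rm = {rm:.0f} MPa -> F = {breaking_force(d, rm):.0f} N")
    # Expected output: roughly 182 N and 291 N per wire.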
Purpose and object of the tests

The elastomeric track chains of mini excavators are exposed during operation to continuous contact with the base, frequent braking and numerous impacts against roughness. Such working conditions may lead to damaging the material of the steel cord constituting the stiffening of the track structure and, as a consequence, to a change in its functional properties. The results of questionnaires conducted among Lower Silesian companies of general building have shown that the average life of the elastomeric tracks applied in mini excavators is about 825 moto-hours, and the most frequent cause of their damage is rupture of the steel cord resulting in breaking the continuity of the tread (Fig. 1).

The fractographic tests of wires from the defective steel cord performed using the scanning electron microscope have shown the presence of surfaces indicating the fatigue character of the damage (Fig. 2). The fatigue part of the fracture was smooth, with the characteristic fatigue striae arranged almost parallel to the direction of crack development. The fracture origin was localised at the external edge of the wire, i.e. in the area of the highest accumulation of the complex operating stresses. At the external circumference of the wire fractures, the immediate zone of plastic character and an expanded surface topography were visible. Thus, the results of those analyses have shown that the damage was not created as a result of operating overload of the tracks, but as a result of other factors causing a decrease in the fatigue strength of the cord wires.

The research results presented in the work aimed at determining the impact of elastomeric track operation in general-purpose mini excavators on the structure and selected mechanical and technological properties of the pearlitic steel applied to the wires of their cord.

The fatigue strength of steel wires greatly depends on the metallurgical purity of the material, and especially on the content of oxygen, silicon and sulphur. The presence of non-metallic inclusions in steel strongly decreases the fatigue resistance, as it is around the impurities that a strong accumulation of stresses appears, leading in effect to component cracking [1, 4, 11, 16, 20].

According to Golis [16], the maximum permissible content of non-metallic inclusions in unalloyed pearlitic steel of the D75, D78, D80 and D83 grades designated for production of cord wires should not exceed the size of standard No. 2 according to the EN 10247:2007 standard. The non-metallic inclusions, especially those of minimum plasticity such as oxides or brittle silicates, cause lowering of the wire ductility, hindering the technological processes. The presence of the inclusions constitutes the main reason for lowering the degree of material deformation.

The research performed by Zelin [30] has shown that, as a result of elastic deformations preceding the permanent plastic deformation of the material, horizontal micro-delamination of single lamellae of cementite takes place. The changes were observed both at tension and at twisting of the wires. In the course of increasing the applied force, the created micro-delaminations coalesce and the crack propagates, resulting in decohesion of the whole component.
Sauvage and Ivanisenko [21,22,29] have shown that cracking of pearlitic steel subjected to plastic working is caused by segregation of carbide precipitates at the phase boundaries. These studies confirmed the theory described earlier by Gridnev and Gavriluk [15,17,20], according to which plastic deformation causes carbon atoms to occupy vacancies in the cementite lattice, increasing the carbon concentration at the ferrite-cementite phase boundary and, as a consequence, embrittling these structures. Both teams demonstrated a simple relation between the degree of plastic deformation and the susceptibility of pearlitic steel to cracking: as deformation increases, the density of lattice defects grows, and so does the intensity of carbide segregation at the phase boundaries.

Structural analyses performed by Izotov et al. [23,24] have shown unequivocally that cracking of the cementite lamellae results from dislocation pile-up at the phase boundary between ferrite and cementite. Under the applied force, edge dislocations move within the ferrite and crystal fragments slip over the slip planes; because each ferrite lamella has a different crystallographic orientation, the dislocations pile up at the cementite lamellae and finally initiate microcracks. These studies confirmed earlier literature reports, namely the works of Langeford, Wilson, Embury and Fisher [12,13,31], indicating that during plastic deformation of pearlitic steel the strongest concentration of lattice defects appears precisely on the cementite lamellae, which in consequence causes cracking of these structural components.

A different theory explaining the mechanism of cementite lamella cracking was presented by Languillaume et al. [27]. According to this research, plastic deformation of pearlite produces an uncontrolled, very strong increase in energy at the contact between the two phases, i.e. in the interlamellar spaces of pearlite. This increase in energy leads to thermodynamic destabilisation of the cementite and to cracking of its lamellae. These results have been confirmed many times by other research teams, including Danoix and Sauvage [9,29,31].

The objects of the research were two elastomeric track chains of the 332/L5153 type from JCB mini excavators, model 8018 CTS (Fig. 3). The tested tracks differed in their degree of wear: track No. 1 was a new, unused sample, while track No. 2 was a sample in the post-operation state, used in general construction for 1228 engine-hours.

To evaluate the impact of elastomeric track operation on the structure and selected mechanical properties of the pearlitic steel, wire samples of d = 0.3 mm diameter were taken from the steel cord. Macroscopic examination of the reinforcement showed that all tracks contain 36 lines of steel cord arranged in two parallel layers of 18 lines each (Fig. 4). Each cord line is built of 7 identical strands of 0.9 mm diameter; one of them forms a core around which the remaining strands are wound in a single layer (Fig. 5).
A cord line is wound in the right-hand direction. All strands consist of 12 wires of 0.3 mm diameter: one core wire and two wound layers. The strand is wound in the left-hand direction.

Microscopic examinations in the etched and non-etched states were performed with a NIKON ECLIPSE MA200 light microscope with NIS Elements BR software, at magnifications from 100× to 1000×. Microstructures of the tested steel were also observed with a JEOL JSM 6610A scanning electron microscope at magnifications from 1000× to 10 000×, using accelerating voltages of 15 and 25 kV and SE detectors.

Hardness was measured by the Vickers method with an MMT-X3 microhardness tester under conditions complying with the PN-EN ISO 6507-2:1999 standard; the dwell time was 15 s and the load was 500 g.

The static tensile test was performed under conditions compliant with the PN-EN ISO 6892-1:2010 standard, on an MTS 858 Mini Bionix testing machine. The specimens were wires with an initial gauge length L0 = 100 mm. The tests were run at a constant strain-controlled rate (method A of the standard) of eLc = 0.0067 1/s until rupture. The basic strength properties of the material were determined: the tensile strength Rm and the reduction of area (necking) Z.

Analysis of technological properties involved a unidirectional torsion test and a bidirectional bend (contraflexure) test of wires taken from the cord. The unidirectional torsion test was performed under conditions compliant with the PN-ISO 7800:1996 standard; it assesses the suitability of the material for production processes and involves twisting the loaded wire around its own axis in one direction until the specimen ruptures. The specimen lengths conformed to the PN-ISO 7800:1996 standard, and the applied load did not exceed 2% of the nominal breaking load of the wire. The bidirectional bend test according to PN-ISO 7801:1996 determines the resistance of wires to plastic deformation; it involves repeatedly bending a specimen through an angle of 90° in alternating directions around rollers of diameters defined in the standard. Both tests were performed at a constant rate until fracture, at ambient temperature.
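For reference, the two quantities reported from the tensile test can be computed from the raw measurements as follows. This is a minimal sketch; the maximum force and post-fracture diameter used below are illustrative values chosen only to be of the right magnitude, not measurements from this study.

```python
import math

def tensile_strength_mpa(f_max_n: float, d0_mm: float) -> float:
    """Rm = Fmax / S0, with S0 the initial cross-section of the wire."""
    s0 = math.pi * (d0_mm / 2.0) ** 2  # mm^2
    return f_max_n / s0  # N / mm^2 = MPa

def necking_percent(d0_mm: float, du_mm: float) -> float:
    """Z = (S0 - Su) / S0 * 100, from the initial and post-fracture diameters."""
    return (1.0 - (du_mm / d0_mm) ** 2) * 100.0

# Illustrative values for a 0.3 mm cord wire.
print(f"Rm ~ {tensile_strength_mpa(242.0, 0.3):.0f} MPa")
print(f"Z  ~ {necking_percent(0.3, 0.276):.1f} %")
```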
The test results

Macroscopic examination of track No. 1 showed that its ground-contact surface was perfectly flat, with remnants of the production process visible on the tread surface in the form of linear ridges (Fig. 6). The tread edges were sharp and the segment walls uniform. The material of the component was homogeneous, showed no symptoms of ageing, and the tread thickness was about 47 mm.

Macroscopic examination of track No. 2, in the post-operation condition, showed that it bears significant traces of wear resulting from use of the machine. Many changes and damages caused by direct contact with hard ground were observed on its working surface (Fig. 7). The tread edges were significantly rounded, which reduces the traction of the machine when manoeuvring on slushy terrain. Other traces of operation included losses in the form of torn-out pieces of rubber as well as cuts and undercuts.

Microscopic observation of the wire material in the non-etched state showed a large number of non-metallic inclusions, mainly oxides. The impurities were distributed as points and corresponded to reference levels 3 to 4 of the EN 10247:2007 standard which, according to the literature [11,16,20], exceeds the maximum permissible inclusion content for pearlitic steel designated for cord wires (Figs. 8 and 9). Since the fatigue strength of steel wires depends strongly on metallurgical purity, such a large number of non-metallic inclusions, especially brittle oxides, may lower the ductility of the material, hinder the technological processes and, in particular cases, even lead to cracking of the wires during operation.

The microscopic tests showed that all the tested wires were, in accordance with the recommendations of the PN-EN 10323:2005 (U) standard, made of unalloyed pearlitic steel, and that the applied cold-drawing process enabled a high degree of plastic deformation, of the order of 80-90% (Figs. 10 and 11). The initial microscopic tests did not reveal significant differences in the structure of the samples, but further observations by scanning electron microscopy showed a clear influence of operation on the structure of the material (Figs. 12 and 13). The microstructure analysis showed that the steel cord material degrades during operation of the track chain: numerous structural discontinuities, oriented along the direction of plastic working of the material and caused by the non-metallic inclusions, were observed (Fig. 13).

These results confirm the theories of Golis [16] on the negative impact of non-metallic inclusions on the strength of cord wires, as well as the theory of Zelin [30] that the main cause of cracking of pearlitic steel in service is micro-delamination between the cementite lamellae. The results showed that material discontinuities form around the impurities and can exceed the critical defect size, causing wire cracking. The process can run as follows:
- micropores form around non-metallic inclusions during plastic deformation;
- as plastic deformation proceeds, the micropores grow and approach one another;
- when the bridges between pores become narrow, they break in sequence;
- the breaking of the micro-bridges causes coalescence of the discontinuities in the direction perpendicular to the acting load;
- through this coalescence the pores formed around the non-metallic inclusions reach the critical defect size, at which cracking develops unstably and leads to fatigue failure of the component.

The results of the strength tests showed that the structural discontinuities observed in the microscopic tests significantly influence the mechanical and technological properties of the wires (Table 1).
A clear effect of operating the elastomeric track (specimen No. 2) is a drop in the strength properties of its cord wires. The tensile strength of the wire sample from the cord of the non-operated track No. 1 was about 3430 MPa, while that of the wire sample from track No. 2 was 2050 MPa. The material is weaker because, during the static tensile test, the discontinuities located around the non-metallic inclusions grow in the direction perpendicular to the acting load and accelerate failure of the wire sample.

At the same time, the strength tests seemed to indicate an increase in material plasticity: the necking increased from 15% for the new specimen to 18% for the specimen in the post-operation state. The larger necking of the operated specimen is, however, most probably related not to an increase in material plasticity but to the annihilation of the material discontinuities observed in the microscopic tests.

The significant decrease in the hardness of the material from the post-operation specimens is also noteworthy. The measurements were performed on material taken directly from the track cord. The hardness of the wire sample from the new track was 742 HV0.5, while that of the wire sample from the post-operation track dropped to about 621 HV0.5 (Table 1). The hardness drop can be explained by the discontinuities in the wire structure: an increase in material porosity translates directly into a decrease in hardness.

The technological trials of unidirectional torsion and bidirectional bending confirmed that the increase in necking observed for the operated specimens is definitely not related to an increase in material plasticity (Table 2). The plasticity clearly drops: the number of turns in the torsion test falls from 171 to 84, and the number of bends decreases from 103 to 38. This confirms the assumption that the increased necking of the operated specimen does not reflect improved plasticity but results from the annihilation of the material delaminations.

Conclusions

In recent years, demand has been growing steadily for fast, reliable means of transport with large load capacity. Pneumatic tyres for cars, delivery trucks, lorries, buses, agricultural and mining machines, and specialist construction equipment can no longer be adequately reinforced with yarn, viscose or nylon; high-strength wires made of unalloyed pearlitic steels are currently used to strengthen tyres, track chains, conveyor belts and pressure hoses. Unfortunately, one problem widely discussed in the literature is cracking of pearlitic steels in service. The fractographic tests of the defective track chain showed that the frequent loss of continuity of the steel cord wires in general construction practice is not the result of improper operation: the fatigue character of the fracture should be attributed not to the manner and duration of operation but directly to the metallurgical quality of the cord wire material.
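The relative property changes quoted above can be summarised in a few lines; the script below only recomputes percentage changes from the values given in the text.

```python
# Relative degradation of the measured properties (values as quoted in the text).
properties = {
    "tensile strength Rm [MPa]": (3430.0, 2050.0),
    "hardness [HV0.5]":          (742.0, 621.0),
    "torsion test [turns]":      (171.0, 84.0),
    "bend test [bends]":         (103.0, 38.0),
}

for name, (new, used) in properties.items():
    change = (used - new) / new * 100.0
    print(f"{name}: {new:g} -> {used:g} ({change:+.0f} %)")
```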
There are many theories describing the causes of pearlitic steel cracking in service. According to Golis [16], the fatigue strength of cord wires depends greatly on the degree of contamination of the material with non-metallic inclusions, particularly on the content of oxides, silicates and sulfides. The author clearly indicates that the maximum permissible content of non-metallic inclusions in unalloyed pearlitic steel of grades D75, D78, D80 and D83 designated for the production of cord wires must not exceed reference level No. 2 of the EN 10247:2007 standard.

Microscopic observations of the cord wire material from the tested track chains, performed in the non-etched state, showed a large number of non-metallic inclusions, mainly oxides. The impurities were distributed as points and corresponded to reference levels 3 to 4 of the EN 10247:2007 standard. At the same time, further microscopic analysis showed that the steel cord material clearly degrades during operation of the tracks. Numerous structural discontinuities were observed, and detailed microscopic analysis showed that micropores form around the brittle oxide precipitates; as plastic deformation progresses, they grow and approach one another, reaching ever larger sizes.

In the work of Zelin [30] it is reported that elastic deformations create horizontal micro-delaminations of the individual cementite lamellae. During operation the cord wires are subjected to a complex state of stresses in the elastic range, which explains why the observed structural discontinuities were oriented parallel to the cementite lamellae arranged in bands.

The consequence of the observed structural changes in the material of the tested cord wires was a significant decrease in the strength and technological properties. The drop in tensile strength results from the fact that, during the static tensile test, the discontinuities located around the non-metallic inclusions grow in the direction perpendicular to the acting load and, after reaching the critical defect size, accelerate failure of the wire sample. The same relationship explains the results of the technological tests of unidirectional torsion and bidirectional bending: both trials indicate a decrease in material plasticity as a result of operation, which is the consequence of annihilation of the described material discontinuities. The decrease in hardness can also be explained by the discontinuities in the wire structure; the longer the material was in operation, the greater its porosity, translating directly into a decrease in hardness.

Fig. 1. Defective elastomeric track from a JCB mini excavator, model 8018 CTS. A fragment of the steel cord has punched through the rubber layer and damaged the tread (see arrow).
Fig. 5. Schematic diagram of the steel cord shown in Fig. 3; seven strands of 0.9 mm diameter are visible.
Fig. 6. Elastomeric track No. 1, the non-operated specimen. There are no cracks or damage on its surface, the tread edges are sharp and the segment walls are uniform; remnants of the production process are visible in the form of linear ridges.
Fig. 10. Microstructure of the wire sample from the cord of the new track No. 1. A strong deformation texture of about 90% is visible. Longitudinal section. LM.
Table 1. Results of the mechanical property measurements for the tested wire samples.
Table 2. Results of the technological trials on the tested wires.
Dietary habits of colorectal neoplasia patients in comparison to their first-degree relatives

AIM: To compare the dietary habits of colorectal neoplasia patients, their first-degree relatives, and unrelated controls.

METHODS: From July 2008 to April 2011, we collected epidemiological data relevant to colorectal cancer from patients with colorectal neoplasias, from their first-degree relatives, and from a control group consisting of people referred for colonoscopy with a negative family history of colorectal cancer and without evidence of neoplasia after colonoscopic examination. The first-degree relatives were divided into two groups following the colonoscopic examination: (1) those with neoplasia and (2) those without neoplasia. The dietary habits of all groups were compared. A χ² test was used to assess the association between two dichotomous categorical variables.

RESULTS: The study groups consisted of 242 patients with colorectal neoplasias (143 men, 99 women; mean age: 64 ± 12 years) and 160 first-degree relatives (66 men, 94 women; mean age: 48 ± 11 years). Fifty-five of the first-degree relatives were found to have a neoplastic lesion upon colonoscopy, while the remaining 105 were without neoplasia. The control group contained 123 individuals with a negative family history for neoplastic lesions (66 men, 57 women; mean age: 54 ± 12 years). Two hypotheses were tested: first, that the dietary habits of first-degree relatives with neoplasia are more similar to those of patients with neoplasia, while the dietary habits of first-degree relatives without neoplasia are similar to those of the control group; and second, that there are no sex-related differences in dietary habits between the particular groups. No significant differences were observed in the dietary habits between the groups of patients, controls and first-degree relatives with/without neoplastic lesions. Nevertheless, statistically significant sex-related differences were observed in all groups, wherein women had healthier dietary habits than men.

CONCLUSION: In all groups examined, women had healthier dietary habits than men. Modification of screening guidelines according to sex may improve the efficiency of screening programs, although further studies are needed to support this hypothesis.

INTRODUCTION

Colorectal cancer is the second leading cause of cancer-related death in developed countries. The Czech Republic has the highest prevalence of colorectal cancer in the world; in 2008, the incidence of colorectal cancer in the Czech Republic was 94.2/100000 in men and 61.8/100000 in women [1]. It is well established that colonoscopic screening reduces both the occurrence and the mortality of colorectal cancer [2]. In 2000, the Czech Republic introduced a nationwide cancer-screening program that included fecal occult blood testing of people over 50 years of age. The program was then updated in 2009 to include the possibility of a primary colonoscopy screening for those over 55 years of age [3,4].

Colorectal neoplasias (CRN) are associated with non-hereditary as well as hereditary risks. Colorectal cancer is the most common familial form of cancer.
More than 30% of cases can be attributed to hereditary causes, of which only 5% are due to hereditary cancer syndromes such as familial adenomatous polyposis syndrome and hereditary non-polyposis colorectal cancer [5]. First-degree relatives (FDR) of patients with CRN (either colorectal cancer or advanced adenomas) show up to a 4-fold increased risk for CRN when compared with the general population and are at increased risk for advanced or multiple adenomas [6-9]. Non-hereditary risk factors for colon cancer include advanced age, male sex, alcohol consumption and smoking [10-12]. Dietary factors, such as elevated red meat consumption and low intake of fruit, vegetables, dairy products and dietary fiber, have been associated with an increased risk for CRN [13]. Obesity, sedentary lifestyle, inflammatory bowel diseases and several other conditions, such as acromegaly, diabetes mellitus and ischemic heart disease, have also been shown to increase the risk for colon cancer [14-17].

The goal of this study was to compare the dietary habits of patients with CRN and of a control group with the dietary habits of FDR, with regard to the findings obtained after colonoscopy screening. The first tested hypothesis was that the dietary habits of FDR with neoplasia are similar to those of patients with CRN and that the dietary habits of FDR without neoplasia are similar to those of the control group. The second tested hypothesis was that there are no sex-related differences in dietary habits between the particular groups.

Study subjects and clinical data

From July 2008 to April 2011, we collected epidemiological data relevant to colorectal cancer, both from patients with CRN and their FDR and from a control group. Epidemiological data, including smoking status (current/former vs never), fat intake (low vs high), body mass index (BMI; < 30 vs ≥ 30 kg/m²), beer consumption (daily/occasionally vs never), consumption of dairy products, fruits, vegetables and red meat (daily vs less frequent), and educational attainment (primary vs secondary/tertiary), were collected from the patients with CRN, FDR and controls by a medical doctor. A single specialist in gastroenterology and nutrition conducted the interview about each respondent's dietary habits (the amounts of red meat, fat, dairy products, etc.) and categorized the answers (high intake/low intake in each category). Collection of epidemiological data was part of The Family Project, a unique direct medical counseling project targeting FDR that took place at a single (non-university) center, Hospital Frydek-Mistek. The goals of the project were to promote proper colonoscopic surveillance of FDR and to identify the FDR at highest risk for CRN. The project was approved by the local ethics committee, and all participants signed an informed consent. Simultaneously, an informative campaign was launched in the local media to promote and support public awareness of the project. FDR were referred for colonoscopic examination and, depending on the findings, were divided into FDR with or without neoplasia. The control group contained people with a negative family history who had been referred for colonoscopy and were confirmed to be without neoplasia by the colonoscopic examination.

Statistical analysis

Ages are presented as mean ± SD. The dietary habits of all groups (patients with CRN, FDR with neoplasia, FDR without neoplasia, and the control group) were compared. A χ² or Fisher's exact test was used to assess the association between two dichotomous categorical variables. Because of the heterogeneous representation of men and women in the FDR-without-neoplasia group, the men and women in all groups were compared separately.
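As an illustration of this type of analysis, the following minimal sketch applies the χ² and Fisher's exact tests to a hypothetical 2 × 2 table of sex versus a dichotomous dietary habit; the counts are invented for demonstration and are not data from this study.

```python
# Hypothetical 2x2 table: daily red-meat consumption (yes/no) by sex.
# Counts are invented for illustration only.
from scipy.stats import chi2_contingency, fisher_exact

table = [[40, 26],   # men:   daily, less frequent
         [21, 36]]   # women: daily, less frequent

chi2, p_chi2, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p_chi2:.3f}")

# Fisher's exact test is preferred when expected cell counts are small.
odds_ratio, p_fisher = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, Fisher p = {p_fisher:.3f}")
```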
RESULTS

The study groups consisted of 242 patients with CRN (143 men, 99 women; 64 ± 12 years) and 160 FDR (66 men, 94 women; 48 ± 11 years). Fifty-five patients in the FDR group were found to have neoplastic lesions upon colonoscopy, while 105 patients had no evidence of neoplasia. The control group consisted of 123 individuals with a negative family history of colon cancer and without neoplastic lesions following colonoscopic examination (66 men, 57 women; 54 ± 12 years). Characteristics of all groups are presented in Table 1.

We first tested the hypothesis that the dietary habits of FDR with neoplasia are similar to those of patients with CRN and that the dietary habits of FDR without neoplasia are similar to those of the control group. We next tested the hypothesis that there are no sex-related differences in dietary habits between the particular groups. Comparisons of the groups are presented in Tables 2 and 3; the comparison between men and women in all groups is shown in Table 4. In summary, both of our hypotheses were disproven. There were no significant differences in the dietary habits between the groups of patients, controls and FDR with/without neoplastic lesions. In all groups, however, there were statistically significant differences in the dietary habits between men and women, despite no differences in educational attainment among them.

DISCUSSION

Our study was based on epidemiological data relevant to colorectal cancer obtained from patients with CRN, their FDR with neoplasia, FDR without neoplasia, and a control group.

It is well established that risks for colorectal cancer can be either hereditary or non-hereditary. The non-hereditary risks are well described, as mentioned in the Introduction. There is also an association of colorectal cancer with the gut microbiome: intestinal microbiota can transform food compounds into genotoxic agents, activate proto-oncogenes, or inactivate tumor suppressor genes [18-20]. Genetic factors associated with an increased risk for CRN include low-penetrance susceptibility loci and specific polymorphisms. Certain genetic variants and polymorphisms in a number of genes have been associated with increased colon cancer risk; APC-I1307K, HRAS1-VNTR and MTHFR variants represent the strongest candidates for low-penetrance susceptibility alleles [21,22]. In genome-wide association studies, as many as 170 common but separate genetic variations have been implicated in CRN susceptibility [23]. Based on current data, there are three main pathways of colorectal carcinogenesis: chromosomal instability, microsatellite instability, and hypermethylation [24,25]. One important question, however, is how hereditary risks may be confounded by familial similarities in diet, physical activity level, or other environmental exposures.

Our first tested hypothesis was that the dietary habits of FDR with neoplasia are similar to those of CRN patients, while the dietary habits of FDR without neoplasia are different and more similar to those of the control group. We hypothesized that both the controls and the FDR without neoplasia have a healthier lifestyle, while patients with CRN and FDR with neoplasia share worse dietary habits. Because of the heterogeneous representation of men and women among the FDR without neoplasia, men and women in all groups were compared separately. To our surprise, all groups had very similar dietary habits. We only observed a difference in the male CRN patients, among whom there were significantly more smokers than in the group of FDR males without neoplasia. It has been shown that smoking can increase the risk of colorectal cancer by up to 18% [12]. Paradoxically, male controls consumed more beer and lower amounts of fruits and vegetables than FDR males with neoplasia, and female controls consumed more red meat than FDR females without neoplasia. It is surprising that we did not observe any association between poor dietary habits and the occurrence of neoplasia in patients with CRN and their FDR with neoplasia, despite all the proven non-hereditary risk factors.

The second tested hypothesis was that there would be no sex-related differences between the particular groups. Regardless of the colonoscopic findings, however, males in all groups had worse dietary habits than females, despite no difference in educational attainment between the men and women. It is well known that women gain more health resources in their screening programs. This fact, together with the known higher incidence of CRN in men, places men at a disadvantage. Thus, we can assume that the one-third higher incidence of colorectal cancer in men could be, in part, attributed to their less healthy lifestyle. Media campaigns should therefore be targeted at the male population, since there is a great need for improvement of their lifestyle and dietary habits.

This study has several limitations. The sample size of each group was relatively small and made up of individuals stemming from a population with the highest prevalence of colorectal cancer in the world; the results are therefore specific and may only apply to the Czech population surveyed. Diabetes mellitus was not recorded in all groups (only in the CRN group of patients), so we could not evaluate obesity and dietary habits with respect to diabetes mellitus. The different mean ages across the groups examined represent another weakness of the study.
In conclusion, we did not find significant differences between patients and their FDR with/without neoplastic lesions, although we did identify statistically significant differences between the habits of men and women in all groups: women in all groups had healthier dietary habits. We propose that media campaigns should be targeted at the male population, given the need to improve their lifestyle. Modification of screening guidelines according to sex may improve the efficiency of screening programs, but further studies are needed to support this hypothesis.

Background

Colorectal neoplasias are associated with hereditary and non-hereditary risks. Colorectal cancer is the most common familial form of cancer. First-degree relatives of patients with colorectal neoplasia, both colorectal cancer and advanced adenomas, are at increased risk for colorectal neoplasia compared with the general population.
Grand Challenge for Ion Channels: an Underexploited Resource for Therapeutics

Diana Conte Camerino and Jean-François Desaphy, Section of Pharmacology, Department of Pharmacobiology, Faculty of Pharmacy, University of Bari "Aldo Moro", Bari, Italy. Correspondence: conte@farmbiol.uniba.it

In the last decade, only a few new ion channel drugs have reached the market according to the Food and Drug Administration. These include nateglinide, a non-sulfonylurea blocker of the KATP channel used in type II diabetes; ziconotide, an N-type calcium channel blocker against severe chronic pain; pregabalin, a calcium channel α2δ subunit ligand indicated for neuropathic pain; ranolazine, a blocker of the late sodium current for chronic angina pectoris; and lubiprostone, a ClC-2 chloride channel activator for chronic idiopathic constipation. In the meantime, the sodium channel blocker mexiletine, indicated as a class Ib antiarrhythmic drug but used off-label in many disorders of membrane excitability, was withdrawn from the market except in Japan. While a clinical trial was launched last year to confirm the effectiveness of mexiletine against muscle stiffness in non-dystrophic myotonia, myotonic patients and their physicians now have to consider shifting to other drugs and defining new therapy rules.

Nonetheless, it is widely acknowledged that the research activity of the pharmaceutical community in the ion channel field has increased tremendously in recent years (Conte Camerino et al., 2007). Such activity continues to investigate sectors traditionally covered by ion channel drugs, such as cardiovascular diseases, epilepsies, neuropathic pain, and diabetes. For instance, new calcium channel blockers, which constitute one of the major classes of therapeutic agents, have been identified by pharmacological and radiochemical techniques and studied extensively by patch-clamp; this research field is thus not abandoned, and novel compounds are currently under consideration (Godfraind, 2005, 2006). Research now also extends to new sectors with great potential, such as cancer, inflammation, immunomodulation, kidney diseases, and pathogen infection. In addition, ion channels are straightforward druggable targets for a number of rare diseases, including the ion channelopathies.

Ion channelopathies are monogenic hereditary diseases due to mutations in genes encoding ion channel subunits (Conte Camerino et al., 2008). The term was coined about 20 years ago to define the skeletal muscle disorders due to mutations in sodium, chloride, and calcium channel genes, with symptoms ranging from loss of excitability to over-excitability. Since then, many other genetic diseases affecting the whole range of tissues have been linked to ion channel gene defects. The discovery of the ion channelopathies may be considered a revolution for the study of ion channels, the third one after the pioneering work of the Nobel prize-winning Hodgkin and Huxley in the 1950s, who defined the basis for selective ion currents across the plasma membrane, and the invention of the patch-clamp technique by Neher and Sakmann in the 1970s (Nobel prize in 1991), which allows the recording of virtually any kind of ion channel activity in almost all cell types (Hodgkin and Huxley, 1952; Hamill et al., 1981). Another recent outstanding advance in ion channel research was the resolution of three-dimensional channel structures by X-ray crystallography (MacKinnon, 2004). The discovery of the channelopathies coincided with the combination of electrophysiological and molecular genetic techniques: the possibility of studying the dysfunction of a mutated channel in a heterologous expression system and correlating it with the clinical phenotype has tremendously increased our knowledge of the role of ion channels in physiology and pathology.
Importantly, the ion channelopathies can also serve as paradigms for understanding the more complex and frequent multifactorial diseases. At the same time, knock-down and knock-out animal experiments have also contributed significantly to our understanding of the role of ion channels in disease. Altogether, these advances have underscored the therapeutic potential of ion channel modulators. Drugs acting on ion channels were already in use before the channelopathies were known, but most of these drugs were used empirically and were only later found to act on ion channels. Our current knowledge of their role in disease has now validated ion channels as very promising druggable targets.

What are the difficulties encountered in ion channel drug discovery? A major problem of ion channel drugs is side effects. An ion channel type is often expressed in many cell types, and it is important to target the diseased tissue without altering the normal function of the others. Take the example of voltage-gated sodium channels, which are expressed in many excitable tissues, including the heart and the central nervous system. The use of sodium channel blockers to counteract hyper-excitability in peripheral tissues, such as the skeletal muscle in myotonia or the nociceptive neuron in neuropathic pain, can be significantly limited by their side action on cardiac or CNS sodium channels. Thus, the International Association for the Study of Pain recommends orally available sodium channel blockers only as a third-line treatment in neuropathic pain, mainly because of their potential side effects (Dworkin et al., 2007). Up to now, a peculiarity of sodium channel blockers has been exploited to limit these unwanted effects, namely use-dependence: the higher the frequency of action potential firing, the greater the block of sodium channels, allowing a rather specific drug action in hyper-excited cells. A more recent successful therapy based on a channel gating-dependent mechanism of action is that of ranolazine in angina pectoris, which appears to inhibit quite specifically the pathogenic late sodium current that is increased in the ischemic myocyte (Chaitman, 2006). Another way to limit side effects is to restrict body exposure when possible, as with topical lidocaine for post-herpetic pain or intrathecal ziconotide for severe chronic pain.
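To make the use-dependence idea concrete, here is a minimal numerical sketch, not a model of any particular drug: a two-state scheme is assumed in which each action potential blocks a fixed fraction of the unblocked channels and block recovers exponentially between pulses, with purely illustrative rate constants.

```python
import math

def steady_state_block(freq_hz: float, k_on: float = 0.15,
                       tau_recovery_s: float = 0.5) -> float:
    """Steady-state blocked fraction for a pulse train: each action potential
    blocks a fraction k_on of the unblocked channels, and block decays
    exponentially (time constant tau_recovery_s) between pulses."""
    r = math.exp(-(1.0 / freq_hz) / tau_recovery_s)  # block surviving one interval
    # Fixed point of the per-pulse recurrence b -> r*b + k_on*(1 - r*b)
    return k_on / (1.0 - r * (1.0 - k_on))

# Higher firing frequency -> less recovery between pulses -> deeper block.
for f in (1.0, 5.0, 20.0):
    print(f"{f:5.1f} Hz: {steady_state_block(f):.0%} of channels blocked")
```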
On the other hand, the search for drugs specific to channel subtypes has proven challenging. For instance, nine genes in the human genome are known to encode voltage-gated sodium channel subtypes with high sequence homology, which are expressed in a tissue-selective manner. No drug specific for the skeletal muscle hNav1.4 sodium channel is available yet, and drugs relatively selective for the hNav1.7 and hNav1.8 channels, which are mainly expressed in peripheral nociceptive neurons, have been described only very recently (Jarvis et al., 2007; Williams et al., 2007).

Drug screening on ion channels has long been a slow process, owing to the lack of high-throughput techniques. The gold standard for studying ion channel function and pharmacology is the patch-clamp technique (Hamill et al., 1981). Although the amazing signal and time resolution of this technique allow an in-depth description of the intimate drug-channel interaction, it remains a laborious and time-consuming experiment requiring skilled operators, which limits its application as a screening platform. In the last decade, high-throughput automated electrophysiological platforms have been developed, which are better suited for large drug screenings such as those performed in the modern pharmaceutical industry (Castle et al., 2009). This is expected to accelerate the identification and lead optimization of drug candidates. Nevertheless, because of the complex interaction of drugs with channel gating, it is likely that the patch-clamp technique will remain irreplaceable for many years in many circumstances. This latter consideration is especially true when searching for drug candidates in the ion channelopathies, because the mutations may modify the drug-channel interaction either directly or secondarily to the alteration of gating. More drastically, voltage-sensor mutations of hNav1.4 sodium channels responsible for periodic paralysis have been shown to create a new pathway for cations parallel to the sodium-conducting pore of the channel, which implies searching for drugs that act radically differently on mutant compared with wild-type channels (Sokolov et al., 2010). Moreover, other technologies would also be needed to screen ion channels for pharmacological chaperones able to improve ion channel surface membrane expression, which may be impaired by pathogenic mutations (Amaral, 2006).

In addition to pharmacotherapy, ion channels have often been neglected as targets for toxicants or as undesired targets for drugs. Preclinical testing is now required by the FDA and other agencies for the registration of pharmaceuticals, which consists in assessing the potential of the test substance to block cardiac hERG potassium channels and delay ventricular repolarization. Many pharma/biotech companies also routinely test their drug candidates on cardiac hNav1.5 sodium channels before going ahead in development. This may allow potentially cardiotoxic compounds to be discarded early during drug development. The progress of high-throughput systems for investigating ion channel pharmacology will certainly expand this kind of toxicological assay.

In conclusion, the human genome project has identified more than 400 genes encoding ion channel subunits, and many have already been shown to play a critical role in diseases, either directly or indirectly. It is likely that this knowledge will further increase by taking advantage of the most modern methodologies, such as the omics technologies. In the near future, the grand challenge of ion channel research will consist in finding more specific drugs able to selectively block ion channel subtypes or ion channel mutants, allowing the development of new and safer pharmacotherapies and bringing ion channel pharmacogenetics from the bench to the clinic.
Mathematical Model of Colorectal Cancer with Monoclonal Antibody Treatments

We present a new mathematical model of colorectal cancer growth and its response to monoclonal-antibody (mAb) therapy. Although promising, most mAb drugs are still in trial phases, and the possible variations in the dosing schedules of those currently approved for use have not yet been thoroughly explored. To investigate the effectiveness of current mAb treatment schedules, and to test hypothetical treatment strategies, we have created a system of nonlinear ordinary differential equations (ODEs) to model colorectal cancer growth and treatment. The model includes tumor cells, elements of the host's immune response, and treatments. Model treatments include the chemotherapy agent irinotecan and one of two monoclonal antibodies: cetuximab, which is FDA-approved for colorectal cancer, and panitumumab, which is still being evaluated in clinical trials. The model incorporates patient-specific parameters to account for individual variations in immune system strength and in medication efficacy against the tumor. We have simulated outcomes for groups of virtual patients on treatment protocols for which clinical trial data are available, using a range of biologically reasonable patient-specific parameter values. Our results closely match clinical trial results for these protocols. We also simulated experimental dosing schedules, and have found new schedules which, in our simulations, reduce tumor size more effectively than current treatment schedules. Additionally, we examined the system's equilibria and its sensitivity to parameter values. In the absence of treatment, tumor evolution is most affected by the intrinsic tumor growth rate and carrying capacity. When treatment is introduced, tumor growth is most affected by drug-specific PK/PD parameters.

Introduction

According to the American Cancer Society, colorectal cancer is the third most commonly diagnosed cancer and the third leading cause of cancer death in both women and men in the United States [4]. Monoclonal antibodies have been explored as an adjuvant treatment for colorectal cancer, but there are still many unanswered questions about their effectiveness and optimal use. The goal of this work is to contribute to the understanding of how best to incorporate monoclonal antibodies into colorectal cancer treatment. We present a system of nonlinear ordinary differential equations (ODEs) that models the growth of a colorectal tumor, its interactions with the host's immune system, and the effects of three treatment options: the chemotherapy drug irinotecan and two monoclonal-antibody (mAb) treatments, cetuximab and panitumumab. We use this model to run clinical trial simulations over cohorts of virtual patients with varying response rates. After validating our outcomes against published clinical trial data, we then explore alternative hypothetical treatment scenarios.

Figure 1: Three methods of mAb-induced tumor cell death are represented in this model. If an NK cell is present, the cell can undergo ADCC; if a chemotherapy molecule is present, the mAb will increase death from the chemotherapy drug; otherwise, the mAb molecule will cause tumor cell death on its own, through a variety of mechanisms.

Clinical studies commonly assess treatment efficacy through changes in tumor size, so it is reasonable to use tumor size as a measure of treatment efficacy in our model as well. Other mathematical models of colonic cancer focus on the initiation of the disease.
For example, in [27] a mathematical model is developed that supports the hypothesis that two types of genetic instability can lead to tumorigenesis in individuals with colorectal cancer. More recently, Lo et al. [33] proposed a mathematical model of the initiation of colorectal cancer that explores a possible link with colitis.

The model presented here is an extension of the work of de Pillis et al. [15], in which a tumor-cell population, immune-cell populations, and drug concentrations are modeled with a system of nonlinear ODEs. The model by de Pillis et al. also includes patient-specific parameters representing the strength of the patient's immune system, and it has been validated against published studies on mice and humans [17]. It has successfully demonstrated the need for immunotherapy in addition to chemotherapy to prevent the tumor from growing again after drug therapies have been completed, and it was used to study the importance of the patient-specific parameters for the effectiveness of immunotherapy treatment [15,16]. The new model includes terms for monoclonal-antibody treatment and its effects on the cell populations, and parameter values have been adjusted to reflect dynamics specific to colorectal cancer.

The goal of this mathematical model is to describe tumor growth, the immune response, and treatments, including chemotherapy and monoclonal-antibody (mAb) treatments. Our model tracks the following populations and quantities: tumor cells; natural killer (NK) cells; tumor-specific CD8+ T cells; circulating lymphocytes; the concentration of IL-2; and the concentrations of the administered drugs. The specific treatments that we explore are the chemotherapeutic drug irinotecan (CPT-11) and the mAbs cetuximab or panitumumab.

• Treatments: vM(t), the amount of irinotecan injected per day per liter of blood (mg/liter per day); vA(t), the amount of monoclonal antibodies injected per day per liter of blood (mg/liter per day).

In the following section we describe the equations governing the evolution of each population. In Section 3, examples of the evolution of simulated cell populations over time are presented, and treatments and clinical trials are simulated. Finally, we present a parameter sensitivity analysis and discuss the results. Details of the parameter estimation, a discussion of the equilibria and their stability, and further sensitivity analyses are given in the Supplementary Materials.

Equations

The full system of ODEs of the model is given below. The equations are based on the model proposed in [15], with the additions necessary to describe mAb and combination treatments; these additional terms are shown in bold face. A summary of the purpose of each model term can be found in Tables 1-6.

Model Terms Describing Growth and Interactions

Each of Equations (2.1)-(2.7) describes the time evolution of one of the system variables. Each equation contains a growth, or source, term and a decay term. Most of the equations also contain interaction terms that describe how one population of cells or molecules affects another. For example, in Equation (2.1) the tumor is assumed to grow logistically in the absence of other cells or antibodies. The competition term between tumor and NK cells follows a mass-action law, where the effectiveness of the NK cells in killing tumor cells, the per-cell kill rate, is enhanced by the presence of monoclonal antibodies (see also the discussion below). The interaction between cytotoxic T lymphocytes (CTLs) and tumor cells is described by a ratio-dependent law, articulated in Equation (2.8). The derivation of this term is described in detail in [17].
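Equation (2.8) itself is not reproduced in this excerpt. In the de Pillis et al. lineage of models [15,17] the ratio-dependent CTL kill term takes the following form, which we assume is the one intended here (the parameters d, s and l referenced in the clinical-trial section below are consistent with it):

$$ D = d\,\frac{(L/T)^{l}}{s + (L/T)^{l}}, $$

where L denotes the CTL population and T the tumor population.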
A recruitment term is included for the tumor-specific CD8+ T cells, as well as a production term in Equation (2.6) that reflects the increased presence of IL-2 when CTLs are present. Interleukin, whose concentration is denoted by the variable I(t), activates the production of NK cells and CD8+ cells, indicated by the positive saturating terms in Equations (2.2) and (2.3). However, IL-2 can also aid in the inactivation of CD8+ cells. From Abbas et al. [1], we find that the deactivation of CD8+ T cells occurs through a pathway that requires IL-2 and the action of CD4+ T cells (found in the circulating lymphocytes). Moreover, it occurs only at high concentrations of activated CD8+ T cells. This deactivation is represented by the term −u L^2 C I/(κ + I) in Equation (2.3). Also described in the model are the effects of a cytotoxic drug such as irinotecan. This drug is assumed to have a detrimental effect on all of the cell populations. For more details on the derivation of these terms see [15] and [14], and for parameter values, sources, and derivations see the Supplementary Materials.

Discussion of Terms Describing Treatment

In this section we give details on terms in the model that were added to the one proposed in [15]. In Equation (2.1), three terms represent the three pathways of mAb-induced tumor-cell death (see Figure 1).

• The term −ξ (A/(h_1 + A)) N T represents the rate of tumor-cell death caused by ADCC. Some monoclonal antibodies have protein structures which, when bound to a tumor cell, allow them to simultaneously activate NK cells and to direct them to the invader [19]. Thus, when a mAb/tumor-cell complex and NK cell meet, the tumor cell is more likely to be killed than when an NK cell meets an unbound tumor cell. Kurai and colleagues [28] found that cetuximab has a threshold concentration above which ADCC activity no longer increases. So, we assume that ADCC activity increases with mAb concentration until it becomes saturated, and we model this with a sigmoid function.

• The term −K_AT A(1 − e^(−δ_T M))T represents the rate of chemotherapy-induced death of tumor cells, assisted by monoclonal antibodies. When tumor cells are not able to proliferate, they are much more susceptible to chemotherapy-induced death [19]. So, when mAbs are bound to tumor cells, blocking their EGFRs and thus inhibiting tumor-cell proliferation, they increase the tumor-cell death caused by chemotherapy.

• The term −ψAT accounts for the rate of tumor-cell death caused directly by tumor-cell interactions with mAbs. This term includes tumor-cell death from CDC and from a reduction in EGF binding, and thus in the tumor-growth rate [19].

The term −p_A (A/(h_1 + A)) N T in Equation (2.2) represents the rate of NK-cell death due to ADCC interactions with tumor cells and monoclonal antibodies. We assume that ADCC activity increases with mAb concentration until it becomes saturated. As with the term −pNT, it is assumed that NK cells experience exhaustion of tumor-killing resources after multiple interactions with tumor cells [7]. We note that we chose not to incorporate mAb interactions in Equations (2.3), (2.4) and (2.5), since the literature suggests that effects of mAbs are specific to tumor cells [41,34,22,44,19].
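To make the structure of the tumor equation concrete, here is a minimal sketch of its right-hand side in Python. The terms follow the descriptions above and the parameter-estimation appendix (the saturating chemotherapy kill K_T(1 − e^(−δ_T M))T is the form used there); the function and dictionary names are ours, and this is an illustration, not the authors' code.

```python
import numpy as np

def tumor_rhs(T, N, L, M, A, p):
    """Right-hand side of the tumor equation (2.1): logistic growth minus
    NK kill, ratio-dependent CTL kill, chemotherapy kill, and the three
    mAb-mediated death pathways discussed above."""
    ratio = (L / T) ** p["l"] if T > 0 else 0.0
    D = p["d"] * ratio / (p["s"] + ratio)                   # Eq. (2.8)
    growth = p["a"] * T * (1 - p["b"] * T)                  # logistic growth
    nk_kill = p["c"] * N * T                                # NK/tumor mass action
    chemo = p["K_T"] * (1 - np.exp(-p["delta_T"] * M)) * T  # chemotherapy kill
    adcc = p["xi"] * A / (p["h_1"] + A) * N * T             # ADCC, saturating in A
    mab_chemo = p["K_AT"] * A * (1 - np.exp(-p["delta_T"] * M)) * T  # mAb-assisted chemo
    mab_direct = p["psi"] * A * T                           # direct mAb kill (CDC, EGFR blockade)
    return growth - nk_kill - D * T - chemo - adcc - mab_chemo - mab_direct
```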
The evolution of the monoclonal antibody population is described in Equation (2.7). The term v_A(t) represents mAb treatments. Because mAbs are not produced naturally in the body, no additional growth terms are included. The term −ηA represents the natural degradation of the mAb protein in the body. The term −λT (A/(h_2 + A)) represents the loss of available mAbs as they bind to tumor cells. mAbs have a very strong binding affinity for their target growth-factor receptors, and there are many growth-factor receptors on every cell, so we assume that many mAbs are lost with each tumor cell. Also, we assume that the growth-factor receptors are fully saturated when the mAb concentration is significantly higher than the growth-factor receptor concentration. That is, we can approximate the number of mAbs lost with each tumor cell as the number of growth-factor receptors on that cell, as long as the mAb concentration is not close to zero.

Clinical Trial Simulations for Common Treatment Regimens

We used the model to explore expected responses to treatment at a population level. In particular, we simulated response to treatment for patients with a range of immune 'strength'. The effectiveness of the CD8+ T cells is described in the model by the term D given in Equation 2.8. In order to describe a group of patients with different immune strengths, we allow the three parameters in Equation 2.8 to take on one of four values from a biologically reasonable range [15]. These three patient-specific parameters are d, the maximum kill rate by effector cells; s, the steepness of the effector-cell response to the presence of tumor; and l, a measure of the non-linearity of the response. Table 1 lists the specific values used. Using four different values for each of the three parameters yields 64 virtual patient types, each with a different immune system. To account for variation between patients in tumor response to therapy, we also varied the values of the parameters K_T, the rate of tumor-cell death from chemotherapy, and ψ, the rate of cell death induced by mAb agents. For each simulation, the values of these parameters were randomly sampled from a distribution given by the density function

p(x) = (1/(3 x_max)) (1 − x/x_max)^(−2/3), 0 ≤ x < x_max,

where x_max is the maximum value of each parameter, either K_max or ψ_max. (See Table 1.)
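This density has a closed-form inverse CDF, F(x) = 1 − (1 − x/x_max)^(1/3), so the patient-specific response parameters can be drawn by inverse-transform sampling with x = x_max(1 − u^3) for u uniform on (0, 1). A small sketch (variable names ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_response(x_max, size):
    """Draw drug-response parameters (e.g., K_T or psi) from
    p(x) = (1/(3*x_max)) * (1 - x/x_max)**(-2/3) on [0, x_max)
    via inverse-transform sampling: x = x_max * (1 - u**3)."""
    u = rng.random(size)
    return x_max * (1.0 - u ** 3)

K_T = sample_response(0.81, size=64)     # one value per virtual patient
psi = sample_response(2.28e-2, size=64)  # cetuximab psi_max from the appendix
print(K_T.mean())  # near 0.75 * 0.81, since E[Y] = 0.75
```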
In these clinical trial simulations, we assume that the simulated patients have slightly compromised immune systems after already having been through other immuno-depleting therapies. Monoclonal-antibody therapy is currently used mainly as a last resort, after other treatments have been attempted unsuccessfully, so we expect the tumor population size to initially be large. Initial values for the state variables reflect this, with a large initial tumor size, T(0) = 10^9 cells, and relatively low levels of NK and CD8+ lymphocytes. All initial values are given in Appendix A. Simulated treatments were administered to each patient, represented by v_M(t) and v_A(t) in model equations (2.5) and (2.7). Clinical trial simulations were run over the set of 64 virtual patients multiple times. Final tumor size and lymphocyte counts were recorded for each patient. Lymphocyte count was used as a marker for patient health: if the lymphocyte count dropped low enough for the patient to be considered grade 4 leukopenic, the treatment was considered to be too harsh and not useful. This minimum lymphocyte count was determined to be 1.4 × 10^8, based on the WHO criterion of grade 4 leukopenia being less than 10^9 total white blood cells per liter ([50]; see also the discussion of the parameter K_C in the Supplementary Materials). Final simulated tumor sizes were categorized as a "Complete Response" (CR), "Partial Response" (PR), or "No Response" (NR). Tumors that continue to grow are categorized as NR, and any tumor smaller than approximately 2.2 mm in diameter is categorized as CR. This value was chosen since it is significantly below the clinical detection level of 5 mm in diameter [20]. In our analysis, we assume a spherical, homogeneous tumor, so that 2.2 mm in diameter corresponds to 2 × 10^7 cells. Finally, those tumors that do not continue to grow, but are larger than 2 × 10^7 cells, are categorized as PR.
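The categorization just described reduces to a few threshold checks; a sketch of how one might encode it (thresholds from the text, function name ours):

```python
def classify_response(initial_size, final_size, min_lymphocytes):
    """Categorize a simulated outcome. Treatments that drive the lymphocyte
    count to grade-4 leukopenic levels are rejected as too harsh; otherwise
    tumors below ~2.2 mm diameter (about 2e7 cells for a homogeneous
    spherical tumor) are complete responses."""
    if min_lymphocytes < 1.4e8:       # grade-4 leukopenia floor
        return "too harsh"
    if final_size < 2e7:              # below clinical detection
        return "CR"
    if final_size <= initial_size:    # not growing, but still detectable
        return "PR"
    return "NR"                       # tumor continued to grow
```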
We compare the results of the simulated trials to those reported in [19,31,13,23,22]. See Table 7 for a summary of these clinical trial outcomes. Note that the published clinical trial results for cetuximab and panitumumab that we used for comparison reported results as "Response" or "No Response" almost exclusively, so for our clinical trial simulations of the commonly used treatments, we group PR and CR together under "Response". This facilitates comparison between our simulation results and the results of reported clinical trials. Monotherapy clinical trial simulations were performed for each of the three drugs used in our model. An irinotecan monotherapy clinical trial was simulated, using a common treatment regimen, and results were compared with clinical data. (The treatment details can be found in the "Treatments" section of the Supplementary Materials.) Irinotecan monotherapy simulations resulted in a total response of 18.7% (15.6% PR, 3.1% CR), versus an overall reported response rate of 30% (see Figure 2(A)). This is consistent with practice, since patients getting mAb treatments are often not very responsive to chemotherapy. Cetuximab and panitumumab clinical trials were also simulated, using the common treatment regimens found in "Treatments", to verify that the desired response rate was achieved. Parameter calculations for each mAb drug involved choosing a value for ψ that resulted in accurate clinical trial results for the mAb monotherapies, but verification of these values is important. Cetuximab monotherapy simulations matched the expected results with a total response rate of 10% (10% PR, 0% CR), versus an overall reported response rate of approximately 10% (see Figure 2(B)), and panitumumab monotherapy simulations matched the expected results with a total simulation response rate of 12.15% (10.9% PR, 1.25% CR), versus an overall reported response rate of 10-13% (see Figure 2(C)). Combination therapies, using either irinotecan and cetuximab or irinotecan and panitumumab, were also simulated. These simulations used the common treatments for each drug and gave the two treatments simultaneously. Our simulation results closely match published results for both cetuximab and panitumumab monotherapies. For irinotecan monotherapies, the reduced response seen in our simulations is intended, since the patients receiving mAb therapy are often not as responsive as most patients to other treatments. We do not currently have a way to adjust severity classification for the tumor based on patient health. A smaller tumor in a very sick patient can be just as dangerous as a larger tumor in a healthier patient. Therefore, when examining monotherapies, which do not have particularly damaging effects on the immune system, we measured responses after one week in order to capture the less dramatic and potentially transient effects, which could still be helpful to patients whose immune systems have not been severely compromised by treatments. However, in the case of combination treatments, we chose to wait longer after treatment before measuring results. The clinical trial studies summarized in Table 7 did not report when the tumor was measured after the last treatment, so we chose to measure tumor size four weeks after the final treatment for all simulations. Although many more patients experienced an initial drop in tumor size as a result of the combination treatments, this drop was frequently unhelpful to the patient because of the additional loss of immune strength associated with the harsher combination treatments. Our simulations match reported clinical trial results fairly closely (see Figure 3). The results from these simulations are also provided in Table 7, along with the associated clinical trials data.

Impact of Patient-Specific Response Parameters on Treatments

We also ran the model to simulate individual patients, using set values for the patient-specific parameters, to examine how the tumor and immune system interact with strong or weak responses to the medications. The results from these simulations were plotted as cell populations/concentrations versus time. In Figure 4, we first see how the initial tumor size determines whether the tumor ultimately shrinks or grows to carrying capacity in the absence of treatment. In our remaining simulations that include treatment, we ensure that the initial tumor size is chosen to be sufficiently large so that it would grow to carrying capacity in the absence of treatment. In Figure 5, we simulate irinotecan/cetuximab combination therapy, and can see how a modification in an individual's CD8+ T-cell response to tumor, via the response function D, affects treatment outcomes. We also simulated the tumor response to irinotecan/panitumumab combination therapy (figure not shown) with l = 1.6 and s = 7 × 10^−3, resulting in a moderate response D, and with l = 1.3 and s = 4 × 10^−3, resulting in a high response D. With the moderate D, the tumor will increase to carrying capacity with the cessation of treatment, but the stronger D allows the patient's immune response to eradicate the tumor. In Figure 6, we observe how individual tumor sensitivity to either chemotherapy or mAb therapy affects tumor size. In particular, we simulate four possible combinations: a strong response to both chemo and mAb therapy (A), a weak response to both chemo and mAb (B), a strong response to chemo but a weak response to mAb (C), and a strong response to mAb but a weak response to chemo (D).

Figure 4: A tumor with a small initial size will quickly shrink toward zero, while a tumor with a larger initial size will quickly grow to the carrying capacity of the system. All other initial values are the same in both simulations.

Figure 5: Tumor response to irinotecan/cetuximab combination therapy with l = 1.6 and s = 7 × 10^−3 (A), resulting in a moderate response D, and with l = 1.3 and s = 4 × 10^−3 (B), resulting in a stronger response D. With the moderate D, the tumor will increase to carrying capacity with the cessation of treatment, but the stronger D allows the patient's immune response to eradicate the tumor.

In order to explore which patient-specific parameters play a role in whether a patient will respond to treatment, the effect of the variable parameters d, l, s, K_T, and ψ was also examined.
The parameters d, l, and s were fixed at three sets of values drawn from the patient-specific parameters used for clinical trial modeling: a "weak D" response (d = 1.3, l = 2, s = 4 × 10^−2); a "moderate D" response (d = 1.6, l = 1.4, s = 8 × 10^−3); and a "strong D" response (d = 2.1, l = 1.1, s = 5 × 10^−3). The parameters K_T and ψ were then varied over their range from 0 to their maximum values, using cetuximab as the mAb drug, and the model was run for 28 days with each pair of values.

Figure 6: Tumor responses to combination therapy with irinotecan and panitumumab. When the tumor has a strong response (high K_T and ψ, 100% strength) to both medications (A), the tumor shrinks during the course of the treatment. When the tumor has a weak response (low K_T and ψ, 10% strength) to both medications (B), the tumor grows toward the carrying capacity. When the tumor has either a strong response to irinotecan and a weak response to panitumumab (C) or a weak response to irinotecan and a strong response to panitumumab (D), the tumor will fluctuate in size, but will stay approximately the same size overall during the treatment course.

Figure 9(A) shows that a patient with a weak inherent immune system cannot have a complete response, even with a full-strength response by the tumor to the chemotherapy and mAb treatments. A strong response by the tumor to either treatment will result in a partial response for the tumor overall. Figure 9(B) shows that a patient with a moderately strong immune system has a chance of overpowering the tumor and obtaining a complete response, provided the tumor responds strongly to both the chemotherapy and mAb drugs. More likely, however, the patient will have a partial overall response, resulting from a strong response by the tumor to only one medication, or no response at all. Figure 9(C) shows that a patient with a strong immune system has a good chance of a complete overall response, given a strong response by the tumor to either the mAb or chemotherapy treatments. However, the patient will still not respond to the treatment if the tumor is only weakly affected by both of the medications.
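A reduced version of this (K_T, ψ) sweep can be sketched as follows. For brevity, the snippet integrates only the tumor equation, with the immune-cell and drug levels held constant; that is a simplification of the full system, and the fixed levels and step size are illustrative choices of ours.

```python
import numpy as np

a, b, c = 2.31e-1, 2.146e-10, 5.156e-14   # tumor growth/kill parameters
d, l, s = 1.6, 1.4, 8e-3                  # "moderate D" immune profile
N, L, M, A = 9e7, 1e6, 1.0, 100.0         # held constant (illustrative levels)
K_T_max, psi_max = 0.81, 2.28e-2
delta_T = 0.2

def final_tumor(K_T, psi, T0=1e9, days=28.0, dt=1e-3):
    """Forward-Euler integration of the tumor equation alone."""
    T = T0
    for _ in range(int(days / dt)):
        D = d * (L / T) ** l / (s + (L / T) ** l)
        dTdt = (a * T * (1 - b * T) - c * N * T - D * T
                - K_T * (1 - np.exp(-delta_T * M)) * T - psi * A * T)
        T = max(T + dt * dTdt, 1.0)
    return T

for fK in (0.1, 1.0):          # fraction of maximal chemo response
    for fp in (0.1, 1.0):      # fraction of maximal mAb response
        print(fK, fp, final_tumor(fK * K_T_max, fp * psi_max))
```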
Clinical Trial Simulations for Hypothetical Treatments

Clinical trial simulations with hypothetical treatment combination regimens were also performed. We explored various timings and dosing levels of irinotecan in combination with cetuximab, and separately, irinotecan in combination with panitumumab. Many of the combination treatments we experimented with, which used different doses, dosing frequencies, and different start times for each medication, were not as successful at shrinking the tumor as the current standard treatments. However, we did find some treatment regimens which appear to result in a smaller final tumor size, one with each of the mAb medications. These results are shown in Figure 7. For comparison, we include one set of simulation results for tested treatments that can also be found in Table 7, as well as the results of the two separate hypothetical dosing schedules. In Figure 7, top panel, we compare population responses to three different combination doses of irinotecan combined with panitumumab, and in the bottom panel, we compare irinotecan combined with cetuximab.

Hypothetical Treatment 1: One hypothetical treatment improvement, found when using irinotecan combined with panitumumab, required no change in dosing levels, only a change in the timing of dose administration. In this case, we dose first with panitumumab, and then wait four days to begin irinotecan doses. Irinotecan is then continued every 7 days for the remainder of the treatment, while panitumumab continues to be administered once every two weeks, as with a standard dosing schedule. This treatment decreased the proportion of patients who did not respond to treatment from 14.4% to 8.4%, although it also decreased the proportion of patients who demonstrated a complete response from 18.1% to 11.4%. Simulation results can be seen in Figure 7, top panel. Since the medications are not being given at the same time, the patient may experience fewer simultaneous side effects with this treatment schedule. However, the treatment also requires the patient to make extra trips to the hospital for treatment administration.

Hypothetical Treatment 2: In Figure 7, top panel, we also show a second hypothetical treatment, in which the doses of both irinotecan and panitumumab are increased: the irinotecan dose is 2.8 times the standard dose, and the panitumumab dose is 1.5 times the standard dose. However, dosing frequency is decreased to once every three weeks for both medications. This results in a slightly higher complete response rate of 12.2%, and a partial response rate of 71.3%.

Hypothetical Treatment 3: In the third hypothetical scenario, shown in Figure 7, bottom panel, we look at irinotecan combined with cetuximab. In this case, we modify the dose timing only, and leave dose amounts at standard levels. We dose first with irinotecan, and follow up with a cetuximab dose four days later. This strategy was not particularly successful. The complete response rate was only 12.2%, as opposed to the 17.2% achieved by the standard dosing schedule.

Hypothetical Treatment 4: Treatment option 4 combines a higher dose of irinotecan and a higher dose of cetuximab, both administered less frequently than standard treatment would require. Results are pictured in Figure 7, bottom panel. Irinotecan is administered once every three weeks, and cetuximab is administered once every two weeks. Treatment lasts nine weeks, so the individual receives three irinotecan doses and four cetuximab doses. The use of these drugs at the higher doses, at least as monotherapies, has been reported in the literature [22,45,31]. The higher-dosed irinotecan/cetuximab combination increases the overall response rate from 98.9% for the standard treatment to 100%, and increases the complete response rate from 17.2% to 60.9%. Of all four hypothetical treatments presented, the high-dose irinotecan/cetuximab combination appears to be the most effective. In our simulations, the lymphocyte count stayed above a specified minimum, which is one way to measure the degree of immune system damage from the chemotherapy. With this treatment schedule, the medications are not always given in the same weeks, which keeps the tumor population low through frequent medication while potentially reducing side effects for the patient. However, this treatment schedule also requires that the patient receive medication every week, which may be an inconvenience (versus, for example, the treatment with irinotecan and panitumumab being given only every 3 weeks).
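Schedules like Hypothetical Treatment 1 are just pulse trains in v_M(t) and v_A(t); a sketch of encoding one follows. The dose days come from the schedule described above, the infusion rates are the ones derived in the Treatments appendix, and the helper name is ours.

```python
def infusion_rate(t, dose_days, rate, duration):
    """Piecewise-constant infusion: `rate` (mg/L/day) while an infusion of
    length `duration` (days) started on a day in `dose_days` is running."""
    return rate if any(0.0 <= t - t0 < duration for t0 in dose_days) else 0.0

# Hypothetical Treatment 1: panitumumab on day 0 and every two weeks after;
# irinotecan starts four days later and repeats weekly.
pmab_days = [0, 14, 28]
iri_days = [4, 11, 18, 25, 32]
v_A = lambda t: infusion_rate(t, pmab_days, rate=168.816, duration=1 / 24)  # 60-min infusion
v_M = lambda t: infusion_rate(t, iri_days, rate=57.95, duration=1.5 / 24)   # 90-min infusion
```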
Sensitivity to Parameters

Parameter sensitivity analysis was performed to determine which model parameters have the greatest effect on tumor size, both in the absence of treatment and with different treatments. We found seven parameters that significantly affected tumor size in our simulations. In order to separate short-term and long-term effects, we looked at tumor size seven days after initiation of the simulation, and again at twenty-eight days after initiation. In most cases, parameters that had a significant impact on tumor size at day seven were also significant at day twenty-eight. A full description of the parameters and their values can be found in the supplementary information, and in Tables 1-6, but we will briefly explain here the parameters found to be most significant. Each parameter value was individually increased and decreased by 5% while all other parameter values were held constant. Tumor size was measured at 7 days, when the tumor is still growing very quickly in our model, and at 28 days, when it is close to its maximum volume in our model. First, we analyzed parameter sensitivity in simulations with no treatments given, so treatment-related parameters did not affect simulation outcomes. Results for parameters with the most significant impact on outcomes are shown in Figure 8(A) and (B) at days 7 and 28. Note that, while b (which represents the inverse of the carrying capacity) is by far the most important parameter in determining final tumor size, a (the intrinsic tumor growth rate) is important in determining how quickly the tumor reaches its maximum volume. The parameter l, which affects the functional form of the CTL kill rate, has the most significant effect on non-medicated initial tumor growth of all the immune-system parameters. A sensitivity analysis with treatment-related parameters was then performed. For the irinotecan chemotherapy treatment parameters, the final tumor size was found to be very sensitive to K_T and δ_T, which determine the model's response to the chemotherapy drug, and to γ, which represents the excretion of the chemotherapy drug (see Figure 8(C) and (D)). Tumor regrowth between treatments was much more dependent on γ than was the decrease in tumor size following treatments. This makes sense, because when the chemotherapy remains in the body longer, it will be more effective at maintaining lower tumor volumes between treatments. We next tested the monoclonal antibody therapies, cetuximab and panitumumab, separately. Dose timings for cetuximab and panitumumab are different, so we measured parameter sensitivity according to the different lengths of a standard course of treatment for each treatment type. For cetuximab, we consider one course of treatment to be on days 0, 7, 14 and 21 (four treatments total, once per week over four weeks), whereas for panitumumab, one course of treatment is on days 0, 14, and 28 (three treatments total, once every other week). We then measured tumor size one week after the last dose of the treatment course. Therefore, long-term sensitivity for cetuximab-treated tumors was measured at day 28, and for panitumumab at day 35. In both cases, the final tumor size was found to be sensitive to ψ, the strength of the tumor's response to mAb drugs, and to η, the mAb turnover rate (see Figure 8(E-H)). This is reasonable, since the main anti-tumor activity of mAb medications is through interference with the ability of EGF to bind to EGFR on the tumor cell surface, an effect captured by the parameter ψ [19]. In the short term, cetuximab also shows some sensitivity to ξ and p_A, which are the parameters that determine the strength of ADCC activity. We note that a five percent change in all the remaining parameters negligibly affected final tumor size. In particular, the parameter K_AT, which represents the increase in effectiveness of chemotherapy when it is used in conjunction with mAb therapy, had very little effect on final tumor size. In this case, the final tumor size after 28 days changed by less than 0.5 percent with a five percent change in K_AT (figure not shown).
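The one-at-a-time perturbation scheme described above is straightforward to automate. The sketch below uses a closed-form logistic tumor as a toy stand-in for the full ODE system, so only a and b appear; all names are ours.

```python
import numpy as np

def run_model(params, days):
    """Toy stand-in for the full system: untreated logistic tumor growth,
    returning tumor size at each measurement day."""
    a, K, T0 = params["a"], 1.0 / params["b"], 1e7
    t = np.asarray(days, dtype=float)
    return K / (1.0 + (K / T0 - 1.0) * np.exp(-a * t))

def sensitivity(base, names, days=(7, 28), eps=0.05):
    """Perturb each named parameter by +/-5% and report the relative change
    in tumor size at each measurement day."""
    base_sizes = run_model(base, days)
    out = {}
    for name in names:
        for sign in (+1, -1):
            p = dict(base)
            p[name] *= 1.0 + sign * eps
            out[(name, sign)] = (run_model(p, days) - base_sizes) / base_sizes
    return out

print(sensitivity({"a": 2.31e-1, "b": 2.146e-10}, ["a", "b"]))
```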
Discussion

We have extended the mathematical model presented in [15] to include monoclonal antibody treatment. We have tuned the parameter values of the model to make them specific to colorectal cancer, the chemotherapy treatment irinotecan, and the monoclonal antibody treatments cetuximab and panitumumab. Two stable equilibrium states were found numerically: a no-tumor equilibrium and a large-tumor equilibrium. Tumors can be driven to either of these states in simulations, depending on the relative strength of the patient's immune system and the treatments given. Colorectal tumors can have a wide variety of mutations, and some of these mutations limit a medication's ability to function fully. The parameters K_T and ψ represent a range of different tumor responses to the same chemotherapy and mAb treatments. At the beginning of a simulation for an individual, values for these parameters can be chosen randomly from within prescribed biological ranges. Use of these randomly chosen variables allows us to replicate the population-level results seen in clinical trials. A clinical study can be simulated by numerically solving the model multiple times to represent each individual outcome in the study. In our simulations, we solved the model with 64 different combinations of patient parameters. When simulating individuals receiving mAb monotherapy, the resulting population-level response rates are quantitatively very close to the reported rates from clinical trials. The simulation response rate for irinotecan chemotherapy was lower than the response rates reported in [19]. We intentionally chose model parameters to yield this outcome. This is because we are assuming that our cohort of 64 individuals have already had chemotherapy with less success than would be seen in a general population, and are therefore in need of additional mAb therapy [13]. On the other hand, the cohort in [19] was from the general population. For combination therapies, our tumor population responded too well to the medication in the short term, although in the long term, our simulated responses matched experimental response rates well. The short-term over-responsiveness could be caused by inaccuracies in the model parameters or by time-frame differences in the reported response rates. One possible inaccuracy in the model is that the variability in tumor responses to medication may not be accurately represented by the random variables. Tumor cells that are not destroyed by one medication may also be less likely to respond to another medication; for example, cells in the center of the tumor have limited access to both medications. Because mAbs and chemotherapy drugs generally have very different targets and mechanisms, a mutation causing the tumor to be refractory to one medication will not necessarily cause it to be refractory to the other, but it is possible. If that were the case, one model improvement might be to use one random variable to represent the tumor's response to both medications, instead of two variables.
Most response rates are not actually reported with a time frame, so the response rates found with this model four weeks post-treatment may be more consistent with real-life measurements than the response rates from one week post-treatment. If this is the case, our model closely matches response rates for combination therapies as well, and the apparent over-responsiveness seen in our model with combination therapies is just caused by taking tumor measurements too early. Reported clinical trials for the dual treatments also often did not specify irinotecan dosing, whether the patients previously received treatment, or how long the treatment was given, so this may be responsible for part of the difference in response rates as well. Overall, our model gives a qualitatively good prediction of likely results for various dosing schedules. In many of the experimental treatments, particularly in the high-dose treatments, the simulated individuals' immune systems were also greatly weakened by the treatments, particularly by irinotecan. Thus, although the tumor cell population was greatly reduced by the treatments, the individual's immune system was still unable to destroy the remaining tumor cells. Although we did not find much information about the use of CD8+ T-cell treatments for colorectal cancer in the literature, the addition of this treatment during the chemotherapy and mAb drug courses could help to bolster the immune system and allow the patient's immune response to lyse tumor cells more effectively. This model, with the addition of a CD8+ T-cell treatment component, could be used to test this treatment hypothesis. The parameter sensitivity analysis yielded results that were intuitively reasonable. The analysis also serves to highlight which parameters could be possible targets for reducing tumor size. For example, if we can get a better sense, biologically, of how to influence l, a parameter that affects the functional form of the CTL kill rate, then a large decrease in l would result in an immune system that is able to conquer the tumor much more easily than one obtained by changing the other immune-system parameters. In the future, two modifications to this model could yield even more realistic outcomes. First, we could tailor the parameters K_T and ψ to have a more specific biological meaning. For example, the KRAS mutation is known to be present in about 40% of all colorectal tumors, and is known to reduce the effectiveness of mAb treatment to almost zero [18,2]. Information such as the EGFR counts on the tumor cells and the presence or absence of the KRAS mutation in an individual's tumor could allow for more personalized and specific parameter values, chosen from a smaller distribution based on features of the tumor cells, instead of from a larger random distribution. Second, an equation representing patient well-being could be very useful for predicting effective treatments. Although using the lymphocyte count allows us to determine that the patient's immune system has not been completely destroyed by the medication, it does not take into account factors such as the inconvenience of frequent treatments, or the fact that high doses of cytotoxic medication may result in side effects harmful to cells of the body other than immune cells (such as those of the stomach lining). There are several notable clinical observations that are important in informing the next stages of model development.
One of these is that tumor cells become resistant to chemotherapeutic drugs, making disease progression very sensitive to the timing and dosing used in treatments [6]. The expansion of the model to include a tumor population resistant to a particular drug would allow in silico testing of a variety of treatment scenarios. Another aspect of treatment to bear in mind is the effect of an individual's circadian fluctuations on the tumor's susceptibility to cytotoxic agents. These periodic fluctuations can be captured in our model by allowing time-varying parameters or by introducing delays into the model. Some models of colon cancer that do include circadian rhythms are discussed in [6] and in the references therein.

[Tables 1-6: Descriptions of the biological relevance of each term and parameter, and the parameter values, for the equations governing N(t) (NK cells), L(t) (CD8+ T cells), C(t) (other lymphocytes), I(t) (interleukin-2), and M(t) and A(t) (chemotherapy and mAb therapy); entries include, among others, the concentration of mAbs for half-maximal increase in ADCC, the loss of mAbs due to tumor-mAb binding, the rate of mAb-tumor cell complex formation (λ), and the concentration of mAbs for half-maximal EGFR binding. Superscript a: for cetuximab; superscript b: for panitumumab.]

B Parameters

In order to determine parameter values, we searched peer-reviewed literature for in vitro and in vivo studies of colorectal tumor growth that could provide data for the following cases: no treatment, chemotherapy treatment with irinotecan, mAb treatment with cetuximab, and mAb treatment with panitumumab. Some of the parameters used here are those found by de Pillis and colleagues [15], and their derivation is not repeated. The description and values for each parameter can also be found in Tables 1-6.

Initial Conditions

We determine initial conditions both for a healthy individual and for a colorectal cancer patient who has previously undergone treatment for the tumor. The initial values of N, L, C, and I can be determined for each individual by considering biological arguments for reasonable cell concentrations of patients with a "strong" and a "weak" immune system. The no-tumor equilibrium was found by considering a healthy individual with no tumor (T = 0) and receiving no cancer treatments (M = A = 0). Because we are assuming that healthy individuals are in homeostasis, we can set each time derivative equal to zero. N and C were found by assuming a lymphocyte count of 3.333 × 10^9 cells per liter of blood, which is within the range for a normal lymphocyte count, and assuming natural killer and CD8+ T cell counts to be 10 percent and < 1 percent, respectively [1]. This gives us C = 3.333 × 10^9 × 0.9 = 3 × 10^9 and N = 3.333 × 10^9 × 0.1 = 3.333 × 10^8. The values for L and I are taken from [15], in which L is derived from [39] and [47], and I is taken from [38] and information provided by [37].
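The bookkeeping behind these equilibrium concentrations is simple enough to check in a few lines (the rounding of 0.9 × 3.333 × 10^9 to 3 × 10^9 follows the text):

```python
# No-tumor equilibrium bookkeeping: a normal lymphocyte count split into
# ~10% NK cells and ~90% other circulating lymphocytes (activated CD8+
# T cells are <1% and neglected here).
total_lymphocytes = 3.333e9          # cells per liter of blood
N_eq = 0.1 * total_lymphocytes       # 3.333e8 cells/L
C_eq = 0.9 * total_lymphocytes       # ~3.0e9 cells/L (text rounds to 3e9)
e_over_f = N_eq / C_eq               # 1/9; reused below as the NK synthesis ratio
print(N_eq, C_eq, e_over_f)
```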
These calculations give us the values for our no-tumor equilibrium: T = 0, N = 3.333 × 10^8, C = 3 × 10^9, and M = A = 0, with L and I as taken from [15]. As discussed in Section A, these initial conditions correspond to a stable equilibrium state of the system. As shown in Figure 4, left panel, values starting close to these will be drawn toward this zero-tumor equilibrium. The large-tumor equilibrium was found by considering a healthy individual who has a large tumor but is not receiving any treatment (M = A = 0). We again set the time derivatives to zero under the assumption of homeostasis. Under conditions of an untreated tumor, we leave N and C at the same values, but use larger values for I and L, since the presence of a tumor increases the production of cytokines [15]. We take the values of I = 1173 and L = 5.268 × 10^5 from [15], in which L is taken from [29] and Janeway's book on Immunobiology [25]. The value of I is from [38] and information provided by [37]. With these initial values and the parameters that can be directly calculated from available literature, we solve for the size of a large tumor in equilibrium while solving for the parameter p in the section on NK cell parameters. Note that the resulting value, T = 4.65928 × 10^9, is slightly less than the theoretical carrying capacity of 4.66 × 10^9, which we find during the calculation of the parameter b in the section on tumor parameters. This is expected, because interactions with the immune system prevent the tumor from reaching its theoretical carrying capacity. These values give us the following large-tumor equilibrium: T = 4.65928 × 10^9, N = 3.333 × 10^8, L = 5.268 × 10^5, C = 3 × 10^9, M = 0, I = 1173, A = 0. These initial conditions also correspond to a stable equilibrium state of the system, as discussed in Appendix A. As illustrated in Figure 4, right panel, values starting close to these will be drawn toward this high-tumor equilibrium. Since the majority of the individuals we are considering have previously undergone various treatments and do not have very strong immune systems, we reduce the initial values for N, L, and C in our simulations. A normal leukocyte count is 4.5-11 × 10^9 cells/L, and lymphocytes can make up 16-46% of the total leukocytes [1]. Thus a normal lymphocyte count is 0.72-5.06 × 10^9 cells/L. We set the initial total lymphocyte count in our simulated individuals to 9.9 × 10^8 cells/L, a value within the normal range for lymphocyte concentration, but close to being low. Natural killer cells and activated CD8+ T cells interact more directly with the tumor than the other lymphocytes, so we assume that they are deactivated at a slightly higher rate. Thus NK cells constitute a slightly smaller percentage of the total lymphocytes than the normal value of 10%. We set N(0) to 9% of total lymphocytes, so N(0) = 0.09 × (9.9 × 10^8) ≈ 9 × 10^7; the NK cell population is thus reduced to roughly one quarter to one third of its no-tumor equilibrium value. We leave I = 1173 as the initial value for I. The initial value of T can be varied, and is stated with each simulation. These calculations give us the initial conditions for the "sick" populations in our model, with N(0) = 9 × 10^7 and I(0) = 1173 as derived above. These initial values represent patients who are not in homeostasis, and depending on the initial tumor size, the strength of interactions between the patient's immune system and the tumor, and whether any medication is given, their cell populations can be driven either to the no-tumor equilibrium or to the large-tumor equilibrium.
Sample conditions for a tumor that is reduced to the no-tumor equilibrium and for a tumor that grows to the large-tumor equilibrium are found in Figure 4.

dT/dt: Tumor cells

For a summary of the terms, parameters, and parameter values, see Table 1.

a = 2.31 × 10^−1 day^−1, the tumor growth rate, was calculated from the doubling time of colorectal tumors during exponential growth, which was found in [12] to be 3 days. We can calculate a from the equation for exponential growth with a doubling time of t = 3 days. So, 2T_0 = T_0 e^(at), giving us a = ln(2)/3 = 2.31 × 10^−1. This is approximately half of the value for a found by de Pillis's team for melanoma [15], but colon tumors are known to have slower growth rates than most cancers, so this is not an unreasonable value [8,9]. It is important to note that in [12] tumors were grown in non-immunodeficient mice, and our model considers patients who do not have a full-strength immune response; however, this was the only study in our literature search that provided the doubling time specifically during exponential growth. The growth rate that we calculated also agrees with the initial growth rates found in [30], who grew colon tumors in immunodeficient mice.

b = 2.146 × 10^−10 cells^−1 is the inverse of the carrying capacity. The theoretical carrying capacity (in volume) of colorectal tumors was taken from Leith and colleagues [30], who collected tumor growth data, fit them to the Gompertz equation, and found the maximum tumor size as t → ∞. The carrying capacity derived from the Gompertz model has the same biological interpretation as in our model, so we were able to use the results of [30] to find a value for b. Multiple carrying capacities were found from different colorectal tumor lines, with an average of approximately 10,000 mm^3 = 10^13 µm^3. This size was then converted to a cell population using 2145 µm^3 as the average tumor cell volume [11], giving 10^13 µm^3 / (2145 µm^3/cell) = 4.66 × 10^9 cells. Thus, b = (4.66 × 10^9 cells)^−1 = 2.146 × 10^−10.
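The arithmetic behind a and b is easy to reproduce (values from the passages above):

```python
import numpy as np

doubling_time_days = 3.0
a = np.log(2) / doubling_time_days           # ~2.31e-1 per day

carrying_capacity_um3 = 1e13                 # 10,000 mm^3 in cubic microns
cell_volume_um3 = 2145.0                     # average tumor-cell volume
carrying_capacity_cells = carrying_capacity_um3 / cell_volume_um3  # ~4.66e9
b = 1.0 / carrying_capacity_cells            # ~2.146e-10 per cell
print(a, b)
```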
c = 5.156 × 10^−14 L cells^−1 day^−1, the rate of NK-induced tumor death, is set equal to p (see the section on NK cell parameters), as was done in [15], under the assumption that when an NK cell kills a tumor cell, the NK cell also is deactivated. Recent research [7] suggests that natural killer cells may be able to kill up to six tumor cells before deactivation. However, we have not found further confirmation of this, and so have chosen to continue using the assumption that NK cells are only able to kill one tumor cell each.

D = d(L/T)^l / (s + (L/T)^l) is a patient-specific term that involves three parameters, to which we assign four separate values each in order to reflect a variety of patient-specific states. These parameters are: d (day^−1), the immune-system strength coefficient; l, the immune-system strength scaling coefficient; and s (L), the value of (L/T)^l necessary for half-maximal CD8+ T-cell effectiveness against tumor. We base our values for d, l, and s on the values of d ∈ {1.88, 2.34}, l ∈ {1.81, 2.09}, and s ∈ {3.5 × 10^−2, 3.8 × 10^−3} used in [15], and slightly weaken the patient immune system (represented by lowering d and l and raising s) to represent individuals who are not in good health from having gone through multiple cancer treatments. We use d ∈ {1.3, 1.6, 1.9, 2.1}, l ∈ {1.1, 1.4, 1.7, 2.0}, and s ∈ {4 × 10^−3, 7 × 10^−3, 9 × 10^−3, 3 × 10^−2}, which results in sixty-four different individual immune profiles over which we can run simulations to represent clinical trials.

ξ = 6.5 × 10^−10 L cells^−1 day^−1 for cetuximab, and 0 for panitumumab, is the rate of NK-induced tumor death through ADCC. The value for cetuximab was set to match the expected increase in NK cell activity found by Kurai and colleagues [28]. Kurai's team varied concentrations of tumor cells and NK cells, left them for 4 hours with and without 0.25 µg/mL cetuximab, and measured the resulting NK activity. They measured the activity at much higher concentrations of NK cells than are present in the body, but based on their results we approximated that at the ratio of one NK cell to ten tumor cells, NK activity is increased by 10 percent. We found an appropriate value for ξ by running simulations with varying values of ξ and simulating their experimental conditions: t = 4 hours, T_0 = 10^9, N_0 = (1/4) × T_0 = 2.5 × 10^8, and an initial treatment of 0.25 mg/L cetuximab over 15 minutes. The other immune system components, as well as natural growth and decay, were not included. A value of ξ = 6.5 × 10^−10 was found to give the desired 10 percent decrease in NK cells in this experiment, which we use as a proxy for an increase in NK activity of 10 percent. Panitumumab is unable to activate the ADCC pathway, so ξ is set to zero in that case [23].

h_1 = 1.25 × 10^−6 mg L^−1 for cetuximab, and 0 for panitumumab, is the concentration of mAbs necessary for a half-maximal increase in ADCC. The ADCC activity level indicated by ξ is reached when the cetuximab concentration is above 0.25 µg/mL, and so h_1 was set to 0.5 × 0.25 µg/mL = 1.25 × 10^−6 mg/L. Cetuximab levels in the body are usually above this threshold during treatment, and we have chosen to use a sigmoid function to capture this threshold. Although we do not have evidence to support that ADCC activity increases according to a saturation function, this model captures two important characteristics: that the threshold concentration for maximal ADCC activity is much lower than the normal cetuximab dose, and that the ADCC activity level approaches zero as mAb concentration approaches zero. We chose h_1 so that, when the cetuximab concentration is half of the threshold value, the term A/(h_1 + A) equals one half, resulting in half-maximal ADCC activity. Because panitumumab does not play a role in ADCC, panitumumab does not have an h_1 (h_1 = 0).

K_T = 8.1 × 10^−1 · Y day^−1, the rate of tumor-cell death from chemotherapy, where Y is a random variable with probability density function p(y) = (1/3)(1 − y)^(−2/3) for 0 ≤ y < 1. We chose this distribution since it is supported on [0, 1], has a high probability of being close to one, and a mean of E[Y] = 0.75. Therefore, K_T ∈ [0, 8.1 × 10^−1], and has a mean value of K_T = 6.075 × 10^−1. Note that, in the clinical trial simulations, each patient is assigned a single value for K_T, but a different K_T is randomly generated for each patient according to the distribution given above. The maximal value of K_T was calculated from in vitro data collected by Vilar and colleagues [49] on irinotecan concentration and growth reduction of various colon cancer cell strains. We chose to use values from the HT-29 cell line, in accordance with much of the literature that we reviewed. We estimated five coordinates from data in [49], which gave irinotecan concentration (in mol/L) versus growth of tumor cells, as a percentage of tumor cell growth with no irinotecan. Since the reported data was from an in vitro study run over the course of only a few days, we set all but tumor size and chemotherapy concentration to zero and assumed that the natural cell death was zero. We also assumed that the chemotherapy concentration would be held constant, so dM/dt = 0.
Thus the differential equation for the tumor population becomes dT/dt = aT(1 − bT) − K_T(1 − e^(−δ_T M))T. We converted the irinotecan concentration at each point to units of mg/L using 677 g/mol as the molecular weight of irinotecan [48]. We then used tumor sizes and chemotherapy concentrations from each data point reported in [49] to write five equations with δ_T and K_T as unknowns. Since the system is overdetermined (five equations, two unknowns), we chose values for δ_T and K_T that produced a reasonable fit. We found δ_T = 0.2 and K_T ≈ 0.85. K_T was then separately confirmed by running multiple simulations with our set of patient-specific parameter values to look for a tumor response rate of approximately 15-20% after 6 weeks of treatment. The reported response rate (the percentage of patients whose tumor was not larger after treatment) for irinotecan is around 30 percent; however, patients receiving mAb treatment have usually already received a variety of chemotherapy treatments and did not respond strongly to them, so we aimed for a response rate lower than this [19]. These simulations confirmed that a value of K_T = 0.81 gives an average response rate of approximately 19 percent.
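The overdetermined fit for K_T and δ_T can be reproduced in outline with a standard least-squares routine. The five data points below are placeholders rather than the values digitized from [49], and reading "growth as a percentage of untreated growth" as a ratio of net per-capita growth rates is our interpretation; the snippet only illustrates the method.

```python
import numpy as np
from scipy.optimize import least_squares

M_mgL = np.array([0.07, 0.7, 3.4, 6.8, 13.5])   # irinotecan, mg/L (hypothetical)
growth = np.array([0.9, 0.6, 0.35, 0.2, 0.1])   # growth vs. untreated (hypothetical)
a = 2.31e-1                                      # tumor growth rate, per day

def residuals(params):
    K_T, d_T = params
    # With dM/dt = 0 and small T, the per-capita growth rate is
    # a - K_T*(1 - exp(-d_T*M)); relative growth vs. untreated is the
    # ratio of the treated rate to the untreated rate a.
    pred = (a - K_T * (1.0 - np.exp(-d_T * M_mgL))) / a
    return pred - growth

fit = least_squares(residuals, x0=[0.5, 0.1], bounds=([0.0, 0.0], [2.0, 2.0]))
print(fit.x)  # fitted (K_T, delta_T)
```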
K_AT = 4 × 10^−4 L mg^−1 day^−1 for both cetuximab and panitumumab, is the additional chemotherapy-induced tumor death due to mAb-tumor interactions. Even with K_AT set to zero, our simulation response rates are much higher than those reported in clinical trials; however, it is known that mAb therapy can help to increase chemotherapy responses in tumors, and even restore partial response in chemotherapy-refractory tumors, so we have chosen to give K_AT the non-zero value K_AT = 4 × 10^−4 [19]. At maximal mAb concentrations, which are on the order of 10^2 mg, this results in a modest increase in chemotherapy activity.

δ_T = 2 × 10^−1 L mg^−1, the medicine efficacy coefficient, was found as part of the calculation for K_T.

ψ = 2.28 × 10^−2 Y L mg^−1 day^−1 for cetuximab and 2.58 × 10^−2 Y L mg^−1 day^−1 for panitumumab is the rate of mAb-induced tumor death, where Y is a random variable with probability density function p(y) = (1/3)(1 − y)^(−2/3) for 0 ≤ y < 1. Therefore, ψ ∈ [0, 2.28 × 10^−2] for cetuximab, with a mean value of 1.71 × 10^−2, and ψ ∈ [0, 2.58 × 10^−2] for panitumumab, with a mean value of 1.94 × 10^−2. As with K_T, multiplying the maximum value for ψ by a random variable between zero and one allows us to represent that each tumor has a different response to treatments. Each patient (each simulation) has one constant value for ψ, but a different ψ is randomly generated for every patient. The maximum value of ψ was found by running simulations of mAb therapy over a range of possible values for ψ, using the full set of patient-specific parameters. The values of ψ we chose yielded a 10% response rate for cetuximab at four weeks, and a 12.2% response rate for panitumumab at six weeks. These response rates reflect those reported in [19].

dN/dt: Natural killer cells

For a summary of the terms, parameters, and parameter values, see Table 2.

e/f = 1/9, the ratio of the NK cell synthesis rate to the turnover rate, is found using the same method as was used in [15]. The value for e/f is found by assuming the no-tumor equilibrium, thus setting T = 0 and setting Equation 2.2 to zero. We then ignore the term p_N N I/(g_N + I), which has only a very small effect on NK proliferation. This gives us f((e/f)C − N) = 0, and so e/f = N/C. As in the equilibrium calculations, NK cells make up approximately 10 percent of all lymphocytes, and the T cell count is negligible, giving us 10%/90%, or 1/9 [1].

f = 1 × 10^−2 day^−1, the rate of NK cell turnover, is based on the value of f = 1.25 × 10^−2 found by de Pillis and colleagues [15]. We lowered the value slightly to agree with our assumption of a patient with a weakened immune system, whose body may not be able to produce new cells as quickly as a normal healthy individual.

g_N = 2.5036 × 10^5 IU L^−1, the concentration of IL-2 needed for half-maximal NK cell proliferation, is unchanged from the value found in [15].

p_N = 5.13 × 10^−2 day^−1, the rate of IL-2 induced NK cell proliferation, is calculated using the same method as in [15]. They use data from [35] to find that 5.0073 × 10^4 IU stimulates NK cells to reach a count of 2.3 × 10^9 cells, and so using these as I and N respectively and assuming T = 0, we then set Equation 2.2 equal to zero and solve for p_N, giving p_N = f(N − (e/f)C)(g_N + I)/(N I). Using C = 3 × 10^9 from our no-tumor equilibrium and the previously calculated values for e, f, and g_N, we find that p_N = 5.13 × 10^−2.
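A quick numerical check of the p_N derivation above, using the quoted equilibrium quantities (with T = 0 the tumor-interaction terms vanish):

```python
# Solving f*((e/f)*C - N) + p_N*N*I/(g_N + I) = 0 for p_N.
f_rate, e_over_f = 1e-2, 1.0 / 9.0
C, N = 3e9, 2.3e9             # cells/L
I, g_N = 5.0073e4, 2.5036e5   # IU/L

p_N = f_rate * (N - e_over_f * C) * (g_N + I) / (N * I)
print(p_N)  # ~5.13e-2 per day, matching the quoted value
```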
p = 5.156 × 10^−14 L cells^−1 day^−1, the rate of NK cell death due to tumor interaction, is calculated using the same method as in [15]. We consider the large-tumor equilibrium with no medication, assume (as explained in the calculation for c) that p = c, and can thus set Equations 2.1 and 2.2 equal to zero and solve for T and p. We were then able to use the values for p_N, g_N, e, f, a, b, the equation for D with the moderate patient-specific parameter values of d = 1.9, l = 1.6, and s = 7 × 10^−3, and the state values for the immune system populations from the large-tumor equilibrium to find that T = 4.65928 × 10^9 in the large-tumor equilibrium and p = 5.156 × 10^−14.

p_A = 6.5 × 10^−10 L cells^−1 day^−1 for cetuximab and 0 for panitumumab is the rate of NK cell death due to interactions with mAb-tumor complexes. We set p_A = ξ, under the approximation used for the calculation of parameter c that for each tumor cell killed through ADCC, one NK cell also dies.

K_N = 9.048 × 10^−1 day^−1, the rate of NK depletion from chemotherapy toxicity, is calculated using the same method as in [15], by linearly scaling K_C by the ratio of cell metabolic rates.

δ_N = 2 × 10^−1 L mg^−1, the chemotherapy toxicity coefficient, is assumed to equal δ_T. The drug has a different efficacy (K) for each cell type, but we assume that a similar concentration of irinotecan is needed to affect each cell, regardless of cell type [15].

dL/dt: CD8+ T cells

For a summary of the terms, parameters, and parameter values, see Table 3.

m = 5 × 10^−3 day^−1, the rate of activated CD8+ T-cell turnover, is based on the value of m = 9 × 10^−3 found by de Pillis and colleagues [15]. We lowered the value to agree with our assumption of a patient with a weakened immune system, whose body may not be able to produce new cells as quickly as a normal healthy individual.

θ = 2.5036 × 10^−3 IU L^−1, the concentration of IL-2 needed to halve CD8+ T-cell turnover, is unchanged from de Pillis and colleagues [15].

q = 5.156 × 10^−17 cells^−1 day^−1, the rate of CD8+ T-cell death due to tumor interaction, is set equal to p × 10^−3 because, as de Pillis and colleagues [15] point out, we expect q to be approximately three orders of magnitude less than p, since L is approximately three orders of magnitude less than N.

r_1 = 5.156 × 10^−12 cells^−1 day^−1, the rate of NK-lysed tumor cell debris activation of CD8+ T cells, is calculated using the same method as in [15]. We set r_1 = 100 × c, based on the approximation that a lysed tumor cell can stimulate 10-300 T cells per day [15].

r_2 = 1 × 10^−15 cells^−1 day^−1, the rate of CD8+ T-cell production from circulating lymphocytes, is based on the value of r_2 = 5.8467 × 10^−13 found by de Pillis and colleagues [15]. We reduced it from the value in [15] to reflect that a weakened immune system may not be able to produce activated CD8+ T cells as effectively.

p_I = 2.4036 day^−1, the rate of IL-2 induced CD8+ T-cell activation, was found using the same method as in [15]. A system of equations was created by considering the no-tumor equilibrium and the large-tumor equilibrium. Setting Equation 2.3 to zero and using these two sets of initial values for T, N, L, C, and I, we can obtain two equations, each with p_I and u as unknowns, and thus solve for the p_I and u necessary to satisfy the equilibrium conditions.

g_I = 2.5036 × 10^3 IU L^−1, the concentration of IL-2 necessary for half-maximal CD8+ T-cell activation, is unchanged from the value found in [15].

u = 3.1718 × 10^−14 L^2 cells^−2 day^−1, the CD8+ T-cell self-limitation feedback coefficient, is obtained from the system of equations used to calculate p_I.

κ = 2.5036 × 10^3 IU L^−1, the concentration of IL-2 needed to halve the magnitude of CD8+ T-cell self-regulation, is unchanged from the value found in [15].

j = 1.245 × 10^−4 day^−1, the rate of CD8+ T-cell lysed tumor cell debris activation of CD8+ T cells, is based on the value of 1.245 × 10^−2 found by de Pillis and colleagues [15], and was decreased to indicate that the weak immune system may not be able to activate CD8+ T cells as effectively.

k = 2.019 × 10^7 cells, the tumor size for half-maximal CD8+ T-cell lysed tumor debris CD8+ T-cell activation, is unchanged from the value found in [15].

K_L = 4.524 × 10^−1 day^−1, the rate of CD8+ T-cell depletion from chemotherapy toxicity, is found in the same way as we found K_N: using the method from [15], by linearly scaling K_C.

δ_L = 2 × 10^−1 L mg^−1, the chemotherapy toxicity coefficient, is found in the same way as δ_N, with the assumption that it is equal to δ_T [15].

dC/dt: Lymphocytes

For a summary of the terms, parameters, and parameter values, see Table 4.

α/β = 3 × 10^9 cells L^−1, the ratio of the rate of circulating lymphocyte production to the turnover rate, is obtained from the steady-state assumption dC/dt = 0 in a healthy, tumor-free individual. Considering Equation 2.4 with M = 0, we find that α/β = C, where C = 3 × 10^9 refers to the equilibrium value of C in the no-tumor equilibrium.

β = 6.3 × 10^−3 day^−1, the rate of lymphocyte turnover, is unchanged from the value found in [15].

K_C = 5.7 × 10^−1 day^−1, the rate of lymphocyte depletion from chemotherapy toxicity, was calculated to reproduce the results given by Catimel and colleagues [10] on the number of patients with leukopenia after irinotecan treatments. Catimel's team found that when 100 mg/m^2 was given to patients daily for three days, three out of eleven patients had leukopenia, and when 115 mg/m^2 was given daily for three days, four out of ten patients had leukopenia.
A patient is considered to have leukopenia when the leukocyte count drops below 1.9 × 10^9 [50], and as discussed in Section B, lymphocytes can comprise up to 46% of leukocytes, so the highest possible lymphocyte count in a patient with leukopenia is 46% of 1.9 × 10^9, or 8.74 × 10^8 cells. We assume that the lymphocyte counts of all patients drop equally, so that those who begin with a lower lymphocyte count become leukopenic, while those who begin with a higher lymphocyte count have a reduced cell count but remain within the normal range. So, the lowest three elevenths of patients will have lymphocyte levels below 1.904 × 10^9 cells, and the lowest four tenths of patients will have lymphocyte levels below 2.456 × 10^9 cells. We ran simulations considering only lymphocyte counts, with irinotecan delivered once daily over 1.5 hours for a total of 3 days, and found a value for K_C that made an initial lymphocyte count of 1.904 × 10^9 drop to approximately 8.74 × 10^8 with a 100 mg/m^2 dose and an initial lymphocyte count of 2.456 × 10^9 drop to approximately 8.74 × 10^8 with a 115 mg/m^2 dose. The two doses resulted in K_C values of 0.52 and 0.63 respectively, so these were averaged to find K_C = 0.57.

δ_C = 2 × 10^−1 L mg^−1, the chemotherapy toxicity coefficient, is found in the same way as δ_L, with the assumption that it is equal to δ_T [15].

dI/dt: Interleukin

For a summary of the terms, parameters, and parameter values, see Table 5.

µ_I = 11.7427 day^−1, the rate of excretion and elimination of IL-2, is unchanged from the value found in [15].

ω = 7.88 × 10^−2 IU cells^−1 day^−1, the rate of IL-2 production from CD8+ T cells, is calculated using the same method as was used in [15], from the no-tumor and large-tumor equilibria. dI/dt is set to zero, and the known parameters and initial values are used to find two equations with the two unknowns ω and φ. We then solve for these two parameters.

φ = 1.788 × 10^−7 IU cells^−1 day^−1, the rate of CD4+ and naive CD8+ T-cell IL-2 production, is found as part of the system of equations solving for ω.

ζ = 2.5036 × 10^3 IU L^−1, the concentration of IL-2 for half-maximal CD8+ T-cell IL-2 production, is unchanged from the value found in [15].

dM/dt: Irinotecan chemotherapy treatment

For a summary of the terms, parameters, and parameter values, see Table 6.

γ = 4.077 × 10^−1 day^−1, the rate of excretion and elimination of the chemotherapy drug, is calculated using the assumption of exponential decay from the drug's reported half-life.

dA/dt: Monoclonal antibody treatment

For a summary of the terms, parameters, and parameter values, see Table 6.

η = 1.386 × 10^−1 day^−1 for cetuximab and 9.242 × 10^−2 day^−1 for panitumumab is the rate of mAb turnover and excretion. The parameter η is calculated using the assumption of exponential decay from the half-life: for cetuximab, η = ln(2)/5 = 0.139 [23], and for panitumumab, the half-life in tissue is 7.5 days, so η = ln(2)/7.5 = 0.092 [23].

λ = 8.9 × 10^−14 mg cells^−1 L^−1 day^−1 for cetuximab and 8.6 × 10^−14 mg cells^−1 L^−1 day^−1 for panitumumab is the rate of mAb/tumor-cell complex formation. Average cells have around 20,000 EGFRs [3]. The binding affinity of cetuximab is 400 pM (picomolar, which measures the ratio of the concentration of unbound molecules to the concentration of bound molecules), and for panitumumab it is 50 pM [21]. We first consider cetuximab, which has a molecular weight of 152 kDa = 152 × 10^6 mg/mol [42]. A binding affinity of 400 pM means that 400 pM = [cetuximab][EGFRs]/[cetuximab-EGFR complexes]. We first convert the binding affinity to molar units: 400 pmol/L × (1 mol / 10^12 pmol) = 4 × 10^−10 mol/L. So, for each free cetuximab molecule and free EGFR, there are 2.5 × 10^9 cetuximab-EGFR complexes. So, out of the 20,000 EGFRs per cell, we expect fewer than one (8 × 10^−6) EGFR per cell to be free. Thus we will assume that all EGFRs are filled. We can convert this back into the concentration of cetuximab lost per tumor cell: 20,000 mAbs × (1 mol / 6 × 10^23 mAbs) × (152 × 10^6 mg / 1 mol) × (1 / 57 L) = 8.9 × 10^−14 mg/L. Thus, for cetuximab, λ = 8.9 × 10^−14. For panitumumab, we perform a similar computation, using instead panitumumab's binding affinity and its molecular weight of 147 kDa = 147 × 10^6 mg/mol [43]; the same steps give λ = 8.6 × 10^−14.
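The λ computation amounts to converting a full receptor load per cell into an mAb mass spread over the body volume; a few lines reproduce it (constants as quoted in the text):

```python
AVOGADRO = 6e23              # molecules per mole, as used in the text
RECEPTORS_PER_CELL = 20_000  # EGFRs per average cell
BODY_VOLUME_L = 57.0

def mab_loss_per_cell(mw_mg_per_mol):
    """mAb mass (mg/L) removed per tumor cell if every EGFR is occupied."""
    moles = RECEPTORS_PER_CELL / AVOGADRO
    return moles * mw_mg_per_mol / BODY_VOLUME_L

print(mab_loss_per_cell(152e6))  # cetuximab:   ~8.9e-14
print(mab_loss_per_cell(147e6))  # panitumumab: ~8.6e-14
```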
λ = 8.9 × 10^−14 mg cells^−1 L^−1 day^−1 for cetuximab and 8.6 × 10^−14 mg cells^−1 L^−1 day^−1 for panitumumab is the rate of mAb/tumor-cell complex formation. Average cells have around 20,000 EGFRs [3]. The binding affinity of cetuximab is 400 pM (picomolar; this dissociation constant measures the ratio of the concentrations of unbound molecules to the concentration of bound complexes), and for panitumumab it is 50 pM [21]. We first consider cetuximab, which has a molecular weight of 152 kDa = 152 × 10^6 mg/mol [42]. A binding affinity of 400 pM means that 400 pM = [cetuximab][EGFRs]/[cetuximab-EGFR complexes]. We first need to find the number of cetuximab-EGFR complexes per cell. Converting the affinity to molar units: 400 pmol/L × (1 mol / 10^12 pmol) = 4 × 10^−10 mol/L. So, for each pairing of a free cetuximab molecule and a free EGFR, there are 2.5 × 10^9 cetuximab-EGFR complexes. So, out of the 20,000 EGFRs per cell, we expect fewer than one (8 × 10^−6) EGFR per cell to be free. Thus we assume that all EGFRs are filled. We can convert this back into the concentration of cetuximab lost per tumor cell: 20,000 mAbs × (1 mol / 6 × 10^23 mAbs) × (152 × 10^6 mg / 1 mol) × (1 / 57 L) = 8.9 × 10^−14 mg/L. Thus, for cetuximab, λ = 8.9 × 10^−14. For panitumumab, we perform a similar computation, using instead panitumumab's binding affinity and its molecular weight of 147 kDa = 147 × 10^6 mg/mol [43]. We first find the number of panitumumab-EGFR complexes per cell; the analogous conversion then gives λ = 8.6 × 10^−14.

h_2 = 4.45 × 10^−5 mg L^−1 for cetuximab and 4.3 × 10^−5 mg L^−1 for panitumumab is the concentration of mAbs for half-maximal EGFR binding. We first consider cetuximab, and use 10^9 as the number of tumor cells and 57 L as the volume of an average person [15]. Assuming that 20,000 cetuximab molecules bind to each tumor cell, we want to find the concentration in mg/L at which the EGFRs are saturated: 20,000 mAbs × 10^9 cells × (1 mol / 6 × 10^23 mAbs) × (152 × 10^6 mg / 1 mol) × (1 / 57 L) ≈ 8.9 × 10^−5 mg/L; half of this saturating concentration gives h_2 = 4.45 × 10^−5 mg/L.

Treatments

In this section we show the calculations performed to find the treatment functions (v_M and v_A) for the most common treatment schedules. We also used other dosing schedules in section 3, but the methods for computing them were the same as those shown here. Unless otherwise noted, the treatment regimens have been adapted from De Vita's book titled Cancer: Principles and Practice of Oncology [19].

Irinotecan Treatments

v_M (mg/L/day) has been changed to fit a common treatment regimen for irinotecan. A 125 mg/m^2 dose of irinotecan is usually given over 90 minutes once weekly, and in our simulations it is given for 4 weeks. We assume 1.73 m^2 to be the average surface area of an adult [40]. Because the medication quickly leaves the blood stream, we use 59.71 L, the average volume of an adult, as the volume over which the medication is spread [15]. So, each dose infuses 125 mg/m^2 × 1.73 m^2 = 216.25 mg into 59.71 L over 90 minutes (1/16 day), giving v_M(t) ≈ 57.9 mg/L/day while an infusion is running and 0 otherwise. We check for treatments at time (t − 2/24) because irinotecan needs to be converted by the body into its active form, SN-38, and SN-38 levels reach their peak two hours after irinotecan levels [19].

Cetuximab Treatments

For cetuximab, a loading dose of 400 mg/m^2 is usually given over two hours, followed by a weekly 250 mg/m^2 dose over 60 minutes. Cetuximab is given on a six-week periodic schedule, during which it is given weekly for the first four weeks, then not given for two weeks. We assume the same surface area and volume as in the previous section. For the loading dose, we would like to infuse 400 mg/m^2 × 1.73 m^2 = 692 mg into 59.71 L over two hours, giving approximately 139.1 mg/L/day; the weekly 250 mg/m^2 dose over 60 minutes similarly gives approximately 173.9 mg/L/day. As with irinotecan, v_A(t) takes these values if treatment was given at time t, and is 0 if treatment was not given at time t.

Panitumumab Treatments

The value of v_A for panitumumab was found in the same way, except that panitumumab does not require a loading dose. We assume a treatment regimen of 6 mg/kg every two weeks, for a total of three treatments. We assume that the medication is given over 60 minutes, and that an average adult weighs 70 kg [32]. This gives us v_A(t) = 168.816 mg/L/day if treatment was given at time t, and 0 if treatment was not given at time t.
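The unit conversions above reduce to a few lines of arithmetic; a minimal sketch using only values stated in the text:

```python
# Reproduce lambda (mAb mass bound per tumor cell, as a blood concentration)
# and the panitumumab infusion rate v_A from the stated inputs.
AVOGADRO = 6e23        # molecules per mole, rounded as in the text
EGFR_PER_CELL = 20_000
BLOOD_L = 57.0

def lam(mw_mg_per_mol):
    # mg of antibody consumed per tumor cell, per litre of blood
    return EGFR_PER_CELL / AVOGADRO * mw_mg_per_mol / BLOOD_L

print(lam(152e6))  # cetuximab:   ~8.9e-14 mg/L per cell
print(lam(147e6))  # panitumumab: ~8.6e-14 mg/L per cell

# Panitumumab dosing: 6 mg/kg x 70 kg over 60 minutes into 59.71 L
print(6 * 70 / 59.71 / (1 / 24))  # ~168.8 mg/L/day, matching v_A above
```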
Response rates for common treatment schedules from clinical trials and from our simulations.
a Abbreviations: Pmab=panitumumab; Cmab=cetuximab; q1w=every week; q2w=every two weeks; q3w=every three weeks; load=loading dose; N=number of patients; NR=no response; R=response; NP=not provided.
b Most response rates for irinotecan found in the literature, including this one, are for irinotecan as a first-line treatment. However, patients receiving mAb therapy are usually receiving it because they did not respond well to chemotherapy [13].
c Irinotecan dosing schedule was varied during the study.
d The first response rates (RRs) are measured 7 days after completion of the first treatment; the second RRs for each are measured 4 weeks after treatments have ended.
Experimental Research on Rapid Fire Zone Sealing and Explosion Venting Characteristics of an Explosion Venting Door Using a Large-Diameter Explosion Pipeline

To study the influence of an explosion venting door on gas explosion characteristics and verify its venting effect and fast sealing performance, a large-sized explosion pipeline experimental system was used. Gas explosion tests were carried out at gas concentrations of 5.5, 7.5, 9.5, and 11.5%. The gas explosion characteristic parameters were measured by a data acquisition system, and the laws of change in these parameters and the flame-proof effect were analyzed. The results showed that the pressure peak was attenuated by 42.25, 50.54, 53.27, and 52.88% under the four working conditions, respectively. As the gas volume fraction increased, the peak explosion pressure decayed as a quadratic function, and the average closing time of the fire zone was 13 h. This showed that the explosion venting door had significant explosion venting characteristics and could quickly close the fire zone. The law of temperature change was essentially the same regardless of gas concentration, and the explosion venting door had no inhibitory effect on the gas explosion flame. Under the four operating conditions, the maximum average values of the flame propagation speed were 103.56, 105.73, 136.67, and 138.34 m/s. These results provide theoretical support for explosion-proof technology and emergency rescue technology in coal mines.

INTRODUCTION

In China, 92% of coal production comes from underground mining, with an average mining depth of nearly 500 m. The geological conditions of underground coal seams are complicated. As the mining depth increases, the coal seam gas content and the risk of spontaneous combustion also increase. When a fire accident occurs in a coal mine and the fire cannot be extinguished directly, the fire area needs to be closed. When gas accumulation and spontaneous combustion occur at the same time, a gas explosion can be triggered during the sealing process. 1−3 Gas explosion is one of the major disasters in coal mines: it causes many casualties and much damage to roadways and facilities, and it is likely to cause secondary explosions and secondary disasters, which further aggravate the severity of the damage and severely constrain safe coal production. The effective use of explosion-proof technology and equipment is therefore of great significance for reducing the destructive power of disasters and reducing casualties. 4−6 At present, the existing explosion-proof devices in coal mines mainly include rock powder sheds, explosion-proof walls, explosion-proof water curtains, explosion-proof doors, and fire doors. 7−9 Scholars have actively studied these technologies. Liu et al. installed explosion venting doors of different masses in a transparent glass cavity and found that the greater the mass of the explosion venting door, the longer it took to open. 10 Xie et al. found that as the area of the explosion vent increased, the pressure peak and temperature peak decreased and the flame propagation speed increased. 11 Zhang et al. found through inert gas suppression experiments that the explosion suppression effect of carbon dioxide was better than that of nitrogen; 12 when the gas volume fraction was higher, the suppression effect was more significant. Fan et al. and Yang et al.
found that water mist can effectively attenuate the propagation of explosion shock waves and flame waves. 13,14 Zhou et al. used simulation software to study the effect of an explosion venting door on gas explosions and concluded that the explosion venting door could effectively suppress the explosion shock wave. 15 Wang et al. studied the explosion venting characteristics of the explosion venting door under different opening methods and found that an opening criterion based on the pressure ratio performed better than one based on the Mach number. 16 Pei et al. and Yu et al. developed a CO2-ultrafine water mist explosion venting device; 17,18 the explosion suppression effect of gas−liquid coupling was better than that of single suppression agents, and no explosion-promoting phenomenon occurred. Wei et al. designed a pressure-relief explosion-proof door that opened the explosion vent when the blower stopped and reset it when the blower ran. 19 Gao et al. designed a closed explosion-proof system for coal mines composed of an airtight buffer system, an airtight explosion-proof door and wall system, a sprinkler barrier system, and an airtight explosion-proof exhaust system; experiments showed that the system could maintain stable normal temperature and pressure while preventing the entry of toxic and harmful gases. 20 Sun et al. and Shu et al. developed a PLC control system to automatically isolate the explosion venting door and block the airflow, addressing the current air door's lack of an explosion venting function and of safe escape channels. 21,22 Zhang designed a new type of foam ceramic flame-proof shed to address the disadvantages of the current flame-proof water shed; the study found that the shed could effectively suppress the propagation of explosion shock waves while remaining within the safety limits for personnel and underground equipment. 23 Rong et al. built a purely mechanical flame-retardant and flame-proof device to solve the problems of the explosion-proof water bag and the rock powder shed, and experiments proved the effectiveness of the device. 24 Sun et al. established a similar simulation experiment model and conducted experiments on the law of gas explosion propagation with and without an explosion vent door, which proved the effectiveness of the explosion vent door. 25 Through the efforts of these scholars, new measures and methods for preventing and controlling gas explosion accidents have been developed, such as new fireproof materials, 26−28 inhibitors, 29−35 and explosion suppression technologies, 36−40 and these technologies have been widely promoted and applied at coal mine sites. However, these technologies have many shortcomings in addressing secondary explosions, continuous explosions, and secondary disasters. Among them, the rock powder shed has high sensitivity and a low trigger pressure: when a gas explosion occurs, the pre-shock wave activates the shed, so the detonation suppressant is released prematurely and cannot suppress the delayed shock wave. 41,42 The airtight wall is the main technology for enclosing a fire area, but it takes a long time to build and construction is quite dangerous; there is no rapid pressure-relief device or emergency escape channel, and workers can easily find themselves in an explosive environment during construction.
43 Single-function explosion-proof doors and explosion-proof water curtains cannot be reused after venting, and there is a risk of secondary explosions and secondary disasters when fresh air enters. 44−46 Because of these drawbacks, it is necessary to develop a venting technology that can be reused and can close the fire area. In this work, a methane−air (gas) mixture was selected as the explosive medium, and a large-scale gas explosion pipeline experimental system was built in-house. Explosion experiments were carried out with four different volume fractions of gas under the action of the explosion vent, and a data acquisition system was used to collect the gas explosion characteristic parameters. The influence of the different gas volume fractions on the explosion pressure, flame temperature, and flame propagation speed was compared and analyzed, and the pressure peak, temperature peak, and explosion venting mechanism were examined. These results are particularly important for further research on explosion prevention and emergency rescue technology, especially for the disaster recovery of ventilation systems.

2.1. Large-Diameter Explosion Pipeline Experimental System. The large-diameter explosion pipeline experimental system included an explosion tank and an explosion pipeline, as shown in Figure 2. The explosion pipe had a total length of 17.5 m and an inner diameter of 0.61 m; it was a circular pipe with a length-to-diameter ratio of 29. The entire explosion pipeline was equally divided into five sections. A circular flange was used to connect adjacent pipelines, with a sealing ring set in the middle of each flange to prevent leakage; the fifth section of the pipeline opened to the outdoors, and the pipeline was placed on a pulley bracket. The explosion tank simulated the spontaneous-combustion fire area and the explosion source, and the first and second sections of the pipeline simulated the mining face and the enclosed area of the roadway.

2.2. High-Performance Data Acquisition System. The high-performance data acquisition system was composed as follows. (1) The explosion pressure acquisition system adopted a 2200V1 piezoelectric high-sensitivity sensor manufactured by Dytran and a PCI-1712L pressure data acquisition device. (2) The explosion flame temperature acquisition system adopted a DT9805 temperature data acquisition module and a C2-7-K thermocouple produced in the United States. (3) The flame propagation velocity acquisition system used AD620 series operational amplifiers produced in the United States and D749 high-speed infrared phototubes produced in the United Kingdom. Explosion pressure, flame temperature, and flame propagation speed data were first collected by the corresponding high-speed data acquisition modules, and the data were then displayed and processed by software installed on a computer. The placement of each measuring point sensor is shown in Figure 3. To collect the pressure, temperature, and speed signals more effectively, the sensor for each signal was arranged at the center of each section of the pipeline. Since only the pressure and temperature changes near the explosion venting door needed to be collected, four sets of pressure sensors and temperature sensors were arranged at the middle positions on the left and right sides of the first four pipe sections, corresponding to measuring points 1−4.
Because the flame speed could not be recorded continuously, it could only be measured over intervals, so five sets of speed sensors were arranged at the middle position on the upper side of each pipe section, corresponding to measuring points 1−5. The horizontal distances between measuring points 1−5 and the explosion tank were 1.75, 5.25, 8.75, 12.25, and 15.75 m, respectively.

2.3. Explosion Venting Door. As shown in Figure 4, the explosion venting door was a circular steel plate placed between measuring points 2 and 3. With an outer diameter of 840 mm and a thickness of 15 mm, the door was fixed between the two pipelines by flanges and sealing iron rings. Four explosion venting windows of equal size were set in the middle of the explosion venting door; the opening and closing of these windows provided the venting and sealing functions. To meet the requirements of damage resistance and reusability, the door was designed according to "GB50017-2003 code for the design of steel structures". Figure 4a shows the opening surface of the explosion venting window. When the explosion shock wave hits the explosion vent door, it pulls four rectangular sealing plates through four horizontal shafts; these plates otherwise block four vents with a side length of 160 mm. To prevent the explosion vent windows from leaking when closed, a sealing rubber ring is installed at each window. Figure 4b shows the impact surface of the explosion shock wave. To make the explosion venting characteristics more evident, a fire barrier is installed on this surface to prevent the flame from passing through the explosion venting device.
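From this geometry one can estimate the open area the four windows present to the pipe bore. The ratio itself is not reported in the text; the small sketch below is shown only for orientation.

```python
# Vent-area ratio implied by the stated geometry (illustrative only):
# four 160 mm square venting windows against the 0.61 m pipe bore.
import math

vent_area = 4 * 0.160 ** 2              # m^2, four square windows
bore_area = math.pi * (0.61 / 2) ** 2   # m^2, pipe cross-section
print(vent_area / bore_area)            # ~0.35
```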
2.4. Ignition Control System. The ignition control system was equipped with an RXFD-20 explosion-proof remote-control high-energy igniter, as shown in Figure 5. The igniter could be controlled from the remote control board at a safe distance, and the ignition time was controlled by pressing the ignition button on the board. When the igniter started to ignite, the ignition indicator on the control box flashed for a 10 s countdown. If there was a flame at the vent after 10 s, the ignition was successful. If not, the ignition stop button on the remote control board was pressed to reset, and the ignition button was then pressed to re-ignite. After ignition was complete, the ignition stop button was pressed and the power switch on the right side of the remote control board was turned off.

2.5. Design of Experimental Conditions. Methane−air premixed gas with volume fractions of 5.5, 7.5, 9.5, and 11.5% was introduced into the explosion tank with the explosion venting door in place, and explosion experiments were carried out under these four working conditions. The data at each measuring point were collected through the high-performance data acquisition system, and the change characteristics of the gas explosion parameters and the venting effect of the explosion venting door were analyzed. Because of the short distances between measuring points 1 and 2 and between measuring points 3 and 4, the explosion shock wave propagated between them very quickly, and the change laws and characteristic values of the explosion parameters at points 1 and 2 and at points 3 and 4 were similar. Therefore, in analyzing the explosion venting characteristics of the door and the change laws of the gas explosion parameters, only measuring points 2 and 3 were selected for comparison and analysis.

2.6. Experimental Steps. The experimental process consisted of three parts: preliminary preparation of the experimental system, inspection of the experimental system, and detonation of the gas. (1) Preliminary preparation involved (a) preparing the high-concentration gas and detonation source for the experiment; (b) assembling the explosion pipeline and explosion tank and installing the explosion venting door at the junction of the second and third pipelines; and (c) installing the pressure, speed, and temperature sensors at the corresponding measuring points according to the experimental design and recording the acquisition channel of each sensor. (2) Inspection of the experimental system included (a) checking whether the vacuum pump was working normally and whether the combustible gas pipeline was delivering gas normally; (b) turning on the power supply of the control system and checking whether the emergency button on the control panel was responsive and whether the valve of the vacuum digital display meter was closed; (c) checking whether the sensor circuits could acquire data normally; (d) turning on the power supply of the safety monitoring interlock system and checking whether the vent ball valve and quick-opening door on the explosion tank worked normally; (e) checking whether all valves were closed; and (f) checking whether the pipe flanges and sensor screws were tightened. (3) The gas was detonated by (a) using a vacuum pump to mix high-purity methane with high-purity air into a 9.5% methane−air mixture according to Dalton's law of partial pressures, (b) injecting 1 m^3 of the mixed gas into the explosion tank and setting the acquisition module software of each measuring point to the ready state, and (c) using the explosion-proof remote-controlled high-energy igniter to ignite the ignition head and trigger the explosion.
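The Dalton's-law mixing step in (3)(a) reduces to filling methane from vacuum to a partial pressure equal to the target volume fraction times the final total pressure, then topping up with air. A minimal sketch for each tested volume fraction, assuming a 101.325 kPa (atmospheric) final fill pressure, which the text does not state:

```python
# Partial-pressure mixing targets per Dalton's law (assumed 101.325 kPa fill).
P_TOTAL = 101.325  # kPa, assumed final total pressure in the tank

for frac in (0.055, 0.075, 0.095, 0.115):
    p_ch4 = frac * P_TOTAL  # methane partial pressure for this fraction
    print(f"{frac:.1%} CH4: fill methane to {p_ch4:.2f} kPa, "
          f"then air to {P_TOTAL:.3f} kPa")
```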
3. RESULTS AND DISCUSSION

3.1. Analysis of Rapid Sealing and Explosion Venting Characteristics of the Explosion Venting Door. Figure 6 shows the change of pressure with time at measuring points 2 and 3, before and after venting, when the gas volume fraction was 9.5%. It can be seen from Figure 6a that there were only two peaks, at 0.095 and 0.29 s, in the pressure curve at measuring point 2, and the peak at 0.095 s was the largest, with a value of 107 kPa. This was because the initial shock wave energy was not enough to open the explosion venting window, and part of the shock wave was rebounded and superimposed on the lagging shock wave. For a short time the explosion pipeline formed a closed space, so that the pressure in this space continued to increase; when the explosion venting window opened, the first wave crest appeared in the curve. The curve also shows that negative pressure began to appear after 0.21 s, with a maximum negative pressure of −31 kPa at 0.236 s, followed by small fluctuations at 0.29 s; the pressure reached a stable state after 0.35 s and remained negative for a long time, as shown in Figure 6b. Because the explosion shock wave passed through the explosion venting window almost instantaneously, the energy acting on the window disappeared at once and the window was closed automatically by its own weight. At this point there was a pressure difference between the enclosed area and the area outside the explosion venting door, so the door was held firmly closed. The pressure at measuring point 3 was attenuated because the shock wave spent part of its energy opening the vent window: the wave crest dropped from 107 to 50 kPa, an attenuation of 53.27%. After 0.4 s, the pressure curve oscillated downward until it reached 0 kPa, by which time all of the explosion shock waves in the pipeline had rushed out of it. It can be seen from Figure 6b that the pressure at measuring point 2 in the enclosed area held at about −20 kPa from 0.6 s. At the moment of the gas explosion, the high-temperature, high-pressure environment inside the pipeline expanded the sealing ring; after 6 s, the internal temperature dropped and the sealing ring shrank, deforming slightly. The negative pressure inside the pipeline, in a "vacuum cavity" state, therefore continuously drew in external air, and the pressure rose slowly until the enclosed area finally returned to normal pressure.

3.2.1. Pressure Change Characteristics of Measuring Points 2 and 3. Figure 7 shows the pressure change with time at measuring points 2 and 3 under the action of the explosion venting door at gas volume fractions of 5.5, 7.5, 9.5, and 11.5%. In propagating from measuring point 2 to measuring point 3, the maximum peak values dropped from 71, 93, 107, and 103 kPa to 41, 46, 50, and 48 kPa, attenuations of 42.25, 50.54, 53.27, and 52.88%, respectively. With increasing gas volume fraction, the pressure peak attenuation followed a quadratic relationship, first increasing and then decreasing. Measuring point 2 began to generate negative pressure at 0.17, 0.19, 0.202, and 0.195 s, and the average recovery time to positive pressure was 13 h, indicating that the sealing duration remained consistent across conditions.
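The quadratic trend noted above can be visualized directly from the four measured values; a minimal sketch using numpy.polyfit:

```python
# Fit peak-pressure attenuation versus gas volume fraction with a quadratic,
# using the four measured attenuation values quoted in the text.
import numpy as np

frac = np.array([5.5, 7.5, 9.5, 11.5])           # gas volume fraction, %
atten = np.array([42.25, 50.54, 53.27, 52.88])   # peak attenuation, %

a, b, c = np.polyfit(frac, atten, 2)   # attenuation ~ a*x^2 + b*x + c
print(a, b, c)
print(-b / (2 * a))  # vertex: gas fraction of maximum attenuation, %
```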
3.2.2. Temperature Change Characteristics of Measuring Points 2 and 3. Figure 8 shows the flame temperature change with time at measuring points 2 and 3 under the action of the explosion venting door at gas volume fractions of 5.5, 7.5, 9.5, and 11.5%. The curves at the two measuring points show that the change characteristics were basically the same under the different working conditions, and the temperature of the entire gas explosion propagation process varied with time as a quadratic function. When the explosion flame passed measuring point 3, the peak flame temperature was only slightly attenuated; no matter how the gas volume fraction changed, the explosion vent door had no effect on the explosion flame. Since no high-speed camera was available, the change of flame shape with time could not be captured, and only the law of the flame temperature could be analyzed. In follow-up work, experimental methods similar to those in the literature 47−49 will be used to obtain the law of change of the flame shape at the explosion vent door by photographing the flame.

3.3. Variation Characteristics of the Pressure Peak and Temperature Peak under the Four Working Conditions. Figure 9 shows the change characteristics of the pressure peak and temperature peak at the different measuring points at gas volume fractions of 5.5, 7.5, 9.5, and 11.5% under the action of the explosion venting door. From the chemical reaction equation for methane combustion, a volume fraction of 9.5% corresponds to the stoichiometric mixture, at which the reaction is complete and most violent; the pressure peak and temperature peak change characteristics were therefore analyzed with the 9.5% volume fraction as an example.

3.3.1. Characteristics of Pressure Peak Changes. It can be seen from Figure 9a that the pressure peaks at measuring points 1−4 were 104.37, 107.37, 57.44, and 39.05 kPa, respectively. The peak pressure at measuring point 2 was 2.9% higher than that at measuring point 1; the peak at point 3 was 46.5% lower than at point 2; and the peak at point 4 was 32.0% lower than at point 3. The pressure peak change curve showed a quadratic relationship. This was because, in the early stage of the gas explosion, the explosion tank continuously released a large amount of energy, which was blocked by the explosion venting door; the shock wave pressure in the enclosed area therefore kept increasing, making the pressure peak at measuring point 2 greater than that at measuring point 1. Under the action of the explosion venting window, the shock wavefront and the flame wavefront were stretched and deformed and part of the energy was consumed, so the pressure peak at measuring point 3 was much smaller than that at measuring point 1. As the shock wave propagated, the gas ahead of it was continuously heated while the wave was subjected to frictional resistance and heat dissipation at the pipe wall; the energy therefore continued to decrease, and the pressure peak at measuring point 4 decreased again. In short, the energy loss during the propagation of the gas explosion was due mainly to suppression by the explosion venting door; with the additional effects of venting and heat dissipation, the intensity of the reaction gradually weakened and the pressure peak decreased accordingly. The pressure peak curves also show that the peak pressure at every measuring point was highest in the 9.5% condition over the entire propagation process, while in the 11.5% condition, owing to insufficient oxygen, the explosive strength and peak pressure fell between those of the 7.5 and 9.5% conditions. In the other working conditions, the maximum gas explosion pressure peak appeared at measuring point 2, and the pressure peak change characteristics were consistent with those at the 9.5% gas fraction.
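The point-to-point changes quoted above follow directly from the measured peaks; a minimal check:

```python
# Point-to-point changes in the measured pressure peaks (9.5% condition).
peaks_kpa = [104.37, 107.37, 57.44, 39.05]

for i, (p0, p1) in enumerate(zip(peaks_kpa, peaks_kpa[1:]), start=1):
    change = 100 * (p1 - p0) / p0
    print(f"point {i} -> {i + 1}: {change:+.1f}%")  # +2.9, -46.5, -32.0
```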
3.3.2. Characteristics of Temperature Peak Changes. It can be seen from Figure 9b that the temperature peaks at measuring points 1−4 were 1177, 891, 873, and 819 K, respectively, with drops of 24.3, 2.0, and 6.2% between adjacent points. The peak temperature at measuring point 1 was the largest, and the drop from point 1 to point 2 was the greatest. This was because the high-temperature gas produced by the explosion tank propagated forward rapidly, and the overpressure generated at the moment of the explosion caused the temperature around measuring point 1 to rise rapidly, so the temperature peak there was the largest. As the high-temperature gas propagated forward, it mixed with the gas around measuring point 2 while the pipeline wall dissipated heat, so the temperature peak eventually dropped significantly. When the explosion flame passed through the explosion venting window, a turbulent flame formed under the action of the obstacle; the flame wave was not in large-area contact with the pipeline wall, and the temperature attenuation was weaker by the time it reached measuring point 3. As the flame continued forward, it began to contact the pipeline wall over a large area, causing a large energy loss, and the temperature attenuation was stronger at measuring point 4. The temperature peak change characteristics also showed that the temperature attenuation had nothing to do with the explosion venting door.

3.4. Characteristics of Gas Explosion Speed Variation. The flame propagation speed in this experiment was obtained by recording, with the infrared sensors, the time at which the flame reached each measuring point and the distance between adjacent measuring points. The average flame propagation speed v_f between two adjacent measuring points is v_f = L/t, where t is the time difference between the flame reaching two adjacent infrared sensors (s) and L is the distance between the measuring points of the two adjacent infrared sensors (m). Table 1 and Figure 10 show the average flame speeds and the flame propagation speed between adjacent measuring points at gas volume fractions of 5.5, 7.5, 9.5, and 11.5%. It can be seen from the speed characteristics in Figure 10 that, over the range 1.75−5.25 m, the gas explosion was more severe in the 7.5% condition than in the 5.5% condition, and the speed in the 7.5% condition was greater. However, the gas remaining in the later stage of the explosion was insufficient and the chemical reaction could not supply enough energy, so the two velocity curves coincide over the measurement range 12.25−15.75 m. Over the range 1.75−5.25 m, the speed curve in the 11.5% condition basically coincided with that in the 9.5% condition, but oxygen was lacking in the later stage of the gas explosion and the gas reactivity decreased.
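A minimal sketch of the interval-average computation v_f = L/t. The sensor positions are from the text; the arrival times are hypothetical values for illustration only (the paper reports just the resulting averages in Table 1).

```python
# Interval-average flame speed between adjacent infrared sensors.
positions = [1.75, 5.25, 8.75, 12.25, 15.75]    # m from the explosion tank
arrivals = [0.020, 0.052, 0.081, 0.107, 0.132]  # s, hypothetical arrival times

for x0, x1, t0, t1 in zip(positions, positions[1:], arrivals, arrivals[1:]):
    print(f"{x0}-{x1} m: v_f = {(x1 - x0) / (t1 - t0):.1f} m/s")
```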
4. CONCLUSIONS

To study the influence of the explosion venting door on the characteristic parameters of gas explosions and to verify its venting effect and fast sealing performance, we constructed a large-sized gas explosion pipeline experimental system and performed explosion experiments under four working conditions. The gas explosion characteristic parameters were collected through the data acquisition system, and the influence of the explosion vent door on the gas explosion pressure, flame temperature, and flame speed was analyzed. The conclusions are summarized as follows:

(1) We independently developed a rapid closed explosion venting experimental system. Large-sized explosion pipelines and explosion venting devices were used to simulate the suppression effect of explosion venting doors on gas explosions when the fire zone was closed, effectively compensating for the limitations of explosion experiments on small-sized pipelines.

(2) Under the different working conditions, when the explosion shock wave propagated from measuring point 2 to measuring point 3, the pressure peaks were attenuated by 42.25, 50.54, 53.27, and 52.88%, respectively. As the gas volume fraction increased, the attenuation of the peak explosion pressure followed a quadratic relationship, and the average closing time of the fire area was 13 h, showing that the explosion venting door had significant venting characteristics and could quickly close the fire area.

(3) The pressure peak and temperature peak change curves show that the peak pressure and peak temperature at each measuring point were maximal over the entire propagation process in the 9.5% working condition, while in the 11.5% condition, owing to insufficient oxygen and explosive strength, they fell between the values for the 7.5 and 9.5% conditions. In the other working conditions, the maximum pressure and temperature peaks appeared at measuring point 2, with change characteristics consistent with those of the 9.5% condition.

(4) This work has achieved some results in rapid closed explosion venting technology, but the following issues still need to be explored. (a) The experimental platform needs to be improved. Owing to the limitations of the experimental system, the flame speed could only be calculated from the distance between adjacent sensors and the time difference between the flame reaching them, which affects the accuracy of the experimental data and conclusions. It is therefore necessary to install a visible window on the side of the explosion pipe so that high-speed cameras and a schlieren system can be used to observe the flame propagation and pressure relief processes more directly and to monitor the flame propagation speed in real time. At the same time, the explosion tank should be replaced with a large-volume tank matched to the explosion pipeline, which would make the explosion characteristics more evident and the experimental data more accurate. (b) Follow-up research is required. First, regarding the venting mechanism of the explosion venting door, this article presents only a preliminary study at the level of gas explosion characteristics; the force exerted by the explosion shock wave on the venting door needs to be studied further with ANSYS mechanical calculation software. Second, explosion tests of gas mixtures such as multicomponent alkanes will be added, and closed explosion venting experiments will be carried out with venting doors of different materials and different numbers of venting windows to analyze the influence of the venting door on the law of gas explosion propagation. Finally, explosion venting doors suitable for mines and industrial settings will be designed and manufactured, implemented in experiments and in the field, and their performance further verified, optimized, and improved.
A Review of Nonanesthetic Uses of Ketamine

Ketamine, a nonselective NMDA receptor antagonist, is used widely in medicine as an anesthetic agent. However, ketamine's mechanisms of action lead to widespread physiological effects, some of which are now coming to the forefront of research for the treatment of diverse medical disorders. This paper aims at reviewing recent data on key nonanesthetic uses of ketamine in the current literature. MEDLINE, CINAHL, and Google Scholar databases were queried to find articles related to ketamine in the treatment of depression; pain syndromes including acute pain, chronic pain, and headache; neurologic applications including neuroprotection and seizures; and alcohol and substance use disorders. It can be concluded that ketamine has a potential role in the treatment of all of these conditions. However, research in this area is still in its early stages, and larger studies are required to evaluate ketamine's efficacy for nonanesthetic purposes in the general population.

Introduction

Ketamine has been used as an anesthetic drug for over 65 years [1]. An enantiomeric, lipid-soluble phencyclidine derivative, ketamine is one of the most commonly used drugs in anesthesia. As a nonselective NMDA receptor antagonist, it has equal affinity for the different NMDA receptor types. NMDA receptors are a subgroup of ionotropic glutamate receptors, along with AMPA and kainate receptors. Ketamine is inexpensive and therefore widely used in developing countries. It additionally has particular utility for anesthesia induction in hemodynamically unstable patients [2]. Ketamine administration has long been known to mediate a wide variety of pharmacological effects, including dissociation, analgesia, sedation, catalepsy, and bronchodilation. Though ketamine is known most widely for its anesthetic properties, recent research has uncovered multiple novel uses for this drug, including neuroprotection, combatting inflammation and tumors, and treatment of depression, seizures, chronic pain, and headache [3][4][5]. Racemic ketamine, a mixture of (S)- and (R)-ketamine (Figure 1), is commonly used in this research, though both (S)-ketamine and (R)-ketamine alone are also subjects of study. While (S)-ketamine carries roughly 3- to 4-fold greater potency as an anesthetic, it also carries a greater risk of psychotogenic side effects [6]. However, ketamine has an extensive side-effect profile and a potential for abuse that cannot be ignored, which has historically led to its avoidance in favor of other agents, and its safety is an area of ongoing research [3]. Additionally, a variety of adverse reactions associated with ketamine use must be considered, including self-resolving sinus tachycardia, neuropsychiatric effects, abdominal pain, liver injury, and dose-dependent urogenital pathology including ulcerative cystitis [7][8][9]. Currently, roughly 800 or more clinical trials exploring aspects of nonanesthetic uses of ketamine are registered on ClinicalTrials.gov, illustrating the extensive ongoing interest in this area. The nonanesthetic clinical uses of ketamine have been the focus of extensive recent research, some of the most applicable and prevalent of which are explored here. For this scoping study, we utilized the Arksey and O'Malley methodological framework to provide a broad overview of the field, with attention to ongoing research and current knowledge gaps [10].
Relevant literature from 2010 through the present was queried through the MEDLINE, CINAHL, and Google Scholar databases. Keywords included "ketamine" combined with terms including "non-anesthetic uses," "depression," "headache," "neuroprotection," "pain," "pain syndromes," "chronic pain," "alcohol use disorder," "substance use disorder," and "seizure." Sentinel research from prior to 2010 was also incorporated. Relevant original articles including randomized trials, retrospective studies, review articles, case reports, and preclinical animal studies were included. This paper will discuss some of the most common and promising nonanesthetic uses of ketamine, including its utility in the treatment of depression, pain syndromes including headaches, neurologic disorders including seizures, and alcohol/substance use disorders.

Ketamine and Depression

Despite the high prevalence of depression, which affects roughly 1 in 5 people over their lifetime, currently available pharmacologic treatments, the most commonly utilized of which are selective serotonin reuptake inhibitors (SSRIs), have limited efficacy [11]. SSRIs achieve adequate effect in as little as 30% of patients [12], while carrying a high burden of side effects ranging from nausea and headaches to weight gain and sexual dysfunction [13]. Pharmacologic treatment of depression has also historically been limited by the fact that conventional antidepressants typically take weeks to reach effect [14]. Nearly all antidepressants target monoaminergic systems, and research on new molecular targets (including corticotropin-releasing factor 1 antagonists, neurokinin 1 antagonists, and vasopressin V1b antagonists) has not yet led to alternative treatments [15]. Depression is known to be associated with alterations in glutamatergic neurotransmission and dysfunctional activity of the resting state network [16]. Additionally, depression is thought to be caused by enhanced subcortical and limbic activity, which affects cognition and emotion regulation [15]. Ketamine offers a promising alternative to conventional antidepressants because of its rapid onset and apparent efficacy. More broadly, ketamine appears to have efficacy in treating multiple internalizing disorders including depression, anxiety, and obsessive-compulsive disorder [17][18][19]. Ketamine is thought to affect these brain areas directly through modification of glutamatergic neurotransmission [20], although it has also been shown to mediate its effects through modulation of dopaminergic [21] and serotonergic [20] neurotransmission. Ketamine also acts indirectly through several other neurochemical pathways. It induces upregulation of the mammalian target of rapamycin (mTOR) pathway, shifting activity away from subcortical and limbic regions and toward the medial and lateral prefrontal cortex [15], and has the potential to reverse the mTOR signaling pathway impairment seen in major depressive disorder (MDD) [22]. Ketamine additionally upregulates the expression of glutamate transporters, specifically EAAT2 and EAAT3, in the rat hippocampus [23]. Modulation of hippocampal plasticity is another mechanism by which ketamine is thought to mediate its antidepressant effects [14], which may be related to EAAT3 regulation of AMPA receptor trafficking and redistribution [23].
A single subanesthetic dose (0.5 mg/kg) of intravenous (IV) ketamine hydrochloride has been shown to have a rapid antidepressant effect, which begins as early as 2 hours after ketamine administration, peaks at 24 hours, and lasts for up to 7-14 days [24,25]. This effect has been noted in both unipolar and bipolar depression [26], although the effect duration may be shorter in patients with bipolar disorder [27]. Promisingly, efficacy from ketamine is seen in people with treatment-resistant depression, who have failed multiple antidepressant regimens [28,29]. In one study of 67 patients (including 45 women), IV ketamine administered at 0.5 mg/kg twice per week achieved a rapid-onset and sustained antidepressant effect over a 15-day period [30]. Broadly, there are two generations of studies evaluating ketamine for unipolar depression: (1) studies on the safety/efficacy of one subanesthetic dose of IV ketamine and (2) studies on alternate drug delivery routes, MDD relapse prevention, and mechanistic analysis [15]. The first study on single-dose IV ketamine, in seven patients with mood disorders, was published in 2000 by Berman et al. and found a significant but transient improvement of depression severity with a single subanesthetic dose of IV ketamine (0.5 mg/kg) [31]. While the improvement in depression symptoms was transient, it did outlast the elimination half-life of ketamine [15]. A larger replication study with 18 subjects was published by the Intramural Research Program of the National Institute of Mental Health (NIMH) in 2006 and also found that subanesthetic ketamine (0.5 mg/kg) has a significant antidepressant effect [17]. This clinical trial has largely been credited with launching the field of research into ketamine's antidepressant effects [32]. Multiple open-label case series have demonstrated similar results with a single ketamine infusion [33]. Subsequent research has shown that a regimen of serial IV ketamine (0.5 mg/kg) infusions achieves a greater response rate, without more significant side effects [33,34]. Other routes of ketamine administration have also been examined. For example, intranasal ketamine hydrochloride (50 mg) has been shown to mediate an antidepressant effect, though the magnitude of the effect may be less than that of IV ketamine [35]. Intranasal esketamine (the S(+) enantiomer of ketamine) in combination with an oral antidepressant was recently approved by the Food and Drug Administration for treatment-resistant depression, though data on the long-term effects of this regimen remain preliminary [36]. However, when used for its antidepressant effect in mice, (R)-ketamine appears to have more potent and persistent effects than (S)-ketamine, as well as no psychotomimetic side effects [6]. IV ketamine may have increased utility in specialized populations, such as the military, cancer patients, and patients with Alzheimer's disease. In active-duty military populations, long-term psychiatric admission for suicidality may create unique problems, including separating the patient from his or her support network and creating administrative obstacles to returning to duty [37]. In one study of 10 soldiers in the United States, a single dose of IV ketamine (0.2 mg/kg) was found to significantly decrease suicidality and hopelessness [37].
Ketamine appears to be a rapidly efficacious antidepressant and antisuicidal pharmacologic agent which may be well suited for this particular population and for others in which long-term psychiatric hospitalization creates significant challenges, though the particular applications for the US military may not translate to broader military use around the world. IV ketamine (0.5 mg/kg) has also demonstrated utility in treating acute-onset depression and suicidal ideation in one study of 39 newly diagnosed cancer patients [38]. Furthermore, ketamine may have unique utility in treating depression associated with Alzheimer's disease, as ketamine appears to have neuroprotective properties against soluble amyloid-beta protein-mediated toxicity, according to one study utilizing 15 mg/kg intraperitoneal ketamine in mice [39]. The rapid-onset antisuicidal properties of ketamine are possibly mediated by enhanced neuroplasticity [15,24,40]. This effect is seen even in patients who are nonresponders to the antidepressant effect of ketamine [41]. Improvement in suicidal ideation occurs as early as 40 minutes after subanesthetic-dose ketamine administration (0.5 mg/kg) and may last as long as 10 days, according to one study of 57 patients [42]. While change in depressive symptom severity correlates with change in suicidal ideation, the antisuicidal effect of ketamine persists even when depressive symptom severity is controlled for [43]. Ketamine has been demonstrated to effectively treat anhedonia independent of depressive symptoms, an effect that can last up to 14 days [44]. It has been theorized that reducing anhedonia is the mechanism by which ketamine reduces suicidal thoughts [45]. Ketamine has also been shown to have anxiolytic and procognitive effects in a rat model of depression, using doses between 5 and 30 mg/kg [46]. Additionally, when used as the anesthetic during electroconvulsive therapy (ECT), ketamine decreases Hamilton Depression Rating Scale scores earlier and more significantly than propofol [47]. However, while a meta-analysis of 16 articles with 346 patients demonstrated a superior treatment effect in patients with depression who receive ketamine in ECT over other anesthetics, these patients were also noted to have more side effects and longer recovery times [48]. While ketamine has many clinically promising features, it has a number of drawbacks to consider. One of these is effect duration: the antidepressant effect of ketamine lasts on average only 1-2 weeks [49]. There is substantial variation in the duration of treatment response, with many patients reporting less than 1 week of depressive symptom improvement from a single ketamine infusion [27]. However, a ketamine maintenance infusion regimen, in which infusions of ketamine 0.5 mg/kg are administered up to every 2 weeks, has shown promising results in a study of 8 patients [50]. Because of poor bioavailability resulting from high first-pass hepatic metabolism, ketamine is typically administered by injection, which is a drawback for medications that require ongoing dosing [21,51]. IV administration every 2-3 days requiring hospital or clinic visits is impractical. However, alternative routes of administration offer promising options (for example, very-low-dose sublingual ketamine has been shown to improve mood, cognition, and sleep when 10 mL of a 10 mg/mL solution is held sublingually for 5 minutes and then swallowed) [24].
Another drawback of ketamine is that, as a derivative of phencyclidine (PCP) [11], it causes a transient increase in psychotomimetic symptoms [15,41] and dissociative symptoms, though these return to baseline by 4 hours post-infusion [41]. Ketamine has abuse and addiction potential [1,37] and causes cognitive deficits, which have been shown to be reversible with cessation of ketamine use [15]. Concerningly, however, chronic recreational ketamine use has also been shown to produce cognitive and affective deficits including depression [52], which raises concern about the use of ketamine as an option for long-term antidepressant therapy. Other potential adverse effects of ketamine include transient tachycardia and hypertension [1]. There are also concerns about neurotoxicity, bladder toxicity, and tolerance with repeated ketamine infusions [34]. As a result of these drawbacks, many clinicians and researchers view IV ketamine infusions not as an end-all replacement for conventional antidepressants, but as a promising new direction for antidepressant therapy that warrants further research and an effort to develop "ketamine-like" drugs without the side effects that currently limit ketamine's use [1,25,32]. For example, the possible antidepressant effects of other glutamatergic modulators, including riluzole, dextromethorphan, nitrous oxide, and GLYX-13 (rapastinel), are currently being examined [53].

Ketamine and Pain Syndromes

Ketamine has been widely used to manage acute and chronic pain, both alone and as an adjunct to opiates. The primary analgesic mechanism of ketamine is NMDA receptor antagonism, though ketamine has also been shown to act on opioid, nicotinic, and muscarinic receptors. Ketamine's anti-inflammatory qualities may also contribute to its efficacy in pain relief [54,55]. While ketamine's effect on acute pain is driven primarily by inhibition of NMDA receptors and prevention of wind-up, ketamine is thought to mediate its effect on chronic pain through desensitization of upregulated NMDA receptors [56][57][58]. Routes of ketamine administration for analgesia include parenteral, oral, sublingual, topical, and intranasal [54]. It appears that administration of high-dose ketamine over a short time course (42-480 mg daily for 1-10 days) produces analgesia more effectively than lower doses for longer durations (such as 18 mg daily for 90 days) [59]. The level of evidence and consensus for the utility of ketamine in pain management varies between types of pain.

Acute Pain

Ketamine appears to reduce analgesic requirements in the setting of acute pain. For example, in 160 patients undergoing cesarean section, a single postoperative intravenous ketamine bolus (0.25 mg/kg) was shown to reduce the severity of postoperative pain and decrease analgesic requirements [60]. As a result, ketamine can prevent opioid tolerance [61] and may reduce the rate of opioid-induced hyperalgesia following surgery [62], while also mitigating adverse effects linked to opiates such as respiratory suppression, oversedation, and hypotension [63]. In addition to its opioid-sparing effects, ketamine has been shown to reduce nausea and vomiting in the perioperative period at doses of <0.5 mg/kg [64]. While ketamine has generally been shown to reduce intraoperative opioid requirements in both opioid-naïve and opioid-dependent populations [62,65], this is somewhat controversial.
Some studies have demonstrated decreased average pain scores when continuous ketamine (0.2 mg/kg/hour) is used intraoperatively, but no decrease in overall opioid requirement [66]. Furthermore, other studies show no difference in postoperative pain levels or postoperative opioid requirements when ketamine is used, including several studies that demonstrated no benefit from ketamine infusion in patients undergoing spinal surgery [67,68]. Because of its efficacy in treating acute pain, ketamine has utility in the acute care setting. Ketamine has well-established utility in the emergency department (ED) as short-term analgesia for indications such as acute long bone fractures, trauma, and acute pain in opioid-dependent patients; in one study it was administered as ketamine 15 mg IV once, followed by a continuous infusion at 20 mg/hour for 1 hour [69]. When used alone for pain management in the ED setting, low-dose ketamine (<1 mg/kg) provides pain relief comparable to opiates, with the benefit of producing less respiratory depression [70]. Ketamine has also been shown to decrease opioid consumption for acute pain in ED patients, in a study of 30 patients with severe pain in which ketamine 15 mg IV and hydromorphone 0.5 mg IV were administered together [63].

Chronic Pain

The role of intraoperative ketamine in reducing the development of chronic postoperative pain is unclear. Some reports suggest that ketamine decreases the rate of chronic postoperative pain when administered as a 0.15-1 mg/kg preincisional loading dose followed by intraoperative infusion [71], and intravenous ketamine has been shown in a meta-analysis of 40 papers including 1388 participants to significantly reduce the incidence of chronic pain following certain types of surgery [72]. This effect may be mediated through a reduction in primary and secondary hyperalgesia in the postoperative period, which decreases the incidence of chronic pain [73]. However, there appears to be a reduction in acute pain but not in chronic pain development following amputation, thoracotomy, or mastectomy when ketamine is used as a coanalgesic agent [74]. Epidural ketamine and intravenous ketamine have not been shown to decrease the incidence of chronic postthoracotomy pain [75][76][77]. Additionally, meta-analysis has not shown intravenous or epidural ketamine to significantly reduce the rate of persistent postsurgical pain (PPSP) at three or six months [78]. There is moderate evidence that ketamine effectively reduces chronic noncancer pain [59]. A recent systematic review and meta-analysis of 7 studies showed short-term analgesic benefit from IV ketamine in patients with chronic pain, which appears to follow a dose-response relationship [79]. In a study of 49 patients, ketamine infusion was shown to decrease visual analog scale (VAS) scores in patients with intractable chronic pain; ketamine 0.5 mg/kg was administered over 30-45 minutes, then either continued at this dose in subsequent infusions every 3-4 weeks or increased up to the highest tolerated dose providing analgesia [80]. Daily oral ketamine (up to 64 mg/day) has also been shown to be safe and opioid-sparing in patients with chronic pain [81]. The combination of subcutaneous ketamine infusion and sublingual ketamine lozenges appears to reduce opioid use in patients with chronic nonmalignant pain [82].
In one retrospective study of 51 patients with refractory chronic pain, oral ketamine treatment (starting at 0.5 mg/kg/day, then increased in 15 to 20 mg increments as needed) led to the resolution of pain in 44% of patients, reduced opioid requirements by an average of 62%, and was ineffective in only 22% of patients [83]. These results are especially promising given the limitations of currently available treatments for chronic pain, with only 30-40% of patients with chronic pain achieving adequate to good relief [84]. Ketamine infusions (administered at 0.1-0.3 mg/kg/hour for 4-8 hours/day, up to 16 hours over three consecutive days) have also been shown to significantly reduce pain intensity in children and adolescents with chronic pain, with the largest benefit seen in patients with CRPS [85]. However, the utility of ketamine in treating chronic pain is not universally accepted. For example, in one study of 36 patients, ketamine was shown not to improve long-term pain scores in patients taking chronic opiates, nor to effectively reduce opiate requirements [86]. The utility of ketamine has been validated in neuropathic pain [87], especially in complex regional pain syndrome (CRPS). CRPS causes significant morbidity, and 80% of patients with CRPS are severely disabled [88]. Many patients with CRPS are unresponsive to traditional therapeutic approaches, and ketamine has been shown to reduce pain levels in some of these treatment-refractory patients [89]. When studied in mice, ketamine (administered subcutaneously at a dose of 2 mg/kg/day for 7 days) appears to decrease nociceptive sensitization in the chronic stage of CRPS, but not the acute stage [55]. When CRPS type 1 (CRPS-1) alone was studied in a cohort of 10 patients, S(+)-ketamine infusion (using the following regimen per 70 kg: min 0-5: 1.5 mg, min 20-25: 3.0 mg, min 40-45: 4.5 mg, min 60-65: 6.0 mg, min 80-85: 7.5 mg, min 100-105: 9.0 mg, and min 120-125: 10.5 mg) appeared to reduce pain levels for 10 weeks or longer, demonstrating a disease-modulatory role [56].
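Since the step-up schedule above is specified per 70 kg of body weight, rescaling it to another weight is a simple proportional computation. The sketch below is a bookkeeping illustration of that arithmetic only, not dosing guidance.

```python
# Weight-scaled rescaling of the per-70-kg S(+)-ketamine step-up schedule.
STEPS_PER_70KG = [  # (start_min, end_min, dose_mg per 70 kg), from the text
    (0, 5, 1.5), (20, 25, 3.0), (40, 45, 4.5), (60, 65, 6.0),
    (80, 85, 7.5), (100, 105, 9.0), (120, 125, 10.5),
]

def scaled_schedule(weight_kg):
    # Doses scale linearly with body weight relative to the 70 kg reference.
    return [(t0, t1, dose * weight_kg / 70.0) for t0, t1, dose in STEPS_PER_70KG]

for t0, t1, mg in scaled_schedule(70):
    print(f"min {t0}-{t1}: {mg:.1f} mg")
```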
S-ketamine has been shown to reduce chronic pancreatitis pain in a study of 10 patients when administered as an infusion of 2 μg/kg/min for 3 hours, though this effect disappeared following the end of infusion [95]. Ketamine may also have utility in scenarios where opioids alone often have inadequate efficacy, including vaso-occlusive episodes in patients with sickle cell disease. Few studies have examined the role of low-dose ketamine in the treatment of sickle cell pain, though the majority of reported cases have shown that ketamine effectively reduces pain intensity and opioid requirements in patients with sickle cell pain [96]. The data on this topic are limited, and further studies are warranted to validate this finding [97]. Additionally, since pain disorders are highly correlated with suicidal ideation and attempts, the antisuicidal properties of ketamine may make ketamine a useful treatment option in patients with concomitant pain and suicidal ideation [40]. Though ketamine has anecdotally been reported to effectively treat cancer pain, when studied systematically, ketamine has not been found to be useful in the treatment of pain from advanced cancer as an adjunct to opioids, though difficulty in designing studies in the context of palliative care may contribute to these results [98,99]. Ketamine can be considered as an adjuvant therapy in patients with cancer who have failed standard therapy, though optimal dosing is unclear [100].

Headache. Chronic migraine affects 1% of the population within the United States, creating a significant economic burden, and treatment options for refractory cases are limited [101,102]. Due to its efficacy in the treatment of chronic pain, it has been hypothesized that ketamine might be a useful addition to headache and migraine control regimens. For example, while triptans effectively relieve acute migraine pain in 43-76% of cases [103], ketamine could play an important role in pain control for triptan nonresponders. Ketamine could also play a role in migraine management for patients in whom triptans are contraindicated, such as patients with cardiovascular diseases. The actions of ketamine on glutamate NMDA binding sites at the level of the secondary somatosensory cortex, insula, and anterior cingulate cortex have been associated with modulation of affective pain processing and the decrease of allodynia and central sensitization. These effects associated with chronic pain might also be the basis of the mechanism of effect on headache pain [101,104]. It is useful to take these mechanisms into account when considering memantine, a noncompetitive glutamatergic NMDA antagonist, which has previously been shown to be an effective treatment for chronic and refractory migraine [101,105]. Minimal evidence exists surrounding the use of ketamine in chronic headache treatment. Individual cases suggest that ketamine administered IV (using an initial infusion rate of 0.1 mg/kg/hour, then increased by 0.1 mg/kg/hour every 3-4 hours until a goal pain score of 3/10 was reached and maintained for 8 hours, then downtitrated) in inpatient management of refractory migraine consistently reduces short-term pain severity, although no chronic relief has been observed [101].
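The inpatient uptitration protocol just described [101] is essentially a feedback loop on the pain score. Below is a minimal sketch, assuming 3-4-hourly reassessments and the quoted 0.1 mg/kg/hour step; the function name, example weight, and pain-score series are hypothetical, and this is not clinical guidance.

```python
# Hypothetical sketch of the stepwise uptitration cited above [101]:
# start at 0.1 mg/kg/h, raise by 0.1 mg/kg/h at each reassessment until
# the goal pain score (3/10) is reached. Illustrative only.

def titration_steps(weight_kg: float, pain_scores: list[int],
                    goal: int = 3, step: float = 0.1) -> list[float]:
    """Return the infusion rate (mg/h) used at each 3-4 h reassessment."""
    rate_per_kg, rates = step, []
    for score in pain_scores:
        rates.append(round(rate_per_kg * weight_kg, 1))
        if score <= goal:           # goal reached: hold, then downtitrate
            break
        rate_per_kg += step         # otherwise increase by 0.1 mg/kg/h
    return rates

print(titration_steps(70, pain_scores=[8, 6, 5, 3]))  # [7.0, 14.0, 21.0, 28.0]
```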
A large review including 77 patients has demonstrated similar results, with intravenous ketamine administration (starting at an infusion rate of 0.1 mg/kg/hour, increased as needed at 6-hour intervals to a maximum infusion rate of 1 mg/kg/hour) causing acute but not long-term improvement to refractory headache [106]. When considering alternate methods of delivery, randomized controlled trials and case studies of intranasal ketamine's effects on migraine with aura have demonstrated that 25 mg intranasal ketamine reduces the severity, and in some cases the duration, of the associated aura [107,108]. This further reinforces the potential of the use of drugs with action on glutaminergic pathways, such as ketamine, as headache modulators [107]. Ketamine has also been investigated in combination with other drug regimens. Magnesium sulfate, which binds to NMDA channels, might be administered concomitantly with ketamine to produce a heightened effect. When given intravenously to 2 chronic cluster headache patients, this combination (ketamine 0.5 mg/kg over 2 hours and magnesium sulfate 3000 mg over 30 minutes) was shown to produce immediate pain relief, a decrease in suicidal ideation, and a decrease in attack frequency and intensity for up to six weeks [109]. Evidence exists that levels of kynurenic acid, an NMDA receptor antagonist, are decreased in cluster headache patients, providing further support for the theory that NMDA receptors are overactive in these patients and that a focus on therapeutic options targeting these receptors is warranted [109,110]. Unfortunately, there are also several pieces of contradictory evidence against the use of ketamine for the treatment of primary headache [111]. Small randomized studies have shown no improvement in acute headache pain outcomes with IV ketamine (0.2-0.3 mg/kg) when compared to both placebo and prochlorperazine, while also inducing increased side effects [111-113]. Additionally, most investigations of this use of ketamine are reported as small case series, and further study is required in order to make informed conclusions on the efficacy of ketamine in the treatment of headache. The current body of literature has led to the conclusion by some experts that there is not sufficient evidence for the widespread use of ketamine in headache patients [111,114].

Drawbacks. Ketamine has multiple drawbacks as a treatment for pain. Ketamine may have limited utility as a treatment for chronic pain syndromes given the potential risks associated with repeated IV administration of ketamine, including its neurotoxicity and potential to impair long-term memory [115]. While these risks have not yet been formally studied in a controlled fashion, the effect of frequent (defined as more often than twice per month) recreational ketamine use was shown in a study of 37 patients to cause long-lasting impairments in episodic and semantic memory [116]. Furthermore, both sensitization and tolerance are possible consequences of repeated ketamine use, and while the duration required to notice these effects from intermittent ketamine use has not been extensively studied in humans, in mouse studies sensitization has been shown to occur over the course of weeks and is clearly evident by 5 weeks of weekly administration of intraperitoneal ketamine (20 mg/kg or 50 mg/kg, in mice) [117].
Ketamine also has been known to cause hepatic toxicity due to mitochondrial impairment, urological toxicity including ulcerative cystitis, and immediate risks including tachyarrhythmias, hallucinations, and flashbacks [88,118,119]. Psychedelic effects are also associated with ketamine [59], and benzodiazepine coadministration may be required to treat its psychosis-like effects [58]. Most studies on the utility of ketamine in pain management have small sample sizes, and the treatment effect may therefore be overestimated [72]. This suggests the need for larger trials evaluating the use of ketamine in pain control. Further research is also required to characterize the role of ketamine in cancer-related pain [120], including the role of oral ketamine in palliative care [121]. Furthermore, while ketamine infusions have been well studied, alternative routes of ketamine administration have been evaluated less extensively. For example, open studies of 2% topical ketamine preparations have suggested a therapeutic effect on chronic pain without adverse effects locally or systemically, though further research is needed to elucidate its efficacy [122]. The S-enantiomer of ketamine also appears to have a two- to threefold more potent analgesic effect than (R)-ketamine [123], and the utility of using (S)-ketamine alone as a treatment for pain warrants additional study.

Neurologic Applications of Ketamine

4.1. Neuroprotection. In addition to mediating anesthetic effects, the noncompetitive antagonism of NMDA by ketamine has recently been postulated to play a role in neuroprotection. Ketamine was previously thought to increase intracranial pressure (ICP) [4] and therefore would be contraindicated in cases where ICP may already be elevated (such as trauma and neurosurgical patients). This conclusion was based on a few small studies with limited scope, but did result in an FDA package insert warning [124]. Several more recent studies challenged and disproved this theory [4,124]. These reports of cases linking ketamine induction to elevated ICP may not have adequately taken ventilation into account; in one case of reported elevated ICP after ketamine induction, the patient was spontaneously breathing after induction, and ICP was noted to decrease dramatically with initiation of manual hyperventilation [125]. Therefore, hypercarbia is the more likely underlying cause of ICP elevation rather than the use of ketamine induction, and in patients who undergo ketamine induction with normocarbia maintained using mechanical ventilation, a rise in ICP is not seen [4]. Several mechanisms of action behind ketamine's neuroprotective qualities have been proposed. Ketamine has anti-inflammatory properties and is thought to reduce microglial activation and reduce the cytokines TNF and IL-6, although studies have not been able to prove any differences in plasma inflammatory markers after ketamine administration [5,124,126]. It is known that, unlike other anesthetic drugs including propofol, ketamine does not provide neuroprotection via inhibition of TLR-4-NF-κB-dependent signaling [127]. Through its NMDA inhibition, ketamine reduces glutamate excitotoxicity by preventing excitatory amino acid receptor stimulation, and this reduction has been demonstrated through the use of MRI, in one study of 24 infants [124,126]. Excitotoxicity, defined as the excessive stimulation of neurons causing neuronal injury, has been suggested as the underlying process behind several types of central nervous system pathology [124].
Ketamine reduces neuronal death and injury through the blockade of calcium entry into vulnerable immature neurons [126,128]. NMDA receptor activation is also thought to cause the loss of mitochondrial membrane potential and apoptosis through cAMP response element binding protein shutoff, a process that NMDA inhibition by ketamine would also prevent [124]. Finally, it is well documented that ketamine protects against ischemic injury by reducing cell swelling and preserving cellular energy following anoxia-hypoxia injury, while also increasing neuronal viability and preserving cellular morphology [129-131]. It is hypothesized that inhibition of P-CREB dephosphorylation in the infarct area by low-dose ketamine is responsible for a decrease in infarct volume, edema ratio, and neurologic deficit [132]. These are all processes known to be induced by cerebral injury such as stroke and trauma, which gives ketamine promising clinical implications [4]. Ketamine appears to be beneficial in neuroprotection following multiple types of neural injury. Studies have shown that ketamine reduces focal ischemia and hemorrhagic necrosis volumes as well as chronic cerebral hypoperfusion [4,5,133-135]. In animal studies, outcomes following incomplete cerebral ischemia were improved with ketamine administration, thought to be related to reduced plasma catecholamine levels [5]. Additionally, ketamine causes an increase in blood flow regionally and globally and reduces resistance in the cerebrovasculature [4,136,137]. Ketamine provides some measure of cardiovascular stimulation as well, which may contribute to cerebral perfusion [138]. For example, ketamine might have utility as a hemodynamic agent in traumatic brain injury (TBI) patients with hypovolemia, as it is well documented that ketamine can cause an elevation in heart rate, systolic blood pressure, and cardiac index [124]. Studies have shown that ketamine also inhibits spreading depolarizations, which cause depression of neuronal activity. These slow potential changes propagate in brains with previously existing ischemic damage to cause or increase damage, and their prevention could improve outcomes in TBI, subarachnoid hemorrhage, and malignant stroke cases [124,139,140]. In TBI specifically, which involves increased inflammation, autophagy, edema, and ischemia, ketamine produces several beneficial effects. At subanesthetic doses in animal models, it prevents IL-6 and TNF-α release, reduces deficits in dendrites, and possibly activates the mTOR signaling pathway to downregulate autophagic protein production [141]. This has been translated to clinical TBI research: in one study of 115 brain-injured patients, ketamine administration (with a median dose of 200 mg) was found to reduce the occurrence of the isoelectric spreading depolarizations that are seen in traumatized human cortex [139]. In another study of 66 patients with aneurysmal subarachnoid hemorrhage, (S)-ketamine infusion (with a mean dose of 2.8 ± 1.4 mg/kg/hour) significantly decreased the incidence of spreading depolarizations [142]. While the role of ketamine in spinal cord injury has been shown in animal models [143,144], this has not yet been translated to human research. Ketamine's neuroprotective effects have also been demonstrated clinically through functional assessment.
In human cardiac surgery patients, single-dose ketamine (0.5 mg/kg) administration at surgery induction has been associated with reduced postoperative delirium and cognitive dysfunction, results which are attributed to the reduction in systemic inflammation secondary to ketamine usage [4,5,138]. In animal models, ketamine has reduced impaired cognitive behavior caused by cell death in the cortex and hippocampus [129,145] and has attenuated functional deficits in memory and behavior caused by TBI [141]. While multiple studies support ketamine's potential for neuroprotection, some others provide inconclusive evidence. Ketamine's effects on neurologic injury following cardiopulmonary bypass have been studied in both adults and children, with no resulting evidence for either neuroprotection or neurotoxicity [126,146]. A review of neuroprotective agents administered in the perioperative period reveals that intravenous ketamine is associated with no significant difference or change in new postoperative cognitive deficits or mortality and concludes that there is currently not enough evidence to show that ketamine has a neuroprotective effect [138,146,147]. Conversely, ketamine has also been shown to cause apoptotic cell death in neurons, specifically in the frontal cerebral cortex and hippocampal region, as well as long-term deficits in cognitive processing [124,128,139,148-150]. In animal models, ketamine at anesthetic doses is observed to collapse cortical neuron growth cones [151]. Cell injury caused by ketamine seems to be dose- and time-dependent [129,150], secondary to an induced aberrant cell cycle reentry leading to apoptosis [129,152]. This window of neurotoxicity seems to be focused during early brain development and significant synaptogenesis [149], particularly toward the end of pregnancy and in the early postpartum period. In mice and rats, the window of greatest vulnerability to neurotoxic agents is the first 2-3 weeks after birth, and in humans, the time of greatest vulnerability spans from midgestation to 2-3 years of life [149]. In human forebrains, NMDA receptor expression peaks during gestational weeks 20-22, which coincides with the beginning of the brain growth spurt that lasts into the postnatal period [153]. In neonatal mice, high-dose ketamine causes severe degeneration of parietal cortical cells with resultant learning and memory deficits at 2 months [154]. Long-term neurofunctional outcomes are also impaired after three daily doses of ketamine, with increased numbers of apoptotic cells in the hippocampus and later defects in learning and memory [149,155]. Promisingly, one study has shown the potential of ketamine to counteract its own neurotoxic effects by inducing the production of the activity-dependent neuroprotective protein (ADNP); pretreatment with a subanesthetic dose of ketamine before sedation might upregulate the production of this protein and provide a neuroprotective effect in rats [151]. Other approaches to mitigating the risk of ketamine-induced neuronal apoptosis are being investigated; for example, clozapine has been shown to improve the viability of mouse neuronal stem cells that are exposed to ketamine [156]. The juxtaposition of ketamine's neurotoxic and neuroprotective effects presents an interesting conundrum. These effects seem to vary not only by acute dosage and cumulative usage over time, but also by the state of the brain (absence versus presence of noxious stimuli) during the time of ketamine introduction [149].
Some have concluded based on the existing evidence that the neuroprotective effects of ketamine are largely dependent on the use of lower doses, as higher doses can result in ketamine-induced toxicities [129,157]. Further study is clearly required, specifically in the areas of ongoing brain development in pediatric populations as well as in the time period surrounding surgery [149]. Additionally, further study is required in human models, as much of the current evidence is based on animal models. Ketamine clearly has a great amount of promise as a neuroprotective agent, although the exact parameters of its use require further elucidation.

Seizures. While benzodiazepine monotherapy is the preferred treatment for isolated seizures and there is no broadly accepted role for ketamine as a treatment for isolated seizures, ketamine has the potential to play a role in the treatment of status epilepticus (SE), in which seizure activity persists for longer than 5 minutes [158]. The utility of ketamine in treating status epilepticus may be explained by the fact that sensitivity to GABA agonists decreases with seizure duration, but this is not as profound with NMDA receptor antagonism [159], and synaptic NMDA receptors may even be upregulated in prolonged seizures [160] and therefore represent an ideal pharmacologic target. Ketamine also appears to reduce glutamate uptake and may be protective against glutamate-induced neurotoxicity in the setting of seizure [161]. Ketamine appears to work synergistically with benzodiazepines to treat SE, and dual therapy using midazolam and ketamine (4.5 mg/kg midazolam with 45 mg/kg ketamine) has been shown to treat SE more effectively than either agent alone [162]. Furthermore, ketamine (10 mg/kg) in combination with a benzodiazepine (diazepam 1 mg/kg) and either valproate (30 mg/kg) or brivaracetam (10 mg/kg) has been shown to be both more effective and less toxic than benzodiazepine monotherapy for the treatment of SE [163]. Ketamine has a promising role in the treatment of refractory status epilepticus (RSE), which is defined as seizure activity that does not respond to two antiepileptic drugs at appropriate doses, and is seen in around 30% of cases of status epilepticus [164,165]. IV ketamine appears to effectively terminate RSE (when administered as a 0.5 mg/kg IV bolus followed by a continuous infusion gradually uptitrated to 1.5 mg/kg/hour) [166], and while most studies evaluating the use of IV ketamine in status epilepticus are in adults, ketamine also appears to be both safe and effective in children with refractory status epilepticus, at a mean dose of 40 μg/kg/minute [167]. Since RSE is conventionally treated using anesthetics which require intubation, utilizing ketamine in the treatment of RSE can prevent the need for intubation and spare patients the associated risks [168]. RSE carries significant morbidity and mortality, with up to 90% of individuals with RSE suffering severe morbidity and up to 19% of individuals with SE lasting greater than 30 minutes experiencing death [169], making novel treatment options like ketamine valuable. Ketamine infusion (with a maximum dose range of 25-175 μg/kg/minute) with or without propofol has also been shown in a study of 67 patients to effectively control super-refractory status epilepticus (SRSE), in which seizures persist for at least 24 hours after anesthetics are initiated [170].
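Because the SE/RSE literature cited above mixes μg/kg/minute and mg/kg/hour, a quick unit conversion makes the regimens directly comparable. The sketch below is pure arithmetic on the quoted numbers; the labels are ours.

```python
# Convert infusion rates quoted in ug/kg/min to mg/kg/h for comparison.
# 1 ug/kg/min = 60 ug/kg/h = 0.06 mg/kg/h.

def ug_kg_min_to_mg_kg_h(rate_ug_kg_min: float) -> float:
    return rate_ug_kg_min * 60 / 1000

for label, rate in [("pediatric RSE mean [167]", 40),
                    ("SRSE max range, low [170]", 25),
                    ("SRSE max range, high [170]", 175)]:
    print(f"{label}: {rate} ug/kg/min = "
          f"{ug_kg_min_to_mg_kg_h(rate):.1f} mg/kg/h")
# e.g. the 40 ug/kg/min pediatric mean corresponds to 2.4 mg/kg/h,
# comfortably above the 1.5 mg/kg/h adult uptitration ceiling quoted above.
```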
Ketamine (either as a 1.1-4 mg/kg bolus or as a 1.0-1.1 mg/kg/hour infusion) also appears to be protective in cases with both RSE and traumatic brain injury, according to a retrospective review of a cohort of 11 patients [165]. In the context of chemical warfare, ketamine may have a role in neuroprotection and reducing neuroinflammation induced by organophosphorus nerve agents, which are known to cause seizures, status epilepticus, and brain damage. Ketamine in combination with atropine, with or without a benzodiazepine, appears to have utility in reducing the effects of organophosphorus nerve agents including soman, which could have utility in field conditions [171]. A study in guinea pigs exposed to soman showed that (S)-ketamine and atropine provided comparable protection against death and seizure-related brain damage, but at doses 2-3 times lower than racemic ketamine and atropine [172]. While a promising treatment for RSE and SRSE, ketamine has several notable drawbacks. It appears that ketamine alone may not be an effective treatment for status epilepticus that has lasted for over one hour [158]. Adverse reactions to ketamine have also been reported, including psychiatric symptoms like hallucinations and delirium, increased saliva secretion, and arrhythmias, though these are noted to be treatable and self-limited [173]. Major complications have not been reported [174]. Ketamine-induced neurotoxicity has been described, primarily using animal models [164]. Cerebellar syndrome including cerebellar atrophy has been reported with high-dose ketamine [175]. There are limited prospective data on the treatment of SE and RSE using ketamine, and this topic warrants further research [176,177]. While a racemic mixture of (S)- and (R)-ketamine is typically used, it has been shown that (S)-ketamine is more rapidly eliminated, leading to faster recovery of psychomotor faculties [123]. Whether (S)-ketamine is superior to racemic ketamine in the treatment of seizures warrants further study. While the benefits of ketamine in treating RSE and SRSE are promising, the use of ketamine has not been widely adopted, perhaps because ketamine has not been integrated into management algorithms [178]. Therefore, integration of ketamine into treatment protocols warrants further consideration by neurologic societies and guideline creators. Furthermore, other novel uses of ketamine for seizure disorders are currently being investigated; for example, a case was recently reported in which low-dose IV ketamine was used in an epileptic patient with postoperative worsening of his seizure burden, with successful improvement in seizures and avoidance of oversedation or intubation [179]. However, it has also recently been called into question whether ketamine may induce seizure in some cases, with one recent case of new-onset seizure being reported following intramuscular ketamine administration in a pediatric patient, which certainly warrants further consideration as well [180]. There are clear benefits to early administration of ketamine for SE, including limiting the adverse events from polypharmacy and avoiding intubation [178], and earlier administration of ketamine for SE and RSE has been advocated [174,178,181]. Furthermore, early administration of ketamine may prevent neuronal necrosis, making it a useful medication to use early on in SE [182].

Ketamine and Alcohol and Substance Use Disorders. There is ongoing research surrounding the role of ketamine in treating alcohol and substance use disorders.
It is thought that in addition to modulating glutamatergic neurotransmission, ketamine may mediate downstream effects on neuronal connectivity and plasticity through brain-derived neurotrophic factor and other factors to improve dopamine signaling, thereby treating drug-related synaptic deficits [183]. In a study of 111 alcohol-dependent patients, relapse rates were significantly lower at one year in patients who received intramuscular ketamine [184], though this sentinel trial lacked both randomization and blinding [185]. A study of 58 opioid-dependent patients found that subanesthetic ketamine infusion (0.5 mg/kg/hour) significantly improves immediate and short-term (48-hour) withdrawal symptoms in patients who undergo precipitated opioid withdrawal [186]. In a study of 55 cocaine-dependent patients, patients who received a single 40-minute ketamine infusion (0.5 mg/kg) in conjunction with a mindfulness-based relapse prevention program had a significantly lower relapse rate than patients who received the same mindfulness program in conjunction with a midazolam infusion [187]. Based on these promising results, it is possible that ketamine may fill a major gap in addiction treatment, as there are currently no FDA-approved medications for the treatment of cocaine use disorder [187]. Ketamine has also been shown to treat heroin dependence in a dose-dependent fashion, with one study of 70 detoxified heroin-dependent patients demonstrating that patients who received higher doses of intramuscular ketamine (2.0 mg/kg) had a significantly higher rate of abstinence at two years [188]. However, the use of ketamine in alcohol and substance use disorders is complicated by its psychotogenic, dissociative properties and conventional IV administration route, which could pose particular challenges in patients with addiction or mental illnesses [189].

Conclusions

Ketamine has emerged as a promising pharmacologic agent with diverse indications, but controversy surrounds it as a result of its toxicities, psychedelic side effects, and abuse potential. As an antidepressant, ketamine has the benefit of being significantly faster acting than conventional agents while also having antisuicidal properties, though its long-term use is limited by toxicity and the impracticality of IV infusions. In the treatment of migraines, ketamine appears to effectively reduce acute headache symptoms, while not modulating the disease state of chronic migraines. Ketamine appears to be neuroprotective and may play a role in the management of TBI, subarachnoid hemorrhage, and strokes. In the treatment of pain, ketamine appears to reduce the analgesic requirement for treatment of acute pain and also has a clear role in the management of CRPS, though again its use is limited by its side-effect profile and toxicity, including neurotoxicity and memory impairment, when used long term. Ketamine may also play a role in drug detoxification and alcohol and drug relapse prevention. The role of ketamine in the management of seizures including SE and RSE is also promising. In general, ketamine has multiple nonanesthetic uses that are drawing attention, but because many studies evaluating its utility have conflicting results and sample sizes are typically small, further research studies including large-scale prospective studies are required to elucidate its role in the field of medicine.

Disclosure

Abby Pribish and Nicole Wood are co-first authors.

Conflicts of Interest

The authors declare that they have no conflicts of interest.
Antioxidant, Antimicrobial and Phytochemical Variations in Thirteen Moringa oleifera Lam. Cultivars

A study was undertaken to assess variation in the antioxidant, antimicrobial and phytochemical properties of thirteen Moringa oleifera cultivars obtained from different locations across the globe. Standard antioxidant methods, including DPPH scavenging, ferric reducing power (FRAP) and the β-carotene-linoleic acid model, were used to evaluate the activity. Variation in the antioxidant activity was observed, with TOT4951 from Thailand being the most active, with activity five times higher than that of ascorbic acid (reference compound). A different trend was observed for the activity in the FRAP and β-carotene-linoleic acid assays. Antimicrobial activity was tested against Gram-positive (Staphylococcus aureus) and Gram-negative (Klebsiella pneumoniae) strains using the microdilution method. Acetone extracts of all cultivars exhibited good antibacterial activity against K. pneumoniae (MIC values of 0.78 mg/mL). The remaining extracts exhibited weak activity against the two microorganisms. For the antifungal activity, all the extracts exhibited low activity. Variations were observed in the total phenolic and flavonoid contents. Cultivars TOT5169 (Thailand) and SH (South Africa) exhibited the highest amounts of total phenolic compounds, while TOT5028 (Thailand) exhibited the lowest amounts, about five times lower than the highest. The information offers an understanding of the variations between cultivars from different geographical locations and is important in the search for antioxidant supplementation and anti-ageing products.

Introduction

Free radical damage from reactive oxygen species (ROS) has been linked to the progressive decrease in normal function and accumulation of macromolecular damage that gradually leads to ageing [1]. The ageing process is usually accompanied by several human pathologies including diabetes, cardiovascular disorders, cancer and neurodegenerative diseases, which can also be aggravated by exposure to physiological stressors such as ROS [2]. However, variation in ageing and the onset of these diseases/disorders suggests a high degree of variability in tolerance to ROS and biological ageing in humans. At manageable concentrations, ROS exert beneficial effects on the body; however, at high levels, ROS lead to oxidative stress [1]. Protection against ROS damage depends on the expression of the antioxidant systems within the body or external supplementation of antioxidants [3]. During infections, inflammation and several other pathologies, the body defends itself from further injury by use of ROS such as the superoxide anion, hydroxyl radicals, nitric oxide and hydrogen peroxide from normal cell redox processes [4,5]. Regulation of ROS defense can be reinforced by supplementation with plant-derived natural extracts and compounds, such as resveratrol; by exogenous antioxidant sources enriched with flavonoids and vitamin C from the diet; and by epidermal antioxidant activity from enriched cosmetics [6]. Several medicinal plants have been reported to act as sources of exogenous antioxidants. Amongst them, Moringa oleifera is one of the most widely distributed species of the monogeneric family Moringaceae [7]. The tree is characterized as a fast-growing, drought-tolerant type, native to north-western India, and is widely cultivated in tropical and subtropical areas where its young seed pods and leaves are regarded as a nutritional powerhouse [8].
Several compounds have been isolated from the leaves of Moringa oleifera, including niazirin, niazirinin, 4-[4'-O-acetyl-α-L-rhamnosyloxy)benzyl]isothiocyanate, and niaziminin A and B [9,10]. Moringa has long been recognized in traditional medicine worldwide as having value both as a preventative and a treatment agent for several health conditions, including the treatment of inflammation, infectious diseases, and cardiovascular, gastrointestinal, haematological and hepatorenal disorders [11]. Several scientific articles have been published describing the antioxidant properties of Moringa, which can translate to its use as an anti-ageing herb [12]. However, much of the evidence remains anecdotal, as there has been little actual scientific research done to support these claims. This study was aimed at investigating the variations in antioxidant activity, using the DPPH (2,2-diphenyl-1-picrylhydrazyl) radical scavenging assay, ferric-reducing power and the ability to delay or halt the bleaching of β-carotene-linoleic acid in a model system, as well as the antibacterial properties, between thirteen Moringa oleifera cultivars introduced from four different geographical locations of the world. The variations in the phytoconstituents of the cultivars were also investigated using colourimetric methods.

Results and Discussion

A comparative study to determine the variation in the antioxidant, antimicrobial and phytochemical properties of extracts of thirteen Moringa oleifera cultivars introduced from The World Vegetable Centre (AVRDC) (Thailand), Taiwan, South Africa and the United States of America was carried out. There were significant differences in the antioxidant and antimicrobial activities and phytochemical properties between the cultivars. A correlation analysis between the phytochemical content (flavonoids and total phenolics) and antioxidant activities (DPPH and β-carotene-linoleic acid) revealed insignificant but mostly weak negative correlations (p > 0.05), with coefficient (r) values between −0.464 and 0.036. However, although insignificant, the negative correlations, in the context of antioxidant activities, indicate some appreciable degree of correlation between high phenolic levels and good antioxidant activities.

DPPH Radical Scavenging Activity

The EC50 values for the DPPH radical scavenging potentials of the thirteen cultivars are shown in Table 1. The widely used parameter to measure antioxidant activity is the concentration of a test sample needed to decrease the initial DPPH concentration by 50%, denoted EC50 [13]. In this study, EC50 values less than or equal to 70.12 µg/mL [that of ascorbic acid (reference/positive control)] were considered good activity. The radical scavenging activities of the cultivars against DPPH radicals, according to the respective EC50 values (Table 1), were in the following descending order: TOT4951 > TOT5028 > TOT7266 > TOT4880 > TOT4100 > TOT4893 > TOT4977 > Limpopo > TOT5330 > TOT5077 > SH > CHM. The three most active cultivars were all from the AVRDC, Thailand, and the last two were the Silver Hill, South Africa cultivars. All the cultivar extracts showed a DPPH radical scavenging ability higher than that of the reference compound ascorbic acid (vitamin C). The most active DPPH radical scavenger, TOT4951, exhibited a scavenging ability five times that of ascorbic acid. The least active DPPH radical scavenger, CHM, still had twice the scavenging ability of ascorbic acid.
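As a worked illustration of how EC50 values such as those in Table 1 are obtained, the sketch below interpolates the concentration giving 50% radical scavenging from a concentration-response series. The data points are hypothetical, not values from this study.

```python
# A minimal sketch of reading an EC50 off a dose-response series by linear
# interpolation between the two concentrations bracketing 50% RSA.
import numpy as np

def ec50(concentrations_ug_ml, rsa_percent):
    """Interpolate the concentration giving 50% radical scavenging."""
    c = np.asarray(concentrations_ug_ml, dtype=float)
    r = np.asarray(rsa_percent, dtype=float)
    order = np.argsort(r)              # np.interp needs an increasing x-grid
    return float(np.interp(50.0, r[order], c[order]))

conc = [65, 260, 520, 1040, 6250]      # extract concentration, ug/mL
rsa  = [12.0, 31.0, 48.0, 66.0, 92.0]  # % scavenging (hypothetical data)
print(f"EC50 ~ {ec50(conc, rsa):.0f} ug/mL")
```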
These results suggest that all the cultivar extracts tested in this study serve as better antioxidants than ascorbic acid. The experimental data reveal that all the cultivar extracts are likely to have the effect of scavenging free radicals and thus can be incorporated into cosmetics for healthy skin and/or anti-ageing products.

β-Carotene-Linoleic Acid Model System (CLAMS) Activity

The results of the delay in β-carotene bleaching, recorded as antioxidant activity (ANT %) and oxidation rate ratio (ORR), calculated on the basis of the rate of β-carotene bleaching at time = 60 min, are shown in Table 1. The order of antioxidant activity with respect to the protection of β-carotene against bleaching by the extracts with ORR values ≤ 0.05 was as follows: TOT4977 = TOT5028 = TOT4893 > SH. Lower ORR values, just like EC50 values, denote better antioxidant potential. Most of the cultivar extracts performed worse than ascorbic acid in the prevention of β-carotene bleaching. Several plant secondary metabolites, including phenolics and flavonoids, are known to possess the ability to protect certain compounds like β-carotene against oxidation [14].

Ferric-Reducing Power Assay Activity

The abilities of the different cultivars at varying concentrations to reduce Fe3+ complexes in solution are presented in Figure 1 (the figure was split into panels to allow visibility of the individual lines representing different cultivars). Strong antioxidants (reductants) reduce the Fe3+ complex to various shades of green and the blue ferrous form, which is characterised by higher absorbance values at 630 nm after the assay [14]. Reducing activity increased with the increase in the concentration of all the cultivar extracts (as expected). The reducing activity of bioactive extracts is directly associated with antioxidant activity, as the reduction of the Fe3+ complex by the bioactive compounds is brought about by the donation of electrons. This can be translated to the reaction with ROS, thereby converting them to more stable products and terminating radical chain reactions [15]. There were significant differences in the reducing power, with cultivar TOT4977 from the AVRDC, Thailand, performing as the weakest reducing agent. All the Moringa cultivar extracts exhibited lower activities compared to butylated hydroxytoluene (BHT), used as a reference compound (Figure 1B).

Antimicrobial Activity

The antibacterial and antifungal minimum inhibitory concentration (MIC) values for the thirteen Moringa cultivar extracts are presented in Table 2. The cultivar extracts with MIC values <1 mg/mL were considered as having high antibacterial activity [16] and are highlighted in bold. The extracts showed a broad spectrum of activities against Klebsiella pneumoniae versus Candida albicans. Of particular interest was the activity of the acetone extracts of all cultivars against K. pneumoniae (MIC values of 0.78 mg/mL each). Similar activity was observed for the ethanol extracts of the TOT4880, TOT5077 and CHM cultivars against K. pneumoniae. On the other hand, TOT5077 and TOT4951 ethanol extracts exhibited similar activity against Staphylococcus aureus. The rest of the extracts exhibited moderate to weak activity against the two microorganisms. The acetone extracts of the Limpopo, TOT4977 and TOT4893 cultivars performed better against S. aureus than the acetone extracts of the other cultivars, although their MIC values did not reach the <1 mg/mL mark. All the extracts exhibited low to moderate activity against C. albicans.
As observed for the S. aureus results, the acetone extract of the Limpopo cultivar performed better than the acetone extracts of the other cultivars, although the MIC value did not reach the <1 mg/mL target level. Klebsiella pneumoniae is a Gram-negative bacterium found in the normal flora of the mouth, skin, and intestines. Apart from pneumonia, K. pneumoniae can also cause infections in the skin, urinary tract, lower biliary tract and open-cut/surgical wounds. The bacterium has been reported to be resistant to multiple antibiotics because it belongs to the extended-spectrum beta-lactamase (ESBL)-producing strains. ESBL-producing strains have persistently shown multi-resistance to many broad-acting antibiotics such as aminoglycosides, fluoroquinolones, tetracyclines, chloramphenicol, and trimethoprim/sulfamethoxazole [17]. The fight against ESBL-producing strains such as K. pneumoniae is emerging as an important challenge in both synthetic and natural product development [18]. The cosmetic industry cannot be left out, because the production of topical skin care products with extracts that are active against ESBL-producing strains such as K. pneumoniae could offer a stepping stone in the battle. The active extracts of Moringa presented in Table 2 could be incorporated into cosmetics for that purpose. Staphylococcus aureus, a member of the Firmicutes, is an important Gram-positive coccus that causes disease in humans [19]. It is frequently found in the human respiratory tract and on the skin. Although S. aureus is not always pathogenic, it is a common cause of skin infections such as pimples, boils, cellulitis, folliculitis, carbuncles, scalded skin syndrome and abscesses [20]. Cosmetic products with antibacterial activity, especially against S. aureus, are useful as anti-ageing agents in that they help maintain healthy skin. They could also offer solutions to the recent emergence of antibiotic-resistant strains called methicillin-resistant Staphylococcus aureus (MRSA), which are fast becoming a global problem [21].

Total Phenolics and Flavonoid Content

The phytochemical analysis carried out in this study included total phenolics and flavonoid content, and the results are presented in Figures 2 and 3. Different levels of phenolic compounds were detected in the different cultivars, and the same trend was noticed in the flavonoid content. Cultivars TOT5169 and SH exhibited the highest amounts of total phenolic compounds, while TOT5028 exhibited the lowest amounts, at least five times lower than the highest. Different levels of expression of plant secondary metabolites like phenolic compounds suggest differences in the ability of different cultivars to establish themselves in new environments. A change in environment may exert stress on the plants, and this may result in the expression of more plant secondary metabolites. However, bioactivity cannot always be matched with the amount of phenolic compounds. For example, TOT5169 and SH exhibited the highest phenolic content amongst the tested cultivars (Figure 2), but the two cultivars showed moderate antioxidant activity (Table 1), while TOT5028 exhibited the lowest amounts of phenolics but showed better antioxidant activity than the former two. Plants with high phenolic composition, including tannins, are regularly used as a basis for the production of valuable synthetic compounds such as pharmaceuticals, cosmetics, or more recently, nutraceuticals [22].
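For readers unfamiliar with how total phenolic figures such as those in Figure 2 are derived, the sketch below back-calculates gallic acid equivalents (GAE) from a standard calibration curve, as is conventional for the Folin-Ciocalteu assay described in the methods. All numbers are hypothetical, not values from this study.

```python
# Hypothetical gallic acid calibration for a Folin-Ciocalteu assay:
# fit a straight line to the standards, then invert it for samples.
import numpy as np

gallic_ug_ml = np.array([0, 25, 50, 100, 200])           # standard series
abs_std      = np.array([0.02, 0.15, 0.28, 0.55, 1.08])  # hypothetical A765

slope, intercept = np.polyfit(gallic_ug_ml, abs_std, 1)  # linear fit

def gae(absorbance: float) -> float:
    """Back-calculate phenolic content (ug GAE/mL) from sample absorbance."""
    return (absorbance - intercept) / slope

print(f"sample at A = 0.42 -> {gae(0.42):.0f} ug GAE/mL")
```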
At lower concentrations, phytochemical compounds have beneficial effects such as antioxidant effects. Phenolic levels of up to 4% of dry matter have been shown to contribute positively in diets. However, as stated above, the beneficial effects still depend on the nature and type of phenolic compounds present. In some instances, depending on the chemistry of the phenolic compounds, phytochemicals at higher concentrations (>4% in dry matter) may have negative physiological effects such as neurological problems, reproductive failure, goiter and gangrene, and in lower animals may lead to death [23]. In cosmetics and anti-ageing products, phenolics enhance rapid skin and tissue regeneration and have demonstrated antiseptic (antibacterial and antifungal) effects [24]. In skin burns and wound healing, phenolic-protein complexes form a film which limits fluid loss and forms a physical barrier over damaged tissue, insulating it from bacterial infection or chemical damage [25,26].

General

2,2-Diphenyl-1-picrylhydrazyl (DPPH), β-carotene and neomycin were obtained from Sigma-Aldrich (Sigma Chemical Co., Steinheim, Germany); butylated hydroxytoluene (BHT) and potassium ferricyanide from BDH Chemicals Ltd (Poole, England, UK); trichloroacetic acid, ascorbic acid, polyoxyethylene sorbitan monolaurate (Tween 20), ferric chloride (FeCl3) and methanol from Merck KGaA (Darmstadt, Germany). All other chemicals used were obtained locally and were of analytical grade. Thirteen Moringa oleifera Lam. cultivars collected from different geographical locations in the world were cultivated at the Agricultural Research Council (ARC) experimental farm, Roodeplaat, Pretoria. The trial layout was a randomized block design, with all the cultivars receiving the same management practices of no fertilizers and watering three times a week. Eight cultivars were introduced from the World Vegetable Centre (AVRDC) in Thailand (TOT4893, TOT4951, TOT4977, TOT5028, TOT5077, TOT5169, TOT5330 and TOT7266), one cultivar from Taiwan (TOT4100), one cultivar from the USA (TOT4880) and three cultivars from South Africa [Silver Hill (SH), CHM and Limpopo].

Sample Preparation

Fresh leaf samples from each of the thirteen Moringa oleifera cultivars were separately oven dried at 50 °C for 48 h. Dried plant materials were ground into powders and extracted non-sequentially (1:20 w/v) with 50% aqueous methanol, acetone, 70% aqueous ethanol (EtOH) and water in an ultrasonic bath for 1 h. The extracts were filtered under vacuum through Whatman No. 1 filter paper. The extracts in 50% aqueous methanol, acetone and EtOH were concentrated under reduced pressure using a rotary evaporator at 30 °C and completely dried under a stream of air, while the water extracts were freeze-dried. Fresh extracts in 50% aqueous methanol were used in the phytochemical analysis and antioxidant assays, while the acetone, EtOH and water extracts were used in the antimicrobial assays (resuspended in 70% ethanol).

DPPH Radical Scavenging Activity

The DPPH radical scavenging assay was done as described by Karioti et al. [27] with modifications. Extracts of each cultivar (15 µL) at varying concentrations (0.065, 0.26, 0.52, 1.04, 6.25, 12.5, 25 and 50 mg/mL), in triplicate, were diluted in absolute methanol (735 µL) and added to freshly prepared DPPH solution (750 µL, 50 µM in methanol) to give a final volume of 1.5 mL in the reaction mixture. The above steps were performed under dimmed light, and the mixtures were incubated at room temperature for 30 min in the dark.
Absorbance of the reaction mixtures was read at 517 nm using a UV-vis spectrophotometer (Varian Cary 50, Varian Australia Pvt Ltd, Sydney, Australia), with methanol as the blank solution. A standard antioxidant, ascorbic acid, at varying concentrations (5, 10, 20, 40 and 80 µM) was used as a positive control. A solution with the same chemicals, but with absolute methanol in place of extracts or standard antioxidants, served as the negative control. The assay was repeated twice. The free radical scavenging activity (RSA), as determined by the decolouration of the DPPH solution, was calculated according to the formula:

RSA % = [1 − (Abs517 nm Sample / Abs517 nm Neg Control)] × 100

where Abs517 Sample is the absorbance of the reaction mixture containing the resuspended cultivar extract or positive control solution, and Abs517 Neg Control is the absorbance of the negative control. The EC50 (effective concentration) values, representing the amount of extract required to decrease the absorbance of DPPH by 50%, were calculated from the percentage radical scavenging activity.

Ferric-Reducing Power Assay

The ferric reducing power of the cultivar extracts was determined based on the method by Lim et al. [28] with modifications. Extracts of each resuspended cultivar (50 µL) at 6.25 mg/mL and the positive control (BHT dissolved in methanol) were added to a 96-well microtiter plate in triplicate and two-fold serially diluted down the wells of the plate. To each well, 40 µL potassium phosphate buffer (0.2 M, pH 7.2) and 40 µL potassium ferricyanide (1% in phosphate buffer, w/v) were added. The microtiter plate was covered with foil and incubated at 50 °C for 20 min. After the incubation period, 40 µL trichloroacetic acid (10% in phosphate buffer, w/v), 150 µL distilled water and 50 µL FeCl3 (0.1% in phosphate buffer, w/v) were added. The microtiter plate was re-covered with foil and incubated at room temperature for 30 min. The ferric-reducing power assay involves the reduction of the Fe3+/ferricyanide complex to the ferrous (Fe2+) form. Absorbance of the formed Fe2+ was measured at 630 nm using a microtitre plate reader (Opsys MR, Dynex Technologies Inc., Palm City, FL, USA). The ferric-reducing power of the cultivar extracts and BHT was expressed graphically by plotting absorbance against concentration. The assay was repeated twice.

β-Carotene-Linoleic Acid Model System (CLAMS)

The delay or inhibition of β-carotene and linoleic acid oxidation was measured according to the method described by Amarowicz et al. [29] with modifications. β-Carotene (10 mg) was dissolved in 10 mL chloroform in a brown Schott bottle. The excess chloroform was evaporated under vacuum, leaving a thin film of β-carotene near to dryness. Linoleic acid (200 µL) and Tween 20 (200 µL) were immediately added to the thin film of β-carotene and mixed with aerated distilled water (497.8 mL), giving a final β-carotene concentration of 20 µg/mL. The mixture was further saturated with oxygen by vigorous agitation to form an orange-coloured emulsion. The emulsion (4.8 mL) was dispensed into test tubes to which 200 µL of the resuspended cultivar extracts at 6.25 mg/mL or butylated hydroxytoluene (BHT) (6.25 mg/mL) were added, giving a final concentration of 250 µg/mL in the reaction mixtures. Absorbance for each reaction was measured immediately (t = 0) at 470 nm; the mixtures were then incubated at 50 °C, with the absorbance of each reaction mixture measured every 30 min for 180 min.
Tween 20 solution was used to blank the spectrophotometer. The negative control consisted of 50% methanol in place of the sample. The rate of β-carotene bleaching was calculated using the following formula (reconstructed here from the variable definitions, following the standard form of this assay):

Rate = ln(A(t=0) / A(t)) × 1/t

where A(t=0) is the absorbance of the emulsion at 0 min and A(t) is the absorbance at time t (90 min; any point on the curve can be used for the calculation). The calculated average rates were used to determine the antioxidant activity (ANT) of the respective herbal preparations, expressed as percent inhibition of the rate of β-carotene bleaching, using the formula:

ANT (%) = [(R Control − R Sample) / R Control] × 100

where R Control and R Sample represent the respective average β-carotene bleaching rates for the control and the cultivar extract. Antioxidant activity was further expressed as the oxidation rate ratio (ORR) based on the equation:

ORR = R Sample / R Control

Antibacterial Microdilution Assay

Minimum inhibitory concentration (MIC) values for the antibacterial activity of the cultivar extracts were determined using the microdilution bioassay in 96-well microtitre plates (Greiner Bio-one GmbH, Frickenhausen, Germany) [30], except that 100 µL of each resuspended extract (25 mg/mL) in EtOH was two-fold serially diluted with sterile distilled water, in duplicate, down the microtitre plate for each of the two bacteria used. Water, acetone and EtOH were included as negative and solvent controls. Neomycin was used as a positive control. The screening was done in triplicate and repeated twice for each extract. Two bacterial strains were used: one Gram-positive (Staphylococcus aureus ATCC 12600) and one Gram-negative (Klebsiella pneumoniae ATCC 13883).

Antifungal Microdilution Bioassay

The antifungal activity (MIC) of the cultivar extracts against Candida albicans (ATCC 10231), a diploid fungus which exists in the form of a yeast, was evaluated using the microdilution assay [30] modified for antifungal testing [31], except that 100 µL of each resuspended (in EtOH) plant extract (50 mg/mL) was two-fold serially diluted with sterile distilled water, in duplicate, down the microtitre plate. Water, acetone and EtOH were included as negative and solvent controls. Amphotericin B was used as a positive control. The screening was done in triplicate and repeated twice for each extract.

Determination of Total Phenolics and Flavonoids

The amounts of total phenolics in the plant samples were determined using the Folin-Ciocalteu assay for total phenolics as described by Makkar [32] and modified by Ndhlala et al. [33]. Gallic acid was used as a standard. Flavonoids were quantified using the vanillin-HCl assay as described by Hagerman [34] with modifications [33]. Catechin was used as a standard.

Statistical Analysis

The data were subjected to one-way analysis of variance (ANOVA) using the IBM Statistical Package for the Social Sciences (SPSS) v21.0 for Windows (Chicago, IL, USA). Significantly different means were separated using Duncan's multiple range tests (p < 0.05).

Conclusions

A comparative antioxidant, antimicrobial and phytochemical analysis of thirteen Moringa oleifera cultivars introduced from the World Vegetable Centre (AVRDC) (Thailand), Taiwan, South Africa and the United States of America was carried out. There were variations in the observed activity amongst the different cultivars in both the antioxidant assays and the antimicrobial assays. Variations were also observed in the phytochemical levels of the different cultivars. However, there was no direct correlation between the bioactivity and the levels of total phenolics and/or flavonoids.
High DPPH scavenging activity, ferric-reducing ability and antimicrobial activity were observed in most of the cultivars. High phenolic expression in some cultivars may suggest different adaptation abilities of the cultivars to different environments. The data obtained here offer an understanding of the variations between cultivars from different parts of the world. This information is important in the search for health care and anti-ageing products. It is desirable to carry out further studies to determine the effects of mixing some cultivars with other plant species used in cosmetics, such as Aloe, and to determine whether there is any improvement in bioactivity due to synergistic actions. It is also important to carry out safety studies to determine the mutagenic and cytotoxic properties of these cultivars, as well as to determine the stability and bioavailability of the natural products when used in cosmetics.
Electronic transport in locally gated graphene nanoconstrictions

We have developed the combination of an etching and deposition technique that enables the fabrication of locally gated graphene nanostructures of arbitrary design. Employing this method, we have fabricated graphene nanoconstrictions with locally tunable transmission and characterized their electronic properties. An order of magnitude enhanced gate efficiency is achieved by adopting the local gate geometry with a thin dielectric gate oxide. A complete turn-off of the device is demonstrated as a function of the local gate voltage. Such strong suppression of the device conductance was found to be due to both quantum confinement and Coulomb blockade effects in the constricted graphene nanostructures.

Graphene [1-3], a recently discovered single sheet of graphite, stands out as an exceptional candidate for nanoscale electronic applications. Being a nearly perfect two-dimensional electron gas, it has mobilities as high as 20,000 cm²/V·s, which give rise to ballistic transport on the 100 nm scale even at room temperature [4]. Furthermore, the unique "quasi-relativistic" carrier dynamics in graphene provides new transport phenomena ready to be explored for novel device applications. Many of these phenomena require lithographically patterned, locally gated graphene nanostructures; examples range from Klein tunneling [5] and the electron Veselago lens [6] to spin qubits [7]. From an application point of view, these new phenomena promise novel devices with strongly enhanced functionalities and novel operating principles. Patterning graphene into nanostructures has already been demonstrated by a few groups [1,8-10], where interesting transport phenomena in confined graphene were observed. Other groups have also recently demonstrated the fabrication of local gate controlled graphene samples by selecting graphene flakes of random shape obtained by micromechanical extraction [11-13]. In this work we present a simple process which combines both the patterning of graphene sheets into any desired planar nanostructure and the local gating of the latter. Besides the abovementioned phenomena, this approach is also of interest for the fabrication of large arrays of identical graphene devices from wafer grown epitaxial graphene [14], where a global back-gate is absent and local gating offers the only way to modulate the carrier density. Our sample fabrication process is summarized in Fig. 1. First, we deposit graphene flakes on top of an oxidized Si substrate employing mechanical exfoliation [1]. Subsequently, the location of selected flakes is determined with respect to predefined optical markers. Next, electron beam lithography (EBL) is used to pattern electric contacts to the flakes. The electron beam evaporation of Cr/Au (5/30 nm) is followed by lift-off in warm acetone (Fig. 1(a)). We then spin a thin layer (20 nm) of hydrogen silsesquioxane (HSQ) solution (1:3 HSQ:MIBK) [9,10]. The latter is a high resolution negative tone electron-beam resist ideal for the reproducible patterning of graphene nanostructures down to 10 nm. After resist development, a short oxygen plasma step (50 W, 200 mTorr; 6 s is enough to etch through ~10 graphene layers) is used to transfer the HSQ pattern into the graphene sheet (Figs. 1(b)-(c)). Here HSQ acts as a protective mask, such that only the exposed graphene is etched. Without further processing steps, we deposit 15 nm of the high-k dielectric hafnium oxide by atomic layer deposition (ALD) directly on the samples [15].
Note that our approach does not require a noncovalent functionalization layer 13 . Here the HSQ etch mask remains on top of the graphene device and acts simultaneously as an adhesion layer for the ALD-grown dielectric. Finally, we define the local metal gates (Cr/Au, 5/30 nm) using EBL (Fig. 1(d)). Thus our devices consist of a lithographically patterned graphene nanostructure sandwiched between two dielectrics, a global back gate (the highly doped Si substrate) and one or more local gates (Fig. 1(e)). This gate configuration allows us to tune the global and local carrier densities in graphene devices via the back gate voltage (V BG ) and the local gate voltage (V LG ), respectively. The conductance, G, of our devices is measured at 1.7 K, as a function of V BG and V LG , using a lock-in technique with an ac excitation voltage of 100 µV. The number of graphene layers in our devices is determined by Raman spectroscopy 16 and/or quantum Hall effect measurements 2,3 .

Combining nanometer-scale patterning with local gate control allows us to fabricate different graphene quantum devices where the charge density varies locally. Fig. 2 shows examples of such fabricated samples, ranging from graphene nanorings (lower inset in Fig. 2(a)) and top-gated graphene Hall bars (top inset in Fig. 2(a)) to locally gated graphene nanoconstrictions (Figs. 2(a) and (b)) and ribbons (Figs. 2(a) and (c)). A typical graphene nanoconstriction is shown in Fig. 2(b). Here the width of a graphene ribbon is reduced from about 1 µm to a 30 nm wide constriction with a channel length of about 100 nm.

The conductance of bulk graphene samples remains finite at low temperatures even at zero carrier density 2,3 . This is highly undesirable for electronic devices that require an OFF state (i.e., a zero-conductance state), such as semiconductor transistors or quantum dots. One can, however, overcome this drawback by engineering graphene nanoconstrictions. On one hand, due to quantum confinement in the transverse direction, graphene develops a band gap in the constriction region. Alternatively, it has been suggested that small irregularities in the constriction geometry can lead to the localization of charge in small islands, which results in a suppression of conductance due to Coulomb blockade 17 . We note that in a continuous graphene nanostructure, the latter effect cannot occur without the formation of tunneling barriers, for which a band gap due to confinement is still necessary. Therefore, in realistic samples, we expect both phenomena to take place. A typical example of such a locally gated graphene nanoconstriction is shown in Fig. 2(d). By tuning the local gate on top of this nanoconstriction, we can turn off the device completely (G < 10 -10 S), while the graphene 'electrodes' that lead to the constriction remain highly conductive (G > e 2 /h). Figure 2(b) shows a conductance map of the same device as a function of both V BG and V LG . The most notable feature is a diagonally oriented insulating region, representing the (V BG , V LG ) range where the device is in the OFF state. Outside this region the conductance increases rapidly, and the device turns ON. Note that, compared to previous nanoribbon devices 10 , the local gate is an order of magnitude more effective in the ON-OFF modulation of the conductance. This is mainly due to the increased capacitive coupling, which is a consequence of the reduced dielectric thickness under the local gate.
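As a rough plausibility check on that last point, the parallel-plate estimate below compares the per-area capacitance of the local-gate stack with that of a typical global back gate. This is our own sketch, not a calculation from the paper: the back-oxide thickness (300 nm SiO2) and the relative permittivities of HSQ (~3) and ALD HfO2 (~16) are assumed typical values rather than quantities quoted in the text.

```python
# Parallel-plate estimate of local-gate vs. back-gate capacitive coupling.
# All permittivities and the 300 nm back-oxide thickness are assumed values.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def series_capacitance_per_area(layers):
    """Per-area capacitance (F/m^2) of dielectric layers stacked in series.

    layers: iterable of (thickness_m, relative_permittivity) tuples.
    """
    return 1.0 / sum(t / (EPS0 * eps_r) for t, eps_r in layers)

# Local gate: 20 nm HSQ (eps_r ~ 3, assumed) under 15 nm ALD HfO2 (eps_r ~ 16, assumed)
c_local = series_capacitance_per_area([(20e-9, 3.0), (15e-9, 16.0)])

# Back gate: 300 nm thermal SiO2 (eps_r = 3.9), an assumed typical substrate oxide
c_back = series_capacitance_per_area([(300e-9, 3.9)])

print(f"C_local = {c_local * 1e3:.2f} mF/m^2")   # ~1.2 mF/m^2
print(f"C_back  = {c_back * 1e3:.3f} mF/m^2")    # ~0.115 mF/m^2
print(f"coupling ratio C_local/C_back ~ {c_local / c_back:.0f}")  # ~10
```

Under these assumptions the local gate couples roughly ten times more strongly than the back gate, consistent with the order-of-magnitude improvement in ON-OFF modulation reported above.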
In addition, the fabrication of locally gated constrictions will make it possible to realize tunable tunnel barriers for the study of graphene quantum dots 10 . In fact, in all our nanoconstriction devices we have observed reproducible sharp peaks in the conductance as V LG approaches the OFF regime (see Fig. 3(a)), indicating the presence of charging effects. To gain insight into the relative contributions of quantum confinement and Coulomb blockade to the suppression of G, we have measured the stability diagram (G vs. (V SD , V LG )) for our nanoconstriction device (see Fig. 3(c)). The conductance plot shows a large central region of strongly suppressed conductance, with a series of irregular, diamond-shaped, weakly conducting regions superimposed. These irregular "Coulomb diamonds" are characteristic of multiple quantum dots in series 18 , which are likely to form during the etching process (see Fig. 3(d)). Precise values of the charging energy and the band gap are difficult to obtain without detailed knowledge of the dot configurations. However, we can roughly estimate the relative importance of the charging effects from the width of the second-largest "diamond" with respect to the largest one. We estimate that the contribution of Coulomb blockade to the suppression of conductance, i.e., the ratio of charging energy to confinement-induced band gap, is of order ~50% for this particular device. We note that similar features were observed in graphene nanoribbon devices controlled by a back gate only, but were analyzed only in terms of band gap formation due to confinement 10 . A more quantitative study of these two contributions will need devices where a single quantum dot is realized, for example by fabricating two smooth constrictions in series 4 .

In summary, we have demonstrated a simple approach for the fabrication and local gating of lithographically patterned graphene sheets in any planar geometry. This allows the design of graphene devices for fast exploration of novel phenomena where a local variation of the carrier density is the key to device operation. As an example we have studied graphene nanoconstrictions, where the transmission can be tuned by a local gate. The measurements reveal the importance of both quantum confinement and Coulomb blockade effects in the suppression of the conductance.

This work is supported by the ONR (N000150610138), FENA, NSF CAREER (DMR-0349232) and NSEC (CHE-0117752), and the New York State Office of Science, Technology, and Academic Research (NYSTAR).
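As a closing aside to the estimates above, the confinement gap for the 30 nm constriction can be sketched with the common rough approximation E_g ≈ ħ v_F π / W. This is our own illustration: the exact prefactor depends on edge termination and disorder, and v_F = 1 × 10^6 m/s is the commonly quoted graphene Fermi velocity, not a value given in the text.

```python
import math

HBAR_EVS = 6.582e-16   # reduced Planck constant, eV*s
V_F = 1.0e6            # graphene Fermi velocity, m/s (commonly quoted value)

def confinement_gap_eV(width_m):
    """Confinement gap of a graphene constriction, E_g ~ hbar * v_F * pi / W.

    A rough estimate only; the exact prefactor depends on edge termination.
    """
    return HBAR_EVS * V_F * math.pi / width_m

W = 30e-9  # constriction width of the device in Fig. 2(b)
gap = confinement_gap_eV(W)
print(f"E_gap ~ {gap * 1e3:.0f} meV for W = 30 nm")          # ~70 meV
# With the ~50% charging-to-gap ratio estimated in the text:
print(f"implied charging energy ~ {0.5 * gap * 1e3:.0f} meV")
```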
2015-03-06T19:42:58.000Z
2007-09-11T00:00:00.000
{ "year": 2007, "sha1": "d540aa044b4d1b6fe850701d094f766690b1cb4f", "oa_license": null, "oa_url": "http://arxiv.org/pdf/0709.1731", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "8b1f08903176459bf86526fa9a9f01467775f256", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Materials Science", "Physics" ] }
269524747
pes2o/s2orc
v3-fos-license
The paleoredox context of early eukaryotic evolution: insights from the Tonian Mackenzie Mountains Supergroup, Canada

Tonian (ca. 1000–720 Ma) marine environments are hypothesised to have experienced major redox changes coinciding with the evolution and diversification of multicellular eukaryotes. In particular, the earliest Tonian stratigraphic record features the colonisation of benthic habitats by multicellular macroscopic algae, which would have been powerful ecosystem engineers that contributed to the oxygenation of the oceans and the reorganisation of biogeochemical cycles. However, the paleoredox context of this expansion of macroalgal habitats in Tonian nearshore marine environments remains uncertain owing to the scarcity of well-preserved fossils and stratigraphy. As such, the interdependent relationship between early complex life and ocean redox state is unclear. An assemblage of macrofossils including the chlorophyte macroalga Archaeochaeta guncho was recently discovered in the lower Mackenzie Mountains Supergroup in Yukon (Canada), which archives marine sedimentation from ca. 950–775 Ma, permitting investigation into environmental evolution coincident with eukaryotic ecosystem evolution and expansion. Here we present multi-proxy geochemical data from the lower Mackenzie Mountains Supergroup to constrain the paleoredox environment within which these large benthic macroalgae thrived. Two transects show evidence for basin-wide anoxic (ferruginous) oceanic conditions (i.e., high FeHR/FeT, low Fepy/FeHR), with muted redox-sensitive trace metal enrichments and possible seasonal variability. However, the weathering of sulfide minerals in the studied samples may obscure geochemical signatures of euxinic conditions. These results suggest that macroalgae colonized shallow environments in an ocean that remained dominantly anoxic, with limited evidence for oxygenation until ca. 850 Ma. Collectively, these geochemical results provide novel insights into the environmental conditions surrounding the evolution and expansion of benthic macroalgae and the eventual dominance of oxygenated oceanic conditions required for the later emergence of animals.

Fossil evidence supports the evolution and ecological expansion of complex, benthic macroscopic algae in the Tonian (Maloney et al., 2021; Tang et al., 2020), following a protracted middle Proterozoic interval for which the eukaryotic fossil record remains sparse and ambiguous (Cohen & Kodner, 2021; Cole et al., 2020; Knoll & Nowak, 2017). Benthic macroalgae play an important role in shaping modern nearshore marine ecosystems and have profoundly affected local carbon and nutrient cycling throughout Earth's history. However, the drivers of this apparent increase in eukaryotic complexity and expansion of habitable environments continue to be debated, and the role of environmental change (e.g., oxygenation, nutrient availability) in driving these biological innovations is unclear. Furthermore, the cause-and-effect relationship between the evolution of complex life and these purported redox transformations remains elusive, with continued debate over the timing and significance of a potential stepwise increase in O 2 (see reviews in Cole et al., 2020; Lyons et al., 2021). A recent empirical study suggests that the early diversification of microbial eukaryotes could have been facilitated by even a small rise in atmospheric oxygen (2%–3% of modern; Mills et al., 2023).
Alternatively, oxygen may not have been a critical factor if atmospheric O 2 was already above this threshold when crown-group eukaryotes first appeared. The fossiliferous strata of the Mackenzie Mountains Supergroup (MMS; Yukon and Northwest Territories) allow for the investigation of ca. 1000–800 Ma redox conditions coincident with multicellular eukaryotic evolution. In particular, the MMS features a diverse macroalgal assemblage including the large (cm-scale) green macroalga Archaeochaeta, found in the Hematite Creek Group (Maloney et al., 2023), the carbonaceous macrofossils Chuaria and Tawuia (Hofmann, 1985; Hofmann & Aitken, 1979) and purported poriferan body fossils (Turner, 2021) within reefal facies in the Little Dal Group. Previous redox studies on the Cryogenian to Ediacaran Windermere Supergroup in the Wernecke Mountains and equivalent strata in the Mackenzie Mountains have provided evidence for a generally anoxic, ferruginous basin (Johnston et al., 2013; Miller et al., 2017; Shen et al., 2008; Sperling et al., 2016). Studies of sections in northwestern Canada that host large and structurally complex Ediacara biota and metazoan traces (Carbone et al., 2015; Narbonne et al., 2014) suggest that the appearance of macrofossils does not coincide with clear evidence for a significant marine increase in O 2 levels (Johnston et al., 2013; Miller et al., 2017; Sperling et al., 2016). Iron paleoredox studies (iron speciation and iron isotopes) have detected a possible redox change to oxygenated surface waters in the late Tonian Fifteenmile Group in the Ogilvie Mountains (Gibson et al., 2020; Sperling et al., 2013) and the equivalent Tatonduk inlier of Alaska (Sperling et al., 2013), which has been stratigraphically correlated with the MMS in the Wernecke and Mackenzie Mountains (Halverson et al., 2012; Macdonald et al., 2012). It remains unclear whether this change in surface-water O 2 at ca. 800 Ma is a regional trend or whether evidence of a stratified water column can be found in other inliers (such as the Wernecke Mountains) and older successions (Hematite Creek Group), which would suggest a more widespread phenomenon. The MMS in the Wernecke Mountains records clear evidence of changes in the global biosphere through a diverse fossil record and represents a promising target for understanding Tonian marine ecosystems.

Here, we present the results of a multi-proxy geochemical investigation that includes iron speciation data, redox-sensitive trace element abundances and Sm-Nd data from ca. 1000–850 Ma shales in the lower MMS in the Wernecke Mountains, including rocks from which macroalgal fossils have been reported (Maloney et al., 2021, 2023). Geochemical characterisation of these fossiliferous sections aids in reconstructing the paleoenvironmental conditions in which these primary producers diversified and provides insight into the relationship between eukaryotic expansion and environmental conditions during this critical transition in Earth's history.
The MMS sediments and other coeval strata in northwestern Canada were accommodated by episodic extension resulting in an intracratonic rift basin (Macdonald et al., 2012). The MMS in the Wernecke Mountains consists of the Hematite Creek Group at its base, which transitions upwards into the Katherine Group. The contact between the Katherine and the Little Dal Group is also transitional, but the latter is heavily truncated in the Wernecke Mountains such that only the lower part of the Stone Knife Formation is preserved (Figures 1 and 2; Macdonald et al., 2018). The Hematite Creek Group comprises, in ascending stratigraphic order, the Dolores Creek, Black Canyon Creek and Tarn Lake formations (Turner, 2011). The Dolores Creek Fm. is characterized by bright orange-weathering microbial dolomite with stromatolitic intervals and dark grey to black siltstone and shale. The Dolores Creek Fm. is typically ~300 m thick; however, the section in the southern part of the exposure belt where the multicellular macroalgae were recovered extends to nearly 1 km in thickness (Maloney et al., 2021).

The informal lower Dolores Creek Fm. consists of ~600 m of shale and siltstone with minor debrites, coarsening upward with increasing carbonate content, including minor microbially laminated beds, blocks of stromatolites (olistoliths) and finally in-place stromatolite bioherms. The upper Dolores Creek Fm. includes shales and biostromes of columnar stromatolites interpreted to record a proximal, southward-prograding shelf margin over what is thought to represent a fault escarpment that formed in response to an extensional episode that initiated subsidence and formed the Hematite Creek Basin (Turner, 2011). The basin was filled as the stromatolites on the shelf margin prograded southward (in present coordinates) and shed debris. The fossiliferous part of the Dolores Creek Fm. is interpreted to record a shallowing-upward succession of upper slope to shelf margin deposits. The stromatolitic bioherms represent the photic zone where the macroalgae likely lived before transport and burial on the upper slope (Maloney et al., 2022).

The Katherine Group in the Wernecke Mountains is subdivided into seven informal units (K1–K7) of fluvial-deltaic sandstones and shales, which likely correspond to the seven formation-scale units of the Katherine Group as defined in the Mackenzie Mountains (Northwest Territories) (Long & Turner, 2014). These sandstone- and shale-dominated intervals are interpreted to represent alternating periods of deposition in braided-meandering rivers and shallow marine environments, respectively (Aitken et al., 1978; Long et al., 2008). However, only the Shattered Range (K5), McClure (K6) and Abraham Plains (K7) formations are exposed in the study area at SW Profeit. The Eduni Formation (K1) is recognized elsewhere in the Wernecke Inlier (Long & Turner, 2012). The Little Dal Group is confined to a small area near SW Profeit, where it is only ~250 m thick as compared to 2.0–2.5 km thick in the Mackenzie Mountains (Aitken, 1981; Halverson, 2006; Long et al., 2008; Turner, 2011; Turner & Long, 2012). This discrepancy is likely due to a combination of pre-Cryogenian uplift and folding related to the Corn Creek Orogeny (Thorkelson et al., 2005) and a deep unconformity, which places Cryogenian conglomerates atop the Stone Knife Formation at SW Profeit (Figure 1; Eisbacher, 1981; Macdonald et al., 2013, 2018).

| Geochronology

The age of the Dolores Creek Fm.
in the Wernecke Mountains is constrained by a direct depositional Re-Os isochron age of 898 ± 68 Ma from the upper Dolores Creek Formation (Maloney et al., 2021) and a maximum depositional detrital muscovite ( 40 Ar/ 39 Ar) age of 1033 ± 9 Ma (Thorkelson, 2000). These ages agree with a detrital zircon maximum depositional age ( 206 Pb/ 238 U) of ca. 1000 Ma from presumed equivalent strata in the Hart River Inlier to the west (Rainbird et al., 1997). Detrital zircon ages of 1081 ± 2 Ma (Rainbird, Villeneuve, et al., 1996) and 1005 ± 1 Ma (Leslie, 2009) have also been reported from the Katherine Group in the Mackenzie Mountains. Based on the ages of detrital zircons, Katherine Group sandstones could have been fed by an extensive river system associated with the Grenville Orogen present around 1 Ga (Rainbird et al., 1997, 2017).

The minimum age of the MMS is provided by a U-Pb zircon Isotope Dilution-Thermal Ionization Mass Spectrometry (ID-TIMS) age of 775.10 ± 0.54 Ma on a diabase that crosscuts the units in the neighbouring Mackenzie Mountains (Milton et al., 2017). This diabase is considered part of the Gunbarrel magmatic event, which includes the Little Dal Basalt that caps the MMS in the Mackenzie Mountains (Jefferson & Parrish, 1989). Based on these collective dates and the stratigraphic framework, the age of the MMS is constrained to ca. 1000–775 Ma, with the Dolores Creek fossils estimated to be ca. 950–900 Ma (Maloney et al., 2021).

| REDOX PROXIES

Multi-proxy redox framework studies are strengthened by precise geochronological constraints and are most reliable when they demonstrate consistency between independent proxies (Raiswell et al., 2018; Raiswell & Canfield, 1998). Although multi-proxy approaches can lead to more complex results, they are necessary to develop robust interpretations and to avoid false signals, in particular in outcrop samples (Gibson et al., 2020; Raiswell et al., 2018).
In this study, we employed a combination of iron speciation and redox-sensitive trace element analyses, complemented by petrographic and Nd isotope data to provide insight into weathering and sediment provenance.

| Iron speciation

Iron-based redox proxies are widely used to understand modern and ancient environmental redox conditions (Lyons & Severmann, 2006; Poulton & Canfield, 2011; Raiswell et al., 2018). These methods are fundamentally based on observations in modern environments showing that highly reactive iron (Fe HR ), which refers to iron that is geochemically and biologically active during early diagenesis, is enriched when deposited in sediments under an anoxic water column (Poulton & Canfield, 2011). The abundance of sulfide in the environment can also be estimated based on the extent to which highly reactive iron is converted to pyrite (Raiswell & Canfield, 1998). Poulton and Canfield (2005) developed a procedure to extract iron sequentially into operationally defined iron pools. These iron pools are defined based on their extraction method: Fe carb (carbonate-associated iron; e.g., siderite and ankerite), Fe ox1 (easily reducible oxides; e.g., ferrihydrite and lepidocrocite), Fe ox2 (reducible oxides; e.g., goethite, hematite and akaganéite), Fe mag (magnetite), Fe PRS (poorly reactive sheet silicate Fe), Fe py (pyrite Fe) and Fe U (unreactive silicate Fe). Highly reactive iron is the sum of Fe carb , Fe ox , Fe mag and Fe py (Poulton & Canfield, 2005). The ratio of highly reactive to total iron (Fe HR /Fe T ) provides insight into whether an ancient water column was oxic or anoxic based on threshold values observed in modern environments: Fe HR /Fe T < 0.22 indicates oxic conditions, Fe HR /Fe T > 0.38 indicates a possibly anoxic water column, and 0.22 < Fe HR /Fe T < 0.38 is regarded as equivocal (Poulton, 2021; Raiswell & Canfield, 1998).

The degree of pyritisation (e.g., the Fe py /Fe HR ratio; Raiswell et al., 1988; Canfield et al., 1992) can help determine whether an anoxic environment was euxinic (anoxic and sulfidic) or ferruginous (iron-rich) (Poulton et al., 2004; Poulton & Canfield, 2011). The ratio reflects the amount of Fe HR converted to Fe py , and tends to be higher in sediments deposited in euxinic environments. Euxinic environments are classified as those with Fe py /Fe HR > 0.8 and Fe HR /Fe T > 0.38 (Canfield et al., 2008; Poulton et al., 2004; Poulton & Canfield, 2011). However, it is important to note that increased Fe py /Fe HR ratios are also common on oxic continental margins where sulfide accumulates in porewaters at depth (Raiswell et al., 2018). As with all redox proxies, these thresholds should be critically examined in each study of a new depositional environment and geologic setting (Raiswell et al., 2018), with consideration for the influence of diagenesis (Pasquier et al., 2022).
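The threshold logic above lends itself to a compact helper. The following is a minimal sketch of the pool arithmetic and published thresholds quoted in this section; the dataclass and function names are our own illustration, not code from the study.

```python
from dataclasses import dataclass

@dataclass
class IronSpeciation:
    """Operationally defined iron pools (all in wt.%), after Poulton & Canfield (2005)."""
    fe_carb: float   # carbonate-associated Fe (e.g., siderite, ankerite)
    fe_ox: float     # reducible oxides (Fe_ox1 + Fe_ox2)
    fe_mag: float    # magnetite Fe
    fe_py: float     # pyrite Fe
    fe_total: float  # total Fe

    @property
    def fe_hr(self) -> float:
        """Highly reactive iron: Fe_carb + Fe_ox + Fe_mag + Fe_py."""
        return self.fe_carb + self.fe_ox + self.fe_mag + self.fe_py

def classify_water_column(s: IronSpeciation) -> str:
    """Apply the Fe_HR/Fe_T and Fe_py/Fe_HR thresholds quoted in the text."""
    ratio_hr = s.fe_hr / s.fe_total
    if ratio_hr < 0.22:
        return "oxic"
    if ratio_hr <= 0.38:
        return "equivocal"
    # Anoxic: use the degree of pyritisation to separate euxinic from ferruginous
    if s.fe_py / s.fe_hr > 0.8:
        return "anoxic (euxinic)"
    return "anoxic (ferruginous)"

# Example with hypothetical values: Fe_HR/Fe_T = 1.25/2.5 = 0.50 -> anoxic (ferruginous)
sample = IronSpeciation(fe_carb=0.3, fe_ox=0.8, fe_mag=0.1, fe_py=0.05, fe_total=2.5)
print(classify_water_column(sample))
```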
Iron speciation is influenced by several factors including depositional rates (Canfield et al., 1996), hydrothermal inputs (Raiswell et al., 2018), weathering (Ahm et al., 2017; Slotznick et al., 2020; Wei et al., 2021) and the iron content of source sediments (Clarkson et al., 2014). Fe HR itself is relatively immune to weathering, since Fe 2+ phases become oxidized to insoluble Fe 3+ phases, which are still captured as highly reactive iron in the extractions (Canfield et al., 2008). However, weathering can make it challenging to differentiate deposition under euxinic (anoxic and sulfidic) versus ferruginous (anoxic and iron-rich) water columns, as the original pyrite weathers to iron (oxyhydr)oxides, thus lowering Fe py /Fe HR while leaving Fe HR /Fe T broadly unchanged. Dilution of highly reactive Fe due to rapid sedimentation (e.g., by turbidites) can result in false oxic signals (Lyons & Severmann, 2006; Raiswell & Canfield, 1998), while false anoxic signals can occur in certain depositional environments, such as estuaries, where large amounts of iron oxides may be trapped (Poulton & Raiswell, 2002).

| Total iron enrichments

Based on the assumption that total iron will be enriched over a detrital baseline in anoxic environments due to iron shuttling (Lyons et al., 2003; Severmann et al., 2008; Werne et al., 2002), iron enrichments in sediments can also be assessed by considering the ratios of total iron to aluminum (Fe T /Al; Lyons et al., 2003; Raiswell et al., 2018) and titanium (Fe T /Ti; Werne et al., 2002). One challenge with the total iron-to-aluminum proxy is identifying the appropriate Fe T /Al baseline. A value of ~0.5, representing the average value in shales (Taylor & McLennan, 1985), is one option. However, Fe T /Al studies on Paleozoic marine sediments (0.53 ± 0.11; Raiswell et al., 2008), modern marine sediments (0.55 ± 0.11; Clarkson et al., 2014) and soils (0.47 ± 0.30; Cole et al., 2017) suggest a range of average values and do not account for regional variability in source lithology. This variation highlights the importance of establishing a baseline for each geological setting when looking to define thresholds to interpret redox proxy data. Unusually low Fe T /Al values (mean = 0.34; Sperling et al., 2013) have been reported from the Fifteenmile Group (Gibson et al., 2020; Sperling et al., 2013), as well as from younger strata of the Windermere Supergroup (Sperling et al., 2016). Here, we follow Gibson et al. (2020), who proposed a detrital baseline of Fe T /Al = 0.3 for early Neoproterozoic strata regionally in the Proterozoic inliers based on integrated data from previous studies by Sperling et al. (2013, 2016).
| Redox-sensitive trace elements

The processes that control the relative distribution of oxidising agents across depositional and diagenetic gradients can be investigated using redox-sensitive elements (Raiswell et al., 2018; Tribovillard et al., 2006). Little to no enrichment in Mo, V and U occurs in depositional settings with permanent or temporary (e.g., seasonal) exposure to oxygen, while significant enrichments are observed in euxinic basins. Under reducing conditions in anoxic environments, Mo and V are enriched in shales because these elements become less soluble and form complexes with sulfur, or are scavenged by organic and inorganic particles (Tribovillard et al., 2006). Further, the greatest enrichments of these elements occur in reducing settings that are connected to a large, oxygenated body of water, which provides a reservoir of redox-sensitive trace elements (Algeo, 2004).

Redox-sensitive elements in shales can be enriched through authigenic or detrital sources (Bennett & Canfield, 2020; Brumsack, 2006; Cole et al., 2017; Van Der Weijden, 2002). As detrital redox-sensitive element input is directly linked to the provenance of the sediments, it can be difficult to identify an effective detrital cut-off based simply on the average crustal composition. Cole et al. (2017) analysed over 4000 soil samples, demonstrated large variations relative to the previously accepted averages for redox-sensitive metals compared to detrital tracers, and recommended using confidence intervals and element ratios that normalize for detrital influence (e.g., Fe/Al, V/Al, U/Th). On the contrary, Bennett and Canfield (2020) proposed that deriving threshold values from modern marine sediments in different depositional environments would be the most accurate method to determine trace metal enrichment. However, this method requires caution because settings analogous to the dominant anoxic conditions in the Proterozoic, including ferruginous basins, are not well represented in modern analogues. The VUMoRe database (Bennett & Canfield, 2020) was compiled to identify potential thresholds that are calibrated using the geochemical behaviour of trace metals in modern sedimentary environments including continental margin upwelling zones, euxinic basins and oxic settings. We follow Bennett and Canfield (2020) in using the TM/Al approach for trace metal normalisation, where TM enrichment is calculated as:

(1) TM enrichment = trace metal concentration (μg/g) / aluminum concentration (wt.%)

The TM enrichments quantified for the MMS in the Wernecke Mountains can then be compared to the threshold values for different redox settings (Table 1). Van Der Weijden (2002) identified limitations associated with this trace metal normalisation technique, including the possibility of introducing spurious correlations, and noted that TM enrichment does not address the problem of the closure effect (i.e., induced correlations when there are limited variables; Rietjens, 1995), nor does it quantify contributions by other sediment components not associated with the detrital fraction. The influence of diagenesis on trace metal enrichments also requires further investigation to address how these influences vary between depositional settings (Bennett & Canfield, 2020).
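A minimal sketch of Eq. (1) follows. The threshold values are the ones quoted later in the discussion (after Bennett & Canfield, 2020); the example concentrations are hypothetical, and the function names are our own.

```python
def tm_enrichment(tm_ppm: float, al_wtpct: float) -> float:
    """Eq. (1): trace metal concentration (ug/g, i.e. ppm) normalised to Al (wt.%).

    Returns the enrichment in ppm/wt.%.
    """
    return tm_ppm / al_wtpct

# Threshold values quoted in the discussion (Bennett & Canfield, 2020); in that
# framework Mo < 5 and V < 23 ppm/wt.% with U > 1 ppm/wt.% characterise oxic
# waters beneath a perennial OMZ core.
THRESHOLDS = {"Mo": 5.0, "V": 23.0, "U": 1.0}  # ppm/wt.%

# Hypothetical shale: 1.2 ppm Mo, 150 ppm V, 3.1 ppm U, 8.8 wt.% Al
measurements = {"Mo": 1.2, "V": 150.0, "U": 3.1}
al_wtpct = 8.8
for element, ppm in measurements.items():
    enrichment = tm_enrichment(ppm, al_wtpct)
    print(f"{element}: {enrichment:6.2f} ppm/wt.% (threshold {THRESHOLDS[element]})")
```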
| Interpretative framework

In our multi-proxy redox framework, samples are interpreted to have been deposited under an oxic water column if Fe HR /Fe T < 0.22, Fe T /Al < 0.30, and little to no enrichment in redox-sensitive trace elements is seen. We apply threshold limits of Fe HR /Fe T > 0.38 (Poulton & Canfield, 2011), Fe T /Al > 0.30 and measurable enrichment in redox-sensitive trace elements for deposition under an anoxic water column. Intermediate values are interpreted as ambiguous. Samples with a high proportion of highly reactive iron that has been sulfidized (Fe py /Fe HR > 0.70) are interpreted to indicate deposition under a euxinic water column (Lyons & Severmann, 2006; Poulton & Canfield, 2011).

Our interpretations consider the results with respect to any evidence of post-depositional alteration documented through analytical microscopy (e.g., reactive pyrite altered to pyrrhotite; Poulton et al., 2010; Poulton & Raiswell, 2002) and evidence of post-depositional Fe and S mobilisation (pyrite weathering; Ahm et al., 2017; Gibson et al., 2020; Raiswell et al., 2018; Slotznick et al., 2020) in our samples. Scanning electron microscopy (SEM) and energy-dispersive X-ray spectroscopy (EDS) were employed to examine textures and mineralogy in shale samples at the μm scale.

| Sample collection and preparation

As part of the stratigraphic logging of the MMS in the Wernecke Mountains of Yukon, Canada (Figures 1 and 2), fine-grained siliciclastic rock samples were excavated and collected every 4 m (where available) from two measured sections. A total of 46 shale samples were collected from seven logged stratigraphic sections. The first section was a transect of the Mackenzie Mountains Supergroup with samples from the Hematite Creek Group (Dolores Creek Fm. [n = 14] and mainly carbonate Black Canyon Creek Fm. [n = 2]), the Katherine Group (n = 7) and the Little Dal Group (n = 7). The second section was a transect of the Hematite Creek Group with 16 samples from the Dolores Creek Fm. and one sample from the Black Canyon Creek Fm. Geochemical sampling targeted horizons with fine-grained material and no visible evidence of post-depositional alteration (see Appendix S1).

[Figure 2 caption: Correlations between stratigraphic sections in the Hematite Creek Group, Mackenzie Mountains Supergroup, Wernecke Mountains, including the type sections for the Dolores Creek and Black Canyon Creek formations (Turner, 2011). Wavy red lines indicate an unconformity, and dashed red lines inferred correlations of formation boundaries.]

4.2 | Iron speciation and pyrite extraction

Sequential iron extraction and analysis was performed in the Department of Earth and Planetary Sciences at McGill University (QC, Canada) following the procedure developed by Poulton and Canfield (2005) with minor modifications (see Kunzmann et al., 2015; Sperling et al., 2013). Neoproterozoic black shales from Svalbard were analyzed along with the samples in this study as quality control standards (Kunzmann et al., 2015). A Thermo Scientific iCAP 6000 series ICP-OES was used to analyse the leachates for each of the three extraction steps. Pyrite-associated iron (Fe py ) was extracted using a chromium chloride distillation technique (Canfield et al., 1996). The amount of iron in pyrite was calculated stoichiometrically based on the assumption that all extracted sulfur was pyrite. The Fe py contents can be biased if there are high amounts of acid-volatile sulfur; however, previous studies on similar Neoproterozoic shales have not found evidence of acid-volatile sulfur (Kunzmann et al., 2015; Sperling et al., 2013). Four samples were run with three replicates; 75% of the replicates were within 5% standard error and the remainder within 15%.
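Tying the interpretative framework above together, the following is a minimal sketch of how the three proxies combine; the threshold values come from the framework as stated, while the function and argument names are our own illustration.

```python
def interpret_redox(fe_hr_fe_t: float, fe_t_al: float, fe_py_fe_hr: float,
                    tm_enriched: bool) -> str:
    """Combine the proxies as described in the interpretative framework.

    Oxic:   Fe_HR/Fe_T < 0.22, Fe_T/Al < 0.30 and no trace metal enrichment.
    Anoxic: Fe_HR/Fe_T > 0.38, Fe_T/Al > 0.30 and measurable enrichment;
            Fe_py/Fe_HR > 0.70 additionally points to euxinia.
    Anything in between is reported as ambiguous.
    """
    if fe_hr_fe_t < 0.22 and fe_t_al < 0.30 and not tm_enriched:
        return "oxic"
    if fe_hr_fe_t > 0.38 and fe_t_al > 0.30 and tm_enriched:
        return "euxinic" if fe_py_fe_hr > 0.70 else "anoxic (ferruginous)"
    return "ambiguous"

# Hypothetical samples:
print(interpret_redox(0.45, 0.42, 0.10, tm_enriched=True))   # anoxic (ferruginous)
print(interpret_redox(0.15, 0.20, 0.02, tm_enriched=False))  # oxic
print(interpret_redox(0.30, 0.25, 0.05, tm_enriched=False))  # ambiguous
```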
| Trace elements, major elements and TOC

Major and trace elements were measured following a modified protocol from Kunzmann et al. (2015). To oxidize organic complexes and determine the loss on ignition, each sample (~2.5 g) was first weighed, combusted at 1000°C for 2 h, then weighed again. Approximately 0.5 g of combusted material was weighed into a Savillex™ Teflon beaker and taken up in 1 mL of 7 N HNO 3 . Samples were then digested using the following acids: (1) 29 N HF at ≥80°C for 5 days, (2) aqua regia (3 mL 6 N HCl + 1 mL 7 N HNO 3 ) at ≥80°C for 48 h, (3) reverse aqua regia (1 mL 6 N HCl + 3 mL 7 N HNO 3 ) and (4) 3 N HNO 3 at ≥80°C for 3 h. Bulk rock total digest stock samples were diluted, and standards were prepared with 2% HNO 3 . A Thermo Finnigan iCAP Q ICP-MS was used to measure trace element abundances, and a Thermo Scientific iCAP 6000 series ICP-OES was used to measure major element abundances at McGill University.

Whole rock analysis was performed by ACTlabs to quantify major (oxide) elements using fusion XRF (Norrish & Hutton, 1969). Major element concentrations were calculated in percent weight oxide using oxide alpha-influence coefficients to account for matrix effects, and standard G-16 was used for quality control (provided by Dr. K. Norrish of the Commonwealth Scientific and Industrial Research Organisation [CSIRO], Australia). The total organic carbon (TOC) concentrations were determined by ACTlabs using an ELTRA instruments C-S analyser.

| Redox-sensitive trace elements and total organic carbon

Redox-sensitive trace elements are normalised to their average upper crustal values and total organic carbon contents (TOC; see Appendix S1).

| Petrographic and SEM analyses

Petrographic analyses of the MMS siliciclastic rocks indicate a range of sedimentary microstructures including dark wavy laminations and "domes" attributed to a microbial influence (see Appendix S1). They show limited to no pyrite in thin section. Additional imaging using SEM-EDS targeted three thin sections and corresponding thick sections. One sample had well-defined pyrite (Fe py /Fe HR = 0.19) disseminated throughout, although most pyrite crystals were small (e.g., ~4 to 50 microns; Figure 8a-f).
Framboids were visible in thin section, as were likely framboid pluck-out structures (Figure 8c,d,f). In thick sections, the framboids were detectable while pluck-out structures were not visible.

| Evidence for iron (oxyhydr)oxide pseudomorphs after pyrite

To address the influence of post-depositional alteration within our samples, we conducted petrographic and analytical microscopy analyses. Samples were carefully selected to document textures in samples with a range of Fe py contents (from 0 to 2217 ppm). Samples from the Dolores Creek Fm. retained primary pyrite (e.g., cubic to framboidal structures with Fe and S), with more evidence of pyrite alteration in samples from the southern part of the basin (e.g., SW Profeit compared to north of Tarn Lake; Figures 8-10). The samples with the highest preserved pyrite content showed limited to no evidence of alteration. However, post-depositional alteration of pyrite to iron oxyhydroxides was documented in other samples, where pyrite pseudomorph "ghosts" were observed along with some primary pyrite. These pyrite "ghosts" maintain their original crystal structure and can be identified based on their composition (e.g., iron-rich, lacking sulfur). Similar observations of framboidal pyrite ghosts have been made in outcrop samples from the Fifteenmile Group (Gibson et al., 2020). These findings align with experimental studies by Mahoney et al. (2019), which simulated oxidative weathering in shales and found that the influence of weathering was negligible for Fe HR /Fe T (difference of <<0.03%) but significant for Fe py /Fe HR (differing by up to 32.5%). Since original textures can be destroyed, the documented pyrite ghosts represent only a minimum qualitative gauge of the pyrite that was lost.

However, abundant pyrite does not necessarily support a euxinic interpretation for the water column during deposition; for example, Long Island Sound has highly sulfidic pore waters beneath an oxic water column, with almost 1% pyrite sulfur (~2% pyrite; Canfield et al., 1992). This scenario can occur when H 2 S accumulates in the pore water because the rate of reaction between sulfide and iron is slower than the rate of sulfate reduction. The size of the pyrite can provide insight into its formation, as pyrite formed in the water column is typically <5 μm in diameter and uniform in shape, whereas pyrite formed in pore waters tends to be larger and more variable in shape (Wilkin et al., 1996; Wilkin & Barnes, 1997).

We have considered the extent of oxidative weathering within our samples and found that samples from the northern part of the sub-basin show limited alteration. Because all sections demonstrate similar trends, we propose that the iron proxy data in our samples are reliable when interpreted within the redox proxy framework, except for the Fe py /Fe HR proxy, which is considered at least partially overprinted by oxidative weathering. This limits our ability to distinguish euxinic from ferruginous anoxic conditions. However, large Fe mag enrichments observed in the samples could be related to the formation of Fe-rich clays (e.g., berthierine and chamosite), as proposed by Slotznick et al. (2020). These would provide independent evidence for Fe 2+ -rich pore water, supporting a ferruginous interpretation.
| Detrital FeT/Al baseline

Total Fe normalized to Al provides a baseline for understanding the roles of detrital input and the iron shuttle in influencing redox proxies (Lyons et al., 2003; Raiswell et al., 2018).

Seven samples from the Dolores Creek Fm. have Fe HR /Fe T > 0.38 while Fe T /Al < 0.30, rendering their interpretation ambiguous. Nevertheless, a total of 7 samples meet both criteria (Fe HR /Fe T > 0.38 and Fe T /Al > 0.30) for anoxia, as compared with 12 samples that are definitively below the oxic thresholds Fe HR /Fe T < 0.22 and Fe T /Al < 0.30. These data suggest that much of the MMS was deposited in an anoxic ocean during dominantly ferruginous conditions with brief oxic intervals, which is consistent with the occurrence of ironstones in shallow shelf settings in the upper Katherine Group, as well as data from other early Tonian basins that indicate ferruginous and anoxic waters in the early Tonian (Guilbaud et al., 2015).

Interpreted independently, the trace metal enrichments observed in the MMS suggest the sediments were deposited in oxic waters beneath the core of a perennial oxygen minimum zone (OMZ) based on the thresholds Mo < 5 ppm/wt.%, V < 23 ppm/wt.% and U > 1 ppm/wt.% (Bennett & Canfield, 2020). However, the average V enrichment in the Little Dal Group exceeds 23 ppm/wt.%, with a mean value of 24.19 ppm/wt.%. The depositional environment could also be interpreted as a seasonal OMZ, but such environments remain poorly constrained by paleoredox tracers.

Overall, we interpret our results to indicate a degree of redox instability and suggest the most plausible explanation of the data is that sediment deposition occurred close to the redoxcline (e.g., seasonal variations in wave intensity). Alternatively, longer-term changes in relative sea level could have caused iron speciation values to change significantly (e.g., from anoxic to oxic). Similar scenarios are recorded by transgressive shale deposits in the Reefal assemblage of the Fifteenmile Group (Gibson et al., 2020; Sperling et al., 2013) and in some modern basins influenced by seasonal variability (Bennett & Canfield, 2020; Böning et al., 2005, 2009; Brumsack, 1989). These two interpretations represent end members, with redox instability occurring over geologic time scales while seasonal trends occur on biological timescales. Based on our redox proxy framework, twelve samples are interpreted as deposited beneath an oxic water column. These samples are from transgressive shales that display similar trends to those observed by Gibson et al. (2020). The fluctuating signal could also represent periodic incursions of oxygenated water from the global ocean into a restricted, anoxic basin.

| Paleoenvironmental analysis

Iron speciation data can provide insight into local conditions while trace metals can yield context about the broader redox landscape (Gilleaudeau et al., 2020; Lyons et al., 2021; Sperling et al., 2015). Portable X-ray fluorescence (pXRF) analysis in the Hematite Creek Group north of Tarn Lake documented elevated Zn, Pb and Ni at some stratigraphic levels in the Dolores Creek Fm.
(Turner, 2011). However, limited enrichments of the redox-sensitive trace metals Mo, V and U are observed in the present study. Subtle trace metal enrichments have previously been recorded in the Proterozoic inliers of northwestern Canada in the Fifteenmile Group (Gibson et al., 2020; Sperling et al., 2013) and the Windermere Supergroup of the Wernecke and Mackenzie Mountains (Miller et al., 2017). Here, we present three possible scenarios to explain the muted trace metal enrichments.

False anoxic signals (i.e., Fe HR /Fe T ratios > 0.38) can be caused by nearshore trapping of detrital iron oxides under oxic conditions (Poulton & Raiswell, 2002), a large detrital reactive-iron input, or a proportionally small flux of detrital unreactive iron (Raiswell et al., 2018). However, it is more likely that the high Fe ox values are the product of iron oxides precipitated in the water column, although they could also be a result of weathering of other highly reactive iron pools, specifically pyrite, which is also within the Fe HR pool.

A study of the Little Dal Group east of our study site in the Mackenzie Mountains reported an average Fe T /Al of 0.51 (ranging from 0.29 to 0.85; O'Hare, 2014). Although most of these samples suggest ferruginous, anoxic to possibly oxic bottom-water conditions, there is also evidence for an authigenic iron signal overwhelmed by siliciclastic input in siltier mudstones with low Fe T /Al. The dilution of authigenic Fe can result in a low Fe T /Al when overwhelmed by a relatively high siliciclastic supply (Lyons & Severmann, 2006). Evidence for increased sediment supply in the MMS is found in the 450 m of shale below the fossil interval in the Dolores Creek Fm., likely deposited rapidly during basin extension (Figures 1 and 2). Therefore, we propose that redox interpretations based on our geochemical data are generally robust, though likely unable to differentiate consistently between euxinic and ferruginous conditions.

| Scenario 2: Restricted basin

The presence of black siltstone and mudstone in the Dolores Creek Fm. has previously been invoked as evidence that the early MMS in the Wernecke Mountains was anoxic and at least partially restricted (Turner, 2011). This could explain the limited trace metal enrichment observed in our study, because the trace metal reservoir in a restricted basin would be diminished compared to a basin open to the global ocean (Algeo & Lyons, 2006; Algeo & Rowe, 2012). In the northern part of the basin, the bright orange stromatolitic and other microbialite layers are rich in bitumen, indicating the presence of organic-rich sediments. Three transgressive intervals were identified at the type section north of Tarn Lake (Turner, 2011). Samples interpreted as oxic in our study correlate with some of these shoaling-upward sections from Turner (2011), and the overall trends are similar to those observed in equivalent units from the Ogilvie Mountains in a basin proposed to be hydrographically restricted (Gibson et al., 2020). However, a tidal influence has been documented in the Black Canyon Creek Fm., and the initial Os isotope composition of the Re-Os isochron from the Dolores Creek Fm. (Maloney et al., 2021) is also consistent with a robust marine influence at this time (Azmy et al., 2008; Cohen et al., 2017; Geboy et al., 2013; Kendall et al., 2009; Rooney et al., 2010, 2014, 2018; Sperling et al., 2014; Strauss et al., 2014; VanAcken et al., 2013).
Precambrian successions interpreted to have been deposited in restricted marine to lacustrine environments typically have Os i > 0.8 due to the strong influence of inputs from evolved continental crust (Cumming et al., 2013; Rooney et al., 2018; Tripathy & Singh, 2015).

A third scenario invokes globally depleted trace metal reservoirs, which is supported by the observation of muted trace metal enrichment in coeval units in Canada (Johnston et al., 2013) and Svalbard (Kunzmann et al., 2015). Specifically, Mo inventories could have been severely depleted in Proterozoic oceans because of widespread euxinia (Partin et al., 2013; Scott et al., 2008). In addition, it is important to consider that the average shale values used to determine whether a trace metal enrichment is present are based on comparisons to average Phanerozoic and modern shales (Tribovillard et al., 2006; Bennett & Canfield, 2020, Table 1). There are no known "true" modern ferruginous basins or widely anoxic oceans with a low trace metal reservoir that would allow us to calibrate trace metal enrichments, which increases the uncertainty of this paleoredox proxy in suspected ferruginous and globally anoxic settings (Miller et al., 2017). Subtle enrichments of redox-sensitive trace metals can be challenging to separate from background levels with no modern analogue for comparison (Scott & Lyons, 2012). Thus, we propose that a poorly oxygenated global ocean during the deposition of the MMS, with apparent instability in the local paleoenvironment, is most consistent with our data.

| Evidence from other Proterozoic Inliers

Interpretations of redox data from several studies in the Proterozoic inliers of Canada provide support for redox instability in dominantly ferruginous waters (see Appendix S1). Studies have concentrated on the late Neoproterozoic, when the earliest large complex fossils emerged (Canfield et al., 2008; Johnston et al., 2013; Miller et al., 2017; Shen et al., 2008; Sperling et al., 2016), and on early Neoproterozoic strata that record the diversification of eukaryotes and their ecosystems (Gibson et al., 2020; O'Hare, 2014; Sperling et al., 2013; Thomson et al., 2015). There is evidence for an anoxic, ferruginous global ocean throughout the Neoproterozoic with brief pulses of oxic and scarcer euxinic conditions, possibly representing the expansion and contraction of an OMZ. However, these oxic conditions rarely align with significant fossil deposits, such as biomineralized scale microfossils (Sperling et al., 2013), "Twitya discs" (Sperling et al., 2016), Ediacara biota (Johnston et al., 2013; Sperling et al., 2016), bilaterian traces (Sperling et al., 2016) and green macroalgae (this study). Samples from the Shaler Supergroup of Victoria Island are noticeably different from the other sites, displaying strong evidence for euxinic conditions during the Bitter Springs interval in an otherwise mostly oxic setting (Thomson et al., 2015). These results highlight regional differences in ocean redox conditions between the Proterozoic inliers that exert local controls on basin dynamics.

Based on comparisons with other Proterozoic inliers and the evidence presented here, the ocean appears to have remained dominantly anoxic through deposition of the lower MMS. This inference aligns with a recent investigation that found nitrate limitation in the early Tonian, followed by a stepwise increase in δ 15 N sed values at ca. 800 Ma, suggesting increased nitrate availability (Kang et al., 2023), and ultimately a stepwise increase in O 2 at around 800 Ma (e.g., Cole et al., 2017; Guilbaud et al., 2015; Kang et al., 2023; Planavsky et al., 2022; Wang et al., 2022).
| Requirements for habitable macroalgal environments

The Earth experienced dramatic environmental change during the Neoproterozoic, which may have played an important role in the evolution of life by altering the availability of macroalgal habitats and bioessential trace elements (Anbar & Knoll, 2002; Erwin et al., 2011; Knoll & Nowak, 2017). It has been suggested that the availability of nutrients may have delayed the rise of green algae as the dominant primary producers in early Tonian oceans, with eukaryotes unable to outcompete prokaryotes until conditions became more favourable (Brocks et al., 2017; Kang et al., 2023; Maloney et al., 2021; Nguyen et al., 2019; Zumberge et al., 2020). However, fossil discoveries have demonstrated that chlorophytes were able to thrive in benthic habitats and support diverse communities by ca. 1000 Ma (Maloney et al., 2021; Tang et al., 2020). Considering the factors that influence the habitability of eukaryotic ecosystems can aid in elucidating trends in macroalgal expansion and the influence of environmental conditions (e.g., redox setting).

Comparisons between redox conditions in the Tonian Longfengshan Biota-bearing and non-fossiliferous shales in North China have shown oxic conditions in the non-fossil intervals, while the fossil units record a dominantly ferruginous water column with limited trace metal enrichment (Wang et al., 2021). These results are interpreted to reflect benthic oxygen oases where the macroalga Longfengshaniaceae regulated the stratified redox water column through photosynthesis and O 2 consumption. A Chuaria-Tawuia-Longfengshaniaceae assemblage has also been observed in the Little Dal Group in the Mackenzie Mountains (Hofmann, 1985). Our data suggest a geographic trend in the Hematite Creek Group, with northern samples mostly anoxic (11/16, including possibly ferruginous) while the southern redox transect provides more evidence for oxic conditions (7/16, including possibly oxic) proximal to the macroalgal habitat. These data suggest that the southern part of the basin may have been more favourable for macroalgal life. Alternatively, it could also suggest that macroalgae were bioengineering seafloor habitats by providing a source of oxygen that contributed to benthic oxygen oases, similar to trends demonstrated by Wang et al. (2021). It is also possible that benthic macroalgae would have influenced the sequestration of trace metals in addition to regulating O 2 . Modern macroalgae are used in reclamation to remove heavy metals from wastewater (Arumugam et al., 2018), and heavy metals have been documented in dried seaweeds used for human consumption (Besada et al., 2009; Chen et al., 2018). As such, Tonian macroalgae may also have bioaccumulated metals, though experimental studies are necessary to provide an empirical framework for interpreting the ancient record. Benthic macroalgal communities would have also had a transformative effect on organic carbon burial (LoDuca et al., 2017), further influencing long-term oxygen accumulation and trace metal inventories.
| Implications for early Neoproterozoic Eukaryotic Ecosystems

The cause-and-effect relationship between the evolution of complex life and oxygenation remains controversial (Cole et al., 2020; Lenton et al., 2014; Lyons et al., 2021; Planavsky et al., 2014). Specifically, did increasing oxygen levels in the Neoproterozoic drive, or at least facilitate, the emergence of the first large complex organisms, the Ediacara Biota (~571–539 Ma; Knoll, 1992; Canfield et al., 2007; Sahoo et al., 2012)? Or did the rise of complex life drive the oxygenation of the ocean by restructuring the flow of carbon into sediments (Lenton et al., 2014)? In any case, the appearance of new species of benthic macroalgae in ca. 1 Ga strata from Yukon (Maloney et al., 2021) and North China (Tang et al., 2020) raises new questions about the habitability and redox stability of ancient ecosystems.

The structure of modern seaweed communities is heavily influenced by competition for resources including light, substrate and nutrients (Carpenter, 1990). Lyons et al. (2021) proposed that certain intervals in the Proterozoic were dominated by hostile environmental conditions that restricted the ecological expansion of eukaryotes, whereas heterogeneity in the environment during critical transitions permitted biological adaptations that allowed life to thrive. Our study supports this hypothesis, with large macroalgae able to colonize some shallow marine environments that previously would have been dominated by cyanobacteria in an extensively anoxic ocean. These macroalgal fossils only occur in specific sedimentary facies, and their distribution is at least partially controlled by taphonomy (Maloney et al., 2022). However, the observation of more than one species representing diverse size ranges indicates a relatively complex algal ecosystem (Maloney et al., 2023).

Our geochemical results suggest that these organisms inhabited environments with fluctuating redox conditions. It is possible that the shallow-water settings were oxygenated, but that these fleeting moments when macroalgae inhabited the outer shelf are not captured by our analyses (Sperling et al., 2016), especially considering that the average life span of modern macroalgae (e.g., the kelp Macrocystis pyrifera) is several months to a few years (North, 1961; Tussenbroek, 1989). This trend has also been observed in younger Ediacaran communities where soft-bodied metazoans episodically colonized shallow marine environments in anoxic basins (Bowyer et al., 2020; Sperling et al., 2016; Wood et al., 2015). Future investigations of similar Tonian paleoenvironments will provide more insight into whether this opportunistic colonisation was a broader feature of early benthic ecosystems.

| CONCLUSION

The diversification of macroalgae and expansion of eukaryotic ecosystems likely influenced the oxygenation of shallow marine environments. Here, we provide insight into regional ocean oxygenation.

CONFLICT OF INTEREST STATEMENT

Authors declare no conflicts of interest.
[Figure 1 caption: Stratigraphic log and geological map of the study area in the Wernecke Mountains. (a) Mackenzie Mountains Supergroup (MMS) stratigraphy with age constraints and fossiliferous units. (b) Geological map of the Tonian MMS in the Wernecke Mountains, showing the location of the fossil locality and camps where stratigraphic data and samples were collected; inset map of Canada showing the location of the geological map with a red rectangle. Map modified from the Yukon Geological Survey Bedrock Geology Dataset (Yukon Geological Survey, 2018). Gp., Group; Fm., Formation; Sta., Statherian Period; Eta., Etagochile Formation; Sh. Ran., Shattered Range Formation; Abr. Pl., Abraham Plains Formation; Cryo, Cryogenian Period; E, Ediacaran Period; Winder., Windermere Supergroup; Mt. Land., Mount Landreville Formation; Pass Mtn., Pass Mountain Formation; SG, supergroup.]

4.4 | Petrographic and SEM analyses

Thin sections (n = 12) were cut at Queen's University (ON, Canada) from a subset of samples selected for comparison based on their Fe py and analysed using a petrographic microscope. Thin sections (n = 3) and bulk rock samples (n = 3) were further analysed to investigate the distribution of minerals and provide insight into post-depositional processes using a Zeiss Sigma 500 variable-pressure SEM equipped with dual, co-planar Bruker XFlash EDS units at the University of Missouri X-ray Microanalysis Core. SEM imaging and EDS analyses used identical beam and chamber conditions, including: 20 keV beam accelerating voltage, 40 nA current, beam apertures of 60 μm (imaging) and 120 μm (EDS), a working distance of 16 mm (±0.2 mm; flat samples allowed for minimal variation) and 20 Pa chamber pressure with a 99.999% nitrogen atmosphere. Z-contrast backscattered-electron (BSE) imaging was conducted using a high-definition 5-segment backscatter detector, and EDS elemental analyses were conducted using both spectrometers in tandem.

4.5 | Sm-Nd isotopes

Aliquots of combusted sample (~0.1–0.95 g) were first leached in acetic acid to remove carbonates and then spiked with an enriched 150 Nd- 149 Sm tracer. Samples were then digested according to the following steps to dissolve silicates: (1) HF (4.5 mL; ~29 N) and HNO 3 (1 mL; ~15 N); (2) aqua regia (3:1, 6 N HCl:7 N HNO 3 ); (3) 6 N HCl. A three-stage chromatography process was applied to the samples to first remove iron and rare earth elements, and then isolate Nd and Sm for analysis. Iron was removed first by passing the
sample through columns filled with 200-400 mesh AG1X8 anion exchange resin. The second step targeted the rare earth elements by passing the sample through columns filled with Eichrom TRU Resin SPS 50-100 μm twice. Third, Sm and Nd were isolated using columns filled with ~600 mg of Eichrom LN Resin 50-100 μm. Nd and Sm separates were taken up in 3 mL of 2% HNO 3 . Nd and Sm isotope ratios were then measured on a Nu Plasma II MC-ICP-MS (Multicollector Inductively Coupled Plasma Mass Spectrometer) at Geotop/Université du Québec à Montréal. Nd isotope ratios are reported in εNd notation, where

(2) εNd = [( 143 Nd/ 144 Nd) sample / ( 143 Nd/ 144 Nd) CHUR − 1] × 10000

The εNd values are commonly presented as a function of age (i.e., εNd(t)), where both the sample and CHUR 143 Nd/ 144 Nd ratios are corrected for 143 Nd ingrowth since the time the rocks were deposited. In the models presented here, t = 900 Ma for the Hematite Creek Group, t = 875 Ma for the Black Canyon Creek Fm. and the Katherine Group, and t = 850 Ma for the Little Dal Group.
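A minimal sketch of Eq. (2) with the age correction follows. The present-day CHUR ratios (143Nd/144Nd = 0.512638, 147Sm/144Nd = 0.1967) and the 147Sm decay constant (6.54 × 10^-12 yr^-1) are standard literature values rather than quantities given in the text, and the example measurement is hypothetical.

```python
import math

# Standard literature values (not quoted in the text):
CHUR_143ND_144ND = 0.512638
CHUR_147SM_144ND = 0.1967
LAMBDA_147SM = 6.54e-12  # 147Sm decay constant, 1/yr

def ratio_at_time(nd143_nd144: float, sm147_nd144: float, t_ma: float) -> float:
    """Correct a present-day 143Nd/144Nd ratio for 143Nd ingrowth over t_ma Myr."""
    t_yr = t_ma * 1e6
    return nd143_nd144 - sm147_nd144 * (math.exp(LAMBDA_147SM * t_yr) - 1.0)

def epsilon_nd(sample_ratio: float, chur_ratio: float) -> float:
    """Eq. (2): epsilon-Nd as parts-per-10,000 deviation from CHUR."""
    return (sample_ratio / chur_ratio - 1.0) * 1e4

def epsilon_nd_t(nd143_nd144: float, sm147_nd144: float, t_ma: float) -> float:
    """Age-corrected epsilon-Nd(t): both sample and CHUR evolved back to time t."""
    sample_t = ratio_at_time(nd143_nd144, sm147_nd144, t_ma)
    chur_t = ratio_at_time(CHUR_143ND_144ND, CHUR_147SM_144ND, t_ma)
    return epsilon_nd(sample_t, chur_t)

# Hypothetical measurement, evaluated at t = 900 Ma (Hematite Creek Group):
print(f"eps_Nd(900 Ma) = {epsilon_nd_t(0.51180, 0.1100, 900.0):+.1f}")  # ~ -6.4
```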
FIGURE 3 Stratigraphic column and geochemical data from shale intervals in the Hematite Creek Group, Mackenzie Mountains Supergroup, Wernecke Mountains. (a) Stratigraphic section at SW Mt. Profeit (T1820/T1821); legend in Figure 2. (b) Total organic carbon content (wt.%). (c) Ratio of total iron to aluminum (FeT/Al); the threshold line is the inferred detrital baseline (FeT/Al ~ 0.3; Figure 8g–l). Cross-plots comparing the difference between pyrite (FePY) and Fe-oxide (FeOX) with oxidative weathering documented.

FIGURE Stratigraphic column and geochemical data from shale intervals in the Hematite Creek Group, Mackenzie Mountains Supergroup, Wernecke Mountains. (a) Stratigraphic section north of Tarn Lake (TN1, TN2, T1827); legend in Figure 2. (b) Total organic carbon content (wt.%). (c) Ratio of total iron to aluminum (FeT/Al); thresholds described in Figure 3. (d) Ratio of highly reactive iron to total iron (FeHR/FeT). (e) Ratio of pyrite iron to highly reactive iron (FePY/FeHR). (f) Ratio of iron oxy(hydr)oxide to total iron (FeOX/FeT). (g–i) Bulk Mo, V and U contents in ppm (black circles) and calculated trace metal enrichments in ppm/wt.% normalized to Al (red circles; Bennett & Canfield, 2020). (j) Preliminary interpretation of the redox column, where green is ferruginous, light green is possibly ferruginous, blue is oxic and grey remains unconstrained. See the redox proxy framework for a detailed description of interpretations.

FIGURE 6 Iron proxies. (a) Cross-plot of FeHR/FeT and FePY/FeHR; grey dashed lines are threshold values defined in Figure 3. (b) Cross-plot of FeT/Al and FeHR/FeT; the red dashed line is the inferred detrital baseline from this study (FeT/Al ~ 0.3).

TABLE 2 Shale total organic carbon (TOC), major and minor elemental concentrations and iron speciation data.

Raiswell et al. (2018) emphasized the importance of targeting fresh material when collecting geochemical samples from outcrop to limit the oxidative weathering of pyrite. To test this approach, Ahm et al. (2017) compared trace and major element geochemistry and iron speciation in paired samples from outcrop and drill cores. They documented that pyrite in outcrop samples had been oxidised and remobilised by weathering, while the majority of reactive iron from core samples was preserved as pyrite. This behaviour highlights the difficulties of working with outcrop samples, as FePY can weather out of the sample or be converted to the FeOX operational pool in iron speciation analyses, which could skew an interpretation between euxinic and ferruginous conditions. Loss of total reactive iron and depleted FeHR/FeT were also observed in outcrop samples, whereas no significant difference was documented in FeT. This suggests that remobilized iron could have accumulated in the unreactive phases, such as authigenic

FIGURE Trace metal enrichment cross-plots, with enrichments normalized to Al reported in ppm/wt.% and plotted against TOC in wt.%. See Appendix S1 for the full data set.

TABLE 3 Sm-Nd results.

FIGURE 8 Analytical microscopy of pyrite and weathered pyrite textures, where red = iron and yellow = sulfur. (a, c) SEM maps of TN1-124 thin section. (b, e) EDS elemental maps of the SEM image in (a), with pyrite retaining sulfur and iron. (d, f) EDS elemental maps of the SEM image in (c), with white arrowheads pointing to sulfur enrichment. (g, i) SEM maps of T1820-214 thin section. (h, k) EDS elemental maps of the SEM image in (g), with white arrowheads pointing to evidence for pyrite (relative enrichment of Fe). (j, l) EDS elemental maps of the SEM image in (i), with white arrowheads pointing to limited sulfur enrichment compared to iron. Samples from Tarn Lake N (a–f) and SW Profeit (g–l). White scale = 50 microns; black and grey scales = 5 microns. Additional microscopy images are available in Appendix S1.

It is important to note that Ahm et al.
(2017) conducted their study in Nevada, where surficial weathering conditions differ from what would be expected in Yukon based on climate, and there are currently no constraints on exposure time for either site, although both would be important factors. It is also necessary to consider the influence of post-depositional alteration on the iron speciation analysis. For example, goethite is a common alteration mineral in sedimentary rocks that forms during surficial oxidative weathering and can increase the highly reactive iron values because it is extracted with the reducible oxides pool (FeOX) (Slotznick et al., 2020). Mixed-valence Fe clays

FIGURE 9 Possible evidence for oxidative weathering. (a) Fe speciation for the Mackenzie Mountains Supergroup in the southern part of the basin, with FePY/FeHR plotted against FeOX/FeHR. (b) The percentage of highly reactive iron represented by each of the reactive pools, including carbonate (FeCARB), pyrite (FePY), oxy(hydr)oxides (FeOX) and magnetite (FeMAG). The sections include the Hematite Creek Group (Dolores Creek (T1820) and Black Canyon Creek (T1821) Formations), Katherine Group (G1806) and Little Dal Group (LD). The section and sample number are indicated on the y-axis. The trend of oxidative weathering of pyrite is indicated by the arrow (Ahm et al., 2017).

The average detrital flux into sedimentary basins is often approximated as the average value for the continental crust, or the upper continental crust (FeT/Al ~0.48; McLennan, 2001), and thus a FeT/Al baseline of ~0.5 is assumed. However, FeT/Al varies widely in detrital fluxes depending on their source composition, as demonstrated by a recent study by Cole et al. (2017) on modern environments that quantified this heterogeneity in detrital flux. The average Al observed in the Wernecke shale samples (8.79 wt.%) is comparable to 8.04 wt.% in the upper crust (McLennan, 2001). Mean FeT/Al values are 0.23 for the Hematite Creek Group, 0.59 for the Katherine Group and 0.37 for the Little Dal Group. The low FeT/Al ratios observed in this study are consistent with the regional trend observed by Sperling et al. (2013; mean = 0.37) and Gibson et al. (2020; mean = 0.35) in shales of the correlative Reefal Assemblage of the Fifteenmile Group (Figures 3–6). Thus, with the exception of the Katherine Group (mean = 0.59), Tonian strata from northwestern Canada contain FeT/Al values significantly lower than the upper continental crust. We apply the detrital baseline of FeT/Al ~0.3 proposed by Gibson et al. (2020) to account for this regional trend, which is interpreted to reflect low detrital iron silicate input to the basin (rather than high aluminum content; Sperling et al., 2013). Ahm et al. (2017) suggested FeT/Al remained unaffected by weathering in their samples. Since the Proterozoic inliers are known to have low FeT/Al values (Gibson et al., 2020;

FIGURE 10 Possible evidence for oxidative weathering. (a) Fe speciation for the Dolores Creek (TN1, TN2) and Black Canyon Creek (T1827) Formations in the northern part of the sub-basin, with FePY/FeHR plotted against FeOX/FeHR. (b) The percentage of highly reactive iron represented by each of the reactive pools, including carbonate (FeCARB), pyrite (FePY), oxy(hydr)oxide (FeOX) and magnetite (FeMAG). The section and sample number are indicated on the y-axis. The trend of oxidative weathering of pyrite is indicated by the arrow (Ahm et al., 2017).
6.4.1 | Scenario 1: Detrital and/or post-depositional influences

Detrital overprints and post-depositional alteration can affect the interpretation of paleoredox information from iron speciation data. These strata are ascribed to the informal lower Dolores Creek Fm., whereas the overlying ~300 m interval is equivalent to the Dolores Creek Fm. type section. The samples analysed during this study are from the type section (i.e., the northern part of the basin) and from a section of the upper Dolores Creek Fm. in the southern part of the basin, where there is limited evidence for rapid sedimentation. Sediment provenance, grain size and accumulation rate are important factors influencing paleoredox proxies. However, based on a detailed assessment of detrital and post-depositional influences in our samples (e.g., evidence of limited alteration in the northern part of the sub-basin, with similar results throughout the basin), and in consideration of the broader regional geochemical trends and geology, we rule out discrepancies due to detrital and/or post-depositional influences.

(e.g., bidirectional current structures and reactivation structures), which overlies the Dolores Creek Fm., suggesting that at least the upper strata in the MMS (e.g., the Katherine and Little Dal Groups) were deposited in a basin connected to the open ocean, while restriction may have occurred during deposition of the older Dolores Creek Fm. A low initial 187Os/188Os of 0.38 from the upper Dolores Creek

Our relatively low εNd(t) data confirm that the source area at this time was relatively evolved, with a modest mafic contribution. Therefore, based on sedimentary structures indicating tidal influence and the Osi data, it is unlikely that the basin was generally restricted during deposition of the MMS strata in the Wernecke Mountains, although it was clearly not a broadly open passive margin.

6.4.3 | Scenario 3: Poorly oxygenated global ocean

Sperling et al. (2013) documented thin black shales with high FeHR/FeT in the Reefal Assemblage stromatolite reef core of the Fifteenmile Group, which correlates to the fossiliferous Little Dal Group in the Wernecke Mountains (Macdonald & Roots, 2010). Miller et al. (2017) carefully considered several causes for the lack of Phanerozoic-style redox-sensitive trace element enrichment in northwestern Canada and ultimately proposed that global anoxic conditions resulted in limited marine redox-sensitive trace element enrichments. Given evidence for a connection to the open ocean, we propose that the ocean was poorly oxygenated, resulting in a limited trace metal reservoir during deposition of the lower MMS in the Wernecke Mountains. Increased trace metal enrichments observed in the Fifteenmile Group (Gibson et al., 2020) correlate with the basal Little Dal Group at SW Profeit, where there is limited evidence of increased trace metal enrichment, but iron speciation suggests oxygenation in the Little Dal Group. In the Mackenzie Mountains, thick sulfate evaporite deposits reported from the Ten Stone Fm. of the Little Dal Group (Turner & Bekker, 2016) provide further support for environmental change around ca. 850–800 Ma. These sulfate evaporites mark an abrupt change from the halite evaporites observed in the lower to middle Little Dal strata, implying an increased sulfate supply to the oceans and marine oxygenation extending below the storm wave base.
of photic zone environments by macroalgae in ancient oceans was a local trend or a more widespread phenomenon. The expansion of macroalgal ecosystems would have global environmental (e.g., oxygenation; Xiao & Tang, 2018), carbon cycle (Krause-Jensen & Duarte, 2016) and ecological (e.g., diversity, competition; Carpenter, 1990) implications, which likely spurred biological change, including the stepwise increase in morphological disparity observed in Proterozoic macroalgae (Bykova during a critical period of Earth's history when eukaryotic lineages, including diverse macroalgal communities, were colonising benthic habitats. A multi-proxy redox framework was utilized to identify the prevailing ocean redox conditions recorded in the MMS. Geochemical evidence implies deposition under a dominantly ferruginous water column (low FeT/Al, high FeHR/FeT) with muted trace metal enrichments punctuated by brief oxygenated intervals. Three hypotheses are provided to explain the muted trace element enrichments despite evidence for generally anoxic water columns: (1) false positive anoxic conditions from detrital input or post-depositional alteration; (2) deposition in a restricted basin; or (3) a globally anoxic ocean with a depleted dissolved trace metal reservoir. We propose that the third scenario is the most likely, with our data reflecting a poorly oxygenated global ocean with some potential influence from basin restriction during deposition of the Dolores Creek Fm. However, there is more evidence for oxygenation in the ca. 850 to 800 Ma Little Dal Group. Additional sampling targeting fossil sections will provide insights into Tonian macroalgal paleoenvironments, and future studies should continue to document the influence of seasonal variations on redox conditions that can dramatically influence biological activity. It is likely that eukaryotes living in these heterogeneous environments within a ferruginous ocean were able to thrive opportunistically during brief oxygenated pulses. These results provide insight into the increasingly complex algal ecosystems that emerged during the early Tonian Period and enhance our knowledge of basin redox conditions at this critical interval of eukaryotic evolution.

ACKNOWLEDGMENTS

We gratefully acknowledge the First Nation of Na-Cho Nyak Dun for permitting our fieldwork on their traditional territory. This study was supported by the National Science and Engineering Research Council of Canada (NSERC) postgraduate and postdoctoral scholarships, the Queen Elizabeth II Graduate Scholarship in Science & Technology (QEII-GSST), the Geological Society of America Graduate Research Grant, and the Northern Scientific Training Program to KMM; NSF IF 1636643 to JDS; NSERC Discovery grants to GH (RGPIN2017-04025) and ML (RGPIN435402); the Polar Continental Shelf Program; and the Agouron Institute.

(Table 2, Figures 3–5 and 7). TOC contents measured in the MMS

Trace metal enrichment of Mo normalized to Al ranges from 0.09 to 1.18 ppm/wt.%, with the largest average enrichment in the Katherine Group, while U varies from 0.10 to 1.15 ppm/wt.%, following the same trend. V enrichment ranges from 7.25 to 58.28 ppm/wt.%, with the largest average enrichment in the Little Dal Group and the smallest in the Katherine Group.

(Table 3). Less negative εNd values are observed in the Black Canyon Creek Fm. (εNd(t) = −3.87), while εNd(t) for the older Dolores Creek Fm. ranges from −9.94 to −4.05 with a mean of −7.00. εNd(t) values for the Katherine and Little Dal groups average −6.83 and −7.18, respectively.
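Equation (2) and the ingrowth correction described in Section 4.5 can be made concrete with a short numerical sketch. This is an illustration only: the decay constant and present-day CHUR ratios below are standard literature values (Jacobsen and Wasserburg, 1980), not values reported in this paper, and the measured ratios in the example are hypothetical, chosen so that the result lands inside the Dolores Creek range quoted above.

import math

LAMBDA_SM147 = 6.54e-12      # 147Sm decay constant (1/yr), standard value
CHUR_143ND_144ND = 0.512638  # present-day CHUR ratios (Jacobsen and
CHUR_147SM_144ND = 0.1967    # Wasserburg, 1980); assumed, not from this paper

def ratio_at_t(nd143_144_now: float, sm147_144_now: float, t_yr: float) -> float:
    """Correct a present-day 143Nd/144Nd for 143Nd ingrowth over t_yr."""
    return nd143_144_now - sm147_144_now * math.expm1(LAMBDA_SM147 * t_yr)

def epsilon_nd_t(nd143_144: float, sm147_144: float, t_ma: float) -> float:
    """Equation (2) evaluated at the deposition age t (in Ma)."""
    t_yr = t_ma * 1e6
    sample_t = ratio_at_t(nd143_144, sm147_144, t_yr)
    chur_t = ratio_at_t(CHUR_143ND_144ND, CHUR_147SM_144ND, t_yr)
    return (sample_t / chur_t - 1.0) * 10_000

# Hypothetical Dolores Creek measurement evaluated at t = 900 Ma
print(round(epsilon_nd_t(0.51195, 0.115, 900), 2))  # -> about -4.0

Because the sample and CHUR are corrected with the same decay law, εNd(t) isolates the source signature: strongly negative values, as reported here, indicate an evolved crustal provenance.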
2024-05-04T06:17:09.055Z
2024-05-01T00:00:00.000
{ "year": 2024, "sha1": "1eaa57d85a0a03b96e80bea358c6ca6335abd844", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/gbi.12598", "oa_status": "HYBRID", "pdf_src": "Wiley", "pdf_hash": "7bf2d8f2d76c0dd1913557f9304fcac2d6932cb5", "s2fieldsofstudy": [ "Environmental Science", "Biology", "Geology" ], "extfieldsofstudy": [ "Medicine" ] }
214169786
pes2o/s2orc
v3-fos-license
Measurement of the thermal barrier ability of ceramic coatings in a flame thermal shock tester

Ceramic materials are widely used at high temperatures due to their low thermal conductivity. Thermal barrier coatings (TBCs) have been applied in the hot sections of industrial gas turbines and aero engines. Measurement of the thermal barrier ability of TBCs is very important for coating property evaluation, and it can be done in hot gases with a temperature gradient through the coating. In this study, a flame thermal shock tester was used for this investigation. A thick TBC, consisting of an MCrAlY bond coat and a YSZ top coat, was sprayed onto a superalloy substrate by the air plasma spray (APS) process. In the test, a hot flame impinged on the top coat surface, with the temperature monitored by an infrared thermometer. A thermocouple was attached to the backside of the substrate to measure the backside temperature, so that the temperature gradient in the TBC could be evaluated. The thermal barrier ability of the TBC at different back cooling rates was studied.

Introduction

Thermal barrier coatings (TBCs) have been widely used to protect components in gas turbines [1][2][3]. Many studies have been carried out on the thermal shock fatigue behavior of TBCs under various testing conditions [4][5][6]. Using a flame as the heating source and forced air to cool the samples is a better approach for thermal shock testing of coatings such as TBCs, as this process does not introduce the water corrosion effect that promotes edge cracking [7,8]. To test a coating's resistance to thermal cycling spallation, control of the surface temperature is important, including not only the holding (operation) temperature but also the heating rate (start-up of the engine) and the cooling rate (shut-down) [8]. In this study, the thermal barrier ability of TBCs at different back cooling rates was studied using a flame thermal shock tester.

Experimental

A GH4169 superalloy disc with a diameter of 25.4 mm was used as the substrate, with an MCrAlY bond coat. A ~350 μm thick yttria-stabilized zirconia (YSZ) TBC was made by the APS process in a GTV spraying system. The APS TBC coating had about 10–15% porosity. Thermal shock testing of the TBC samples was performed in a flame thermal shock platform (made by Beijing Qinhe Technol. Company, China). A propane/oxygen flame gun was fixed on a mechanical moving system, with a CCD camera for recording the coating surface and an infrared thermometer for monitoring the temperature. More description of the testing machine can be found in [8,11]. The flow rates of the propane and oxygen gases were constant in each testing trial. As shown in figure 1, hot gases attacked the TBC surface, with cooling air at the back side of the sample. At the back side, a hole was drilled so that a thermocouple could measure the temperature just below the TBC coating. Table 1 gives the gas flow rates in the different trials of flame thermal shock. Every trial consisted of a single thermal cycle with fixed propane and oxygen flow rates. By changing the oxygen flow rate among the trials, the surface temperature of the TBC sample could be changed. By changing the back cooling flow rate of forced air, the backside temperature of the sample could be adjusted. The working distance (gun to sample) was about 30 mm.

Results and Discussion

In trial 1#, an oxygen flow rate of 17 L/min was used with back cooling air of 10 L/min. As given in figure 2, the temperature of the TBC surface was recorded.
After about 200 s, the temperature at the sample center (location A) was almost stable at around 1030 °C. At about 230 s, the infrared thermometer was moved to measure location B, where the temperature was around 980 °C, so a difference of about 50 °C was observed. Further measurements at locations C and D showed that the temperature at the sample edge was about 20 °C lower than at the center. In trial 2#, the temperatures of the front and back sides of the sample were recorded simultaneously, with an oxygen flow rate of 18 L/min and back cooling air of 10 L/min. The back temperature was measured in the back hole. At the beginning of the heating process, the temperature of the front side increased faster, so the temperature difference between the front and back sides was high. As the back temperature caught up, the difference became smaller. After about 200 s, the measured temperatures at the front and back sides became the same. This indicates that with a low back cooling rate, the TBC coating provides little thermal barrier effect. Figure 4 compares the results using different back cooling rates. The temperature difference at 200 s was <10 °C with 10 L/min of back cooling and 10 °C with 30 L/min. With 50 L/min of back cooling, the temperature difference reached 20 °C after 300 s, when the front temperature reached 1150 °C. So, the thermal barrier ability of TBC coatings has a strong relationship with the back cooling: a larger cooling rate gave a larger temperature gradient across the TBC coating. In trial 6#, the thermocouple was first attached to the back surface of the sample. A large temperature difference (about 300 °C) was obtained. Such a value, however, cannot reflect the real temperature of the sample, because the cooling air took away too much heat from the thermocouple. When the thermocouple was put into the back hole, the temperature difference became small. This indicates that a drilled hole to seat the thermocouple is necessary for measuring the backside temperature of the sample with a thermocouple. In addition, by changing the oxygen flow rate, the heating rate of the sample surface could also be adjusted.

Conclusions

A flame thermal shock tester was used to measure the thermal barrier ability of a TBC sample with a back hole. The temperature difference between the coating surface and the back hole can be obtained. The main conclusions are:

1) With a low back cooling rate (< 10 L/min), the TBC can hardly give a thermal barrier effect. A sufficiently high back cooling rate is necessary to reveal the TBC's thermal barrier ability.

2) By increasing the back cooling rate to 50 L/min, the TBC coating gave a decrease of about 20 °C at a surface temperature of 1150 °C.
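The measured front-to-back temperature difference can be tied to the heat flux through the coating with a one-dimensional steady-state conduction estimate, q'' = k·ΔT/L (Fourier's law). The sketch below is a rough illustration, not part of the paper's analysis: the thermal conductivity value is a typical literature figure for porous APS YSZ, not a measured property of this coating.

# Rough 1-D steady-state conduction estimate across the YSZ top coat.
# Assumption: k ~ 1.0 W/(m K), a typical literature value for APS YSZ
# with 10-15% porosity (not measured in this study).
k_ysz = 1.0      # W/(m K), assumed thermal conductivity of the top coat
L = 350e-6       # m, coating thickness used in the experiment
dT = 20.0        # K, front-to-back difference measured at 50 L/min cooling

q_flux = k_ysz * dT / L   # W/m^2, implied through-thickness heat flux
print(f"implied heat flux ~ {q_flux / 1e3:.0f} kW/m^2")  # ~57 kW/m^2

Read the other way, the same relation explains why a high back cooling rate is needed to reveal the barrier effect: without sufficient heat extraction at the substrate, the through-thickness heat flux stays small and ΔT collapses toward zero, as observed in trial 2#.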
2019-12-19T09:18:51.563Z
2019-12-01T00:00:00.000
{ "year": 2019, "sha1": "a55115067dad763bacd02e9f23e0562835216236", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1347/1/012129", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "1c5f7ecadff146fbc017953dd3f4553451a4b759", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Materials Science" ] }
87971832
pes2o/s2orc
v3-fos-license
Roasted Sesame Hulls Improve Broiler Performance Without Affecting Carcass Characteristics

An experiment was conducted to evaluate the effect of using graded levels of roasted sesame hulls (RSH) on growth performance and meat quality characteristics in broiler chickens. A total of 360 day-old Lohmann chicks were randomly allocated into 24 floor pens and raised over 42 days. One of four dietary treatments was assigned to each group of six pens in a completely randomized fashion. The chicks in the control group were fed a corn-soybean based diet (RSH-0), while the chicks in treatments two, three, and four were fed graded levels of RSH at 4% (RSH-4), 8% (RSH-8), and 12% (RSH-12), respectively. Diets were formulated to meet broiler chicks' requirements according to the National Research Council for both starter and finisher rations. The results showed that RSH inclusion increased (P<0.05) feed intake and final body weight without adversely affecting the feed conversion ratio. Broiler chicks fed RSH-12 had heavier (P<0.05) breast and leg cuts compared to the control-fed group, with no change in their chemical composition. Water holding capacity (WHC), cooking loss (CL), and shear force (SF) were similar in all dietary groups. The chemical composition of both thigh and breast cuts was not affected by the RSH. After one day of thawing, colour coordinates of breast cuts were similar in all dietary groups. The results of this study suggest that the addition of RSH to broiler diets up to 12% improves growth performance; nevertheless, carcass characteristics and meat quality showed no alterations compared to the control-fed group.

Introduction

Sesame (Sesamum indicum L.) is an herbaceous annual plant belonging to the Pedaliaceae family and one of the world's most important and oldest oilseed crops known to man (Sonntag, 1981). The chemical composition of sesame seeds is 48.3%, 20.8%, 13.5%, and 5.3% for oil, protein, carbohydrates, and ash, respectively (Kahyaoglu and Kaya, 2006), which shows the importance of these seeds as a source of nutrients for humans. Mechanical oil extraction of intact seeds by a screw-pressed expeller produces a bitter meal with low digestibility due to the presence of fibrous husks, which can be useful only in livestock feeding. However, the quality of this meal can be greatly improved by seed de-hulling before the pressing process. In Jordan, the imported quantity of sesame grew from around 16,000 tons in 2007 to about 25,000 tons in 2013 (Ministry of Agriculture, 2007, 2013). Most of this was processed to make sesame paste called tehineh, a popular food in the Middle East that is also used for the manufacture of halaweh (sweetened tehineh), as described by Abou-Garbia et al. (2000). Whole sesame seeds are hulled using peeling machines to separate the sesame hulls (SH) from the seed. Usually, small and broken seeds escape the peeling process and stay in the de-hulled portion, comprising 15-17% of the original weight. Elleuch et al. (2007) described the stages of by-product elimination during the preparation of sesame paste. After water soaking, the majority of the SH by-products are collected by sieving the de-hulled seeds, while the rest of the SH are removed after the roasting stage (RSH). The chemical composition of RSH shows that it is an important source of crude protein (25.8%), oil (17.6%), and metabolisable energy (3.92 kcal/g) (Obeidat and Aloqaily, 2010).
Great discrepancies exist between the hulls collected after soaking (SH) and those collected after roasting (RSH). RSH is a remarkable source of calcium, phosphorus, zinc, and iron, contains higher percentages of dry matter, protein, and oil compared to SH (Elleuch et al., 2007), and is rich in sulphur-containing amino acids (Kapadia et al., 2002). Due to the great jump in costs for conventional feeds and the chemical composition of RSH, we believe that RSH can be incorporated in broiler rations. However, the literature cites only a limited number of research studies using either SH or sesame meal (SM) in broiler or layer (Cheva-Isarakul and Tangtaweewipat, 1993; Mamputu and Buhr, 1995; Farran et al., 2000), sheep (Obeidat et al., 2009; Obeidat and Aloqaily, 2010), or goat feeding trials (Obeidat and Gharaybeh, 2011). Cheva-Isarakul and Tangtaweewipat (1993) demonstrated that SM can be used at 13% in layer rations without adversely changing egg production, feed intake, body weight gain, and egg weight. However, Mamputu and Buhr (1995) reported depressed egg production variables in hens fed a diet containing 18% SM. Additionally, they also showed that the performance of broiler chicks decreased when fed a dietary level of SM beyond 7.5%. Farran et al. (2000) recommended that SH be used at levels not exceeding 8% in broiler diets and 14% in layer diets to avoid adversely affecting the production parameters. In this trial, we propose using RSH for the first time in broiler rations and hypothesize that it can be included at higher levels without adversely affecting growth performance, carcass characteristics, and meat quality.

Materials and methods

Animal and experimental procedure

A total of 360 one day-old broiler chicks (Lohmann) were randomly assigned to 24 floor pens located in a commercial open-sided poultry house and raised over 42 days. One of four dietary treatments was assigned to each group of six pens in a completely randomized fashion. The chicks in treatment one were fed a corn-soybean basal diet with no RSH (RSH-0), while the chicks in treatments two, three, and four were fed graded levels of RSH at 4% (RSH-4), 8% (RSH-8), and 12% (RSH-12), respectively. The respective RSH levels were included in both starter and finisher diets (Table 1), which were formulated to satisfy the recommendations of the National Research Council (NRC, 1994). The poultry house was illuminated 23 h a day. Feed and water were provided ad libitum throughout the experimental period. Feed consumption (FC) and refusals were recorded daily, and body weight gain (BWG) was measured weekly. The feed conversion ratio (FCR) was calculated as the ratio of total FC to final BWG. The chicks in all of the experimental groups were vaccinated against Newcastle and Infectious Bronchitis diseases at six days of age, and against Infectious Bursal Disease at 14 days of age.
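As a concrete illustration of the performance variables defined above, the short sketch below computes FCR for one hypothetical pen; the numbers are invented for the example and are not data from this experiment.

def feed_conversion_ratio(total_feed_g: float, final_bwg_g: float) -> float:
    """FCR as defined in the text: total feed consumed divided by final
    body weight gain (lower values mean better feed efficiency)."""
    return total_feed_g / final_bwg_g

# Hypothetical per-bird averages for one pen over the 42-day trial (grams)
total_fc = 4200.0   # total feed consumed
final_bwg = 2400.0  # final body weight gain

print(round(feed_conversion_ratio(total_fc, final_bwg), 2))  # -> 1.75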
At 42 days of age, two birds were randomly selected from each replication of each treatment (n=12 samples per treatment) and then slaughtered for meat quality, meat colour coordinates, and chemical composition. The contents of dry matter (DM), crude protein (CP), and ether extract (EE) in breast and thigh muscles were analysed as outlined by AOAC (1990). The live body weight (BW) was recorded after the birds had fasted for eight hours. Slaughtering was done by manual exsanguination, severing both carotid arteries and at least one jugular vein with a knife; bleeding was continued for 120 s, and the head and shanks were removed. After bleeding, the birds were scalded at 60°C for 120 s, followed by defeathering in a rotary drum picker for 40 s, and were manually eviscerated. The carcasses were then washed internally and externally and chilled for 40 min at 4°C in clean water. The carcass, breast, leg, and fat pad weights were measured, and the dressing percentage was calculated using the carcass weight. Meat quality measurements were performed on the pectoralis major muscle of the broiler breast. Right and left pectoralis major muscles were harvested from each carcass after chilling according to the procedure described by . Briefly, carcasses were aged for 5 h on crushed ice before hand deboning at 6 h post-mortem. Pectoralis major samples were then placed on trays wrapped with plastic sheets and refrigerated at 3-6°C. At 24 h post-mortem, the left pectoralis major muscles were used for the measurement of cooking loss (CL) and Warner-Bratzler shear force (SF) values of cooked meat, whereas the right pectoralis major muscles were used for colour coordinate, pH, and water holding capacity (WHC) measurements.

Cooking loss determination

The left pectoralis muscles were weighed (initial weight) and then placed in labelled polyethylene bags. The bags were placed in a thermostatically controlled water bath and cooked for 25 min at 85°C to achieve a maximum internal temperature of 80°C. After cooking, the bags were brought to room temperature (23-24°C) before opening to drain the liquid, and the cooked samples were then dried with paper towels to remove excess surface moisture and re-weighed. CL was reported as the weight lost during cooking divided by the fresh sample weight and expressed as a percentage (Abdullah and Matarneh, 2010).

Shear force (tenderness)

Tenderness was measured according to Abdullah and Matarneh (2010). Briefly, within three hours of cooking, the dried samples from each left pectoralis major muscle were cut to obtain four cores (20 × 13 × 13 mm) of similar size, parallel to a line beginning at the humoral insertion and ending at the point adjacent to the keel, and including the complete depth of each cooked muscle sample. Each core was sheared perpendicular to the longitudinal orientation of the muscle fibres using a Warner-Bratzler shear blade with a triangular slot cutting edge mounted on a Salter model 235 (Warner-Bratzler meat shear, G-R Manufacturing Co., 1317 Collins LN, Manhattan, Kansas, 66502, USA) to determine the peak force (kg/cm2) when shearing the samples. SF was determined as the average of the maximum force of the four replicates from each pectoralis major muscle sample.

pH measurements

The pH values were determined in duplicate samples using the iodoacetate method as described by Jeacocke (1977) and Sams and Janky (1986). To measure pH values, 1-1.5 g of raw right muscle was put into a plastic test tube containing 10 ml of neutralized 5 mM iodoacetate reagent and 150 mM KCl, and homogenized using a homogenizer (Ultra-Turrax T8, IKA Labortechnik, Janke & Kunkel GmbH & Co., Germany). The ultimate pH values of the homogenate were measured using a pH meter (pH Spear, large screen, waterproof pH/temperature tester, double injection, model 35634-40, Eurotech Instruments, Malaysia).

Colour measurements

Instrumental colour measurements of the raw right pectoralis muscles were taken 24 h post-mortem using a colorimeter (12MM Aperture U 59730-30, Cole-Parmer International, Accuracy Microsensors, Inc.
Pittsford, New York, USA), calibrated throughout the study using a standard white ceramic reference (CIE L* = 97.91, a* = -0.68, b* = 2.45). The samples were placed on a tray and covered with wax paper to avoid surface drying. Random readings were taken from each sample at three different locations on the muscle surface that were adjacent to the skin and free of any noticeable colour defects, such as bruises or broken blood vessels. The three location readings were averaged, and the colour for each sample was expressed in terms of CIELAB (Commission Internationale de l'Eclairage, 1976) brightness (L*), redness (a*), and yellowness (b*).

Water holding capacity

The water holding capacity of the pectoralis major muscles was estimated by measuring the amount of water released from the muscle protein by the application of force (expressible juice) and by the ability of muscle protein to retain water present in excess and under the influence of internal force (WHC). The WHC was measured according to the method described by Graw and Hamm (1953) and modified by Sañudo et al. (1986). Briefly, approximately 5 g of raw meat sample was cut into small pieces (initial weight). The meat pieces were then covered with two filter papers (qualitative, 185 mm diameter circles, fine crystalline retention, Whatman International Ltd, England) and two thin plates of quartz material, and pressed with a weight of 2,500 g for 5 min. The meat samples were then removed from the filter paper and their weight was recorded (final weight). The difference in weight divided by the initial sample weight was reported as the WHC of the pectoralis major muscles.

Statistical analysis

The means of the pens' performance and meat quality variables were analyzed as a completely randomized design to examine the effect of including RSH in poultry rations. The statistical analysis was performed using the general linear model procedure (PROC GLM) of SAS (SAS Institute, 1994) by applying the following model: Yij = μ + αi + eij, where μ is the overall mean, αi is the effect of RSH, and eij is the residual error. Treatment means were compared using the least squares means option in PROC GLM. Differences among the means were declared significant at P<0.05.
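The quality metrics and the statistical model above reduce to a few lines of arithmetic. The sketch below computes CL% and WHC% from paired weighings and fits the one-way model Yij = μ + αi + eij as a one-way ANOVA; all numbers are hypothetical stand-ins rather than data from this trial, and scipy's f_oneway is used here in place of SAS PROC GLM.

from scipy import stats

def cooking_loss_pct(raw_g: float, cooked_g: float) -> float:
    """CL% = weight lost during cooking / fresh sample weight x 100."""
    return (raw_g - cooked_g) / raw_g * 100.0

def whc_pct(initial_g: float, pressed_g: float) -> float:
    """Weight lost under the 2,500 g press relative to initial weight."""
    return (initial_g - pressed_g) / initial_g * 100.0

print(round(cooking_loss_pct(100.0, 78.0), 1))  # -> 22.0
print(round(whc_pct(5.00, 4.10), 1))            # -> 18.0

# One-way model Y_ij = mu + alpha_i + e_ij: hypothetical pen-mean BWG (g)
# for the four dietary groups (n = 6 pens each).
rsh0  = [2350, 2310, 2390, 2330, 2370, 2340]
rsh4  = [2400, 2380, 2430, 2390, 2410, 2420]
rsh8  = [2550, 2530, 2580, 2520, 2560, 2540]
rsh12 = [2570, 2540, 2600, 2550, 2580, 2560]

f_stat, p_value = stats.f_oneway(rsh0, rsh4, rsh8, rsh12)
print(f"F = {f_stat:.1f}, P = {p_value:.4g}")  # treatment effect at P < 0.05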
Results and discussion

The statistical inference for FC and BWG was similar among experimental treatments (Table 2). The inclusion of RSH in broiler diets increased (P<0.05) both FC and BWG compared to the basal (RSH-0) diet, with no change in FCR. Feeding the RSH-4 diet lifted both FC and BWG upwards compared with birds fed the RSH-0 diet, but the difference did not reach a significant level. However, FC and BWG were greater (P<0.05) when the birds were fed the RSH-8 and RSH-12 diets compared to the RSH-0 diet. The efficiency of feed conversion remained within normal ranges among all of the experimental groups. The differences in dressing percentages remained within the normal range across groups. However, the dietary inclusion of RSH at 4% and 8% resulted in numerically higher weights of breast and leg cuts, and at 12% the increase in weight was significant (P<0.05) compared to the control-fed group. The proximate chemical composition of tissues sampled from breast and thigh muscles is summarized in Table 3. Our results indicate that breast and thigh tissues of broilers fed the control or RSH diets had similar levels of DM, CP, and EE. Carcass WHC, CL, and SF from basal-diet fed chickens and RSH-fed chickens were comparable (Table 4). Chicks fed the basal diet tended to have a higher WHC and a lower CL, but these changes remained within the normal variation of the data collected in this study. The changes in colour coordinates (L*, a*, b*) and pH values 24 h after thawing are also summarized in Table 4. The results presented in this study also indicate that the inclusion of dietary RSH did not alter colour coordinates or pH scores. Colour lightness (L*) ranged between 53.3 and 55.9, with no statistical differences due to RSH inclusion. Similarly, the redness (a*) and yellowness (b*) of breast muscle were not altered by RSH, registering values between 2.24 and 3.02 and between 12.8 and 14.3, respectively. The pH values of breast muscles 24 h post-mortem also appeared unaffected by RSH, ranging from 5.73 to 5.91 across dietary treatments. The use of agro-industrial by-products has become a necessity in poultry feeding to cut down on production costs as major feed ingredient prices increase. In Jordan, a total of 3,750-4,500 tons of SH are produced annually, most of which is drained into the sewer system, with some sold as livestock feed (MOA, 2013). The high energy and protein levels in RSH maximize its added value in poultry rations. However, the high calcium content was a major constraint in setting the highest level of RSH in the starter and finisher rations; thus, broilers fed RSH-12 had no limestone added to their formulated diet (Table 1). In this study, we found that RSH scaled up FC and improved the BWG of broiler chickens without affecting the FCR. Both FC and BWG of the RSH-0 fed group and those fed RSH-4 were similar. However, when chickens were fed 8% and 12% RSH, their FC increased by 8.7% and 7.7%, respectively, which consequently improved BWG by 8.8% and 9.3%, respectively. The dressing percentages of all dietary treatments were comparable to each other; however, birds fed RSH-12 showed increases of 15% and 11% in breast and leg cuts, respectively, compared to birds fed the control diet. Although RSH positively altered growth performance and major carcass cuts, the chemical composition of both breast and leg muscles appeared not to be influenced by dietary RSH inclusion. Carcass composition was not significantly affected, in support of previous findings (Kamran et al., 2008) showing that it is more difficult to modify carcass composition than to alter growth rate or efficiency. The mean pH of breast meat did not differ between broilers fed the different levels of RSH and the control diet. In addition, no treatment had a significant incidence of pH values below 5.7. pH is a potential indicator of meat exhibiting poor quality characteristics, because rapid post-mortem pH decline can lead to protein denaturation, which may result in a pale colour and low WHC (Briskey and Wismer-Pedersen, 1961). According to Jones and Grey (1989) and Sams and Miles (1993), normal pH values at the end of the post-mortem process are between 5.60 and 5.80, and between 5.78 and 5.86, respectively. Our data showed that at 24 h post-mortem, breast pH persisted within these values in broilers fed the different dietary treatments, indicating that there were no quality problems with the broiler meat, independently of RSH inclusion. WHC, CL, and SF are quality parameters interrelated with meat tenderness, one of the most important sensory characteristics of meat, and were not altered by dietary RSH.
SF values in conventionally deboned breast muscle were close to 6.0 kg/cm2 (Papinaho and Fletcher, 1996; Souza et al., 2005). Considering these reference values, dietary RSH did not affect meat tenderness in the present study, since SF values were between 2.33 and 3.33 kg/cm2. Meat colour alteration is closely related to meat quality and can be identified by objective colorimetric measurements using the CIELAB system, which determines the parameters L*, a*, and b* (lightness, redness, and yellowness), as described by Barbut (1993). The results presented in this study revealed that dietary inclusion of RSH did not cause alteration to any of the colour coordinates. Previous studies have used L* as a measure to estimate the incidence of paleness or the pale, soft, and exudative condition, or both, in broiler breast meat (Barbut, 1998; van Laack et al., 2000). Van Laack et al. (2000) reported that breasts appearing to be normal had L* values of 55 and those appearing to be pale had CIE L* values of 60, and stated that high L* values and low pH were indicative of broiler breast meat that was pale in colour with low WHC. The L* and mean pH values for the control and other RSH-fed groups after 24 hours of thawing in the current study were similar to values that have been reported by previous researchers as characteristic of normal broiler breast meat at 24 hours post-mortem (Barbut, 1998; Woelfel et al., 2002). As no studies related to the use of RSH in monogastric feeding trials could be found in the available literature, the current results are compared to those for SH or SM. The results reported in our study are not in harmony with the related SH and SM literature. The general trend in previous SH and SM studies revealed that layer hens tolerated higher levels of SH (Farran et al., 2000) and SM (Cheva-Isarakul and Tangtaweewipat, 1993; Mamputu and Buhr, 1995) without adversely deviating from their control group counterparts. However, broiler birds fed SH at a 6-12% dietary level showed depressed weight gain and increased FC (Farran et al., 2000). Other studies reported that SM incorporation above 7.5% in the broiler diet reduced feed intake, BWG, and feed conversion during the first three weeks of age. In a recent study, broiler starter, grower, and finisher diets were formulated with increasing levels of SM, and yet lower body weight and feed conversion efficiency were recorded (Rahimian et al., 2013). The effect of SM on growth performance and histological intestinal alteration in layer chickens was evaluated by Yamauchi et al. (2006). They concluded that SM would have no detrimental effect on growth performance at up to 20% dietary SM, nor on the intestinal villi at up to 30% dietary SM, but hypertrophy was observed in the epithelial cells of birds fed up to 20% dietary SM. They concluded that up to 20% SM could be incorporated into the diets of male birds fed under commercial conditions of laying strains in the developer period. It appears that the nutritional efficacy of sesame seed by-products depends on their anti-nutritional factors, such as phytate, oxalate, and tannins. The phytate level may have contributed to the lower performance reported in SH and SM research.
The level of phytate was estimated to be 5.18% in SM (Mega et al., 1982) and 1.12% in SH (Farran et al., 2000). It has been reported that a high level of phytate depressed feed and protein conversion in salmon (Richardson et al., 1985), decreased protein digestibility in common carp (Hossain and Jauncey, 1989), and increased the calcium requirement of broiler chicks (Farkvam et al., 1989). Furthermore, phytate decreases the bioavailability of proteins and essential elements such as Ca, Mg, Zn, Fe, and P by forming insoluble complexes, which are not readily absorbed by the gastrointestinal tract (Akande et al., 2010; Agbaire and Emoyan, 2012). The high content of oxalate (13%) in SH, as reported by Farran et al. (2000), may have contributed to the poor performance in broilers and layers by reducing Ca bioavailability (Ward et al., 1982). Oxalates interfere with magnesium metabolism and react with proteins to form complexes, which have an inhibitory effect on peptic digestion (Akande et al., 2010). Jacob et al. (1996) reported that SM contains 2.15% tannins, which may also contribute to the poor performance reported in the literature, since tannins interfere with nutrient digestion by binding the protein in feed. Tannins are water-soluble phenolic compounds that chelate Fe and Zn and limit the absorption of these nutrients (Akande et al., 2010), and they may precipitate proteins from an aqueous solution by inhibiting digestive enzymes. They have been found to interfere with digestion by displaying anti-trypsin and anti-amylase activity (Soetan and Oyewole, 2009). We believe the discrepancy between the performance results of our study and those in the related literature stems from the differences between SH and SM versus RSH. The chemical composition, functional properties, antioxidant activity, and physiochemical characteristics of RSH might explain the reported improvement in broiler performance. Chang et al. (2002) reported that SH has significantly higher antioxidant components compared to sesame seed. The effect of industrial processing on the physiochemical characteristics of sesame seed and its by-products was extensively researched by Elleuch et al. (2007), who found that roasting considerably improves SH physiochemical characteristics without changing its fatty acid composition. Roasting also increases the total phenolic content, radical scavenging activity, reducing power, and antioxidant activity of SH (Elleuch et al., 2007); for example, the polyphenol content of RSH (260 mg/100 g) was higher than that of raw seed (87 mg/100 g). This can be explained by the fact that polyphenols are compounds associated with dietary fibre (Larrauri et al., 1996). Sesamol is a potent phenolic antioxidant (Yoshida and Takagi, 1997), increasing from 8 mg/kg in raw sesame seed to 22 mg/kg in SH; in RSH, however, the content jumped to 54 mg/kg (Elleuch et al., 2007). The abrupt increase in sesamol is explained by the conversion of sesamolin to sesamol, which occurs during roasting (Yoshida and Takagi, 1997). They also reported that sesamol is an effective stabilizing substrate for oil and has a synergetic action with α-tocopherol.

Conclusions

The presence of anti-nutritional factors in SH affects some nutrient bioavailability and may limit its usefulness as a feed ingredient. We believe that roasting was effective in degrading much of these anti-nutritional factors (Udousoro et al., 2013), thus making SH a more valuable feed ingredient.
Also, RSH is a good source of CP and fat with enhanced functional properties; as the literature reviewed here clearly illustrates, the polyphenolic content and antioxidant activity of RSH are considerably higher than those of raw SH. Therefore, it should be safe to conclude that RSH can improve broiler performance without altering meat composition or quality.
2019-03-31T13:46:06.349Z
2015-01-01T00:00:00.000
{ "year": 2015, "sha1": "71ac6b57fe0c271a8d590f51bff817e002644669", "oa_license": "CCBYNC", "oa_url": "https://www.tandfonline.com/doi/pdf/10.4081/ijas.2015.3957?needAccess=true", "oa_status": "GOLD", "pdf_src": "TaylorAndFrancis", "pdf_hash": "d0ec155f45860d9477542c7dec4b617fde71a3a0", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
41302229
pes2o/s2orc
v3-fos-license
New activities and changing roles of health sciences librarians: a systematic review, 1990–2012

Objective: The paper identifies and documents new health sciences librarian activities and roles during the period 1990–2012.

Methods: A systematic review of the literature was conducted using MEDLINE, Library and Information Abstracts, Library Literature, Scopus, and Web of Science. To find new roles that might not yet have been described in the literature, job announcements published in the Medical Library Association email discussion list archives from 2008–2012 were searched. For inclusion, an article needed to contain a substantive description of a new role and/or activity performed by librarians and be in the field of medical or health sciences librarianship. Papers that did not describe an actual (rather than proposed) librarian role were excluded.

Results: New roles identified through the literature search were: embedded librarians (such as clinical informationist, bioinformationist, public health informationist, disaster information specialist); systematic review librarian; emerging technologies librarian; medical education librarian; development librarian; and data management librarian. New roles identified through job announcements were digital librarian, metadata librarian, scholarly communication librarian, and translational research librarian. New twists to old roles were also identified: clinical medical librarian, instruction librarian, outreach librarian, and consumer health librarian.

Conclusions: While the main purposes of health sciences librarianship remain the same, the new roles represent major new activities so that, for many librarians, daily on-the-job work is completely different.

Implications: This list of new activities should inform students contemplating medical librarianship careers, guide formal and continuing education programs, and encourage other librarians to consider these new services.

INTRODUCTION

In 1938, Keys, reference librarian at the Mayo Clinic, outlined the responsibilities of the medical librarian. They were (1) preservation of books and journals, and (2) distribution of the knowledge in those books and journals. This latter responsibility included cataloging, indexing, and teaching others how to use the materials. He also lamented the time burden imposed by these responsibilities, suggesting it would be Utopia for librarians to have the time for actual reading: "It is, of course, unheard of that a librarian should be so bold as to read." In "Looking toward 1970," he anticipated better cataloging, medical librarian educational textbooks, and a graduate school for medical librarians [1]. That was all. Keys did not anticipate the changes that would begin before 1970. By the end of the century, many new roles and activities had appeared. By 1998, when the Medical Library Association (MLA) celebrated its first one hundred years, Homan noted, "Dramatic advances in research, health care, and information science have occurred since 1898" [2]. For example, in the 1960s, MEDLARS, a computerized bibliographic system, produced the annual Cumulated Index Medicus, one volume that could be searched by hand. A search over a previous decade required looking at only ten volumes, rather than the dozens of separately bound journal volumes. By the 1970s, MEDLINE had been implemented. It connected MEDLARS, the large reference database, with a commercial telecommunications network. Now librarians were able to conduct computer-aided searches.
In the 1980s, the MEDLINE database became searchable using Grateful Med software. This program could be installed on desktop computers to allow individual health professionals, without specialized search training and without a librarian, to search the millions of journal article references in MEDLINE. By 1989, Berners-Lee, a British scientist at CERN, had invented the World Wide Web. The web was originally developed to meet the demand for automatic information sharing between scientists in universities and institutes around the world. By the end of 1990, prototype software for a basic web system was demonstrated for use by the general population [3]. Toward the end of the 1990s, MEDLINE became available free of charge on the web through PubMed [4]. New information technology and the web triggered an "information explosion on the digital front" [5]. It was now possible for a library client to access information from a desktop without involving a librarian.

PURPOSE

These changes were associated with dramatic transformations in the roles of many librarians. However, while there are articles on how health sciences librarians need to build on the past and re-engineer themselves to meet the information demands of the future, the authors found no systematic review of actual new roles or activities. The purpose of this review is to document and categorize new roles and activities during the period from about the beginning of the Internet to the present, 1990 to 2012. For purposes of this paper, "health sciences librarians" is used to mean medical librarians or librarians who work in health care environments.

Search strategy

A literature search was conducted in January 2013 using five databases: MEDLINE, Library and Information Abstracts, Library Literature, Scopus, and Web of Science. To find new roles that might not yet have been described in the literature, we examined job announcements published in archives from 2008 to 2012 for MEDLIB-L, an MLA email discussion list. The list's "advanced search" feature was used to find listings that had "job" or "position" in the title of the email message. Job titles or activities identified from the job announcements that were different were used as text words for another search, paired with the text word "librarian." "Different" was a subjective measure based on the criteria of (1) not appearing in the list of new roles from the literature, and (2) not being among the current or traditional activities of health care librarians as understood by the authors. This search was also conducted in the same five databases used for the first search.

Published literature inclusion and exclusion criteria

Descriptive articles of actual new roles that health sciences librarians have embraced from 1990 to 2012 were included. "New" means described or implied as such by the article's author, where the claim seemed rational. To be included, an article also had to include a substantive description of the role; that is, the article had to include enough detail to suggest a real position or role, rather than only a proposed role. We excluded commentaries, editorials, and articles that described the need for new roles but did not describe a situation in which the role was operational. We found that descriptions of new roles did not always indicate that they were librarian roles. A probable connection to a new librarian role was inferred from these activities because the description occurred in a library-oriented medium. It is understood that there is a continuum between new activities and new roles.
If an article described a function and said it was a role, we included it. If an article described activities that we felt represented a new role, we also included it. If an article described activities that generally were subsidiary to traditional roles, we did not include it as a new role but added it to the category of "new twists, old roles." Our intention was to identify major new roles and activities documented in the years 1990 to 2012, regardless of whether they were defined as a specific role or not. Our mode was to be inclusive rather than exclusive. To provide clarification and depth of understanding of the new roles, in some cases, we included related citations to supplement the citation that first identified the role.

Search retrieval results

A total of 371 citations were retrieved in the first search. The second search, using search words gleaned from job announcements, retrieved 144 citations. The results of the 2 sets were combined for a total of 515 citations. Of these, 91 citations were duplicates, leaving a total of 424 citations to be screened for eligibility. Three hundred forty-six citations from the eligibility group were excluded because they did not meet the criteria for inclusion and/or did not describe an actual role. Seventy-eight citations describing new roles were kept for further review. Of the 78 articles, 28 were further excluded because they did not describe actual new roles. Fifty articles were then reviewed further to identify a new role and/or activity and the first citation to mention the new role or activity during the time frame of 1990-2012. The PRISMA diagram (Figure 1) shows the flow of information through the different phases of this review [6].

RESULTS

We present the results in three categories: (1) roles or activities identified from traditional literature, (2) roles identified from job announcements, and (3) new twists on enduring roles. In Table 1, we list the first citation that we found for the role and, in some cases, additional citations that add clarification. We did not intend to include all articles found that describe the role or activity.

I. Health sciences librarian roles identified from literature

1. Embedded librarian. Embedded librarianship focuses on the client user. It brings the library and the librarian to users in their work environment, wherever they are: office, laboratory, or home. Shumaker defines this "growing trend" this way: "if a regular part of a librarian's work involves participating in a group, community, or organizational unit primarily made up of non-librarians, providing knowledge and information services as a part of the group, then that librarian is participating in a growing trend of embedded librarianship where their services are in settings outside the library" [7]. Two prominent types of embedded librarianship are the liaison role and the informationist role.

1a. Liaison role. Liaison librarians are defined as librarians who are formally designated as the primary contact between the library and one or more departmental or administrative units. They participate as a part of the group [8]. Their purpose is to improve the transfer of information between the library and users, to improve the quality of collections and services that match their clients' needs, and to enhance the library's image. Liaison librarian responsibilities are often seen as being divided among reference, instruction, and collection development [9].
Traditional library services are offered in the user's work environment rather than in the library. This role is different from the older "designated contact" librarian role, in which a librarian is selected to be the receiver of requests from a group. One of the first appearances of liaison services was in 1991. The Houston Academy of Medicine-Texas Medical Center Library initiated an outreach project using a health sciences librarian as liaison to Baylor College of Medicine's Center for Biotechnology, located several miles from the campus medical library. The service strengthened the relationship between these scientists and the librarians [10]. Another early article described the contractual arrangement in 1995 between Yale University's Countway Library and a medical school department that showed how a professional librarian can be integrated into the institutional environment and take on new roles. As liaison to the curriculum development department of the medical school, the librarian was involved in curriculum planning, software support, and computing facility support [11].

1b. Informationist role. Informationist was defined by Davidoff, in a 2000 editorial, as a health information professional on clinical teams who is trained in science or medicine as well as information science [12]. This new role for librarians would include answering clinical questions by reading the full text of the most pertinent articles, identifying and extracting relevant information, writing brief synopses of their findings, and sending the resulting information product to the user; the role is distinct from that of the traditional medical librarian [13]. Variations of the informationist concept emerged over the next few years. All were called informationists with specific subject knowledge or skills. For completeness, we note that some would say that more subject knowledge is not required of informationists; however, it is not the purpose of this review to identify all the requirements for informationist positions. It is also noted that informationist programs are not identical. Each informationist program is customized to the needs of the group it supports. This customization is unique to the informationist role [14]. Informationist specialties began with the clinical informationist and now include bioinformationists, public health information specialists, and disaster information specialists.

Clinical informationist. A clinical informationist is a librarian with specific clinical and/or scientific qualifications gained either through graduate education or experience [15]. Guise evaluated Vanderbilt University's five-year clinical informationist program in 2005. In their program, informationists answered questions on rounds with syntheses of evidence-based medicine (EBM) literature and supported outpatient care through a service called "evidence consult" [16]. At an Australian hospital, clinical informationists attended medical in-patient ward rounds and clinical meetings in the respiratory medicine, sleep disorders, and rheumatology units. Evaluation of the service found that the medical staff not only used the clinical informationist service, but the service contributed to clinicians' medical decision making, clinician education, and clinical outcomes [17].

Bioinformationist. Bioinformatics is an interdisciplinary field that develops and improves methods for storing, retrieving, organizing, and analyzing biological data.
Bioinformatics is now moving to encompass all levels of biological analysis, and several case reports have described librarians' involvement in this subject area. As the need for specialized information in molecular biology and genetics becomes more central in health care organizations, many librarians are increasing their skills and competencies in this subject area [18]. Bioinformationist librarians at Harvard University, the University of Florida, the University of Minnesota, and Vanderbilt University compared information about the bioinformationist services and programs at their institutions. They found that all four programs developed partnerships with units on their campuses and offered knowledge management, instruction, and electronic resource support. The librarians acted as a first line of support, directing users to specific databases and researchers [19]. Purdue University's bioinformatics specialists collected researchers' information needs through careful observation of researchers in their work environment, in their laboratory meetings, and through interviews with department chairs and individual researchers. The information was used to develop a bioinformatics program to serve the information needs of researchers in their organization [20].

Public health informationist. Providing information to those who work in the public health sector supports critical policy decisions in health care. The public health informationist at St. George Hospital in London assisted in public health postgraduate training. The position grew to include a centralized support service for disseminating information, in which the librarian became a hospital team member and provided information support. The services eventually broadened to become a part of the public health network in the region [21]. Another example of public health informationist support occurred during a week-long Federal Emergency Management Agency (FEMA) exercise. By being part of the FEMA team, the public health informationist provided information at the point of need [22]. Public health informationists are specializing even more and are becoming experts in providing information during disasters, in other words, disaster information specialists.

Disaster information specialist. Health sciences librarians demonstrated how they can contribute to disaster preparedness after Hurricane Katrina. Proactive reference and information services were set up in a mobile home in Baton Rouge. From that temporary site, librarians were able to provide reference services during the disaster's many phases to both health care personnel and families affected by the disaster [23]. The National Library of Medicine (NLM) implemented a special program as a collaborative effort to promote the role of information specialists in providing disaster-related information resources to the workforce and communities. NLM's Disaster Information Specialist Program offers training courses, resources, funding for disaster outreach programs, and an email discussion list to share information [24]. Sarasota Memorial Health Care System, one of the participants in NLM's Disaster Information Specialist Program, established a librarian position as a key member of the hospital's emergency preparedness team. Some of the librarian's functions included participating in emergency preparedness meetings, noting unfilled needs and questions, and distributing updated information quickly to users at the time of need [22].

2. Systematic review librarian.
A systematic review is a summary of literature that assesses and evaluates studies on a particular issue. Researchers use an organized and clearly stated method of locating, assembling, and evaluating a body of literature on a particular topic using a set of specific criteria [25]. Health sciences librarians now serve on systematic review teams and often are coauthors of published reviews. A team of information professionals at the Centre for Health Information Management Research at the University of Sheffield conducted a systematic review on the topic ''health information needs of visually impaired people.'' In conducting the systematic review, the librarians identified ten librarian roles that support systematic reviews: project leader, project manager, literature searcher, reference manager, document supplier, critical appraiser, data extractor, data synthesizer, report writer, and disseminator [26]. Similarly, another observational case study ''chronicled a librarian's involvement, skills, and responsibilities in each stage of a real-life systematic review.'' In conducting actual systematic reviews, the author identified librarian activities as expert searcher, organizer, and analyzer. As expert searcher, the librarian must interact with the investigators to develop terms required for a comprehensive search strategy in appropriate sources. As organizer and analyzer, the librarian must effectively manage the articles and document the search, retrieval, and archival processes [27]. Recently, a systematic review librarian position opened at the United States Department of Agriculture's (USDA's) Center for Nutrition Policy and Promotion [28]. The center works to improve health by developing and promoting dietary guidelines that link scientific research to nutrition needs. This position documents that the activity is needed and can constitute a full-time role.

3. Emerging technologies librarian. The health sciences librarian's role has always been to connect the user to information in direct and efficient ways. The new role called emerging technologies librarian focuses on the methods that libraries can use to deliver services and information with new technologies. Job titles are varied. An advertisement for the University of North Carolina School of Information and Library Sciences lists job titles of recent graduates as ''information architects, system analysts, database designers, usability engineers, web application developers, and more'' [29]. In these roles, librarians design, develop, and manage their libraries' websites. They integrate new web applications, social media, and mobile interfaces to support the ability to access information. Often these librarians advise clients about web development, Web 2.0 and 3.0 technologies, social networking, virtual worlds such as Second Life, gaming, podcasting, video, e-learning services, distance education, the semantic web, and other current and future technologies. Skilled website developers perform usability testing to assure that user needs are met. One of the first articles to discuss usability testing in a health sciences setting came from researchers at Oregon Health & Science University. They evaluated the usefulness of the library's website as an orientation tool for students [30]. Social media technologies are expanding the ways that librarians are collaborating, creating, and disseminating information.
For example, librarians at the Mayo Clinic developed customized courses for library staff, health sciences faculty, and nurse educators using Web 2.0 and social media tools such as blogs, really simple syndication (RSS), wikis, and other networking tools [31]. Librarians at the University of Texas Health Science Center at San Antonio created a blog to support their health information outreach activities [32].

4. Continuing medical education librarian. Continuing medical education (CME) consists of educational activities that help clinicians maintain, develop, and increase their knowledge and skills. Librarians can work collaboratively on CME teams. A 2001 article described how professional librarians were selected by the Connecticut State Medical Society to be active members of the society's team that reviews the CME programs that are offered by Connecticut's hospitals. Librarians who participated in this effort provided collaboration that revealed new and important roles for librarians on an accreditation team [33].

5. Grants development librarian. Many different types of grants are available from both public and private agencies at the national, state, and local levels. Clinicians, researchers, and administrators are often unaware of available external funding opportunities and how to secure those funds. Health sciences librarians have the opportunity to become resources for information about available grants and can use their expertise in the grant-writing process. A grants information service was established by the Governors State University Library, located in University Park, Illinois. It serves as a model for hospital and other health care libraries. This grant service showed how hospital librarians are ideally suited to promote grant writing in their organizations and how they can make a valuable contribution by teaching the grant-writing process [34].

6. Data management librarian. The National Science Foundation in 2011 announced it would require data management plans in all grant proposals. Researchers must make a plan to manage their data before beginning a research project and then follow that plan throughout the research life cycle in order to ensure usability, preservation, and access to the data. Many federal agencies and other funders now require or are considering requiring grant applicants to include a data management plan in their proposals. Librarians can help researchers develop their data plans to help them manage, curate, archive, and share their data. The University of North Carolina Libraries created a data management committee made up of ten different librarians from various branch libraries, including a data services librarian. The committee developed data management training for the university community. They covered understanding of data management plans, strategies for handling data, and digital repository access. They also developed a web portal for researchers with templates for formulating data management plans, examples of language for data management plans, and other guides and links [35]. Librarians in the Research and Scholarly Communications Department of the Lamar Soutter Library developed a subject guide that provides researchers with easy access to resources for data management, including an annotated list of popular, relevant datasets that are available online; news and updates about data management, data sharing, and the open data movement; and links to peer-reviewed articles about data management [36].
While not specifically labeling them data management librarians, Creamer found twenty health sciences librarians who conducted data interviews with researchers to assess their data needs, worked with researchers to develop data management plans, taught data literacy to their patrons, and accessed data sets from published literature for their patrons' research [37].

II. Roles or activities identified through job announcements
Job announcements in MLA's email discussion list, MEDLIB-L, were reviewed for the last five years to identify new roles or titles for health sciences librarian positions that were not identified in the literature (Table 2). We found four roles that are familiar but for which we found no published reports in the health sciences literature. Based on the presence of the job description, we assumed that these roles are being filled or will be performed in the near future.

1. Digital librarian. Position announcements appeared twice in the last five years with digital librarian as a title. Responsibilities included managerial tasks that emphasized planning for and oversight of digital library projects, and leadership and expertise in digital library areas. Trend analysis, such as monitoring the standards and practice of current digital libraries, was usually critical in these jobs [38]. As early as 1995, Braude wrote, ''Exactly what a digital library is and how it is to be organized have not yet been determined, and the bibliographic organization of digital information has not been sufficiently addressed'' [39]. The definition that he sought seems yet to be realized. Developments in both medical informatics and medical librarianship indicate a need for greater collaboration between these specialties to achieve their common purpose: the creation, classification, and dissemination of scholarly information.

2. Metadata librarian. The title metadata librarian appeared in the late 1990s and reflects the new challenges of tracking, organizing, and improving user access to data as resources shifted to digital formats. While metadata are often defined broadly as ''data about data,'' librarians generally use the term to mean descriptive metadata that help users access information, much like a catalog card file helps users locate a book in the library. Metadata librarian positions often evolved from cataloger positions but focused on providing other types of metadata besides standard catalog records, typically for digital materials. As an example, the main responsibility for the metadata librarian in one job announcement was to create and maintain taxonomies for digital and hard copy documents. Metadata librarians also test controlled vocabularies for sustainability in order to improve continued access. Old ways of cataloging and classifying print materials often are not flexible enough for rapidly changing digital resources.

3. Scholarly communications librarian. Traditionally, the term ''scholarly communication'' was narrowly defined as the system for disseminating scholarly work, primarily through journals. More recently, the definition has been broadened to include the creation, transformation, dissemination, and preservation of knowledge [40]. It encompasses the entire process by which faculty, researchers, and other scholars share and publish their findings within and beyond the academic community. Position announcements for scholarly communication librarians have appeared in the last five years.
The position responsibilities typically included: promote a digital resources library or institutional repository, explore new opportunities for publication (including open access models), assist individuals interacting with editors and publishers, provide support for complying with government deposit mandates such as the National Institutes of Health (NIH) public access policy, explore new ways of publishing, and provide open access materials to faculty and students. This role may include ensuring perpetual access to clients' published resources and negotiating for archival and/or perpetual access licenses.

New twist 1: clinical medical librarian. The clinical medical librarian (CML) role was described in 1971, so it is not new since 1990. Lamb recognized a need to bridge the gap between volumes of information and its relevance to the health care professional. Biomedical librarians were placed in a patient care setting. They could attend rounds, note questions asked, and go back to the library to find the answers and make photocopies of the relevant articles for the clinical team [42]. Later, Literature Attached to Charts (LATCH) emerged, whereby photocopies of searches were attached to a patient's chart. CML programs continued through the years, but by 1991, some were asking if CMLs were viable in the new automation age [43]. More recently, clinical librarians in some programs have begun to ''project themselves not as information 'servers' who trail the team in an auxiliary capacity, but as an integral part of the group with a specialized expertise that can contribute vitally to clinical situations'' [12]. Some think the role of clinical medical librarianship is evolving into a new role: clinical informationist [44].

New twist 2: instruction librarian. Librarians have always embraced the teaching role. Library users have been taught how to use the library catalog, print abstracts, indexes, and other library resources. Today, instruction includes not only how to use library resources, but also how to use new technology to access information. When PubMed became available to the end user, librarians began teaching students and faculty how to search the new database in their own offices or homes and helped users develop skills to find information on their own. Librarians are teaching other related nonlibrary services such as how to best use bibliographic managers (EndNote, RefMan, and others) and how to use academic teaching services such as Blackboard, Google Documents, and other platforms that are used in instruction. MLA stated that health sciences librarians should work to ''understand curricular design and instruction in order to teach ways to access, organize, and use information'' [45]. In 2012, the Journal of the Medical Library Association published a special issue that was devoted to instruction in health sciences libraries [46]. Librarians now teach for-credit classes in medical school curricula and the use of non-bibliographic databases in the biosciences. Instruction is moving outside the library and becoming more embedded in the user's world.

New twist 3: outreach librarian. A legacy outreach program provided library services to groups of hospitals. For example, circuit rider librarians at the Cleveland Health Sciences Library served five hospitals in a shared program in the 1970s [47]. In another program, also in the 1970s, circuit librarians provided library services to clinicians in rural northwestern North Carolina. After the pilot, the physicians voted to continue the service and share the costs [48].
In the 1980s, health sciences librarians provided library and information services to clinicians in Area Health Education Centers (AHECs). And in 2000, a circuit librarian became a virtual librarian for the AHEC in her area, providing services almost entirely electronically [49]. Outreach now usually means providing information to rural practitioners. But outreach can also mean reaching out to users in one's own institution. The National Cancer Institute-Frederick Scientific Library provided an outreach program to its research labs through a ''laptop librarians service,'' in which librarians took a laptop, spent time in research buildings, and provided users with necessary information [50]. The difference between ''outreach'' in these cases and ''informationists'' might be the depth of specific specialized knowledge required, but both roles embody the trend of moving library services outside the library.

New twist 4: consumer health librarian. Consumer health programs have been around since health sciences librarians first offered patient education materials in their libraries for patients and their families. The Houston Academy of Medicine-Texas Medical Center Library, which is open to the public, conducted a user survey in 1980 and confirmed that the general public was asking for more health information [51]. Growing consumer interest in health-related information created a need for librarians to provide a service to manage and provide information to the public. This service broadened further to reach out to the community. For example, librarians offered pamphlets at health fairs and other community meetings. Now, consumer health involves advanced technologies like interactive websites and the connection of consumer health information to the patient record, bringing information for patients to the point of care [52]. MLA offers a Consumer Health Information Specialization for librarians, allowing librarians to document their expertise in this specialty [53].

DISCUSSION
Our goal in conducting this review was to identify, within the timeframe of 1990 to 2012, actual new roles for health sciences librarians. We searched published literature and reviewed job announcements. We found sixteen new roles or activities, twelve from the literature and four from job announcements, and four major new twists in traditional roles. These lists should be useful to teachers of library science, to library planners and decision makers, and to students contemplating a librarian career. The literature on emerging roles in health sciences librarianship is not robust. We feel it underrepresents the new roles that health sciences librarians have undertaken in recent years. Many articles argue that we must change what we do as librarians to survive in our field, and they often describe possible new activities for librarians but do not document whether the activities have actually been incorporated into job responsibilities. The distinctions between roles and activities, and between new roles versus twists on traditional roles, are subjective. Nevertheless, we produced a list that reflects major changes in the health sciences librarian's work and can serve as a basis for discussion. In the past few years, new technologies have lightened librarians' burdens by reducing clerical work associated with library services. Further, since clients can obtain materials directly without going to a library or interacting with a librarian, the librarian's role is less critical.
Thus, one trend has been that librarians have less to do, and their role as intermediaries in information-seeking activities has diminished. On the other hand, technology has at the same time presented new challenges and opportunities to expand the librarian's role. Keys's role of knowledge distribution is the same today in purpose, but different in practice. Knowledge is now often called information, and the way it is distributed has changed enormously. Very often, it is distributed without flowing through a library at all. While automation relieved librarians of some duties, it provided time to develop and extend services outside the physical library. At the same time, outside information services began competing with the library (e.g., Google Scholar, PubMed, independent journal subscriptions, and electronic newsletters that summarize current journal articles and news). Also, the health sciences information universe expanded exponentially beyond printed books and journals. It can be argued that increased demand for electronic information coincided with librarians' ability and need to exploit new technologies, which leads to new librarian roles. In one sense, though, little has changed. Keys, in 1938, identified two basic roles for health sciences librarians. The first, preservation of books and journals, has diminished in most librarians' lives (except archivists, of course). The second, distribution of knowledge and ideas, including teaching and facilitating access to knowledge, has changed dramatically in character, though not in intention. Librarians continue to provide information to support the work of their users; just the means of doing so have changed.

Study limitations
We used only two sources to identify new roles, published literature and job announcements. Information from blogs, electronic mailing lists, unpublished papers, and conference or meeting presentations was not included. It is possible, even likely, that new roles have been created but not described in the sources that we used. Also, this review was limited to English language articles. Admittedly, the duties of the roles described here overlap, but they are defined by their general thrust and often by special training needs. Decisions about characterization of activities and roles are necessarily subjective.

CONCLUSIONS
For the period 1990 to 2012, twelve new roles from the literature, four new roles from job announcements, and four new twists on traditional roles were identified. These new roles represent major changes in how health sciences librarians serve their institutions and users. They reinforce the librarian's role as a specialized professional who participates in new technology to distribute information (knowledge) to clients and takes on expanded roles outside the library.
2018-04-03T05:02:17.791Z
2013-10-01T00:00:00.000
{ "year": 2013, "sha1": "2aa01e9e417020aed5c42fda071ff11459e0f4c0", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3163/1536-5050.101.4.008", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "ca9edfe89621cf1f38896d729d3f1ba1340f3821", "s2fieldsofstudy": [ "Education", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
257676712
pes2o/s2orc
v3-fos-license
Application of Rapeseed Meal Protein Isolate as a Supplement to Texture-Modified Food for the Elderly

Rapeseed meal (RSM), a by-product of rapeseed oil extraction, is currently used for low-value purposes. With a biorefinery approach, rapeseed proteins may be extracted and recovered for high-end uses to fully exploit their nutritional and functional properties. This study reports the application of RSM protein isolate, the main output of a biorefining process aimed at recovering high-value molecules from rapeseed meal, as a supplement to texture-modified (TM) food designed for elderly people with mastication and dysphagia problems. The compositional (macronutrients by Official Methods of Analysis, and mineral and trace element profiles by Inductively Coupled Plasma Optical Emission Spectrometry, ICP-OES), nutritional and sensory evaluations of TM chicken breast, carrots and bread formulated without and with RSM protein supplementation (5% w/w) are hereby reported. The results show that the texture modification of food combined with rapeseed protein isolate supplementation has a positive impact on the nutritional and sensory profile of food, meeting the special requirements of seniors. TM chicken breast and bread supplemented with RSM protein isolate showed unaltered or even improved sensory properties and a higher nutrient density, with particular regard to proteins (+20–40%) and minerals (+10–16%). Supplemented TM carrots, in spite of the high nutrient density, showed a limited acceptability, due to poor sensory properties that could be overcome with an adjustment to the formulation. This study highlights the potentialities of RSM as a sustainable novel protein source in the food sector. The application of RSM protein proposed here is in line with the major current challenges of food systems such as the responsible management of natural resources, the valorization of agri-food by-products, and healthy nutrition with focus on elderly people.

Introduction
Protein availability in the future will not be sufficient to meet the increased demands of a growing world population, expected to reach 10 billion people by 2050 [1]. Therefore, the exploitation of new sustainable sources of proteins has become an emergent issue in the agri-food and human nutrition sectors [2]. Meanwhile, the world population is ageing [28]. One of the challenges for the future is to assure that, despite such a demographic change, mankind can address the health and nutrition security issues posed by an ageing population in a targeted manner. Personalized dietary programs for seniors and technological advances to produce nutritious, palatable, innovative, and affordable food products tailored to the special needs of seniors are among the expected progresses of healthcare systems and food industries for the upcoming years. In this context, we propose an innovative application of rapeseed proteins that adds new perspectives for the valorization of currently underutilized sustainable protein sources. RSM protein isolate obtained through sustainable and green processes was applied as a supplement for texture-modified (TM) food suitable for elderly people with mastication and swallowing difficulties. The compositional, nutritional, and sensory evaluations of the food products formulated are hereby described. This is a first-of-its-kind application of rapeseed proteins, coupling green and sustainable technologies for the recovery of proteins from agri-food by-products with advanced food formulations delivering personalized nutrition solutions for an ageing population.

Materials and Methods
The original feedstock was a non-genetically modified rapeseed (Brassica napus L. var.
Napus) with low erucic acid and low glucosinolate contents grown during the 2020/2021 crop season in Central and East European regions, namely Slovakia, Poland, Hungary, Czech Republic, Romania, and Ukraine. Seed quality met the STN 462300-1 and 2 and the Codex Alimentarius requirement of the Slovak Republic, Government Regulation no. 439/2006, and the requirements set out in the list of permitted varieties.

RSM Protein Isolate
The protein extraction and purification from RSM was performed at the semi-pilot scale on 2-5 kg pre-batch at Celabor (Herve, Belgium). The RSM was first ground using a MASUKO ® supercolloider (Masuko Sangyo Co., Ltd., Kawaguchi, Japan). The ground material was extracted using an aqueous alkaline solution in mild conditions, in a 65 L maceration tank (Ferrari srl, Ghislenghien, Belgium). The solid-liquid separation was performed with a vertical centrifuge (RC30, Rousselet-Robatel, Annonay, France). An extract with a yield of 19.0 ± 3.5% was obtained, containing 23.3 ± 4.5% of proteins, determined with the Kjeldahl method. Protein concentrate was obtained with a purity of 76.0 ± 5.1% with isoelectric precipitation, and the following purification step was performed using membrane microfiltration, using 10 L pilot equipment (Evonik Industries AG, Essen, Germany) with a ceramic membrane of 50 kDa. The obtained protein isolate was freeze-dried before analysis with a final yield of 7.2 ± 0.7% and a high purity (batch 1, Figure S1). After validation at the semi-pilot scale, the process was up-scaled on a 50-100 kg full-pilot-scale batch at ENVIRAL a.s. (Leopoldov, Slovak Republic), delivering the final rapeseed meal protein isolate product by means of spray drying (batch 2, Figure S2). The main steps of the RSM protein production process are detailed in Figure 1.
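To put the reported yields in perspective, a back-of-the-envelope protein balance per 100 g of RSM can be sketched. The RSM protein content (about 35 g/100 g) and the isolate purity (about 90%) used below are illustrative assumptions only; the measured compositions are given in Table 1 and Figure S1, not in the text.

```python
# Back-of-the-envelope protein balance per 100 g of RSM, using the yields
# reported above. ASSUMED for illustration (not stated in the text):
# RSM contains ~35 g protein/100 g; the final isolate is ~90% protein.
rsm_g = 100.0
rsm_protein_g = 0.35 * rsm_g            # assumed RSM protein content

extract_g = 0.190 * rsm_g               # aqueous extract yield: 19.0 +/- 3.5%
isolate_g = 0.072 * rsm_g               # final isolate yield:    7.2 +/- 0.7%
isolate_protein_g = 0.90 * isolate_g    # assumed isolate purity

print(f"extract: {extract_g:.1f} g, isolate: {isolate_g:.1f} g per 100 g RSM")
print(f"protein recovered in isolate: ~{isolate_protein_g:.1f} g "
      f"(~{isolate_protein_g / rsm_protein_g:.0%} of the protein in the RSM)")
```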
The compositional data of the RSM protein isolates from semi-pilot- and full-pilot-scale trials are reported in Table 1. The two protein isolates were used for food product formulation by Biozoon GmbH (Bremerhaven, Germany) as described below.

Formulation and Preparation of Texture-Modified Food
Texture-modified (TM) food was formulated and prepared at Biozoon GmbH according to internally established protocols, as described below. In order to evaluate the suitability of RSM protein isolate as a supplement to TM products, different types of food were preliminarily tested. Chicken breast, carrot, and bread were selected for this study. Food matrices were pureed, texture-modified, and tested for their sensory quality and nutritional value without and with supplementation with RSM protein isolate from semi-pilot- and full-pilot-scale tests. All ingredients, except texturizers and RSM protein isolate, were purchased from a local supermarket in Bremerhaven, Germany. Steam-cooked and spiced chicken breast and carrots were chopped, mixed with water (1:1 w/v), and pureed in a food blender (Blixer ® 3, Robot Coupe, France). RSM protein isolate (5% w/w) and GELEAhot instant ® (4% w/w), a texturizing system owned by Biozoon, were added to the pureed food and homogenized manually with a whisk. GELEAhot instant ® is composed of maltodextrin, agar-agar, and xanthan, requiring an activation temperature of approximately 87 °C before forming a gel while cooling. The pureed food with added RSM protein isolate and GELEAhot instant ® was brought to a boil and then molded into silicone molds resembling the shape of the original food in order to increase the appeal and sensory acceptability of the product. The main preparation steps of the TM chicken breast and carrots are detailed in Figure 2. The extent of RSM protein supplementation (5% w/w) was the one adopted in Biozoon's internal protocols based on previous experience, indicating this ratio as the one with the best performance, delivering an additional amount of useful protein without significantly affecting the technological and sensory characteristics of the products. The control samples were prepared following the same procedures described above, except for the addition of RSM protein.
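For orientation, the puree recipe above can be written as a small batch calculator. The function below is a minimal sketch; it assumes that the 5% w/w isolate and 4% w/w GELEAhot instant ® doses are taken on the mass of the pureed base (food plus water, 1:1 w/v, taking 1 mL of water as 1 g), which is our interpretation of the stated percentages.

```python
# Minimal batch sheet for the TM puree described above (chicken breast or
# carrots). Assumption: the 5% w/w protein isolate and the 4% w/w texturizer
# are dosed on the pureed base (steam-cooked food + water, 1:1 w/v).
def tm_puree_batch(food_g: float) -> dict:
    water_g = food_g                    # 1:1 w/v dilution (1 mL water ~ 1 g)
    base_g = food_g + water_g           # mass of the pureed base
    return {
        "steam-cooked food (g)": food_g,
        "water (g)": water_g,
        "RSM protein isolate (g)": 0.05 * base_g,  # 5% w/w supplementation
        "GELEAhot instant (g)": 0.04 * base_g,     # 4% w/w texturizing system
    }

for ingredient, grams in tm_puree_batch(500.0).items():
    print(f"{ingredient:26s}{grams:8.1f}")
```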
RSM protein isolate was also tested as a supplement for TM bread, prepared by using Biozoon's SMOOTHBROT ® mix, based on gluten, maltodextrin, whey protein, oil powder, agar-agar, and xanthan gum. TM bread (1.2 kg loaf) was prepared as follows. Stale wheat bread, roughly chopped into small pieces, was added to tap water (about 1:2.3 w/v) and left to soak for approx. 20 min in a flat bowl. The content of the bowl was then transferred into a blender (Blixer ® 3, Robot Coupe, France), mixed to a smooth dough-like mass, added to SMOOTHBROT ® mix powder (16.7% w/w) and RSM protein isolate (5% w/w), and gently stirred manually. The mixture was further transferred into a baking pan and steamed for about 90 min until a kernel temperature of nearly 90 °C was reached. Afterwards, the bread was cooled down to room temperature while resting in the pan to allow the gel structure to form while cooling. The main steps of TM bread preparation are detailed in Figure 3.
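The bread recipe can be sketched the same way for the stated 1.2 kg loaf. The dosing basis is again an assumption: the 16.7% w/w SMOOTHBROT ® mix and 5% w/w isolate are taken here on the soaked bread-water mass, and evaporation during steaming is ignored.

```python
# Approximate batch sheet for the 1.2 kg TM bread loaf described above.
# Assumptions: 1 mL of water ~ 1 g; the 16.7% w/w mix and the 5% w/w
# isolate are dosed on the soaked mass; steaming losses are neglected.
def tm_bread_batch(loaf_g: float = 1200.0) -> dict:
    soaked_g = loaf_g / 1.217          # dry additions add 16.7% + 5% on top
    bread_g = soaked_g / 3.3           # bread : water = 1 : 2.3 (w/v)
    return {
        "stale wheat bread (g)": bread_g,
        "tap water (g)": soaked_g - bread_g,
        "SMOOTHBROT mix (g)": 0.167 * soaked_g,
        "RSM protein isolate (g)": 0.05 * soaked_g,
    }

for ingredient, grams in tm_bread_batch().items():
    print(f"{ingredient:26s}{grams:8.0f}")   # components sum to ~1200 g
```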
Sensory Evaluations
A descriptive sensory analysis was conducted at Biozoon's laboratories according to the DIN 10964:2014-11 standard method [29]. The evaluations were performed by a group of 5 staff experts trained in sensory analyses. The evaluation group analyzed all three RSM-protein-enriched products in order to identify individual product aspects in terms of descriptive attributes (appearance, odor, taste, and mouthfeel/texture). The attributes were collected correspondingly for further interpretation. Control samples of each product (meaning without any RSM protein supplementation) were presented to the group as well, in order to describe possible differences between the two products. For the sensory evaluation, RSM-protein-enriched TM chicken breast and TM carrots were reshaped in gel blocks and served as such. TM bread was cut into slices of approx. 0.8 cm (the thickness of a standard bread slice). Each participant in the sensory evaluation was allowed to take as much as needed in order to describe the products. The evaluation group was informed about the purpose of the sensory analyses. Each sample was served separately, and the selection of attributes was free and not bound to a list. A list of product-specific attributes was further developed in order to identify relevant differences between the supplemented products and their respective controls.

Chemical Analyses
The freshly prepared TM food samples (chicken breast, carrots, and bread) were packed under vacuum and shipped in a refrigerated state to CREA laboratories in Rome (Italy). Upon arrival, the food was immediately frozen and freeze-dried (Scanvac Coolsafe 55-4 Pro, Labogene, Allerød, Denmark) for further chemical analyses. The results were further normalized and expressed on a wet mass basis. The semi-pilot-scale and full-pilot-scale batches of RSM protein isolate were analyzed as received. The moisture, crude protein, crude fat, and ash contents were determined separately in individual RSM protein isolate batches and in the formulated foods with and without supplementation with RSM protein, following the methods of the Association of Official Analytical Chemists [30]. The crude protein content was evaluated using the Kjeldahl procedure, using 6.25 as the nitrogen-to-protein conversion factor. Nonprotein nitrogen (NPN) was determined using the Kjeldahl method after protein precipitation with 10% (w/v) trichloroacetic acid and filtration. The crude fat content was determined using Soxhlet extraction. The ash content was determined gravimetrically after incineration in a muffle furnace at 550 °C. Total dietary fiber was determined according to the method of Prosky et al. [31]. Carbohydrates were calculated by difference. All macronutrient analyses were performed in triplicate. The energy content was calculated by using the conversion factors indicated by EU Regulation 1169/2011 [32]. The conversion factor from kcal to kJ was 4.184.
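As a worked illustration of the calculation scheme just described, the sketch below computes carbohydrates by difference and then the energy value from the kcal conversion factors of EU Regulation 1169/2011 (4 kcal/g for protein and carbohydrate, 9 kcal/g for fat, 2 kcal/g for dietary fibre), converting to kJ with 4.184. The sample composition is invented purely for illustration; the measured values are in Tables 5-7.

```python
# Worked sketch of the calculations described above: carbohydrates by
# difference, then energy from the EU Regulation 1169/2011 factors
# (protein 4 kcal/g, carbohydrate 4, fat 9, dietary fibre 2).
KJ_PER_KCAL = 4.184

def carbs_by_difference(moisture, protein, fat, ash, fibre):
    """All values in g per 100 g of product (wet basis)."""
    return 100.0 - (moisture + protein + fat + ash + fibre)

def energy_kcal(protein, fat, fibre, carbohydrate):
    return 4.0 * protein + 9.0 * fat + 2.0 * fibre + 4.0 * carbohydrate

# Invented composition of a TM product, g per 100 g, for illustration only:
moisture, protein, fat, ash, fibre = 80.0, 8.5, 2.0, 1.2, 1.3
carb = carbs_by_difference(moisture, protein, fat, ash, fibre)
kcal = energy_kcal(protein, fat, fibre, carb)
print(f"carbohydrate: {carb:.1f} g/100 g")
print(f"energy: {kcal:.0f} kcal/100 g = {kcal * KJ_PER_KCAL:.0f} kJ/100 g")
```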
Quality Assurance
For the validation of the applied methods and quality control of the proximate and dietary fiber data, the standard reference materials peanut butter (NIST 2387, National Institute of Standards and Technology, Gaithersburg, MD, USA) and dried haricot beans (BC514, European Reference Material ERM ® , Geel, Belgium) were analyzed. For the validation of the method and the quality control of mineral and trace element data, three Certified Reference Materials, cabbage (IAEA-359, International Atomic Energy Agency Reference Materials Group, Vienna, Austria), peanut butter (NIST 2387), and haricots verts (BCR 383, Community Bureau of Reference, Brussels, Belgium), were analyzed. All analyses were performed at least in triplicate.

Food Formulation
The TM chicken breast, carrots, and bread were formulated without and with supplementation with the RSM protein isolate. The formulations of TM chicken breast, carrots, and bread are reported in Tables 2-4. The TM food without and with RSM protein supplementation (Figure 4) underwent a sensory evaluation at first and then chemical and nutritional evaluations.

Sensory Assessment
The RSM-protein-supplemented TM chicken breast samples were darker in color compared to the control and slightly brownish. No off smell was reported, so the product kept the original smell of chicken. The collected taste attributes of the samples were described as slightly bitter, and off-tastes also occurred (strawy), but, still, the product was reported as acceptable with a recognizable original taste. The supplemented TM chicken breast samples were described as softer, with the results being acceptable within TM food applications. Rapeseed protein supplementation (2%) in beef and pork sausages has also been reported in the literature [22,33] with acceptable results regarding product quality maintenance. In the case of TM carrots, supplementation with 5% RSM protein isolate showed limited applicability, especially with regard to odor and taste. Besides the darker color, which was expected, an off smell as well as seedy notes were reported when the supplementation with RSM protein isolate was used. The characteristic carrot flavor was also masked by the RSM protein addition. These results suggest that, in the case of carrots, an adjustment to the formulation, consisting of a lower supplementation with RSM protein, is advisable.
The texture was described as softer than the control, but still with good uses for texture-modified purposes. To the best of our knowledge, studies on the incorporation of rapeseed protein into vegetable matrices have not been reported in the literature; thus, a comparison on this matter was not possible. As already reported for chicken and carrots, RSM protein supplementation in TM bread also resulted in a darker color compared to the control sample, which was, in this special case, recognized positively due to associations with whole-grain bread. Similar results were also described by Korus et al. [34], who incorporated different amounts of rapeseed protein (6-15%) as a starch replacer in gluten-free breads and registered improved color characteristics. Furthermore, our study showed that the attribute "whole-grain characteristics" was additionally mentioned in the odor and taste description. The odor was described as bread-like and slightly roasted, which is positive. In terms of the described taste, besides the bread-like characteristics, malty notes and seedy notes were also reported. Nevertheless, the positive attributes with regard to taste must not be disregarded. The reported texture attributes were summarized as slightly drier. This could be due to the increased dry matter of the product resulting from the rapeseed protein supplementation, as described in the next section.

Nutritional Evaluations
The macronutrient composition and energy value of the TM chicken breast, carrots, and bread without and with supplementation with RSM protein are reported in Tables 5-7. The results highlight a higher nutrient density in RSM-protein-supplemented food compared to the control samples. In particular, the supplemented products showed higher dry matter (chicken breast: +14-16%; carrots: +45-108%; and bread: +8-9%) and protein (chicken breast: +19-24%; carrots: +1035-1120%; and bread: +38-41%) contents compared to the control samples. With the addition of the RSM protein isolate, the chicken breast and bread showed a slight increment in the energy value (chicken: +10-15%, depending on the protein batch used; bread: +7-10%) and no relevant changes in the other macronutrients (Tables 5 and 7). The TM carrots (Table 6) were the product that received the most nutritional advantage from supplementation with rapeseed meal protein isolate, not only in terms of protein content (with an over 10-fold increment) but also of total minerals (ashes +72-87%). Similar trends as regards the macronutrient profile were observed in the two independent trials carried out to test the semi-pilot and full-pilot batches of RSM protein isolates. This is an indication of the robustness and reproducibility of the protein extraction process and the reliability of the formulations used.
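The very different relative protein increments reported above (about +20% for chicken versus more than +1000% for carrots) follow directly from the baselines: dosing roughly the same isolate mass everywhere adds a similar absolute amount of protein, so the relative gain is largest where the baseline is smallest. The sketch below illustrates this; the baseline protein contents and the ~90% isolate purity are assumed for illustration (the measured values are in Tables 5-7).

```python
# Sanity check on the reported increments: adding 5 g of isolate per 100 g
# of product contributes a similar absolute amount of protein everywhere,
# so the relative increment is driven by the baseline. Baselines and the
# ~90% isolate purity below are ASSUMED for illustration.
added = 5.0 * 0.90                       # g protein from 5% w/w isolate

baselines = {"TM chicken breast": 20.0,  # g protein per 100 g, assumed
             "TM carrots": 0.42,
             "TM bread": 10.0}

for product, base in baselines.items():
    new = 0.95 * base + added            # 5 g of base displaced by isolate
    print(f"{product:18s} {base:5.2f} -> {new:5.2f} g/100 g "
          f"(+{100.0 * (new - base) / base:.0f}%)")
```

With these assumed baselines, the sketch reproduces the order of magnitude of the reported ranges, which is consistent with a fixed 5% w/w dosing across products.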
The mineral profile of the TM food without and with the addition of RSM protein is reported in Tables 8-10. With the addition of RSM protein isolate, the TM chicken breast showed an increment of minerals (Table 8), in particular phosphorus and copper, to different extents depending on the protein batch used. The TM carrots with added RSM protein (Table 9) showed a higher content of most minerals and trace elements, in particular phosphorus, zinc, and copper. The RSM-protein-fortified bread (Table 10) also showed a nutritionally favorable increment in minerals in the two trials, in particular phosphorus and copper, reflecting the peculiar composition of the semi-pilot and full-pilot RSM protein isolate batches. The higher nutrient density of RSM-protein-supplemented food is a nutritionally favorable attribute, considering that the texture modification of food implies the addition of a high amount of water and that, as a consequence, TM products have a low nutrient density compared to the original food matrix [35]. The increased protein and mineral contents of TM products with added RSM protein isolate reported here are nutritionally appropriate since, while a lower energy intake is needed at an advanced age, the micronutrient and protein requirements are not diminished [36]. In fact, an adequate protein intake is necessary at an advanced age to prevent the loss of muscle mass, a frequent negative health concern of ageing [37]. Rapeseed proteins napin and cruciferin, the major storage proteins of rapeseed, have a balanced amino acid composition and a protein efficiency ratio comparable to that of other proteins commonly used in food preparations, such as egg and milk proteins [16]. Therefore, supplementation with rapeseed protein isolate has a positive impact on the nutritional properties of the TM food. In addition, besides providing all essential amino acids, supplementation with RSM protein gives products an added value in functional terms because, once digested, rapeseed proteins are potentially cleaved into bioactive peptides with beneficial health properties such as antihypertensive, antioxidant, bile-acid-binding, and antithrombotic activities [33,38-40]. An adequate intake of dietary fiber is important at any age to increase bowel motility in order to prevent constipation and chronic diseases typical of older people [41]. Thus, targeting adequate protein and fiber intakes is of pivotal importance at an advanced age. As regards minerals, the contribution given by rapeseed protein isolate added to food is also favorable. In chicken breast and bread, RSM protein supplementation positively affected the content of phosphorus, calcium, and copper, minerals essential in bone mineralization, oxygen transport, energy metabolism, and enzyme activities. Adequate levels of these minerals in the diet have beneficial implications for the elderly. The copper values in the RSM-protein-supplemented samples (about 0.2 mg/100 g) were largely within the nutritionally recommended levels (corresponding to 0.9 mg/day for adults) and comparable to those present in several foods of animal and plant origin (i.e., contents per 100 g: pork meat, 0.15 mg; beans, 0.7 mg; barley, 0.29 mg; and carrots, 0.19 mg) [42].
The increment in sodium observed in the carrots and bread supplemented with batch 1 of the RSM protein isolate (by 50% and 16%, respectively) does not represent a serious health concern, as the detected levels (200-240 mg per 100 g product), in the frame of a daily diet, are very far from the maximum advisable levels of sodium for hypertension prevention, established at 2 g/day. Meat, vegetable, and bread consumption may be limited in old age because of mastication and swallowing difficulties, especially in patients with dysphagia problems. This may lead to protein, energy, dietary fiber, vitamin, and mineral deficits. Texture modification combined with the rapeseed protein fortification of food has a positive impact on the nutritional profile of the food and increases the palatability, acceptability, and nutrient density of the diet for patients affected by dysphagia or with mastication difficulties. Furthermore, the texture modification steps transform the initial food matrices from solid to fluidic and, finally, to a special texture (e.g., gel); this procedure is highly advantageous, as the fluidic stage offers the perfect condition for fast, high, and specific ingredient supplementation. These are unique features with potential in future applications of tailor-made food with a high nutritional profile. The enhanced protein and mineral contents of food reported here are in line with the current dietary recommendations for elderly people [43][44][45]. Clinical studies demonstrate that elderly people are at risk of malnutrition. Physiological changes, a decline in physical activity, and a loss of appetite and taste sensitivity are only some of the factors that expose seniors to an increased risk of nutritional inadequacy in advanced age [46]. Elderly people with mastication and swallowing difficulties are a category at increased risk of malnutrition. The use of plain pureed food as a nutritional solution is of limited value when it is not adequately formulated and does not address the key requirements of a food (e.g., being sensorially pleasant). The careful design and formulation of TM food are, therefore, needed in order to give it the desired nutritional and sensory properties [47] as well as provide the motivation to eat it [48]. Furthermore, protein supplementation based on the specific needs of seniors can be applied to counteract the prevalence of malnutrition in this population group. Here, the use of rapeseed protein has potential. Furthermore, malnourished elderly people are most likely not able to consume an entire meal; thus, smaller portions supplemented with key ingredients such as proteins are necessary. Moreover, there is a connection between suffering from eating difficulties (e.g., dysphagia) and a decrease in overall food intake, which underlines the need for supplemented texture-modified foods for such groups of elderly people [48,49].

Conclusions
Food consumption may be limited at old age because of mastication and swallowing difficulties. This may lead to protein, energy, dietary fiber, vitamin, and mineral deficiencies. Texture modification increases the palatability, acceptability, and nutrient density of the diet of aged people affected by dysphagia or with mastication difficulties. Protein supplementation based on the specific needs of seniors can be combined with texture modification in order to design highly nutrient-dense food and counteract the prevalence of malnutrition in this population group.
This study highlights the potentialities of RSM protein isolate as a food supplement for TM food for elderly people with mastication and dysphagia problems. The obtained results show that the texture modification of food combined with rapeseed protein isolate supplementation may have a positive impact on the nutritional and sensory profile of food. The consistency of the obtained results in terms of protein enrichment of TM food in the two independent trials, testing RSM protein isolates obtained from semi-pilot- and full-pilot-scale extractions and purifications, is an indication of the robustness of the processes and the reliability of the formulations. Within the tested food applications, TM chicken breast and bread were the products giving the best results, showing unaltered or even improved sensory properties and a richer nutritional profile with special regard to the protein and mineral contents. On the contrary, supplemented TM carrots, in spite of the higher nutrient density, showed limited acceptability due to poor sensory properties that could be overcome with an adjustment to the formulation. The RSM protein isolate applied as an ingredient to TM food in this study is the main output of a biorefining process aimed at recovering and valorizing underutilized nutrients present in an agri-food by-product such as RSM. The application of RSM protein proposed here is in line with the current major societal challenges, such as the responsible management of natural resources, the valorization of agri-food by-products, and healthy nutrition with a focus on elderly people.

Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/foods12061326/s1, Figure S1: Rapeseed meal protein isolate from semi-pilot plant used in this study as an ingredient of texture-modified food; Figure S2: Rapeseed meal protein isolate from full-pilot plant used in this study as an ingredient of texture-modified food.
2023-03-23T15:33:41.103Z
2023-03-01T00:00:00.000
{ "year": 2023, "sha1": "3fcd596b2b22c29b15f42b5b037c6a164f29bcf0", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2304-8158/12/6/1326/pdf?version=1679315131", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "000044b6ff9800b35e99f57732e79fd5fb630da0", "s2fieldsofstudy": [ "Agricultural and Food Sciences", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
257390270
pes2o/s2orc
v3-fos-license
Consideration of spectroscopic measurements with broadband Fourier domain mode-locked laser with two semiconductor optical amplifiers at ∼1550 nm Abstract. In this study, an experimental system was developed using a broadband wavelength-swept laser for fast and broadband spectroscopic measurements at ∼1550 nm. The broadband wavelength-swept laser employed Fourier domain mode-locking (FDML) to realize a high-speed sweep rate of 50.7 kHz. FDML lasers at ∼1550 nm experience a problem of the sweep bandwidth being limited by the amplification wavelength range of the semiconductor optical amplifier (SOA). To realize a sweep bandwidth >100 nm, the proposed broadband FDML laser incorporated SOAs with different amplification wavelength ranges in parallel. Consequently, it expanded the sweep bandwidth to 120 nm at a center wavelength of 1544 nm. The experimental system employed the broadband FDML laser and introduced reference and compensation optics. These optics can compensate for the effects of fluctuations in optical output intensity and wavelength shift in the laser to improve measurement stability. Moreover, the experimental system demonstrated fast transmission spectrum measurements with a wavelength range of 1500 to 1580 nm. Introduction Near-infrared spectroscopy has a wide range of applications in structural chemistry, process analysis, agriculture, food, and medicine. [1][2][3][4] These applications using near-infrared light have the advantage of using materials such as highly durable and inexpensive glass and optical fiber. Further, using optical fibers as probes, flexible systems can be developed for nondestructive in-situ analysis. In recent years, the field of combustion systems has witnessed increasing interest in the elucidation of combustion mechanisms and the development of diagnostic techniques through the analysis of gas composition, temperature, pressure, and velocity. 5,6 Therefore, gas analysis using a wide variety of lasers, such as vertical cavity surface emitting, supercontinuum, and frequency comb lasers, has been reported. [7][8][9] However, for an in-depth understanding of the combustion mechanism, measurements in very transient combustion and propulsion environments at high pressure are required. 6,10 Consequently, a measurement method with improved time resolution and an expanded measurement wavelength range must be established. A gas temperature measurement method using a Fourier domain mode-locked (FDML) laser with a wavelength of 1300 nm has been reported as a high time-resolution measurement method. 11 The FDML laser comprises a fiber ring cavity, and the time of light circulating in the ring is controlled via a delay fiber. This optical control produces a high measurement rate of over several tens of kHz. 12 A wavelength filter and a semiconductor optical amplifier inserted in the ring cavity together provide fast wavelength tuning and amplification, thereby resulting in an excellent wavelength sweep capability. A previous study reported real-time spectral measurement systems with an FDML laser with a sweep bandwidth of 30 nm at 1550 nm. 13 However, the sweep wavelength range in FDML lasers is limited by the amplification wavelength range of the semiconductor optical amplifier (SOA). Recently, in the medical field of optical coherence tomography, a broadband FDML laser at ∼1300 nm was developed by arranging SOAs with different amplification wavelength ranges in parallel to overcome the limitation of the amplification wavelength range. 14,15
Near-infrared spectroscopy has significantly contributed to the development of application fields by covering a wide range of wavelengths and improving measurement performance through the development of various lasers. Therefore, the application of the fast and broadband FDML laser to near-infrared spectroscopy, expanding the laser to the wavelength range of 1550 nm, is a significant research prospect. In the 1520- to 1620-nm wavelength region, numerous rotational vibration spectra exist. Mance et al. 16 reported the transient absorption spectrum measurements of acetylene-oxygen gas mixtures during combustion. This study aimed to develop a broadband FDML laser with a wavelength of ∼1550 nm, wherein two SOAs were arranged in parallel to achieve high temporal resolution and to expand the measurement wavelength range for near-infrared spectroscopy. The developed broadband FDML laser uses two SOAs with center wavelengths of 1509 and 1554 nm to achieve a broadband sweep bandwidth of 120 nm at a center wavelength of 1544 nm and a high-speed sweep rate of 50.7 kHz. The experimental system using the broadband FDML laser introduced reference and compensation optics, which compensated for laser fluctuations and contributed to the stabilization of spectroscopic measurements. Moreover, the experiments demonstrated fast and broadband spectroscopic measurements of fast transmission spectra using fiber Bragg gratings (FBGs) 17-20 that reflected only a specific wavelength region to simulate the absorption spectrum of a gas. SOA1 (SOA1013, Thorlabs) has a center wavelength of 1509 nm and a small signal gain of 13 dB; SOA2 (SOA1117, Thorlabs) has a center wavelength of 1554 nm and a small signal gain of 20 dB. Broadband spontaneous emission light emitted from each SOA entered the FFP-TF1 via CP_1 and IS_1. The FFP-TF1 (Micron) has a full width at half maximum (FWHM), center wavelength, and free spectral range of 85 pm, 1575 nm, and 190 nm, respectively. The FFP-TF1 extracted only light in a specific wavelength range. The light was repeatedly injected into the SOAs while circulating and was gradually amplified, resulting in laser oscillation. The oscillator (OSC, 33612A, Agilent) controlled the wavelength range extracted by the FFP-TF1 using a sweep control signal (V_S). In the experiment, the FFP-TF1 was controlled by V_S set to a sinusoidal signal with sweep speed f_m = 50.7 kHz and sweep period T_m = 19.7 μs (= 1/f_m). The laser swept from short to long wavelength (forward scan) and from long to short wavelength (backward scan). In the ring cavity, a 2-km-long delay fiber with a Faraday rotator mirror was installed so that the time for the light to circulate was controlled to ∼19.7 μs (= T_m) and synchronized with the sweep period of the FFP-TF1. Thus, a high-speed sweep with the FDML operation was realized. 12,20 The FFP-TF1 and the delay fiber were placed in a thermostatic chamber and maintained at a constant temperature of 25°C. 20 However, achieving a broadband sweep beyond 100 nm is challenging with the amplification wavelength range of a single SOA at 1550 nm. Thus, to overcome this limitation, the broadband FDML laser was developed by arranging two SOAs in parallel with different amplification wavelength ranges.
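As a quick plausibility check of the synchronization condition described above, the round trip of the 2-km delay fiber (traversed twice because of the Faraday rotator mirror) should take approximately one sweep period T_m = 1/f_m. The following minimal sketch verifies this; the group index of the fiber is an assumed typical value for standard single-mode fiber, not a figure from the paper, and the rest of the cavity path is neglected.

```python
# Sanity check of the FDML synchronization condition: the cavity round-trip
# time must match the sweep period T_m = 1/f_m of the tunable filter.
# N_GROUP is an assumed typical group index for standard single-mode fiber.

C0 = 299_792_458.0        # vacuum speed of light, m/s
N_GROUP = 1.468           # assumed group index of the SMF delay fiber
L_DELAY = 2_000.0         # delay-fiber length, m (paper: 2 km)

# The Faraday rotator mirror reflects the light, so the delay fiber is
# traversed twice per round trip.
t_round = 2 * L_DELAY * N_GROUP / C0

f_m = 50.7e3              # filter sweep rate, Hz (paper value)
T_m = 1.0 / f_m           # sweep period the round trip must match

print(f"round-trip time : {t_round * 1e6:.2f} us")
print(f"sweep period T_m: {T_m * 1e6:.2f} us")
```

The small residual (on the order of 0.1 μs) would be taken up by the remaining cavity components and fine length adjustment.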
Characteristics of Broadband FDML Laser First, to evaluate the optical output characteristics of the developed broadband FDML laser, the optical output was measured using an optical spectrum analyzer (OSA, AQ6317B, ANDO) with an averaging count of 10 for the single and parallel SOA cases (Fig. 2). When only SOA1 (center wavelength of 1509 nm) was connected, excluding CP_1 and CP_2, a sweep bandwidth of ∼70 nm was obtained on the short wavelength side; however, optical output at longer wavelengths above 1580 nm was not obtained. In the case of only SOA2 (center wavelength of 1554 nm), in contrast to SOA1, optical output was obtained on the long wavelength side and not on the short wavelength side. Thus, a laser using a single SOA at ∼1550 nm cannot achieve a broadband sweep bandwidth beyond 100 nm owing to the limitation of the amplification wavelength range. However, using SOA1 and SOA2 in parallel enables a broadband sweep covering a wide range from short to long wavelengths. The developed broadband FDML laser achieved a broad sweep bandwidth of ∼120 nm from 1484.3 to 1603.7 nm at a center wavelength of 1544 nm. In this configuration, each SOA amplified the short and long wavelengths, respectively. For example, when a short wavelength light entered CP_2, 50% of the light was amplified by SOA1; however, the remaining 50% may not be amplified properly by SOA2. This results in the optical output intensity in broadband FDML lasers being lower than in lasers with a single SOA. The wavelength sweep characteristics of the broadband FDML laser were further evaluated in an experimental setup using a wavelength filter with an FFP-TF2 and the OSA. [20][21][22] The FFP-TF2 (Micron) had an FWHM, center wavelength, and free spectral range of 119 pm, 1550 nm, and 120 nm, respectively. Figure 3 shows the measured results of the forward scan region of the broadband FDML laser. The broadband FDML laser operated at a high speed at a sweep rate of 50.7 kHz and achieved a sinusoidal broadband wavelength sweep. The sweep characteristic f(t) of the broadband FDML laser can be approximated by a polynomial, yielding a time-to-wavelength conversion equation, which was used for spectroscopic measurements. Figure 4 shows an experimental system using the broadband FDML laser. The sinusoidal wavelength-swept light of the broadband FDML laser propagated through the optical fiber and entered a fiber optic switch (FSW). The FSW (NSSW, Agiltron) switches the optical output on/off at repetition rates from DC to ∼500 kHz. The FSW extracted only the light in the forward scan region of the broadband FDML laser using the pulse control signal (V_P) of the OSC. Subsequently, the extracted light was injected into CP_4. The optical system in the experimental system comprised three optical paths: measurement, reference, and compensation. In the measurement optical path, five FBGs with different reflection wavelengths were installed to simulate the absorption spectrum. The Bragg wavelengths of the FBGs were selected as 1500, 1520, 1540, 1555, and 1580 nm, with an FWHM and reflectance of ∼0.2 nm and 80%, respectively. With this configuration, the performance of the experimental system can be evaluated by simulating, for example, the absorption spectrum distributed around 1550 nm found in a mixture of acetylene and oxygen.
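To make the time-to-wavelength conversion described above concrete, the sketch below fits a polynomial to calibration pairs (t, λ) such as those obtainable with the FFP-TF2, and then maps a detector time stamp to a wavelength. The calibration data here are synthetic placeholders spanning the paper's sweep range, not measured values.

```python
import numpy as np

# Minimal sketch of the time-to-wavelength conversion: fit the measured
# sweep characteristic f(t) with a polynomial and use it to map detector
# sample times to wavelengths. The calibration pairs are illustrative.

t_cal = np.linspace(0.0, 9.85e-6, 12)                        # times in the forward scan, s
lam_cal = 1484.3 + 119.4 * np.sin(np.pi / 2 * t_cal / 9.85e-6)  # fake sinusoidal sweep, nm

coeffs = np.polyfit(t_cal, lam_cal, deg=5)    # polynomial approximation of f(t)
f_poly = np.poly1d(coeffs)

t_sample = 4.0e-6                             # an arbitrary detector time stamp
print(f"wavelength at t = {t_sample*1e6:.1f} us: {f_poly(t_sample):.2f} nm")
```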
This approach makes it easier to evaluate the quantitative performance of the experimental system than controlling gas temperature and pressure, because the transmission spectrum of an FBG can be easily controlled by the application of strain. For stable continuous measurement with the experimental system, the effects of fluctuations in optical output intensity and wavelength shift of the broadband FDML laser must be considered. The reference optical path directly monitored the laser output to correct fluctuations in optical output. Further, the compensation optical path was set up with an FBG_R (Bragg wavelength of 1540 nm and FWHM and reflectance of ∼0.2 nm and 80%, respectively) as the reference wavelength. The experimental system compensated for the effect of wavelength shift of the broadband FDML laser by monitoring changes in the reflection spectrum of the FBG_R. Experimental Setup The light from each optical path entered the detector (D), and the detector signal (V_D) was input to an analog-to-digital converter (ADC). The ADC (5170R, National Instruments) has four analog input channels with a sampling frequency and resolution of 250 MHz and 14 bits, respectively. The trigger signal V_T and the reference clock signal V_R, synchronized with the sweep control signal V_S of the broadband FDML laser, were input to the ADC. In addition, the acquisition timing of V_D was controlled. Moreover, the broadband FDML laser was operated with a sweep rate of 50.7 kHz and a sweep bandwidth of 120 nm in the forward scan, and spectroscopic measurements were performed using the FBGs. Compensation Method of Wavelength Shift of Broadband FDML Laser Experimental systems using broadband FDML lasers experience spectral measurement accuracy degradation because of the wavelength shift of the laser caused by long-time operation. This must be resolved to promote the development of monitoring applications that require continuous measurement. Consequently, a correction method using the FBG_R was considered. Figure 5(a) shows the wavelength sweep characteristic of the broadband FDML laser, Fig. 5(b) shows the reflection signal of the FBG_R, and Fig. 5(c) shows the signal processing flow for the compensation. For the wavelength sweep characteristic f(t) of the broadband FDML laser with no wavelength shift, the reflection signal from the FBG_R with Bragg wavelength λ_R was detected at time t_R. In contrast, in the case of a broadband FDML laser whose wavelength has shifted by Δλ_S owing to long-time operation, the wavelength sweep characteristic changed to f′(t) and the reflection signal time changed from t_R to t_m. In the correction process flow, the wavelength shift Δλ_S of the laser can be calculated by measuring the time t_m of the reflection signal, using Δλ_S = f(t_R) − f(t_m). (1) Further, using the wavelength shift Δλ_S, the wavelength sweep characteristic f′(t) when the laser wavelength shift occurs can be calculated using f′(t) = f(t) + Δλ_S. (2) As this method measures only one point of the FBG_R reflection signal, the compensation process can be performed at a low computational cost. The experiment corrected the wavelength sweep characteristic f(t) obtained in Fig. 3 via processing using the FBG_R to calculate the wavelength sweep characteristic f′(t), which compensated for the effect of wavelength shift.
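A minimal sketch of the single-point compensation of Eqs. (1) and (2): given the fitted sweep characteristic f(t) and the stored reference time t_R of the FBG_R peak, the measured peak time t_m yields the shift Δλ_S and the corrected characteristic f′(t). The numbers below are illustrative, not values from the paper.

```python
import numpy as np

# Sketch of the FBG_R-based compensation, Eqs. (1)-(2). f(t) stands in for
# the fitted wavelength-sweep characteristic; t_R is the stored reflection
# time of FBG_R for the unshifted laser, t_m the time measured after
# long-time operation. All numbers are illustrative.

f = np.poly1d(np.polyfit([0.0, 5.0e-6, 9.85e-6],
                         [1484.3, 1544.0, 1603.7], 2))

t_R = 4.60e-6        # reference time for the FBG_R peak (no shift)
t_m = 4.71e-6        # measured FBG_R peak time in the current sweep

dlam_S = f(t_R) - f(t_m)                    # Eq. (1): wavelength shift
f_shifted = lambda t: f(t) + dlam_S         # Eq. (2): corrected sweep curve

print(f"estimated shift: {dlam_S:+.2f} nm")
print(f"corrected wavelength at t_m: {f_shifted(t_m):.2f} nm")
```

Because only one reflection time is needed per sweep, the correction is cheap enough to run continuously, as noted above.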
Spectroscopic Measurements with Broadband FDML Laser First, the light output of each optical path was evaluated using the experimental system. Figure 6 shows the results of the detector signal when the broadband FDML laser was swept for one cycle with the sweep period T_m = 19.7 μs. Only light in the forward scan region of the broadband FDML laser was extracted by the FSW. Figure 6(a) shows the measurement signal by V_D1, in which transmission signals from the five FBGs with different Bragg wavelengths were observed. However, the measurement signal was affected by fluctuations in the optical output of the broadband FDML laser. Therefore, as shown in Fig. 6(b), the experimental system directly measured the optical output of the broadband FDML laser as a reference signal with V_D2. In Fig. 6(c), the reflection signal of the FBG_R was detected as a compensation signal with V_D3 to compensate for the effect of the laser wavelength shift. When the broadband FDML laser was wavelength-shifted, the effect was eliminated by applying the signal processing shown in Fig. 5 to the FBG_R reflection signal. To reduce the influence of fluctuations in the optical output of the broadband FDML laser, the measurement signal was normalized using the reference signal. Furthermore, for spectroscopic measurements, the measurement results were converted from the time base to the wavelength base using the wavelength sweep characteristic of the broadband FDML laser obtained in Fig. 3. Figure 7 shows the results of spectral measurements compensating for the effects of light output fluctuations. Flat characteristics were obtained owing to the compensation of the optical output fluctuation, and transmission spectra of the five FBGs with different Bragg wavelengths were observed over a wide bandwidth ranging from 1500 to 1580 nm. In Fig. 8, strain Δε was applied to FBG_1, FBG_3, and FBG_5, and the change in the transmission spectrum was measured to evaluate the performance of this experimental system. Strain was applied by stretching the optical fiber containing the FBG using a movable stage. Figure 8(a) shows the results of the spectroscopic measurement. The change in the transmission spectrum owing to the application of strain was evident in FBG_1, FBG_3, and FBG_5. Figure 8(b) shows the enlarged transmission spectrum of FBG_3. The peak spectral reflection wavelength values for Δε = 0, 500, and 1000 με were 1540.43, 1541.00, and 1541.52 nm, respectively. Further, the analysis of the transmission spectra using Gaussian fitting (LabVIEW, National Instruments) revealed that the FBG_3 peak spectra for Δε = 0, 500, and 1000 με had half-widths of ∼0.20, 0.20, and 0.21 nm, respectively, and reflectance values of 83%, 83%, and 79%, respectively, which were consistent with the FBG_3 specifications. Thus, the experiments confirmed that this experimental system can monitor slight changes in the transmission spectrum over a broad wavelength range. Compensation for Wavelength Shift of Broadband FDML Laser The effectiveness of this compensation method was verified by simulating the effect of wavelength shift on the broadband FDML laser. The experiment was tuned such that the wavelength shifts Δλ_S were ∼−1.28, 0, and +1.28 nm, respectively, at the center wavelength of the broadband FDML laser of 1544 nm. This shift corresponded to ∼1% of the wavelength sweep bandwidth of the laser. Figure 9(a) shows the results of the optical spectrum of the broadband FDML laser measured with the OSA with an averaging count of 1.
Figure 9(b) shows the results of the FBG_R reflection spectrum, which changed with the wavelength shift of the laser. The peak wavelengths of the respective reflection spectra of the FBG_R at each wavelength shift were 1541.22, 1539.89, and 1538.59 nm. Therefore, the wavelength shifts Δλ_S calculated using Eq. (1) with the results of Fig. 9(b) were −1.33, 0.00, and +1.30 nm; these were almost identical to the adjusted wavelengths of the broadband FDML laser. Thereafter, spectroscopic measurements with wavelength shift compensation were performed. Figure 10(a) shows the spectroscopic measurement results before wavelength shift compensation. Originally, the spectra were supposed to be identical; however, they were shifted significantly owing to the wavelength shift of the broadband FDML laser. The peak wavelengths of the transmission spectra of FBG_3 were 1541.75, 1540.41, and 1539.13 nm. Therefore, by setting the wavelength shift to Δλ_S, the peak wavelength was shifted by −Δλ_S. Figure 10(b) shows the results after wavelength shift compensation. The influence of the wavelength shift was reduced by the compensation with the FBG_R, and the transmission spectra of each FBG were measured as the same value. The peak wavelengths of the transmission spectra of FBG_3 were 1540.45, 1540.41, and 1540.43 nm, improved values compared to Fig. 10(a). The peak wavelengths of the transmission spectrum under static strain during wavelength shift were analyzed. Figure 11(i) shows the results before wavelength shift compensation. The experimental system could measure the linear response of the transmission spectra of FBG_1, FBG_3, and FBG_5 owing to the application of strain. However, when affected by the wavelength shift amount Δλ_S, the measured wavelength was shifted by −Δλ_S from the original FBG peak wavelength. Figure 11(ii) shows the results following wavelength shift compensation, and the same peak wavelength was obtained for the strain even when affected by the wavelength shift. The slope of the peak wavelength of each FBG with respect to strain was ∼1.2 × 10^−3 nm/με. The standard deviations of the difference between the peak wavelengths obtained in Fig. 11(ii) without and with wavelength shift were <16, 14, and 61 pm for Figs. 11(a-ii), 11(b-ii), and 11(c-ii), respectively. The standard deviation is worse for Fig. 11(c-ii), which uses the end regions of the approximation equation in Fig. 3. This experimental system using the broadband FDML laser compensates for the wavelength shift effect and can simultaneously measure changes in multiple transmission spectra in the broadband range of 1500 to 1580 nm. Evaluation of Fast Spectroscopic Measurements To evaluate fast spectral measurements, a piezoelectric transducer with a vibration frequency of f_v = 4.46 kHz was installed on FBG_1 to change the transmission spectrum. The broadband FDML laser was set with a sweep period of T_m = 19.7 μs, and 100 irradiations were continuously performed for ∼2 ms. In addition, to verify the effect of wavelength shift compensation, the laser wavelength shifts Δλ_S were adjusted to be ∼0, +0.64, and +1.28 nm. Figures 12(a-i), 12(b-i), and 12(c-i) show the results of spectroscopic measurements with Δλ_S = 0, +0.64, and +1.28 nm, respectively. Fast changes in the transmission spectrum owing to the application of vibration to FBG_1 were observed. Figures 12(b-i) and 12(c-i), which are affected by wavelength shift, showed transmission spectra that were almost identical to Fig. 12(a-i) because of the compensation.
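The peak analysis reported above (Bragg wavelength, FWHM, and reflectance from Gaussian fitting) can be reproduced with a generic least-squares fit. The sketch below uses SciPy instead of the LabVIEW routine named in the paper and fits a synthetic FBG_3-like transmission dip; all numbers are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of the Gaussian peak analysis used to extract the Bragg
# wavelength, FWHM, and reflectance from an FBG transmission dip. The
# synthetic dip stands in for a measured, reference-normalized spectrum.

def gauss_dip(lam, lam0, sigma, depth):
    # transmission = 1 - reflectance * Gaussian
    return 1.0 - depth * np.exp(-0.5 * ((lam - lam0) / sigma) ** 2)

lam = np.linspace(1539.5, 1541.5, 400)                        # nm
sigma_true = 0.2 / (2 * np.sqrt(2 * np.log(2)))               # FWHM 0.2 nm
data = gauss_dip(lam, 1540.43, sigma_true, 0.80)
data += np.random.default_rng(0).normal(0, 0.005, lam.size)   # noise

popt, _ = curve_fit(gauss_dip, lam, data, p0=[1540.4, 0.1, 0.5])
lam0, sigma, depth = popt
print(f"peak: {lam0:.2f} nm, FWHM: {2.3548*sigma:.2f} nm, "
      f"reflectance: {depth:.0%}")
```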
Figure 12(ii) tracks the peak wavelength of the transmission spectrum of FBG_1. The experimental system clearly measured the sinusoidal variation of the peak wavelength with the applied vibration frequency at a time resolution of 19.7 μs. In Figs. 12(a-ii), 12(b-ii), and 12(c-ii), each vibration had a center wavelength of ∼1499.25 nm and a peak-to-peak value of ∼0.11 nm, indicating that almost identical peak wavelengths were obtained with compensation for the same vibrating body. The standard deviation of the peak wavelength of each FBG when no vibration was applied was <9 pm. Thus, the experimental system with the broadband FDML laser was demonstrated to be fast, broadband, and capable of performing spectroscopic measurements. Considerations for Measurement Systems with Broadband FDML Laser with Two SOAs The performance of the measurement system using the FDML laser with two SOAs was studied. First, two points were compared between a laser with one SOA 13 and a laser with two SOAs: the stability of the optical output measurement and the temporal stability of the peak wavelength measurement. The stability of the optical output measurement was evaluated by normalizing the light output in the spectral measurement shown in Fig. 7 and calculating the standard deviation. The wavelength interval for analysis was 1544 to 1549 nm. The laser with one SOA has a sweep bandwidth of 30 nm and a sweep rate of 50 kHz, and it was evaluated with the system developed in the previous study. 13 The laser with two SOAs has a sweep bandwidth of 120 nm and a sweep rate of 50.7 kHz, and it was evaluated with the system developed in the present study. The ratio of the standard deviation to the mean was 0.8% for one SOA and 2.3% for two SOAs. While the laser with two SOAs improved the sweep bandwidth, an increase in the standard deviation of the intensity was observed. The temporal stability of the peak wavelength measurement was evaluated by calculating the standard deviation from the time-series results of the peak wavelength of each FBG. No vibration was applied to any FBG. When wavelength measurements were performed with one SOA and two SOAs, the standard deviations of the peak wavelengths were ∼7 and 9 pm, respectively. Next, we discuss the sweep rate of the FDML laser, which determines the measurement rate of the measurement system. FDML operation is known as a method of driving wavelength-swept lasers at fast sweep rates exceeding tens of kHz. 12 FDML operation requires that the time for light to circulate in the resonator be synchronized with the driving period of the wavelength filter. Here, since the developed FDML laser has two SOAs in parallel, matching the round-trip times of light in the two optical paths is necessary. Therefore, in this experiment, each optical path length was adjusted at the centimeter level, and the laser was designed so that the round-trip times of the two optical paths are equal. The maximum sweep rate of the FDML laser is limited by the wavelength filter. Most wavelength filters used for FDML lasers are FFP-TFs. These filters mechanically control the wavelength by means of a piezoelectric actuator. This limits the operating sweep rate to several tens of kHz. To overcome this limitation, lasers with buffered optics have recently been proposed. 23 This method multiplexes the laser sweep and can further increase the sweep rate by a factor of several. Therefore, the developed FDML laser is expected to increase the current measurement rate several-fold by introducing the latest buffered optics.
The FDML laser has proven to be useful for gas spectroscopy and other applications, while being relatively inexpensive compared to light sources typically used for high-speed spectroscopy. By introducing FDML operation and two SOAs in parallel, the proposed broadband FDML laser has a sweep bandwidth of 120 nm at a center wavelength of 1544 nm and a fast sweep rate of 50.7 kHz. The experiments were thorough and systematic, and they provide the basic design for the development of the measurement system. Using FDML operation and two SOAs is an innovative way to extend the sweep bandwidth to 120 nm in the 1.55-μm wavelength band. In addition, the simple wavelength-shift compensation method for long-time operation of lasers will contribute to further performance enhancement of spectral measurement systems. These results will be of interest to the broad community working on fast wavelength-swept lasers and their applications. Conclusion This study developed an experimental system using the broadband wavelength-swept laser to realize broadband and fast spectroscopic measurements at ∼1550 nm. The broadband wavelength-swept laser overcame the limitations of sweep speed and sweep bandwidth by introducing FDML operation and the parallelization of two SOAs. The proposed broadband FDML laser has a sweep bandwidth of 120 nm at a center wavelength of 1544 nm as well as a high-speed sweep rate of 50.7 kHz. The experimental system used reference and compensation optics to compensate for variations in the optical output and the wavelength shift of the laser. Further, spectroscopic measurements showed a change in the transmission spectrum over a broadband range of 1500 to 1580 nm. Fast spectroscopic measurements demonstrated that steep transmission spectral changes due to FBGs can be observed with a temporal resolution of 19.7 μs and a standard deviation of 9 pm. In the future, the authors plan to replace the measurement optical path of this experimental system with a gas cell and apply it to the acquisition of gas absorption spectra. In actual spectroscopic measurements using a gas cell, the line width of the absorption spectrum of a gas varies with pressure. We therefore plan to verify whether the measurement system can observe the change in the line width of the gas.
2023-03-08T16:03:09.929Z
2023-03-01T00:00:00.000
{ "year": 2023, "sha1": "4bc2e5972ef0db97beb8f67b4ea83c5243fab819", "oa_license": "CCBY", "oa_url": "https://www.spiedigitallibrary.org/journals/optical-engineering/volume-62/issue-3/036101/Consideration-of-spectroscopic-measurements-with-broadband-Fourier-domain-mode-locked/10.1117/1.OE.62.3.036101.pdf", "oa_status": "HYBRID", "pdf_src": "SPIE", "pdf_hash": "0aa91eb8f2d44230f2bb947f9f31ab56fc36c594", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [] }
14298822
pes2o/s2orc
v3-fos-license
A Simple Evaluation Method of Seismic Resistance of Residential House under Two Consecutive Severe Ground Motions with Intensity 7 In the 2016 Kumamoto earthquake in Japan, two severe ground shakings with seismic intensity 7 (the highest level on the Japan Meteorological Agency scale; approximately X–XII on the Mercalli scale) occurred consecutively on April 14 and 16. In the seismic regulations of most countries, it is usually prescribed that such severe earthquake ground motion occurs once in the working period of buildings. In this paper, a simple evaluation method is presented for the seismic resistance of residential houses under two consecutive severe ground motions with intensity 7. The proposed method can therefore be used for the design of buildings under two consecutive severe ground motions. The present paper adopts an impulse as a representative of near-fault ground motion, and two separated impulses are used as the repetition of intensive ground shakings with seismic intensity 7. Two scenarios to building collapse (collapse limit in terms of zero restoring force with P-delta effect and collapse limit in terms of maximum deformation) under two repeated severe ground shakings are provided, and energy consideration is devised for the response evaluation. The validity and accuracy of the proposed theories are discussed through numerical analysis using recorded ground motions. INTRODUCTION The generally well-accepted theory of main-shock/after-shock occurrence was severely distorted in the 2016 Kumamoto earthquake in Japan, where two intensive ground shakings with seismic intensity 7 [the highest level on the Japan Meteorological Agency (JMA) scale; approximately X–XII on the Mercalli scale] occurred consecutively on April 14 and 16. In most seismic regulations in earthquake-prone countries, it is usually prescribed that such an intensive earthquake ground motion occurs once in the working period of buildings and that the after-shock is relatively small compared to the main-shock. In this circumstance, some changes of design philosophy may be necessary. In this paper, the degree of necessary upgrade is investigated for the seismic resistance of residential houses under two consecutive severe ground motions with intensity 7. Several attempts have been conducted on the damage analysis of structures under repeated ground motions (Mahin, 1980; Amadio et al., 2003; Fragiacomo et al., 2004; Li and Ellingwood, 2007; Hatzigeorgiou and Beskos, 2009; Hatzigeorgiou, 2010; Moustafa and Takewaki, 2011, 2012; Motosaka, 2012; Ruiz-Garcia, 2012; Hatzivassiliou and Hatzigeorgiou, 2015). The formulations of residual deformation and member deterioration after one ground motion may be key issues. It seems that most previous papers deal with the response characteristics of structures under repeated ground motions and do not directly address the necessary strength upgrade due to input repetition. In other words, while the previous research is aimed at the analysis of damage for the main-shock/after-shock sequence, the purpose of the present paper is to propose a design method for preventing collapse under two consecutive intensive ground shakings.
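Before the detailed formulation, the impulse idealization can be made concrete with a small sketch: an acceleration impulse simply imparts a velocity V to the mass, after which the response is free vibration. For brevity the sketch below treats the linear elastic SDOF case (the paper's models are elastic-plastic), and all numerical values are assumptions chosen for illustration only.

```python
import numpy as np

# Illustration of the impulse idealization: a single acceleration impulse
# imparts a velocity V to the mass, after which the response is free
# vibration. Linear elastic SDOF case only; numbers are illustrative.

m = 60e3               # mass, kg (same order as the paper's examples)
T1 = 0.5               # natural period, s (assumed)
omega1 = 2 * np.pi / T1
V = 1.0                # impulse velocity level, m/s (assumed)

t = np.linspace(0.0, 2 * T1, 9)
u = (V / omega1) * np.sin(omega1 * t)   # free vibration from u(0)=0, u'(0)=V

u_max = V / omega1                      # peak deformation of the elastic model
print(f"peak elastic deformation: {u_max * 100:.1f} cm")
```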
The present paper adopts an impulse of the velocity V as a representative of near-fault ground motion, and two separated impulses are used as the repetition of intensive ground shakings with seismic intensity 7 (see Figure 1). The modeling of earthquake ground motion as an impulse corresponds to the evaluation of the input energy under a monotonic loading, which is a well-known and well-accepted concept in understanding the earthquake input energy demand (see Figure 2). It is not intended to extract a pulse from a record because the pulse represents an impulsive input symbolically in this case. A residential house is modeled by three models. The first one is an undamped single-degree-of-freedom (SDOF) model of normal bilinear hysteresis with negative second slope (steel structures), the second one is an SDOF model of slip-type restoring-force characteristic, including a bilinear hysteresis (wooden structures), and the third one is an SDOF model of degrading hysteresis (reinforced-concrete structures). Two scenarios to building collapse (collapse limit in terms of zero restoring force with P-delta effect and collapse limit in terms of maximum deformation) under two repeated severe ground shakings are provided, and energy consideration is devised for the response evaluation. The validity and accuracy of the proposed theories are discussed through numerical analysis using recorded ground motions. SDOF MODEL OF NORMAL BILINEAR HYSTERESIS WITH NEGATIVE SECOND SLOPE (STEEL STRUCTURES) Consider first an undamped SDOF model of normal bilinear hysteresis with negative second slope (steel structures) as shown in Figure 3A (Kojima and Takewaki, 2016). The negative slope of the first model can be understood as a modeling result of the P-delta effect and structural degradation. The validity of using this model can be found in the Appendix from the viewpoint of the P-delta effect. It is assumed that, after the input of the first impulse, the SDOF model goes into the plastic range and starts unloading after the maximum deformation (see Figure 3B). It is also assumed that the SDOF model converges to a zero restoring-force state in the unloading path due to some damping effects (joint friction, radiation damping, etc.). Then the second impulse is given to the SDOF model with a residual deformation, and the SDOF model goes into the plastic range again. Once the restoring force becomes 0, the SDOF model collapses. Let m, k denote the mass and initial stiffness of the SDOF model and let u, f denote the deformation (displacement of the mass) and the restoring force, respectively. The natural circular frequency, the ratio of the second slope to the initial slope, the yield deformation, and the yield strength are denoted by ω_1 = √(k/m), α (< 0), d_y, and f_y, respectively. Let V_y = ω_1 d_y denote the velocity level of the input impulse at which the SDOF model just attains the yield deformation after the first impulse, as in the references (Kojima and Takewaki, 2015, 2016). The degree of necessary upgrade on the seismic resistance of residential houses under two consecutive severe ground motions with intensity 7 is computed by comparing two models, one of which is designed to just collapse under one impulse while the other is designed to just collapse under two consecutive impulses. In order to make this comparison, two structures resisting one or two impulses are designed in the following.
Limit Input Velocity for One Impulse A half-cycle sine wave has long been used as a simple representative of impulsive ground motions, e.g., see Housner (1963). In this paper, a near-fault impulsive ground motion is simplified into a half-cycle sine wave and then into a single impulse (see Figure 1). The introduction of impulses enables a simple energy evaluation of elastic-plastic structures (see Figure 2). Following a procedure similar to that in the reference (Kojima and Takewaki, 2016), the limit input velocity for one impulse can be derived by equating the kinetic energy (1/2)mV^2 provided by one impulse with the dissipated energy (triangle in Figure 4): (1/2)mV^2 = (1/2)f_y(d_y + u_p). (1) In Eq. (1), u_p = −(1/α)d_y is used (see Figure 4). Therefore, the ratio of the limit input velocity to the reference velocity V_y (input velocity just attaining the yield of the model after one impulse) may be expressed by V/V_y = √(1 − 1/α). (2) Then the reference velocity (strength indicator of the model) may be derived for a specified input velocity level V: V_y^[1] = V/√(1 − 1/α). (3) The corresponding model strength f_y^[1] can then be obtained as f_y^[1] = k^[1]d_y, (4) where k^[1] is the stiffness of this model. Limit Input Velocity for Two Consecutive Impulses The limit input velocity for two consecutive impulses can be derived by the following procedure. The maximum deformation and the residual deformation after the first impulse can be computed by equating the kinetic energy (1/2)mV^2 provided by the first impulse with the dissipated and strain energy (quadrangle in Figure 5). Then the limit input velocity for the two consecutive impulses inducing collapse after the second impulse can be derived by equating the kinetic energy (1/2)mV^2 provided by the second impulse with the dissipated energy (triangle in Figure 6). Let u_p^(1) and u_p^(2) denote the plastic deformation after the first impulse and the second impulse, respectively. Since the plastic deformation of the model just attaining collapse after the second impulse can be described by −(1/α)d_y, the following relation holds (see Figure 6): u_p^(1) + u_p^(2) = −(1/α)d_y. (5) First of all, find u_p^(1) from A_1 = A_2 (see Figure 6), which is guaranteed by the same input energy (1/2)mV^2 provided by the first and second impulses. The condition A_1 = A_2 can be expressed by Eq. (6), and rearrangement of Eq. (6) leads to Eq. (7). From Eq. (7), u_p^(1)/d_y can be obtained as Eq. (8); since u_p^(1) ≤ −(1/α)d_y, the expression of Eq. (9) is derived. By using the energy balance, the input velocity level V of the single impulse can be related to A_1 (or A_2) and u_p^(1). The energy balance after the first impulse can then be expressed as Eq. (10). Therefore, the limit input velocity corresponding to collapse after two impulses may be derived as Eq. (11). By substituting Eq. (9) into Eq. (11), the ratio V/V_y is obtained as Eq. (12). The reference velocity (strength of the model) may then be derived for a specified input velocity level V:
V_y^[2] = V/(V/V_y), with the ratio V/V_y given by Eq. (12). (13) The corresponding model strength f_y^[2] can be obtained as f_y^[2] = k^[2]d_y, (14) where k^[2] is the stiffness of this model. In summary, the ratio of the reference velocity for collapse after two impulses to that for collapse after one impulse can be computed as V_y^[2]/V_y^[1] = √((α − 2)/(α − 1)). (15) Finally, the ratio of the model strength for collapse after two impulses to that for collapse after one impulse can be expressed as f_y^[2]/f_y^[1] = (α − 2)/(α − 1). (16) Figure 7 shows the plot of V_y^[2]/V_y^[1] with respect to α. Furthermore, Figure 8 presents the plot of f_y^[2]/f_y^[1] with respect to α. In Figure 8, numerical results using recorded ground motions (Kumamoto earthquake in 2016) are also plotted. It should be noted that, since the purpose of this section is to investigate the ratio f_y^[2]/f_y^[1], the same earthquake ground motion recorded on April 16, 2016 has been input twice. Figure 9 shows the restoring-force characteristics of the models with different second slopes, under the earthquake ground motion of April 16, 2016, designed so that they just collapse after one single impulse. T_1 in Figure 9 is the natural period of the model. In the top left one, the plastic deformation proceeds in the reverse direction, different from the other three cases. On the other hand, Figure 10 illustrates the restoring-force characteristics of the models with different second slopes, under the twice-repeated earthquake ground motion of April 16, 2016, designed so that they just collapse after two impulses. FIGURE 8 | f_y^[2]/f_y^[1] for α (including analysis for recorded ground motion). CASE OF COLLAPSE LIMIT ON MAXIMUM DEFORMATION Consider second an SDOF model of elastic-perfectly-plastic slip-type hysteresis. This model corresponds to wooden structures and reinforced-concrete structures in some sense (in the sense that both kinds of structures include slip-type properties). In other words, the formulation in this section is made for an ideal model as shown in Figure 11, and the results may be applied approximately to wooden structures and reinforced-concrete structures. FIGURE 12 | Scenario to building collapse just after two impulses (collapse limit is given by maximum deformation). Application to Wooden Structures and Reinforced-Concrete Structures with Slip-Type Hysteresis The formulation presented in Section "Limit Input Velocity for Two Consecutive Impulses" is applied to wooden structures and reinforced-concrete structures with slip-type hysteresis; a numerical evaluation of the strength ratio of Eq. (16) for the steel-structure case is sketched below.
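The following sketch evaluates the printed closed form of Eq. (16) for the steel-structure (zero-restoring-force) scenario, showing how the required strength amplification varies with the negative second-slope ratio α; the sample values of α are chosen for illustration.

```python
# Evaluation of Eq. (16): strength required to just survive two consecutive
# impulses relative to the one-impulse design, as a function of the
# (negative) second-slope ratio alpha.

def strength_ratio(alpha):
    # f_y^[2] / f_y^[1] = (alpha - 2) / (alpha - 1), valid for alpha < 0
    return (alpha - 2.0) / (alpha - 1.0)

for alpha in (-0.5, -1.0, -1.5, -2.0):
    print(f"alpha = {alpha:+.1f}: f_y[2]/f_y[1] = {strength_ratio(alpha):.2f}")
```

The range of roughly 1.4 to 1.6 reported in the conclusions corresponds, in this expression, to α between about −1.5 and −0.67.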
As explained in the beginning of Section "Case of Collapse Limit on Maximum Deformation," the formulation in Section "Limit Input Velocity for Two Consecutive Impulses" was made for an ideal model as shown in Figure 11, and the results may be applied approximately to wooden structures and reinforced-concrete structures. Consider wooden structures here. Figure 15 shows the restoring-force characteristics of the models with different strength ratios under the repeated 2016 Kumamoto earthquake ground motion (April 16) designed so that they just collapse after one impulse or two impulses [reference model: collapse just after two impulses/slip and bilinear model (wooden structures)]. This model is a combination of an elastic-perfectly-plastic model and a slip model. The resistance ratios with which the respective models (elastic-perfectly-plastic model and slip model) govern are 0.2 and 0.8. The mass is 60 × 10^3 kg, the yield deformation is d_y = 0.1 m, and the structural damping ratio is 0.02. It can be observed that the strength ratio 1.5 is reasonable for wooden structures. In other words, the maximum deformation of the wooden structure, designed for two impulses, under two consecutive severe earthquake ground motions is almost equivalent to that of the wooden structure, designed for one impulse, under one severe earthquake ground motion, and their strength ratio is about 1.5. Consider next reinforced-concrete structures. Since reinforced-concrete structures have complex hysteresis rules, a new scenario different from Figures 11 and 12 may be necessary. However, a simple numerical investigation is conducted here in order to obtain the property of the strength ratio between the structure designed for two impulses and that designed for one impulse. Figure 16 shows the restoring-force characteristics of the models with different strength ratios under the repeated 2016 Kumamoto earthquake ground motion (April 16) designed so that they just collapse after one impulse or two impulses [reference model: collapse just after two impulses/Takeda model (Takeda et al., 1970) (reinforced-concrete structures)]. The mass is 60 × 10^3 kg, the crack deformation is d_c = 0.003 m, the yield deformation is d_y = 0.03 m, and the structural damping ratio is 0.02. It can be observed that the strength ratio 1.3-1.4 is reasonable for reinforced-concrete structures. In other words, the maximum deformation of the reinforced-concrete structure, designed for two impulses, under two consecutive severe earthquake ground motions is almost equivalent to that of the reinforced-concrete structure, designed for one impulse, under one severe earthquake ground motion, and their strength ratio is about 1.3-1.4. It may be useful to recall that the present paper introduced two scenarios on collapse. For steel buildings considering the P-delta effect or strength-degradation effect, the zero restoring force represents the collapse limit. This was demonstrated in Figures 8-10. On the other hand, for wooden and reinforced-concrete structures, the maximum deformation defines the collapse limit. Regarding this collapse scenario, a simple slip-type hysteretic model as shown in Figure 12 has been introduced. Although a theoretical result has been obtained in Figure 14, wooden and reinforced-concrete structures have more complicated hysteretic models. Therefore, the comparison shown in Figure 8 for steel buildings is difficult for wooden and reinforced-concrete structures, and another investigation has been conducted for them. In Figures 15 and
16, the strength ratio has been investigated at which a strengthened building exhibits the same maximum deformation under two consecutive ground motions as that of the corresponding building with the original strength under one ground motion. This provides an appropriate strength ratio for wooden and reinforced-concrete structures. CONCLUSION The following conclusions have been derived. (1) The repetition of severe near-fault ground motions can be modeled approximately by two separated impulses in order to capture a general property of the input of consecutive, intensive ground shakings. This modeling enables a simple evaluation of the earthquake response of a non-linear system under consecutive near-fault ground motions in terms of free vibration. (2) Two scenarios to building collapse under two repeated severe ground shakings have been provided, and energy consideration has been devised for the response evaluation. The first scenario is based on the collapse limit in terms of the zero restoring force with the P-delta effect, and the second one is based on the collapse limit in terms of the maximum deformation. The first scenario corresponds to steel structures and the second one corresponds to wooden structures. (3) The validity and accuracy of the proposed theory have been discussed through numerical analysis using recorded ground motions. The degree of necessary upgrade of the seismic resistance of residential houses under two consecutive severe ground motions with intensity 7 has been computed by comparing two models, one designed to just collapse under one impulse and the other designed to just collapse under two consecutive impulses. The ratio turned out to be approximately 1.4 to 1.6. (4) For reinforced-concrete structures, another scenario may be necessary. However, the degree of necessary upgrade is almost the same, around 1.3-1.4. The present theory may be applicable to near-field ground motions so as to enable the modeling of ground motions into impulses. Furthermore, the proposed method can be applied to rather low buildings because SDOF modeling is necessary. In addition, the proposed method can be used for the design of new buildings and for seismic retrofitting. It may be difficult to require the input of consecutive, intensive ground shakings for ordinary buildings, including residential houses. It seems reasonable to use the present method for important building structures, e.g., hospitals, city halls, schools, and police stations. Furthermore, passive control methods using dampers may be promising for the smart upgrade of building structures. FIGURE 1 | Modeling of repeated intensive ground motions into two impulses. FIGURE 2 | Simple energy evaluation of elastic-plastic structure under impulsive loading. FIGURE 3 | Restoring-force characteristic and collapse scenario: (A) Normal bilinear hysteresis with negative second slope. (B) Collapse scenario under two impulses. FIGURE 5 | Collapse scenario under two impulses and energy consideration for evaluating limit input velocity. FIGURE 9 | Restoring-force characteristics of models with different second slopes under 2016 Kumamoto earthquake ground motion (April 16) designed so that they just collapse after one single impulse.
FIGURE 15 | Restoring-force characteristics of models with different strength ratios under repeated 2016 Kumamoto earthquake ground motion (April 16) designed so that they just collapse after one impulse or two impulses [reference model: collapse just after two impulses/slip and bilinear model (wooden structures)].
2016-10-10T18:24:48.217Z
2016-07-26T00:00:00.000
{ "year": 2016, "sha1": "c0767683e74854514466c4da84712d13e70ef8de", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fbuil.2016.00015/pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "327c7bc04ae6370e53c491cd09b01e924b42b38b", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Engineering" ] }
262064758
pes2o/s2orc
v3-fos-license
A hybrid technique based on Lucas polynomials for solving fractional diffusion partial differential equation This paper presents a new numerical technique to approximate solutions of diffusion partial differential equations with Caputo fractional derivatives. We use a spectral collocation method based on Lucas polynomials for the time fractional derivatives and a finite difference scheme in space. Stability and error analyses of the proposed technique are established. To demonstrate the reliability and efficiency of our new technique, we applied the method to a number of examples. The new technique is simply applicable, and the results show high efficiency in calculation and approximation precision. Introduction During the last four decades, the topic of fractional calculus has gained massive popularity and importance due to its gradually flourishing applications in numerous diverse fields in systems biology, physics, chemistry and biochemistry, medicine, and finance. In most of these new models, fractional orders are more adequate than the previously used integer orders because fractional derivatives describe memory effects and properties inherent in several materials and processes that are governed by anomalous diffusion. Hence, we need to find the solutions of these fractional differential equations. In general, the analytic solutions of most fractional differential equations cannot be obtained. Therefore, numerical and approximate methods are valuable in identifying the solution behavior of such fractional equations and exploring their applications. One of the most important groups of partial differential equations is the diffusion equations, which are used to describe phenomena in physics, biology, and economics. This equation describes the behavior of the mass movement of micro-particles in a material. Due to the wide applications of these equations, many researchers in different fields have been interested in studying them [25,35]. Many numerical and approximate methods have been presented, with different techniques for each method. The authors in [25] have developed a technique to solve fractional diffusion problems of variable order. Recently, the Gradient Discretization Method has been proposed to approximate multidimensional diffusion wave and time fractional diffusion equations [13]. The finite difference method has been proposed to solve a three-dimensional time fractional diffusion equation [20], and the fractional diffusion equation has been solved by neural networks [36][37][38]. For the application of the diffusion equation to noise reduction and to fractional terminal value problems, see [9,10,31,39]. Spectral methods are pivotal in treating different types of differential equations, and the most commonly used trial functions are the various orthogonal polynomials, such as the trigonometric polynomials of Fourier methods and the Jacobi, Chebyshev, and Legendre polynomials. The collocation spectral method [12,14,17,18,26] has been applied to solve fractional differential equations.
The golden ratio and Lucas polynomials with their generalizations are of major interest; both appear in different applications in various disciplines such as physics, biology, computer science, and statistics. Therefore, many researchers have written about them in a variety of papers. The main motivation for using the suggested scheme is that both fractional and higher-order derivatives can be easily calculated using Lucas and Fibonacci polynomial relations. Meanwhile, the promising results for these polynomials, which have been considered in several applications in the area of ordinary differential equations, gave us strong motivation for discussing these polynomials for solving partial fractional differential equations. Moreover, the proposed technique achieves better accuracy for a small number of collocation points, which reduces the computational time and cost. In the area of ordinary differential equations, Elhameed and Youssri [2,4,5] introduce a relation between Lucas polynomials and Chebyshev polynomials and give accurate solutions to boundary value problems. In [3,4,6], Lucas polynomials have been used for solving coupled fractional differential equations. For integro-differential equations, [32] presents a Lucas sequence approach, and [15] proposes a Lucas polynomial approach to get an approximate solution of higher-order differential equations. This article is organized as follows: In Sect. 2, we introduce the basic principles and notations of fractional calculus with Lucas polynomials. In Sect. 3, we demonstrate the formulation of the proposed numerical scheme. Sect. 4 provides stability and error analysis. In Sect. 5, numerical examples are introduced to ensure the accuracy of the presented method, and a conclusion is given in Sect. 6. Basic principles and notations In this section, we will introduce some necessary definitions and notations used to describe the numerical schemes. Caputo fractional derivative There are different ways to define fractional derivatives, and the most commonly used are the Grünwald-Letnikov derivative, the Riemann-Liouville derivative, and the Caputo derivative [33]. Definition 2.1 The Caputo fractional derivative of order α > 0 of a given function f(t) is defined as D^α f(t) = (1/Γ(n − α)) ∫_0^t (t − τ)^{n−α−1} f^{(n)}(τ) dτ, where n is a non-negative integer and n − 1 < α < n. Lucas polynomials The well-known Lucas polynomials of order n, defined on the interval (0, 1), have the explicit form L_n(t) = ((t + √(t² + 4))/2)^n + ((t − √(t² + 4))/2)^n. Also, Lucas polynomials may be generated by the recurrence relation L_0(t) = 2, L_1(t) = t, L_n(t) = t L_{n−1}(t) + L_{n−2}(t), n ≥ 2. (2.3) The Lucas polynomials have the power form L_n(t) = Σ_{j=0}^{⌊n/2⌋} (n/(n − j)) C(n − j, j) t^{n−2j}, n ≥ 1. (2.5) An arbitrary function u(t) can be written in terms of the Lucas polynomials as u(t) = Σ_{k=0}^{∞} c_k L_k(t). (2.6) Applying the fractional differential operator D^α to Eq. (2.6), with the aid of relation (2.5), gives relation (2.7), in which i ≥ α, i ≥ j, and δ_i is a defined constant. The numerical scheme In this section, we use a spectral collocation method together with the well-known finite difference method to approximate the solution of the time-fractional diffusion partial differential equation (3.1) of [19], where ^cD_t^α denotes the Caputo fractional derivative of order α with respect to t, k_1 and k_2 are constant parameters, and h(x), A(t), B(t), and g(x, t) are known functions. To start constructing our approximation scheme for the two-variable function u(x, t), where x ∈ [0, 1] and t ∈ [t_0, T], denote by h = Δx the grid size in the x-direction, with x_i = iΔx for
i = 0, 1, . . ., n_x, where n_x is a positive integer. The function u(x_n, t) and its derivatives at x = x_n are discretized and expanded as u(x_n, t) = Σ_{k=0}^{n_t} c_k^n L_k(t), where L_k(t) denotes the Lucas polynomial of order k and the c_k^n are unknown coefficients to be determined; the first and second spatial derivatives take the corresponding finite difference forms, and the Caputo fractional derivative of order α with respect to t for u(x, t) at x = x_n is obtained as defined in (2.7), together with the initial and boundary conditions. Equation (3.6) and condition (3.7) are collocated together at the times t_i = i/N, i = 1, 2, . . ., n_t, to generate a system of nonlinear equations with (n_x + 1) × (n_t + 1) unknowns; solving them with the Mathematica 9 package, approximate solutions were obtained. Lemma 4.1 If u(x, t) is an infinitely differentiable function at the origin, then u(x_n, t) can be expanded at the positions x_n in terms of Lucas polynomials. Lemma 4.3 The modified Bessel function of the first kind I_μ(t) satisfies a bounding inequality. Lemma 4.4 Let φ = (1 + √5)/2 be the well-known golden ratio; the following inequality for Lucas polynomials holds. Theorem 4.1 The following are satisfied: the series Σ_{i=0}^{∞} c_i^n L_i(t) converges absolutely. Proof After using Lemma 4.2, we obtain a bound which, after the application of Lemma 4.3, proves part one of Theorem 4.1. To prove part two, we consider the series Σ_{i=0}^{∞} c_i^n L_i(t) and Eq. (4.6); using Lemmas 4.3 and 4.4, the series converges absolutely. Proof By Theorem 4.1 and Eq. (4.7), we can write the error series, which can be rewritten in terms of Γ(m + 1) and Γ(m + 1, d_n σ), where Γ(m + 1) and Γ(m + 1, d_n σ) denote the gamma and incomplete gamma functions; the inequality in (4.8) can then be written using Γ(m + 1, z) = ∫_z^∞ x^m e^{−x} dx. Since e^{−t} < 1 for all t > 0, we then obtain the stated bound. Theorem 4.3 The Lucas approximation scheme, when using the finite difference formula (3.6) for discretization in the position variable to solve fractional diffusion partial differential equations, is stable under the condition stated below. Proof To start the error analysis of our method, we apply u into our scheme of Eq. (3.6) and take the difference to get the estimated error e_m. Using Eq. (3.5), beginning with k = m + 1 and equating Lucas polynomial coefficients gives Eq. (4.10); after rearrangement of similar coefficients, Eq. (4.10) becomes Eq. (4.11), where s = η_α(m + 1, m + 1). From Eq. (4.11), and for k_2 > 0, we observe the following: for polynomial boundary conditions of degree less than m, c_{m+1}^0 will vanish. Also, we have c_{m+1}^2 < c_{m+1}^1 and c_{m+1}^3 < c_{m+1}^2; repeating this process leads to the Von Neumann stability condition, which proves the desired condition. Numerical results In this section, we implement the proposed scheme for solving several examples of fractional diffusion partial differential equations and illustrate the performance of the proposed technique. All the numerical experiments are executed under Mathematica 9 running on an Intel (R) Core (TM) i3 CPU @ 3.70 GHz machine. Example 5.1 Consider the fractional diffusion boundary value problem (3.1) of [19] with the data of (5.1). By applying the proposed scheme as in Eq. (3.6), we obtain (5.2) with the corresponding initial and boundary conditions. The exact solution for this case is u(x, t) = e^x t^β, with the error defined as the maximum absolute error E_m^∞. Table 1 shows the maximum absolute errors E_m^∞ for Example 5.1 with β = 6 and three different values of α = 0.5, 0.7, and 0.9 at h = 0.05 for different values of m = n_t and t_j = j/n_t, j = 1, 2,
. . ., m, compared with the well-known shifted Legendre collocation method (SLCM) at the same values of m = n_t and n_x = 4, together with the CPU time needed for each of them. The errors calculated by our Lucas collocation method (LCM) indicate less time and better accuracy with respect to the actual solutions. Meanwhile, raising the value of m to be greater than or equal to β = 6 has no significant effect on the results. Moreover, Fig. 1 displays the approximate solution obtained by the proposed method and the maximum absolute errors, which indicate high accuracy with respect to the exact solution over the whole interval; also, Table 2 shows the maximum absolute errors at m = 8 and different values of h, with the estimated order of convergence. Example 5.2 Consider the diffusion problem (3.1) of [19] with k_2 = 1. Table 1 Lucas and shifted Legendre collocation methods: maximum absolute errors for Example 5.1. The exact solution for this case is u(x, t) = e^x t^β; by applying the proposed scheme as in Eq. (3.6), we obtain the discrete system with the corresponding initial and boundary conditions. Table 3 shows the maximum absolute errors E_m^∞ for Example 5.2 with β = 4 and α = 0.5, 0.7, and 0.9, with m = 8 at different values of h, and Fig. 2 displays the approximate solution and maximum absolute errors; the calculated errors in Table 3 indicate high accuracy. Example 5.3 The exact solution for this case is u(x, t) = t^β sin(x/2); by applying the proposed scheme as in Eq. (3.6), we obtain the discrete system with the corresponding initial and boundary conditions. Example 5.4 The exact solution for this case is u(x, t) = x²(3 − 2x)t^{3+α}; by applying the proposed scheme as in Eq. (3.6), and unlike the previous examples, we change both n_x and n_t to get better accuracy. Figure 4 displays the approximate solution and maximum absolute errors obtained by using the proposed method for Example 5.4 for α = 0.9 and m = n_t = n_x = 16. The errors indicate high accuracy with respect to the actual solution. Also, Table 5 shows the maximum absolute errors for Example 5.4 with several values of m = n_t = n_x; the calculated errors indicate high accuracy with respect to the actual solutions. Figure 5 displays the approximate solution and maximum absolute errors for Example 5.4 for α = 1, m = 6, and n_x = 16; the errors indicate very high accuracy with respect to the actual solution. All errors obtained by our proposed method show better performance than the method of [19]. Table 6 shows the maximum absolute errors for Example 5.5 in three different cases of α_i, i = 1, 2, 3, and Fig. 6 displays the approximate solution and maximum absolute errors. Summary and conclusion In this paper, we construct a novel numerical method to solve fractional diffusion partial differential equations using finite difference schemes together with a spectral collocation method based on Lucas polynomials. Stability and error analyses have been proven, and the results obtained by applying our scheme to different examples indicate high accuracy and convergence for various values of the fractional derivatives. Moreover, we need only a limited number of collocation points to get better accuracy, with excellent CPU times in comparison. Funding Open access funding provided by The Science, Technology & Innovation Funding Authority (STDF) in cooperation with The Egyptian Knowledge Bank (EKB). Theorem 4.2 If u(x_n, t) satisfies Theorem 4.1, with e_m(t) = Σ_{k=m+1}^{∞} c_k^n L_k(t) the estimated global error, then we have the bound of (4.8).
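As a concrete illustration of the Lucas trial basis used in the time direction (Sect. 2), the sketch below builds the basis by recurrence and evaluates it at the collocation points t_i = i/N of Sect. 3. The recurrence L_0(t) = 2, L_1(t) = t, L_n(t) = t L_{n−1}(t) + L_{n−2}(t) is the standard one for Lucas polynomials; since the printed relation (2.3) was lost in extraction, it is stated here as an assumption.

```python
import numpy as np

# Minimal sketch of the Lucas-polynomial trial basis used in the time
# direction. The recurrence below is the standard Lucas recurrence,
# assumed to match the paper's relation (2.3).

def lucas_basis(n_max, t):
    """Return [L_0(t), ..., L_{n_max}(t)] evaluated at the array t."""
    t = np.asarray(t, dtype=float)
    L = [np.full_like(t, 2.0), t.copy()]          # L_0 = 2, L_1 = t
    for n in range(2, n_max + 1):
        L.append(t * L[-1] + L[-2])               # L_n = t L_{n-1} + L_{n-2}
    return L[: n_max + 1]

# Evaluate the basis at the collocation points t_i = i/N of Sect. 3
N = 8
t_col = np.arange(1, N + 1) / N
basis = lucas_basis(N, t_col)
print(np.round(basis[3], 4))   # L_3(t) = t^3 + 3t at the collocation points
```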
Example 5.2 Consider the diffusion problem (3.1) [19] with $k_2 = 1$. The exact solution for this case is $u(x, t) = e^x t^\beta$; by applying the proposed scheme as in Eq. (3.6), we obtain the discrete system together with its initial and boundary conditions. Table 3 shows the maximum absolute errors $E_m^\infty$ for Example 5.2 with $\beta = 4$ and $\alpha = 0.5$, $0.7$ and $0.9$, with $m = 8$ at different values of $h$, and Fig. 2 displays the approximate solution and the maximum absolute errors; the errors in Table 3 again show close agreement with the exact solution.

Example 5.3 The exact solution for this case is $u(x, t) = t^\beta \sin(x/2)$; by applying the proposed scheme as in Eq. (3.6), we obtain the discrete system together with its initial and boundary conditions.

Example 5.4 The exact solution for this case is $u(x, t) = x^2 (3 - 2x)\, t^{3+\alpha}$; applying the proposed scheme as in Eq. (3.6) and, unlike the previous examples, varying both $n_x$ and $n_t$, we obtain better accuracy. Figure 4 displays the approximate solution and maximum absolute errors obtained by the proposed method for Example 5.4 with $\alpha = 0.9$ and $m = n_t = n_x = 16$; the errors indicate high accuracy against the actual solution. Table 5 shows the maximum absolute errors for Example 5.4 for several values of $m = n_t = n_x$; the calculated errors indicate high accuracy against the actual solution. Figure 5 displays the approximate solution and maximum absolute errors for Example 5.4 with $\alpha = 1$, $m = 6$ and $n_x = 16$; the errors indicate very high accuracy against the actual solution. All errors obtained by our proposed method show better performance than the method from [19].

Example 5.5 Table 6 shows the maximum absolute errors for Example 5.5 in three different cases for $\alpha_i$, $i = 1, 2, 3$, and Fig. 6 displays the approximate solution and the maximum absolute errors.

Summary and conclusion

In this paper, we construct a novel numerical method to solve fractional diffusion partial differential equations using finite difference schemes together with a spectral collocation method based on Lucas polynomials. Stability and error analyses have been proven, and the results obtained by applying our scheme to different examples indicate high accuracy and convergence for various values of the fractional derivative. Moreover, only a limited number of collocation points is needed to obtain good accuracy, with favorable CPU times.

Funding Open access funding provided by The Science, Technology & Innovation Funding Authority (STDF) in cooperation with The Egyptian Knowledge Bank (EKB).

Fig. 5 3D graphs of the approximate solution and maximum absolute errors for Example 5.4 at $\alpha = 1$, $m = 6$ and $n_x = 16$
Table 4 Maximum absolute errors for Example 5.3 with $\beta = 2$ and $m = 8$
Table 5 Maximum absolute errors for Example 5.4 with $n_x = n_t$
Table 6 Maximum absolute errors for Example 5.5 with $m$
2023-09-20T15:07:24.039Z
2023-09-18T00:00:00.000
{ "year": 2023, "sha1": "8231b9d9de176269e841b696b5974826a668f402", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s41808-023-00246-4.pdf", "oa_status": "HYBRID", "pdf_src": "Springer", "pdf_hash": "7fbb86d7956ca49154566c5a1ac057943d43f76a", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [] }
70319820
pes2o/s2orc
v3-fos-license
A comparison of latin hypercube sampling techniques for a supply chain network design problem

Currently, supply chain network design is becoming more complex. In designing a supply chain network to withstand changing events, it is necessary to consider the uncertainties and risks that cause network disruptions from unexpected events. Current research on the design problem considers network disruptions using Monte Carlo Sampling (MCS) or Latin Hypercube Sampling (LHS) techniques. Both have the disadvantage that sample points, or disruption locations, are not scattered over the entire sample space, leading to high variation in objective function values. The purpose of this study is to apply a modified LHS, the Improved Distributed Hypercube Sampling (IHS) technique, to reduce this variation. The results show that the IHS technique provides a smaller standard deviation than the LHS technique. In addition, IHS can reduce not only the sample size but also the computational time.

Introduction

Supply chain network design is a critical decision-making process that affects the efficiency of enterprise management, especially in a high-volatility business environment, due to the risk and the uncertainty in suppliers' raw materials. A disaster occurring in one country will affect the global supply chain, resulting in disruptions; for example, the major flood in Thailand in 2011 was one such global supply chain disruption [1]. Therefore, proactive supply chain network design is important for all organizations. It is necessary to consider the risks that occur in the countries where the suppliers, manufacturers, and warehouses are located, and also the ability to recover from disruptions.

As in many supply chain network design studies, we use a two-stage stochastic programming model [2] for supply chain design. Our supply chain network consists of suppliers, manufacturing, warehouses, and retailers, and a single product is considered. The objective function is to maximize profits under disruptions. The two stages involve 1) finding an initial solution by a heuristic method, and then 2) evaluating those solutions from stage 1 under different disruptions by using Monte Carlo Sampling (MCS). In general, MCS techniques are commonly used; however, they require a large sample size to increase the precision of the solution, leading to longer computation times. Latin Hypercube Sampling (LHS) and its variants can reduce the number of samples and use less time to evaluate a solution. In this paper, we compare two types of LHS to improve the efficiency of the two-stage stochastic programming.

Literature review

Since our main focus is on the sampling techniques for stochastic programming, only MCS, LHS, and its variants are discussed. MCS [3] is generated by using random numbers that are independent of each other and uniformly distributed on the interval [0, 1]. MCS is a matrix (M) with dimensions N × S, where N is the number of samples and S is the number of independent variables. McKay et al. (1979) proposed one of the sampling techniques used for computer-aided design; the LHS technique provides a more uniform random sample distribution. LHS is a matrix (L) with dimensions N × S. The design steps are shown below (a sketch follows the list):

Step 1: The matrix P(N, S) consists of random shuffles of the integers ranging from 1 to N.
Step 2: The matrix M(N, S) is generated from random numbers as in the Monte Carlo method, where M(N, S) ∈ U[0, 1], all independent of each other.
Step 3: The matrix L(N, S) is constructed from equation (1), and this matrix is used to simulate the events:

$$L(N, S) = \frac{P(N, S) - M(N, S)}{N} \qquad (1)$$
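As a rough illustration of Steps 1-3, the following NumPy sketch generates an LHS design. The body of equation (1) is not reproduced in the source text, so the standard construction L = (P − M)/N is assumed here; the function name and seed are illustrative only.

```python
import numpy as np

# A minimal sketch of the LHS construction in Steps 1-3, assuming the
# standard form of equation (1), L = (P - M) / N, where each column of P
# is an independent random permutation of 1..N and M ~ U[0, 1].

def latin_hypercube(n_samples, n_vars, rng=None):
    rng = np.random.default_rng(rng)
    # Step 1: each column of P is a random shuffle of the integers 1..N.
    P = np.column_stack([rng.permutation(n_samples) + 1 for _ in range(n_vars)])
    # Step 2: M holds independent U[0, 1] draws (plain Monte Carlo).
    M = rng.uniform(size=(n_samples, n_vars))
    # Step 3: equation (1) places one point in each of the N strata per variable.
    return (P - M) / n_samples

L = latin_hypercube(40, 2, rng=0)   # N = 40 samples, S = 2 variables
print(L.min(), L.max())             # all points lie strictly inside (0, 1)
```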
Brian K. B. (2002) [4,5] developed Improved Distributed Hypercube Sampling (IHS) from the LHS technique; it adds a condition on the spacing of points by computing the Euclidean distances between them. The design process of the IHS technique is similar to that of LHS; it differs in constructing the P(N, S) matrix using the distances between the points, as described in Section 3.3.2.

Supply chain network design

In this paper, we consider a network with four echelons, consisting of suppliers, manufacturing plants, warehouses, and retailers, as shown in Figure 1. The locations of the suppliers, plants, and warehouses depend on the probability of disruptions within the supply chain network; the probability differs for each location of suppliers, manufacturing, and warehouses. The goal is to decide the locations of the suppliers and warehouses while maximizing profit. The parameters include the probability that a disruption occurs in scenario k.

First-stage decision variables:
- QSMsmk: quantity of raw material purchased from supplier s by plant m in scenario k
- QMWmwk: quantity of products shipped from plant m to warehouse w in scenario k
- QWCwck: quantity of products shipped from warehouse w to retailer c in scenario k
- LDck: quantity of sales lost at retailer c in scenario k

Objective function

The objective is to maximize the expected supply chain profit (Z), which is the difference between the expected revenue and the total cost.

Solution Methodology

We use two-stage stochastic programming, composed of two stages: 1) a Simulated Annealing (SA) algorithm is used to find the initial solution; then 2) disruptions are generated to evaluate the solution's objective function. In the first stage, we use SA to find the solution in two steps. The first step is to set the model's initial parameters. The second step is to find the initial solution, which comprises the fixed cost of the suppliers' locations, the warehouses' locations, and the sizes of the warehouses. In the second stage, we generate disruptive scenarios by using the two types of LHS techniques. We generate a sample-average problem instead of evaluating all possible events; this technique is called sample average approximation (SAA) [3]. Then we find the value of the objective function, which is the mean of the maximum profit obtained under the disruptive scenarios, using the CPLEX program. The structure of the solution procedure is shown in Figure 2.

Simulated Annealing

The Simulated Annealing method [7] is used to find a near-optimal solution.

Disruptive Scenarios

In this study, we use IHS to generate disruptive scenarios to ensure that disruptive scenarios occur in each input variable; we expect to obtain solutions that have a smaller standard deviation than those of the LHS technique. The Euclidean distance used in IHS is calculated from equation (2) and should be close or equal to $d_{opt}$, which can be calculated from equation (3):

$$d(x_i, x_j) = \sqrt{\sum_{k=1}^{S} (x_{ik} - x_{jk})^2}, \quad i \neq j \qquad (2)$$

where $x_{ik}$ is the integer at scenario i of variable k, $x_{jk}$ is the integer at scenario j of variable k, $d(x_i, x_j)$ is the distance between scenarios i and j (i ≠ j), and $d_{opt}$ is the optimal distance. This technique has a distribution of sample points covering the sample space and a low coefficient of variation. It is one of the most popular sampling techniques used to improve the performance of a solution. This research utilizes the hypercube-distribution sampling technique to randomly sample the disruptions so that the random events can represent all events; a sketch of the distance computation follows below.
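The distance bookkeeping behind IHS can be sketched as follows. Equation (3) for d_opt is not reproduced in the source text, so d_opt is treated here as a given input rather than recomputed, and the candidate strata are made-up values.

```python
import numpy as np

# A sketch of the IHS distance bookkeeping: equation (2) is the ordinary
# Euclidean distance between scenario points, and candidates are scored by
# how close their nearest-neighbour distance falls to d_opt (equation (3),
# treated as a given input here).

def nearest_distance(point, others):
    """Equation (2): Euclidean distance from `point` to its nearest neighbour."""
    diffs = np.asarray(others, dtype=float) - np.asarray(point, dtype=float)
    return np.sqrt((diffs ** 2).sum(axis=1)).min()

def best_candidate(candidates, chosen, d_opt):
    """Pick the candidate whose nearest-neighbour distance is closest to d_opt."""
    scores = [abs(nearest_distance(c, chosen) - d_opt) for c in candidates]
    return candidates[int(np.argmin(scores))]

chosen = [(1, 5), (3, 2)]               # integer strata already placed in P
candidates = [(2, 4), (5, 5), (4, 1)]   # random unused strata (matrix A)
print(best_candidate(candidates, chosen, d_opt=3.0))
```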
IHS is a matrix (I) with dimensions N × S, where N is the number of samples and S is the number of independent variables. The design steps are as follows.

Step 1: Define the initial parameters (N, S, D), where D is the number of candidate sets of numbers generated.
Step 2: Create the final matrix P(N, S) following the design approach below.
Step 3: Set the initial parameter r := N − 1.
Step 4: While the stopping condition (r ≤ 2) has not been reached, complete the following steps for the answer matrix:
1. Create a matrix A(c, S) with c = r(D − 1) + 1, ..., rD and d = 1, ..., D; the random integers range from 1 to N and are not already chosen in the matrix P(N, S).
3. Place the integer from matrix A for which l, the gap between the nearest-point distance and d_opt from equation (3), is smallest, and put it in the matrix P(r, S).
Step 5: Take an integer value from 1 to N not already chosen in the final matrix P(N, S) and put it in the matrix P(1, S).
Step 6: Create the matrix I(N, S) from equation (1).

Then, compare the distribution of random numbers obtained from the LHS and IHS techniques, as shown in Figures 3 and 4. Figure 3 shows a scatter plot for the LHS technique with a sample size (N) of 40 and two input variables (S). When the sample space is divided into a 5 × 5 grid, each row and each column contains eight points, which shows that the sample points are uniformly distributed across rows and columns and that each input variable covers all portions of its range. However, the sample points can still lie adjacent to one another rather than spreading over the sample space, because the design imposes no condition on the distance between sample points. Figure 4, in contrast, shows that the sample points are spread over the whole sample area. In this paper, the samples generated by the LHS and IHS methods are used as the sets of disruptive scenarios in the evaluation of the objective function.

Result and discussions

We compared the two sampling techniques, LHS and IHS, which are used to randomize disruption events in the supply chain network design problem. The sample size (N) was varied over 20, 50, and 100. Each sample size was run 10 times, and the average and standard deviation were calculated, as shown in Table 1. Based on the comparison of the standard deviations, IHS outperforms LHS: it reduces the standard deviation by 22.45%, 15.54%, and 21.99% for the three sample sizes, respectively. Furthermore, a smaller sample size with IHS provides a similar or lower standard deviation than a larger sample size with LHS.

Conclusion

This study presents a comparison of two sampling techniques, Latin Hypercube Sampling and Improved Distributed Hypercube Sampling. The IHS is used with two-stage stochastic programming to solve a supply chain network design problem. IHS provides a lower standard deviation, a smaller sample size, and better computational efficiency. In addition, a further modification of LHS, Optimal Latin Hypercube Sampling, will be studied to improve computational efficiency.
2019-02-19T14:06:42.680Z
2018-01-01T00:00:00.000
{ "year": 2018, "sha1": "211802dad6681a013d4d714e82fd6e7c5905b797", "oa_license": "CCBY", "oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2018/51/matecconf_iceast2018_01023.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "5126428844f96063c6d65d2a7b6434e53ba4d1ed", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Computer Science" ] }
23642973
pes2o/s2orc
v3-fos-license
Using Jazz as a Metaphor to Teach Improvisational Communication Skills

Metaphor helps humans understand complex concepts by "mapping" them onto accessible concepts. The purpose of this study was to investigate the effects of using jazz as a metaphor to teach senior medical students improvisational communication skills, and to understand student learning experiences. The authors designed a month-long course that used jazz to teach improvisational communication. A sample of fourth-year medical students (N = 30) completed the course between 2011 and 2014. Evaluation consisted of quantitative and qualitative data collected pre- and post-course, with comparison to a concurrent control group on some measures. Measures included: (a) student self-reports of knowledge and ability performing communicative tasks; (b) blinded standardized patient assessment of students' adaptability and quality of listening; and (c) qualitative course evaluation data and open-ended interviews with course students. Compared to control students, course students demonstrated statistically significant and educationally meaningful gains in adaptability and listening behaviors. Students' course experiences suggested that the jazz components led to high engagement and creativity, and provided a model to guide application of improvisational concepts to their own communication behaviors. Metaphor proved to be a powerful tool in this study, partly through enabling increased reflection and decreased resistance to behaviors that, on the surface, tended to run counter to generally accepted norms. The use of jazz as a metaphor to teach improvisational communication warrants further refinement and investigation.

Introduction

Conceptual metaphor is a linguistic device that helps humans understand and communicate complex concepts by mapping them onto well-known or concrete concepts [1]. Metaphor is powerful because it forms a bridge between the abstract and the concrete, using images that are culturally accessible. For the past several years, we have been exploring connections between jazz performance and patient-physician encounters [2], using jazz as a metaphor to explore the improvisational aspects of medical communication. These explorations led us to develop an elective course for fourth-year medical students aimed at fostering students' improvisational medical communication skills. In this study, we sought to investigate the effects of using jazz to teach communication skills, and to understand the learning processes that students experienced.

Conceptual Model

The conceptual model for our course (Figure 1) is based on frameworks that use the arts to teach various topics in medical education [3][4][5][6]. We selected jazz as the art because of its focus on improvisation. While many fields discuss improvisation as a central concept, the improvisational part of jazz is well aligned with human conversation. As Ingrid Monson (building on the work of Paul Berliner) has noted, jazz musicians often describe "jazz as a musical language, improvisation as musical conversation, and good improvisation as talking or 'saying something'" [7,8]. Figure 1 indicates some characteristics of communication in the realms of jazz and medicine. Our strategy in each of the course sessions was to "pull" students from the realm of medicine into the realm of jazz (through guided listening and reflection exercises), thus exploring course communication concepts within the realm of jazz as a first step.
The second step was to engage learners in exercises designed to help them translate their understandings of these concepts back into the medicine realm in a way that they would find relevant, meaningful, and useful to their medical practice. Through repeating cycles of this process, the course itself became improvisational, with teacher and learners engaging in a series of unfolding conversations characterized by back and forth sharing of meaning, insight, and discovery [9]. For the purposes of our course, we defined learning as a substantive change in behaviors or attitudes, measured before and after the course, that would relate to the patterns of communication by course participants.

We made several assumptions about our learners. First, we assumed that many of our learners would have had very little exposure to jazz. Second, since our population of learners consisted of fourth-year medical students, we assumed that, at the outset of the course, they would already have established their own medical communication habits, and might resist adopting nuanced and advanced levels of skill in domains where they already felt competent. We therefore designed our activities to use the jazz metaphor to foster student exploration of course concepts in unfamiliar (i.e., jazz music) conceptual territory. We hypothesized that, since many of the students would not have had substantive exposure to jazz prior to the course, the process of immersing first in jazz would help to minimize preconceived notions about communication that might act as barriers to students adopting new behaviors. In addition, we wanted to expose students to situations that are nonlinear and emergent, requiring listening, inductive thinking, and complex adaptive decision making [10]. While these characteristics could describe medicine as well as jazz [11][12][13], much of the literature on medicine's culture speaks to the contrary.
This literature suggests that, by the fourth year, students have been acculturated into a hierarchical environment wherein "command and control" decision-making is the norm, and many adopt the belief that medicine is characterized by linear, cause-and-effect problems best solved only by algorithmic and deductive thinking [14]. Assuming that at least some of our students would espouse such beliefs, we purposely designed our course to guide students through repeated cycles of first exploring particular communication concepts within the metaphor of jazz, followed by guided activities to translate each concept back into medicine.

Course Design

The "Jazz and the Art of Medicine" course is four weeks in duration, and includes 12 h of in-class or simulation activities (3 h each week) and 8 h of clinical practice. In addition, students complete one 90-min writing assignment per week. Each week of the course is devoted to one improvisational communication topic. The topics include:

• Balancing communicative structure with communicative freedom when talking with patients
• Listening for deep meanings in patients' communications
• Developing one's own authentic "voice" as a communicator [15]
• Effectively using space (including communicative, physical, psychological, and topical) in the medical encounter [16]

A detailed example of the third session (developing one's "voice") appears in Appendix A. All course teaching was done by one of the authors (PH), who has a background in medical education, communication skills training, and jazz, specifically consisting of work as a physician and patient-physician communication researcher, jazz radio station disc jockey and program director (WPSU FM 91.5, 1985-1987), and as a current member of the board of directors of the Central Pennsylvania Friends of Jazz (www.friendsofjazz.org). During the weekly class sessions, students first participated in a series of guided jazz music listening exercises and discussions. Our selection of jazz pieces for the course was mainly driven by their salience for discovery about the topic of the session. For example, during the "voice" session, as demonstrated in Appendix A, we chose different versions of the same song by different artists to foster learner exploration of how the conversation and the meaning of what is being said is influenced by the persona of the conversational participants. In this particular example, comparing and contrasting singers Sarah Vaughn and Billie Holiday, and pianists Ahmad Jamal and Bill Evans, served this purpose well. For the entire course, we used a variety of selections spanning traditional jazz to jazz fusion. After exploring concepts within the jazz realm, students translated insights and ideas about each communication concept from jazz to medical practice, using a trigger video of a medical interview and a series of questions for reflection and discussion. After each weekly class session, students spent 2-3 h participating in the care of patients in order to have an opportunity to apply the new communication concept. We secured placements in outpatient clinics in the specialty that each student intended to pursue, and instructed students to practice the concept and explore how it applied or could be applied within their chosen specialty.
We instructed clinical preceptors in these settings to assign students to provide direct supervised patient care, so that students would experience the flow and pace of the typical work environment, while also applying the communication concepts [17]. Finally, we gave students a weekly reflective writing assignment that synthesized the multiple learning experiences (jazz, medical translation, clinical practice) into a plan for ongoing communicative practice and personal development. In addition to the classroom and clinical activities, each student interviewed a standardized patient (SP) pre-and post-course. We audio recorded the SP encounter at each time point, provided students with their own recordings, and prompted students to review the recording from the pre-course session as they worked on their weekly reflective writing assignments. The case we used for the standardized patient has been previously described [18], and presents an advanced cross-cultural communication challenge, with the actor portraying communicative clues that signify important contextual history. She divulges information only if the student recognizes and specifically explores the clues. This case, similar to others that have been described [19], is based on the notion that practicing physicians often commit contextual errors by missing key patient-centered information, ignoring patient clues, and using a high control style during the medical encounter [20][21][22][23][24]. Since the improvisational concepts we taught were aimed at fostering communicative adaptability and advanced listening abilities, we hypothesized that, if the course were successful, students would improve their performance from the first to the second time point. Evaluation Design All aspects of our evaluation design were approved by the Penn State College of Medicine Institutional Review Board. We focused our evaluation strategy on changes in students' knowledge, attitudes, and behaviors, and collected three types of data related to these learning outcomes. First, we distributed a survey to students at the beginning and end of the course that included student self-assessments of knowledge related to communication skills and ability in performing tasks related to the overall objectives of the course (see Appendix B). The survey also included the Patient Practitioner Orientation Scale (PPOS) [25], the Mindful Attention Awareness Scale (MAAS) [26], and self-ratings of communication confidence based on items from the Harvard Medical School Communication Skills Form [27]. These scales measure attitudes toward patient-centered care (PPOS), mindful practice (MAAS), and communicative tasks (Harvard Communication Skills Form) related to essential communication elements described in the Kalamazoo Consensus Statement [28]. Second, we measured patient-perceived communication outcomes based on each student's behaviors with the standardized patient at the beginning and end of the course. For each student, the SP completed a survey immediately after the interview that included measures of: (a) the degree to which she felt listened to; and (b) the degree of adaptability which the student demonstrated toward her communication and narrative during the medical interview. The adaptability items appear in Appendix B and the listening items have been previously published [29]. 
For the standardized patient portion of our evaluation, we also recruited a control group consisting of 10 fourth-year medical students with backgrounds similar to those that completed the month-long course. We informed the control group that they were participating in a study of medical student communication, and asked them to interview the SP twice (an initial interview and a second interview one month later) under identical conditions to those of the course students. We audio recorded the interviews and made these recordings available to control students in the month between their two interviews, but provided no further instruction. The standardized patient completed the same survey immediately after control students' interviews. We did not inform the SP about the course or control status of the students.

We compared quantitative student survey data at the beginning and end of the course using the Wilcoxon signed-rank test because the data were not normally distributed and the sample size was small. For the standardized patient data, we compared mean scores between course and control students at each time point (baseline and one month) using two-sample t-tests when the data were normally distributed and the Wilcoxon-Mann-Whitney test when the data were not normally distributed. Similarly, to compare students' performance across the two time points, we used either paired t-tests or the Wilcoxon signed-rank test. We used Cohen's D to assess effect sizes.
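As an illustration of these comparisons, the sketch below applies the paired Wilcoxon signed-rank test and Cohen's d to placeholder ratings rather than the study data. The paper does not state which variant of Cohen's d was used, so the pooled-SD convention here is an assumption.

```python
import numpy as np
from scipy import stats

# A sketch of the quantitative pre/post comparison described above, with
# made-up ratings in place of the study data. Cohen's d is computed with
# the pooled-SD convention (an assumption, not stated in the paper).

rng = np.random.default_rng(0)
pre = rng.integers(3, 6, size=30).astype(float)   # hypothetical pre-course ratings
post = pre + rng.integers(0, 3, size=30)          # hypothetical post-course ratings

# Paired, non-parametric pre/post comparison within the course group.
w_stat, p_value = stats.wilcoxon(pre, post)

def cohens_d(a, b):
    """Effect size as mean difference over the pooled standard deviation."""
    pooled = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return (b.mean() - a.mean()) / pooled

print(f"Wilcoxon p = {p_value:.4f}, Cohen's d = {cohens_d(pre, post):.2f}")
```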
Finally, since the course was part of a humanities selective graduation requirement for Penn State students, all students completed a standardized course evaluation administered by the Penn State Department of Humanities. We collected qualitative comments from these evaluations, and augmented these data with one-hour individual semi-structured interviews with six of the eight students in the 2011 cohort. Interviews were conducted by an educational researcher (JJ), who was not directly involved in teaching course sessions and who had not had prior contact with students. In an effort to understand learners' perspectives, the interviews probed experiences during the various components of the course, and effects of various course activities on perceptions and motivation regarding medical communication. We approached the qualitative data by performing an analysis of student qualitative evaluation comments and transcripts of student interviews through close reading and discussion between two of the investigators (PH and JJ). These investigators used a narrative framework to approach the data, focusing on learner stories and their meanings through an analysis of storied elements such as character, setting, plot, and agency [30]. This analytic dyad was balanced by participation of the course teacher (PH) and an independent educational researcher (JJ). Both were careful to examine their own assumptions as the analysis unfolded. In an effort to check the conclusions drawn, a third investigator not involved in teaching or collection of data, but who is versed in qualitative analysis (HS), reviewed the data, codes, and conclusions to corroborate the content.

Results

Thirty fourth-year students in four yearly cohorts (2011, eight students; 2012, seven students; 2013, six students; and 2014, nine students) completed the course. Sixteen students were female. The specialties that students planned to pursue included anesthesia (two students), emergency medicine (one student), family and community medicine (four students), internal medicine (five students), neurology (two students), obstetrics and gynecology (five students), otolaryngology (one student), pathology (one student), pediatrics (three students), psychiatry (one student), radiology (two students), and surgery (three students). Ten fourth-year students participated in the control group and completed the two standardized patient interviews; six of these students were female. Specialties that control students planned to pursue included dermatology (one student), emergency medicine (two students), internal medicine (one student), obstetrics and gynecology (two students), pediatrics (three students) and surgery (one student). We performed preliminary analyses on the data from the first eight course students and ten control students in 2011. Since the results of those analyses did not differ substantively from those for the entire cohort, we report results from combined data across all four years of course students.

Results of the course student survey appear in Table 1. As shown in the table, student self-assessments of knowledge improved on all four global knowledge items. In addition, student self-assessments improved on a composite rating of seven abilities related to the objectives of the course. Student attitudes toward patient-centered care and mindful practice did not change over the period of the course. Finally, students' ratings of confidence in completing essential communication tasks improved over the period of the course. Standardized patient outcomes appear in Figure 2. All statistically significant outcomes are indicated in the figure. The course group demonstrated significant gains from pre- to post-course in both adaptability and quality of listening. While there were no significant differences between the course and control groups on the pre-course measures, the course group scored higher on the post-course adaptability evaluation, and gained significantly more than the control group on the listening evaluation. Cohen's D scores indicated large effect sizes (d > 0.8) for all statistically significant comparisons.

We focused our qualitative data collection and analyses on probing students' course experiences, specifically in relation to our conceptual model. Based on our analysis, four important themes emerged. First, students described the course as an engaging classroom experience, wherein they were actively involved and invested in the course content and activities:

• "We were able to incorporate the musical concepts with the patient concepts, and so it . . . "

Second, students indicated that, in contrast to previous didactic classroom experiences focused on communication, the use of jazz provided a fresh approach to learning, facilitating new and creative ways to communicate with patients:

• "Because I'm not [familiar with] jazz, I had to think differently from the beginning. I had to think outside the box. My brain was being used in ways I wasn't used to, and that made it easier to learn concepts about communication, whereas if this was in the standard classroom, no music, no talking, and even [just] a standardized patient, I don't think I would have been as open and ready to try new things as much as I was."
• " . . . for 3 1/2 years we're taught a very structured technique of talking to patients. And so to do something different . . . to communicate it in a different way has been interesting."
• "I definitely gained a whole new perspective on the music of jazz and also I think the art of communication . . . there were similarities and things that we could learn from the music and then about ourselves and what we were doing as far as our communication skills."

Third, in addition to helping students to approach communication differently, participants suggested that the jazz metaphor also provided a model to guide their understanding of communication concepts:

• "I think it's one thing to just be told that this is what you are supposed to do, but another thing to hear the music and see these musicians who are doing the same thing in their form of communication and to be able to use that as a model for us in terms of the communication that we need with the work we do."
• "By using jazz, it was a great model to . . . help us understand communication in a way that is relevant and in a way that almost all of us can relate to."
• "It probably makes the concept stick a little bit better because you have a visual, or an audio in this case . . . even if you forget the concepts, you can always think back to that and remember, 'Oh yeah, in jazz they do this . . . '"

Finally, students suggested that they became increasingly aware of their own agendas as they interacted with patients. A key recognition was that communication checklists taught in various history-taking courses often dominate their interviews and leave little space for patients to tell their stories:

• "I summarized the course for myself and said, 'step away from your notes and your list of questions and your list of fill-in-the-blanks and just have a conversation.'"
• "Being a fourth-year, you think you know it all at this stage and by the time we went through the course, I was like: 'I know nothing about communication' . . . I need to revamp the way I talk to patients and how I gather information from patients."
• "It takes more of a mental effort to sort out what the patient is saying. It also means that you're not really in control anymore, the patient is in control. And that shift mentally for a medical student or for a doctor is pretty challenging, because you want to be in control. You want to be the doctor. But, to communicate effectively . . . it's not the right thing to do."
While we were pleased that students in this course demonstrated self-reported gains in knowledge and communicative skills as well as substantive improvements in standardized patient-assessed performance compared to controls, we believe that our qualitative data provide clues to two important events that represent a new direction for communications training programs. First, students repeatedly talked about having lowered resistance to trying out new communicative strategies and adjusting previously used strategies. In order to develop narrative competence [37], physicians need to develop a skill set that often runs counter to generally accepted ways of conducting the interview. Such skills include the ability to collect information in a non-linear fashion, and to share control with the patient over communicative processes, in effect "co-constructing" a history with the patient rather than "taking" it from them [38]. However, many students and practicing physicians may have an inclination to dismiss such notions as unrealistic under the pace and time pressures of real-world practice. The value of spending time exploring such communication behaviors in the jazz realm lies in jazz's foreignness compared to medical practice. Since most students were not very familiar with jazz at the outset of the course, they did not have many preconceived notions about what is and is not realistic within jazz, so were able to take the course's communication concepts more at face value than immediately dismissing them without due consideration. By first starting class sessions with discussions about jazz, students may have been able to develop a different understanding of each of the four communication concepts of the course, and this different understanding may have primed them to more seriously explore how each concept would operate within their own specialty and medicine in general. Second, we propose that the central jazz theme of improvisation provided an overall umbrella to guide integrating multiple individual communicative acts into students' ongoing behaviors. For example, during the session on communicative space, students explored several distinct communicative skills, including using silence, pacing, communicative latencies (the time between turns at talk), and open-ended questioning, all aligned toward having a good improvisation process with patients. This idea of "good improvisation" was clarified by the jazz listening exercises, and provided a concrete frame in which to explore and practice individual communicative skills, such as asking open-ended questions. When students subsequently attended clinical sessions as part of the course, they did so not only with specific behaviors to try, but also with a vision of the kind of harmonious improvisation that those behaviors were intended to create, and they were prepared to experiment and start figuring out for themselves how such behaviors would fit together in achieving this aim. We believe this study suggests a new way of thinking for medical educators who have often approached teaching communication to future physicians with lists of best practices, key phrases to memorize and use, and various questioning techniques. 
While using the arts to teach medical topics is not new, this study suggests that medical students may respond positively to jazz concepts as a way to understand and use, in practice, communicative presence, adaptability, and engagement with patients, particularly in response to intentional pedagogical strategies employed to maximize the learning both within the jazz and medical realms. As promising as our results are, however, this study raises many questions about implementation that may influence the subsequent learning impact, and these need further study. For example, it is unlikely that students automatically make connections between the art and their own medical practice; how can these connections be enhanced by the educator? What pedagogies and strategies can teachers use to maximize the effectiveness of translation from the art to the bedside? We have proposed a set of strategies in our curriculum, but other approaches may be equally or more effective; what are such techniques, and can they engage students who have no or only a passing interest in jazz? Can the lessons learned in this study be broadly applied to the use of the arts in general education [6]?

Our study has several limitations. First, even though we employed a control group for the standardized patient evaluation, it represents a single study at a single school. Penn State is the first medical school in the US to establish a Department of Humanities and has a reputation for teaching Medical Humanities. Its students may therefore be somewhat unique, and many cite the Humanities presence as a factor in choosing to attend this school. It would be illuminating to study the course at additional schools. Second, the outcomes we studied represent immediate changes in behaviors and attitudes, and may not reflect long-term changes in the participants' medical practice. Additional study is needed to assess the downstream effects of this intervention.

Conclusions

In conclusion, our experience with this course suggests that using jazz as a metaphor can be a powerful tool in fostering patient-centered communication. This power is derived partly from an ability to suspend resistance to behaviors that may run counter to generally accepted norms [39,40]. We believe that the use of art as metaphor in general, and jazz to teach improvisational communication in particular, warrants further refinement and investigation.

Appendix A

A. Introduction

In this session, we will explore elements of "voice". This session is a bit abstract, and while people often use the word "style" interchangeably with "voice", style is only one aspect of one's voice. Without specifically defining voice, the session is designed to move through some listening and viewing exercises aimed at thinking about various aspects of voice, allowing learners to formulate their own conceptual definition of the concept. The objectives for the session are as follows. By the end of this session, learners should be able to:

1) Articulate key communication elements that lead to an impression of style;
2) Understand how others articulate elements, and keep or modify one's own impressions as a result;
3) Articulate one's own internal preferences and worldview that shapes their voice;
4) Plan to try out new communication strategies.

Without specifically defining voice, here is a bit of background information: If one wants to become a jazz musician, they must first learn the fundamentals.
They must have a comprehensive working knowledge of all the scales (e.g., C major, A minor, B-flat major, etc.) and all the songs (called "jazz standards"; there are anywhere between 250 and 500 of them) that are played in jazz. They must be able to play any song in any key at the drop of a hat. They must have complete mastery of their instrument; they must know their instrument so well that they can just think of a phrase and their fingers will automatically play it without them having to think about the fingering, where to pluck the string, how hard to blow, etc. However, if this is all one does, they will never be a great jazz musician. There are plenty of musicians who have mastery of their instruments. There are even some who know all of the jazz scales and songs. The truly good and great jazz musicians have all gone one step further: they have developed their VOICE on their instrument. This is more than style; I think of it as more like channeling their own personal vision, conception, ideas, and personhood through their instrument. A serious jazz fan can tell the difference between John Coltrane and Sonny Rollins (two great tenor saxophonists in the '50s and '60s) within 3 notes, and can identify them amid the hundreds of saxophonists who played in that era. That's because Coltrane and Rollins had fully developed and distinctive voices.

B. Jazz listening exercises

First Exercise: Play for the students two versions of the song "They Can't Take That Away From Me" by Sarah Vaughn and Billie Holiday (Sarah Vaughn from the 1957 album "Swingin", and Billie Holiday from the 1957 album "Songs for Distingue Lovers"). These two versions were released in the same year, are done in the same key, and are played at the same speed. Also provide students with a sheet with the lyrics to the song. Have students take a quick look at the lyrics, then listen first to the Sarah Vaughn version, and next to the Billie Holiday version. Have students discuss the following prompts in groups of 2-4. 3) The final question is the most abstract, but most important, question: Which one of these singers would you most like to "be like" as a physician? Spend some time pondering this one, and try to think "out of the box" about this. After the students have discussed the prompts, have the groups write their answer to the final prompt (Vaughn or Holiday) on a sheet of cardstock, then hold it up, so that all groups can see which singer the other groups picked. Facilitate a discussion among all of the groups about what led them to choose the singer that they chose.

Second Exercise: Repeat the same process, this time using two piano players. Since there are no words, this means that students will have to listen to and derive meaning from the other aspects of language, such as the sounds, arrangements of notes, speed, or whatever aspects of the music their minds gravitate toward. (This is analogous to the nonverbal and paraverbal (i.e., the sounds) language used between doctors and patients.) Play for the students two versions of the song "Emily", originally written by Johnny Mandel and Johnny Mercer. (The first version is by Ahmad Jamal from the 1968 album "Tranquility", and the second is by Bill Evans from the 1967 album "California Here I Come".) Invite students to focus specifically on the musical "voice" of each pianist as they listen to the solos during each track.
As with Sarah Vaughn and Billie Holiday, have students discuss the following prompts in small groups of 2-4, followed by a large group discussion of the second prompt:

1) List 3 defining characteristics of each instrumentalist's "voice" as you listen. Based on what you hear, speculate on what you think these three musicians "are like" as people. If you met them on the street, what would your first impressions of them be? Why? What did you hear that led you to these speculations?
2) Now, once again: Which one of these instrumentalists would you most like to "be like" as a physician? Spend some time pondering this one, and try to think "out of the box" about this. Be prepared to describe why you chose the instrumentalist that you did.

C. Patient-Physician Communication Exercise

Show the students a video of a doctor-patient encounter (the video we use can be provided by contacting Paul Haidet at phaidet@pennstatehealth.psu.edu). Direct students to pay particular attention to the doctor's voice as they watch. Discussion prompts (students discuss in groups of 2-4, followed by facilitated classroom discussion):

1) In 3 words or less, describe this doctor's voice. Explain what language or communicative behaviors you saw and heard that led you to choose the words you did.
2) Would you consider this doctor to be a "patient centered" doctor? Why or why not? What did you see and hear that shaped your impressions?
3) Which of the two singers was this doctor's voice most similar to? Explain your choice.
4) Which of the three instrumentalists was this doctor's voice most similar to? Explain your choice.

This completes the in-classroom portion for week #3. After the in-class session, students attend a 4-hour clinic session (within their chosen specialty), working to take care of patients. Within two days of completing the clinic session, each student should complete and submit a writing assignment in response to the following prompts:

D. Writing Prompts (post-clinic)

As you write, try to keep focused on your core being (e.g., who ARE you?), and the communicative elements that go with that core being:

1) Describe your "voice." What are its distinguishing characteristics right now, and what distinguishing characteristics do you want to develop as you move forward in your career?
2) What three words would you want patients to use to describe YOUR voice?
3) How will you know if your patients will perceive you that way?
4) Extra Credit: With what types of patient "voices" will you have difficulty maintaining your own voice? Why?

Appendix B

A. Pre- and Post-Course Student Survey Items

a. Knowledge items (7-point response choice for each item, anchors applied to the lowest and highest response choices)
i. Rate your knowledge of jazz ("little or no knowledge"-"comprehensive knowledge")
ii. Rate your enjoyment of jazz ("little enjoyment"-"enjoy jazz a great deal")
iii. Rate your understanding of improvisation ("little or no understanding"-"comprehensive understanding")
iv. Rate your understanding of patient-physician communication processes ("little or no understanding"-"comprehensive understanding")

b. Self-assessment of abilities related to course objectives (students instructed to rate their level of confidence in being able to do each behavior right now using a 7-point response scale anchored by "not confident at all" and "completely confident")
i. Adapting when interacting with patients
ii. Using my own personal style when interacting with patients
iii. Giving patients space to talk while also managing time effectively
iv. Paraphrasing what I have heard patients say in a way that feels natural and unforced
v. Understanding patients' unique perspectives
vi. Being perceived by patients as a "good listener"

B. Standardized Patient Adaptability Assessment Items (6-point Likert response scale for each item)
a. The student modified his/her pace of speech to be more similar to mine.
b. The student modified his/her tone to be more similar to mine.
c. When I offered my beliefs about my symptoms, the student responded in a non-judgmental way.
d. When I verbally indicated uncertainty/confusion, the student clarified or explained in language I could understand.
e. When I non-verbally indicated uncertainty/confusion, the student followed up with a question or probe.
f. This student was able to understand the impact of my life circumstances on my illness.
g. Overall, this student was skilled at recognizing my non-verbal cues (body language, facial expressions, eye contact, fidgeting).
h. Overall, this student was skilled at recognizing my verbal cues (word choice, tone of voice, use of pauses and emphasis).
i. Generally, during the encounter the student seemed open to my input.
j. Generally, during the encounter the student was able to include my input in our discussion.
2017-12-14T21:02:37.670Z
2017-08-04T00:00:00.000
{ "year": 2017, "sha1": "e41cbc3624d8b72fb383bc1140fc0b590329290b", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2227-9032/5/3/41/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c4e61e5f8a16ff8c1c51b4caa918c3ca3a554f7b", "s2fieldsofstudy": [ "Education", "Psychology" ], "extfieldsofstudy": [ "Psychology", "Medicine" ] }
201963663
pes2o/s2orc
v3-fos-license
Homicide in Denmark 1992-2016

We present the findings for homicides in Denmark for 1992-2016. There were 1417 homicide victims (62.2% males, 37.8% females) that were killed in 1321 homicide events. The most common methods were sharp force trauma (33.2%), gunshot (22.2%), blunt force trauma (21.9%) and asphyxia (17.6%), and all methods exhibited a reduction during the study period. The homicide rate was 1.05 per 100,000: 1.32 per 100,000 for males and 0.78 per 100,000 for females. Domestic homicides were the largest main group of homicides (76.5% of all female victims vs. 23.6% of male victims). Of the non-domestic homicides, 84.2% of victims were male, the largest group being in the setting of nightlife and/or intoxication. Most female victims (76.9%) were killed by someone in their family, while the largest share of male victims (34.5%) were killed by a friend or acquaintance. The offenders were males in 87.9% of all homicides.

Introduction

Interpersonal violence has wide public attention and claims many lives every year worldwide [1]. Homicide as a medical manner of death can be identified as deaths from intentional trauma inflicted by another person and includes murder, aggravated assault and, in some countries (including Denmark), legal intervention [2,3]. The homicide rate in a country is an indicator of the level of interpersonal violence, but the association varies between countries and through time [4]. The variation originates from factors such as the weapons used, access to medical facilities and trauma centers, as well as more general changes in society. The homicide rates in the Western World increased from the late 1960s until the early 1990s, when they started decreasing [4]. To understand the changes, it is necessary to look at homicides on a more detailed level than just the rates.

The Danish Register of Causes of Death is based on death certificates [5] and only publishes broad annual data for homicides, i.e., the number of homicides broken down by sex, age group and region, with restrictions on how small groups can be reported. Data on the homicide methods for recent years are not readily available to the public but can be acquired for a fee. As all homicide victims in Denmark are by law required to undergo medicolegal autopsy, the autopsy reports are a valid alternative data source for homicide studies, allowing for more information to be collected than from a death certificate.

Homicide in Denmark from the perspective of forensic medicine has not been subjected to a national review since the period of 1946-1970 (N = 892) [2]. However, there have been regional studies of homicides in Denmark/Norway (Copenhagen/Oslo) for the period of 1985-1994 (N = 431, N Copenhagen = 275) [6] and in Southern Denmark for 1983-2007 (N = 166) [3]. These studies focused on various general elements regarding the act of homicide, such as methods, motives and the relation between victims and offenders. Since 1970, there have been many changes in Danish society, ranging from general demographics and family structure to access to advanced medical facilities and illegal drugs. How this has affected the homicide pattern on a national level is unknown. A contemporary study of homicides in Denmark will be of use in death investigations and can lead to a better understanding of homicides on a regional and global scale. We provide data on all homicides in Denmark for the 25-year period of 1992-2016.

Materials and methods

In 1992-2016, Denmark had an average population of 5.41 (5.16-5.71) million [7].
All medicolegal autopsies were performed at one of the three departments of forensic medicine, in Copenhagen, Odense or Aarhus. We retrieved 1439 autopsy files from the departments of forensic medicine for 1992-2016 coded for homicide as the manner of death. The main documents on file were the autopsy reports (including the crime scene examination of the victims), initial police reports, crime scene photos and autopsy photos. Approximately one-third of the files had supplementary police reports and court documents, which we studied in cases with unclear descriptions in the main documents. A total of 97 deaths coded as homicides (false positives) were excluded due to simple misclassification in the file system (38 deaths), the homicides occurring abroad (35 deaths) or a court ruling the death as a non-homicide (24 deaths); the latter typically involved single injuries from knives or firearms. To find misclassified homicides (false negatives), we retrieved all autopsy files where the National Police had taken photos at the autopsy, indicating that the death was suspicious of homicide. This yielded 34 additional deaths that had been misclassified as non-homicidal, which was confirmed by police files and/or court files. We also compared the list of homicides to the annual mass media accounts (via the media service infomedia.dk), generating 41 additional misclassified deaths, which were confirmed by autopsy reports and/or court reporting.

For the current study, we collected information about the victim and offender, the circumstances of the homicides, motive, relation between victim and offender, homicide method, and whether the offender attempted or committed suicide. We registered data for each homicide electronically using EpiData (EpiData Association, 2010, Odense, Denmark; http://www.epidata.dk), followed by exporting the data to Stata (StataCorp. 2015. Stata Statistical Software: Release 14. College Station, TX: StataCorp LLC) and RStudio (RStudio Team (2015). RStudio: Integrated Development for R. RStudio, Inc., Boston, MA; http://www.rstudio.com/) for statistical analysis and data visualization. Where appropriate, we analyzed data with linear regression and the Kruskal-Wallis rank sum test, with a significance level of 0.05. The project has been approved by the Danish Data Protection Agency.

Results

The 1417 homicide victims included 881 (62.2%) males and 536 (37.8%) females. The mean annual number of homicides was 56.7 (range 29-78): 35.2 (18-49) for males and 21.4 (10-35) for females. The homicide rate was 1.05 per 100,000: 1.32 per 100,000 for males and 0.78 per 100,000 for females. The highest rate was 1.51 per 100,000 in 1992, and the lowest rate was 0.52 per 100,000 in 2012. The most common homicide methods were sharp force trauma, gunshot, blunt force trauma and asphyxia, accounting for 95.1% of all homicides (Fig. 1). In the deaths from asphyxia, 68.0% of the victims were females, while 68.6% of the victims were males in all other methods combined. The most common methods all exhibited a significant reduction over the 25 years, resulting in an overall decrease of homicides (Fig. 2), with an annual decrease of 1.4 homicides per year (linear regression: P < 0.001, F = 38.9, R² = 0.63). The homicide rate (per 100,000) showed an annual decrease of 0.03 per year (linear regression: P < 0.001, F = 53.4, R² = 0.7) (see Supplementary Fig. 1). The decrease affected both sexes.
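For readers who wish to reproduce this kind of trend estimate, the sketch below fits a simple linear regression to hypothetical annual counts (not the study data) and reports the slope, P, F and R² in the same format; for a simple regression, F can be recovered from r².

```python
import numpy as np
from scipy import stats

# Sketch of the annual-trend estimate reported above (slope, P, F, R^2),
# using hypothetical annual homicide counts rather than the study data.

years = np.arange(1992, 2017)
counts = 78 - 1.4 * (years - 1992) + np.random.default_rng(1).normal(0, 6, years.size)

fit = stats.linregress(years, counts)
r2 = fit.rvalue ** 2
# For simple linear regression, F = t^2 = r^2 * (n - 2) / (1 - r^2).
F = r2 * (years.size - 2) / (1 - r2)
print(f"slope = {fit.slope:.2f} per year, P = {fit.pvalue:.4g}, F = {F:.1f}, R^2 = {r2:.2f}")
```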
The homicides were committed in 1321 events, distributed in 1249 events with one victim and 72 events with multiple victims (168 victims, 2–6 victims per event). The annual decrease for single victim events was 1.25 homicides per year (linear regression: P < 0.01, F = 32.6, R² = 0.59) and for multiple victim events 0.25 homicides per year (linear regression: P < 0.05, F = 7.9, R² = 0.26) (Fig. 3). There were no notable sex differences in the number of homicides in multiple victim events. Age of victims The mean age of male victims was 37.3 years (0–91, sd = 17.8) and 39.3 years (0–91, sd = 21.3) for females (Fig. 4), and 67.1% of the victims were 25–64 years old. The highest age-adjusted rate for a given age was 2.3 per 100,000 for the age of 0–1. Time of homicide The time of day for the homicide was known in 75.1% of the homicides. Most of the homicides occurred in the nighttime, i.e., from 6 p.m. to 6 a.m., often on Friday and Saturday evenings and the following nights (Fig. 5). For male victims with known time of the homicide, 71.6% were killed during nighttime vs. 55.3% of female victims. There were no significant seasonal or monthly variations (Kruskal-Wallis rank sum test, standardized for days per month). Domestic vs. non-domestic homicides We have grouped the homicides (Figs. 6 and 7) based on the typology of the European Homicide Monitor [8]. This typology recognizes the importance of victim-offender relations as well as the difficulties in pinpointing a single motive for each homicide. Domestic homicides were the largest main group of homicides and accounted for 76.5% of all female victims vs. 23.6% of male victims. Intimate partner homicides include current partners as well as ex-partners and showed the strongest sex difference, as 298 of the 376 intimate partner homicide victims were female (79.3%), accounting for 55.6% of all female victims. In contrast, only 8.9% of male victims were killed by a current or ex-partner. The rate of intimate partner homicides was 0.28 per 100,000, 0.44 per 100,000 for female victims, and 0.12 per 100,000 for male victims. When including homicide events with multiple victims, e.g., the killing of a partner and a child, 410 (28.9%) homicides had an intimate partner component (see Supplementary Table 1). The intimate partner homicides with male victims were to a much higher degree committed in a setting with the victim provoking the homicide by being violent or threatening violence towards the offender (at least 33.3% for male victims vs. 0.7% for female victims). On the other hand, the intimate partner homicides with female victims had separation and/or jealousy as a part of the motive (at least 38.2% for female victims vs. 9.0% for male victims). There were no differences between the sexes of victims in the domestic child homicides, but 75.0% were killed by their father (see below). For the 799 victims of non-domestic homicide, 673 were male (84.2%). Homicide in the setting of nightlife and/or intoxication was the largest single group, with 222 male victims. Triviality, such as a spilled beer or a wrong glance, was an element of the motive in at least 61.3% of those homicides. Relationship between victim and offender The relationship between victim and offender showed strong differences between female victims and male victims (Fig. 8). For female victims, partner/ex-partner was by far the largest group (56.0%), and some kind of family relation accounted for 76.9% of all female victims.
In comparison, male victims were dominated by a friend or acquaintance relationship to the offender (34.5%). For child homicides, there were no differences between the sexes. In the 9.1% of homicides where mental illness was stated as part of the motive, only 7.0% of victims had no relation to the offender. The victim was a relative in 67.4%, the largest group being parents. Location of homicides The locations of the homicides form clusters that overlap with the larger cities (Fig. 9). The majority (76.6%) of homicides occurred in residential areas (Fig. 10), and most of these homicides occurred inside (84.9%). In contrast, 52.1% of the homicides at bars and other service areas, and 98.6% of the homicides in traffic areas, occurred outside (26.4% in cars). A larger share of female victims than male victims were killed inside (85.8% vs. 66.9%). The location of the homicide was the home of the victim and/or the offender for 80.8% of the female victims and 56.5% of the male victims. The offenders The offender's sex was known in 1333 (94.1%) homicides, with 87.9% of all homicides having only male offenders (Fig. 11). In 12.4% of the homicides there were multiple offenders, and none of these multiple-offender homicides were committed by females only. Domestic homicides accounted for 47.2% of the homicides with male offenders vs. 80.7% with female offenders. The same numbers for intimate partner homicides were 29.1% vs. 50.4%. There were 1134 homicides (1051 homicide events) with one offender for which the age of the offender was available. The mean age of male offenders was 36.2 years (13–88, sd = 13.8) and 35.4 years (13–71, sd = 11.7) for females, and 74.6% of the offenders were 25–64 years old (Fig. 12). Suicide after homicide The offender committed suicide following 9.8% of the homicide events and attempted suicide following 3.6% of the homicide events. Male offenders were responsible for 90.5% of the suicides and 81.3% of the attempts (Fig. 13). There were no sex differences in the proportion of offenders who committed or attempted suicide. Domestic homicides accounted for 87.8% of the homicides followed by suicide or attempted suicide. In domestic homicides, the offender committed or attempted suicide in 24.9% of homicides with one victim vs. 66.7% of homicides with multiple victims. Of the single victim homicides with suicide or attempted suicide, 64.1% were intimate partner homicides, while 63.2% of the multiple victim homicides were child homicides. Half of the offenders who committed suicide used the same method as for the homicide, whereas only one-fifth of the offenders who attempted suicide used the same method. Most (84.5%) suicides were committed immediately following the homicide. Table 1 shows multiple regression models with victim sex as the response variable and homicide method, year of homicide, victim age, main homicide type (domestic, criminal milieu and non-criminal related) and time of day as predictor variables, with modelling of the interaction between homicide method and main homicide type. Except for year of homicide, all predictors had a significant effect on victim sex, most pronounced for main homicide type. Discussion Denmark has a relatively low homicide rate that is comparable to the reported homicide rates of other Western European countries, as is the drop since the 1990s [4,8,9]. The reasons for fluctuations in the homicide rate are manifold.
Psychosocial factors that often go hand-in-hand, such as substance abuse, mental illness and prolonged unemployment, are responsible for some of the variations [10]. Substance abuse is thought to result in an increased risk of violence due to proximal factors, e.g., cognitive impairment and increased aggression, as well as distal factors, e.g., lifestyle and contact with other intoxicated people [11]. With respect to male victims, the largest subgroup was homicides in the nightlife and/or with intoxication as the main component. It is well known that most homicides occur in the evening or nighttime and at weekends, with an overrepresentation of alcohol intoxication and/or illicit drug use [2,6,8]. This result is similar to homicides in Denmark during 1946–1970 [2], although that study focused on alcohol. The high rate of triviality as part of the motive in the homicides in the nightlife and/or with intoxication indicates the potential for prevention by reducing alcohol intake. The role of a civilizing process is mentioned as an explanation for the great decline in the homicide rate through historical time, whereas the technology of healing, i.e., better telecommunication, transport and medical advances, is thought to have a role in more recent history [4,12]. In our study period, the public's access to mobile phones and localization technology (Global Positioning System (GPS), etc.) has increased dramatically, improving response times for ambulance services [13] and lowering the rate of death at the scene [14]. There has also been a remarkable development in pre-hospital and in-hospital care of trauma patients, with reduced mortality [15,16], although there are some suggestions that survival has not increased in trauma patients in Denmark who reach hospital alive [17]. That sharp force trauma, gunshot and blunt force trauma are the most common methods is a drastic change from the homicide methods in Denmark during 1946–1970, when asphyxia, poisoning and blunt force trauma were the most common [2] (see below). For 1966–1970, however, poisonings were reduced to less than 4%, with sharp force trauma rising to a shared third position. The Netherlands, Sweden and Finland have a distribution similar to our study [8]. The type of trauma is an important factor when studying trauma-associated mortality. We hope to address this in detail in upcoming studies of each of the common homicide methods in this study, using trauma scores to stratify according to the potential for survival. Males were prominent in both victim and offender statistics, which is not surprising [8]. The proportion of homicides with female victims and/or offenders followed the proportion of domestic homicides, as seen worldwide [1]. Intimate partner homicide was the driving force behind this sex difference. In homicides in Denmark during 1946–1970 [2], 85% of the victims of intimate partner homicide were female, and the rate was 0.13 per 100,000, i.e., about half compared to 1992–2016. In Southern Denmark during 1983–2007, the rate of intimate partner homicide was 0.2 per 100,000 [18]. The proportion of female and male homicides that were due to intimate partner homicides is somewhat higher in our study than the estimation for high-income countries as a whole [19], but both the proportion and rates are similar to findings in Sweden for 1990–2013 [20].
It is likely that the share of these homicides with separation and/or jealousy was much higher for female victims than we have reported, as the offender committed suicide in 26.6% of these homicides, resulting in a lack of information regarding motive. For 1946–1970, Hansen described a large decline in child family homicides by carbon monoxide poisoning from town gas with female offenders, owing to better support for single mothers and limited availability of the poisonous gas [2]. It was thought that the ease with which it was possible to turn on the gas, coupled with its high toxicity, made homicide-suicides common. This can explain the low rate of female-perpetrated homicide in 1992–2016 of 12% compared to 1946–1970, where 32% of the offenders were female, with a decrease to 19% for 1966–1970 [2]. In the study from Southern Denmark during 1983–2007 [3], the female offender ratio was 10%, suggesting that the decline in these homicides continued from 1970. During 1946–1970, 25% of all homicide victims died in homicide events with multiple victims [2] compared to 12% in our study, due to the abovementioned reduction in child family homicides. This also explains the drastic reduction in homicide-suicide events (from 30% to approximately 10%) between the two periods. Our findings on homicide-suicide events are consistent with findings from The Netherlands, the U.S.A. and Switzerland [21,22] on domestic homicides being the majority, the high frequency of multiple victim events and the common use of firearms. Our data regarding the use of firearms are more in line with the data from The Netherlands than the U.S.A. and Switzerland, most likely reflecting the low rate of gun ownership. The majority of victims were in the age group of 25–64 years [8]. This is in stark contrast to 1946–1970, when 42% of the victims were younger than 15 years old (with an age-adjusted homicide rate of 1.35 per 100,000) [2]; again, this decline is a result of the drop in child family homicides. The offender age distribution is similar to 1946–1970 [2] and other European countries [8]. A challenge in epidemiological homicide research is whether all the actual homicides have been included [23,24]. Using official mortality statistics has the advantage of having medically based diagnoses of the manner of death, defined by the International Classification of Diseases, that are more robust compared to the changing legal definitions that can obscure criminal statistics [4]. However, official mortality statistics have some limitations in the sense that the coding of the death certificates occurs prior to the final verdict in court, which means that a true homicide can be registered as a non-homicidal manner of death and vice versa [4]. By combining information from the departments of forensic medicine with court reports and mass media accounts, we believe that we have succeeded in identifying as many homicides as possible while erring on the side of caution by only including the homicides where an investigation and/or trial was in agreement. We have compared the annual number of homicides from our study to the Danish Register of Causes of Death and found that we have an average of 7.9 more annual homicides, supporting that death certificates are an incomplete source for identifying homicides. There are, of course, homicides that will never be recognized, either because the victim is never recovered or due to misdiagnosis at the death investigation.
We have identified the homicides in this material where the homicide was either concealed, staged or misdiagnosed initially and will describe their characteristics in a future study. Our study is limited by the available documents, i.e., autopsy reports, initial police reports, photos and, in a limited number of case files, also supplementary police reports and court documents. This has led to missing variables regarding detailed offender characteristics, such as psychopathological and other motivational aspects. We were able to extract general information about mental illness, but it is likely that some offenders had mental illness not described in the available documents. We plan to report general information about mental illness and substance abuse relating to the homicides in an upcoming study with data from the clinical forensic examinations and autopsies of the homicide offenders. An avenue of further studies could be a more detailed report on these aspects with data from the psychological evaluation of offenders, as they are crucial in understanding homicide offending in general [25] and in subgroups, such as sexual homicide [26–30]. Conclusion We have presented a retrospective overview of homicides in Denmark for a recent 25-year period. The most pronounced difference to the latest similar study, from 1946 to 1970, was the large reduction in female-perpetrated family homicides. In the intervening period, there has been a rise in the homicide rate with a change in the common homicide methods from poisoning and asphyxia to sharp force trauma and gunshots. Domestic homicides are still the most common group, but with a shift towards intimate partner homicides, often with female victims and male offenders. Males were most often killed by male friends or acquaintances, typically in a setting of intoxication. We found a significant reduction in the homicide rate for the common methods of sharp force trauma, gunshot, blunt force trauma and asphyxia in the study period. Further studies from our research material will hopefully help explore what has contributed to the reduction in the homicide rate. Funding Thomsen reports grants from Brødrene Hartmanns Fond during the conduct of the study. Disclosure The other authors have nothing to disclose.
2019-09-09T18:39:24.375Z
2019-08-24T00:00:00.000
{ "year": 2019, "sha1": "65e5fca20186ad97676e38f0e07e800b196810f6", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.fsisyn.2019.07.001", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4da5598bb9a0cd07a7fd86ba50d4446b39c9634b", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
92391927
pes2o/s2orc
v3-fos-license
Standard area diagram set for bacterial spot assessment in fruits of yellow passion fruit This study developed and validated a standard area diagram set (SADs) for severity assessment of bacterial spot (Xanthomonas axonopodis pv. passiflorae) in fruits of yellow passion fruit (Passiflora edulis). The SADs consisted of eight severity levels (1%, 3%, 5%, 10%, 21%, 38%, 65%, and 80%). For its validation, 20 raters, who initially estimated the disease severity without the aid of the SADs, were divided into groups (G1 and G3, inexperienced; G2 and G4, experienced). Subsequently, G1 and G2 performed the second evaluation without the proposed SADs, and G3 and G4 completed the second evaluation using the proposed SADs. The accuracy and precision of the assessments were determined by simple linear regression and by Lin's concordance correlation coefficient. The increase in accuracy was confirmed by the reduction in constant and systematic errors, indicating that the estimated severities were close to the actual values when the SADs was used. Inexperienced raters benefited the most from the use of the SADs, and 60% and 100% of them presented constant and systematic error-free estimates, respectively. Precision increased with the increase in the coefficient of determination, the reduction in absolute errors, and the increase in the reproducibility of the estimates between pairs of raters. Introduction Yellow passion fruit (Passiflora edulis Sims) stands out as the most cultivated and commercialized species of the genus Passiflora due to its fruit quality and yield (FALEIRO et al., 2011). However, this species is susceptible to several diseases, such as bacterial spot (Xanthomonas axonopodis pv. passiflorae), which is widespread in all producing regions, depreciating the fruit quality and value and reducing the crop's production cycle (JUNQUEIRA et al., 2016). Bacterial spot lesions on the fruit are large, with an initially green and greasy appearance that later becomes brown. They are circular or irregular in shape, with well-defined edges, and can coalesce into larger lesions (FISCHER; REZENDE, 2008). Initially, lesions are superficial; however, the pathogen can penetrate the pulp and promote its fermentation, resulting in fruit rotting (PERUCH et al., 2011). Quantifying disease severity is fundamental in epidemiological studies (DE BEM et al., 2016), in the evaluation of control strategies (MARCUZZO et al., 2016) and in the identification of resistance sources (GYAWALI et al., 2018). In breeding programs of yellow passion fruit, this evaluation has been carried out using descriptive scales (KUDO et al., 2012; BATISTTI et al., 2013; VIANA et al., 2014; NOGUEIRA, 2016). These scales are subjective and do not allow adjusting the visual acuity when evaluating the severity levels (CAMPBELL; MADDEN, 1990), which impairs the precise quantification of the injured area (SANTOS et al., 2017). Conversely, diagrammatic scales or standard area diagram sets (SADs) are valuable tools for the identification of variations in disease resistance among genotypes. When applying such scales, the accuracy and precision of the disease severity estimates are significantly improved, resulting in fewer experimental errors (LAGE et al., 2015; DE PAULA et al., 2016; SANTOS et al., 2017). Consequently, heritability estimates for disease resistance are more reliable, which increases the potential gains from selective breeding (VIEIRA et al., 2014).
The use of SADs has contributed to increasing the accuracy and precision of estimates of the severity of diseases caused by Xanthomonas in other plants, such as grape (NASCIMENTO et al., 2005), peach (CITADIN et al., 2008), common bean (LIMA et al., 2013), orange (BRAIDO et al., 2015), and tomato (DUAN et al., 2015). Despite the great relevance of diseases in yellow passion fruit crops, the only SADs validated for disease quantification in this species is the one developed by Fischer et al. (2009) for the evaluation of anthracnose in fruits. Considering the lack of standardized methods to quantify bacterial spot severity in this fruit, this work aimed to: (1) develop and validate a SADs for the evaluation of bacterial spot severity in yellow passion fruit; (2) compare the accuracy, precision, and agreement of disease severity estimates with and without the aid of the SADs; and (3) compare the accuracy, precision, and agreement of the estimates of inexperienced and experienced raters. Development of the SADs Fifty fruits of yellow passion fruit (BRS Gigante Amarelo and Yellow Master FB200 commercial cultivars) showing symptoms of bacterial spot were collected at the Paraná Farm commercial orchard, located in Nucleo Rural Pipiripau, Planaltina, DF (lat. 15°30'15.08" S; long. 47°29'56.92" W; alt. 955 m). The adaxial surface of each fruit was photographed with a digital camera (Canon PowerShot SX40 HS, 12.1 megapixels; Canon Inc., Tokyo, Japan) set at a height of 45 cm above the fruit. The resulting images were analyzed for the diseased area (necrotic + chlorotic) using the image analysis software ImageJ (SCHNEIDER et al., 2012). The percentage of lesion area (% lesion area) was determined by dividing the lesion area by the total fruit area. The SADs' upper and lower limits were based on the minimum and maximum values of bacterial spot severity found in the image analysis of the 50 fruits. Intermediate levels were established following logarithmic increments (NUTTER; SCHULTZ, 1995). A standard fruit was used as the template, and diagrams with different severity levels were created using the ImageJ software. The patterns of lesion distribution detected on the actual fruits were maintained. Validation of the SADs The SADs was validated using images of 50 fruits with different intensities of symptoms. Twenty raters (ten with previous experience and ten without previous experience in disease quantification) were selected and divided into four groups of five raters (G1 and G3, inexperienced; G2 and G4, experienced). Initially, each group estimated the disease severity, in percentage, for each of the 50 randomly organized fruit images, without the aid of the SADs (non-aided evaluation). Subsequently, the same images were presented to G1 and G2, who performed another non-aided evaluation, and to G3 and G4, who conducted the evaluation using the proposed SADs (SADs-aided evaluation).
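To make the "logarithmic increments" step concrete, the short sketch below shows one way such intermediate levels could be derived: spacing eight values geometrically between the observed minimum (1%) and maximum (about 80%) severities. This is an illustrative reading of the Nutter and Schultz (1995) approach, not the authors' exact procedure; the published set (1, 3, 5, 10, 21, 38, 65, 80%) was evidently adjusted to round, visually convenient values.

```python
import numpy as np

# Eight severity levels spaced geometrically between the observed extremes.
# Geometric (logarithmic) spacing matches the Weber-Fechner-like response of
# visual severity estimation: each level is a constant multiple of the last.
levels = np.geomspace(1.0, 80.0, num=8)
print(np.round(levels, 1))  # [ 1.   1.9  3.5  6.5 12.2 22.9 42.8 80. ]
```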
The accuracy and precision of the raters were determined by linear regression between the actual severity (independent variable) and the visually estimated severity (dependent variable). The accuracy of the estimates of each rater was determined by a t-test applied to the intercept of the linear regression (a), to verify whether it was significantly different from 0, and to the slope of the line (b), to test whether it was significantly different from 1 (P ≤ 0.05). Intercept values significantly different from 0 indicate the presence of constant errors, whereas slope values different from 1 indicate the presence of systematic errors (NUTTER; SCHULTZ, 1995). Consequently, the most accurate raters were those whose estimates provided linear regression equations with values of "a" and "b" not significantly different from 0 and 1 by the t-test. The precision of the estimates of each rater was obtained from the coefficient of determination of the regression analysis (R²) and the variance of the absolute errors (the difference between estimated and actual severities) (KRANZ, 1988). Absolute errors were compared by the t-test (P ≤ 0.05). Raters with higher values of R² were considered of higher precision. Evaluations of the absolute errors considered the criteria used in disease quantification training programs [Distrain (TOMERLIN; HOWELL, 1988) and Disease.Pro (NUTTER; WORAWITLIKIT, 1989)], which classify raters as excellent (errors up to 5%) or good (errors up to 10%). The mean maximum error (absolute value) was also recorded for each group, indicating, in absolute value, the difference between the farthest estimate and the actual severity value. The reproducibility or inter-rater reliability was measured using the R² values for each pair of raters, based on estimates of the non-aided evaluations and the SADs-aided evaluations (NUTTER; SCHULTZ, 1995). The accuracy and precision (agreement) of the estimates of each rater, with and without the use of the SADs, were also determined based on Lin's concordance correlation coefficient (LCCC; ρc). The LCCC combines measures of accuracy and precision to assess the relational fit of pairs of observations to the concordance line (45°, with intercept = 0 and slope = 1), or 1:1 line, and is defined by ρc = Cb · r. The element Cb is a bias correction factor that measures how far the best-fit line deviates from 45°; it therefore corresponds to a measure of accuracy. In turn, r is the correlation coefficient between the estimated severity (Y) and the actual severity (X), which measures the precision (variation), or the scattering of points around the best-fit line. When perfect agreement between estimated and actual severity occurs, the points fall on the concordance line. As a result, r = 1, Cb = 1, and ρc = 1 (LIN, 1989; BOCK et al., 2010). Linear regressions and absolute error analyses were performed using the Genes software (v. 1990.2017.37). The LCCC was calculated using the MedCalc software (v. 17.9.7).
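Both the regression-based accuracy tests and the LCCC are compact enough to sketch in code. The following is a minimal Python sketch of the statistics described above, assuming paired arrays of actual and estimated severities; it is not the Genes or MedCalc implementation used in the study, and the helper name rater_statistics is ours.

```python
import numpy as np
from scipy import stats

def rater_statistics(actual, estimated):
    """Accuracy and precision metrics for one rater (illustrative sketch)."""
    x = np.asarray(actual, dtype=float)
    y = np.asarray(estimated, dtype=float)
    df = x.size - 2

    # Linear regression: estimated = a + b * actual
    fit = stats.linregress(x, y)
    # Constant error: t-test of the intercept a against 0
    t_a = fit.intercept / fit.intercept_stderr
    p_a = 2 * stats.t.sf(abs(t_a), df)
    # Systematic error: t-test of the slope b against 1
    t_b = (fit.slope - 1.0) / fit.stderr
    p_b = 2 * stats.t.sf(abs(t_b), df)

    # Lin's concordance correlation coefficient: rho_c = Cb * r
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    rho_c = 2.0 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)
    r = fit.rvalue                  # precision
    cb = rho_c / r                  # accuracy (bias correction factor)

    return {"a": fit.intercept, "P(a=0)": p_a, "b": fit.slope,
            "P(b=1)": p_b, "R2": r ** 2, "r": r, "Cb": cb, "rho_c": rho_c}
```

Note that ρc is computed directly here and Cb recovered as ρc/r; both routes give the same decomposition ρc = Cb · r described in the text.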
Results and discussion The bacterial spot severity recorded in yellow passion fruit naturally infected in the field was between 1% and 79.5%. The observed lesions showed typical symptomatic patterns of the disease, with circular or irregular shape and brown color, and in most cases they covered large areas of the fruit (FISCHER; REZENDE, 2008). From this disease severity range, a SADs was proposed, divided into eight severity levels (1%, 3%, 5%, 10%, 21%, 38%, 65%, and 80%) (Figure 1). The high severity levels reported in this study are commonly observed in yellow passion fruit orchards due to the difficult control of this disease and the susceptibility of commercial cultivars to this bacterium (ISHIDA; HALFELD-VIEIRA, 2009). To better represent the severity values identified for bacterial spot, SADs composed of a larger number of diagrams are frequently used in pathosystems that involve the species Xanthomonas (NASCIMENTO et al., 2005; LIMA et al., 2013; DUAN et al., 2015), as established in this study. The accuracy analysis was performed to verify the proximity between the estimated and actual severity values (NUTTER; SCHULTZ, 1995). Figures 2 to 5 show the linear regressions obtained between the actual and estimated severities for all raters in evaluations 1 and 2. The accuracy of the estimates decreased in the second evaluation performed by G1, and the number of estimates with constant errors [i.e., intercept different from 0 (P ≤ 0.05)] increased. Conversely, the accuracy of the raters in G2 who showed constant errors in the first evaluation increased in the second non-aided evaluation. In the SADs-aided groups, 60% and 40% of the raters in G3 and G4, respectively, had intercept values equal to 0 (P ≤ 0.05) (Table 1). These results indicate a reduction, at all disease severity levels, of the constant errors verified in evaluation 1 (non-aided). Regarding the slope of the line, 75% of the G1 raters had an improvement in accuracy in the second evaluation, with a coefficient significantly equal to 1 (P ≤ 0.05). In G2, the number of raters whose accuracy increased was equal to the number whose accuracy decreased. Among the SADs-aided groups, G3 showed the highest percentage of raters with improved accuracy, due to the significant reduction in the systematic errors of the estimates (100% in G3 vs. 50% in G4). In this sense, inexperienced raters appear to benefit more from the use of the SADs than the other groups, since 60% of the raters in G3 showed neither systematic nor constant errors (Table 1). Mean R² values were high in all groups and evaluations (Table 1). One of the reasons for the raters' good performance may be the distribution pattern and size of the bacterial spot lesions. According to Bock et al. (2010) and González-Domínguez et al. (2014), the accuracy and precision of estimates are directly influenced by the number of lesions in relation to the leaf area. The higher the number of lesions for a given area, the greater the overestimation. Moreover, the general trend is to overestimate disease severity at severity levels lower than 10%. Thus, diseases that result in fewer but larger lesions, such as bacterial spot, tend to be estimated with fewer errors than those that result in numerous but smaller lesions, regardless of the distribution pattern (KRANZ, 1988; HAU et al., 1989).
In the second evaluation, the mean R² value did not increase in G1 and decreased in G2. Conversely, the use of the SADs increased the precision in G3 and G4. This increase was more pronounced in the group of inexperienced raters (G3), whose R² value increased from 0.85 (non-aided) to 0.93 (SADs-aided). In the experienced group (G4), precision increased from 0.91 to 0.94 (Table 1). These results indicate that, with the SADs, the estimates were more closely related to the actual values in both groups. They also show a greater increase in precision for the inexperienced raters when compared with the experienced raters. Different studies have already compared raters' performance, indicating the existence of diversity in the individual ability to assess the severity of a particular disease. Studies usually state that the use of the SADs may be more advantageous for inexperienced raters than for experienced raters (FISCHER et al., 2009; YADAV et al., 2013; GONZÁLEZ-DOMÍNGUEZ et al., 2014; NUÑEZ et al., 2017). The use of the SADs for disease evaluation makes the assessment more accurate and precise, as it guides the raters in the data collection. The SADs does not replace the experience and knowledge of characteristic symptoms of a pathogen or physiological stress. However, it can improve the efficiency of both inexperienced and experienced raters by providing a reference point for comparison (VENTURINI et al., 2015). In addition to the coefficient of determination, the good precision of raters can be detected by determining the absolute or residual error (the difference between estimated and actual severity). Regardless of the rater's experience, the precision increased with the use of the SADs, which was confirmed by the lower dispersion of data in the regression (Figures 2-5) and the reduction of absolute errors (P ≤ 0.05) (Table 2), resulting in differences between the SADs-aided and non-aided evaluations in the same group. The distribution of errors of the non-aided evaluations ranged from -12.3 to +32.7 (G1); -20.5 to +17.0 (G2); -37.0 to +31.9 (G3); and -16.8 to +30.0 (G4). In the second non-aided evaluation, errors ranged from -16.5 to +27.0 (G1) and -20.5 to +27.0 (G2). In the SADs-aided evaluations, the distribution of errors ranged from -20.5 to +25.7 and -28.4 to +24.2 in G3 and G4, respectively. The mean maximum error relative to the actual severity, in absolute value, decreased by 20.2% in the second evaluation performed by G1; conversely, an increase was observed in G2. The mean maximum error decreased with the use of the SADs, corresponding to values 32.5% lower for the inexperienced raters and 24.1% lower for the experienced raters relative to the non-aided evaluation (Table 3). The reduction in absolute errors in G3 and G4 demonstrates that the precision of the visual estimates increased with the use of the SADs. This increase indicates an approximation between the estimates of the less-accurate and the more-accurate raters and corroborates previously reported studies (DE PAULA et al., 2016; CORREIA et al., 2017; NUÑEZ et al., 2017; SANTOS et al., 2017), considering that the proposed SADs aims to standardize disease quantification.
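The Distrain/Disease.Pro criteria cited in the methods reduce to a simple thresholding of absolute errors. The sketch below is one plausible reading of those criteria, classifying a rater by their largest absolute error; the original training programs may apply the thresholds differently, so treat both the rule and the function name as illustrative.

```python
import numpy as np

def classify_rater(actual, estimated):
    """Classify a rater by absolute errors (one plausible reading of the
    Distrain / Disease.Pro thresholds cited in the text)."""
    errors = np.abs(np.asarray(estimated, float) - np.asarray(actual, float))
    if errors.max() <= 5:
        return "excellent"   # all estimates within +/-5 percentage points
    if errors.max() <= 10:
        return "good"        # all estimates within +/-10 percentage points
    return "needs training"
```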
G1 and G2 raters had greater absolute errors in the second evaluation, resulting in an increase in estimates with errors outside the range of 10% (-10 to +10) (Table 3). The SADs-aided evaluations showed a decrease in the percentage of estimates with errors greater than 10% when compared with the non-aided evaluation. Thus, 89.6% (G3) and 93.6% (G4) of the estimates were concentrated within the range of 10%, which is considered satisfactory in studies on SADs validation (NUTTER; WORAWITLIKIT, 1989). In the SADs-aided evaluations, the percentage of estimates within the range of 5% (-5 to +5) was higher in G3 and G4, which indicates that the raters' estimates were closer to the actual severity values. Although such behavior was also detected in G1, this increase (1.3%) was much more discreet than those observed for the SADs-aided groups (14.8% and 16.9% for G3 and G4, respectively) (Table 3). The precision of the evaluations was also analyzed through the reproducibility of the estimates among raters who had access to the same sample of images, with and without the aid of the SADs. According to Belasque Junior et al. (2005), when the R² value of the comparison between two raters is close to 1.00, the raters' estimates are reproducible. In the first evaluation, the R² values of the regressions of estimates between pairs of raters in G3 and G4 ranged from 0.72 to 0.87 (mean 0.80) and from 0.80 to 0.91 (0.85), respectively. In the second evaluation, R² values varied from 0.85 to 0.93 (0.89) in G3 and from 0.85 to 0.96 (0.91) in G4. The use of the SADs provided higher R² values for 100% and 90% of the combinations in G3 and G4, respectively, evidencing the increase in the precision of the estimates when using the SADs. The R² and r coefficients inform on the precision of an estimate. However, they do not report on the accuracy of a model (PEREIRA et al., 2008). Lin's concordance correlation coefficient (ρc), in contrast, is the product of the elements precision (r) and accuracy (Cb), reflecting the degree of agreement between estimated and actual values (LIN, 1989). Lin's concordance correlation coefficient confirmed the increments in the accuracy and precision of the raters previously described for the SADs-aided evaluation. In the first evaluation, G1 and G2 had higher ρc values, indicating that their estimates were closer to the actual values relative to groups G3 and G4. Nevertheless, when using the SADs, the agreement between the actual and estimated severity values increased, as confirmed by the approximation between the best-fit line (between actual and estimated severity) and the 1:1 line (actual severity equal to the estimated severity) (Figures 4 and 5). ρc values varied from 0.93 to 0.97 in G3 (mean of 0.94) and from 0.93 to 0.98 in G4 (mean of 0.95), representing an increase of 10.6% (G3) and 5.6% (G4) when compared with the non-aided evaluations of these groups (Table 4). No increase in agreement was observed for the groups that performed the double non-aided evaluation (Table 4, Figures 2 and 3).
r and Cb values also increased in the SADs-aided evaluations, unlike the results of G1 and G2 (Table 4). Less accurate and precise raters benefited the most from the use of the SADs, showing the largest increments in the evaluated parameters. Thus, G3 exhibited a 4.3% and 6.5% increase in accuracy and precision, respectively, while G4 showed a 2.1% (accuracy) and 4.3% (precision) increase. Conversely, more accurate and/or precise raters did not respond as well to the use of the SADs as those who initially had less accurate and precise estimates. In fact, raters 14, 19, and 20 demonstrated a slight increase in errors (Table 2) and/or no increment or a slight loss of accuracy, precision, and agreement (Table 4). These results indicate that the SADs helped standardize the evaluations of the several raters, as also reported by Yadav et al. (2013). Conclusions The proposed SADs increased the ability of the raters to accurately and precisely estimate the disease severity, proving efficient in increasing the agreement between the estimated and actual values and the reproducibility of estimates among raters. Therefore, the SADs can be used in epidemiological studies, in the evaluation of control strategies for this disease, and in studies on resistance to bacterial spot in plant breeding programs. It can also help reduce the training time of raters, so that accurate and precise estimates are achieved more quickly. Figure 2. Bacterial spot (Xanthomonas axonopodis pv. passiflorae) severity on fruits of yellow passion fruit (Passiflora edulis Sims) estimated by inexperienced raters, without the aid of the standard area diagram set in the first (A-E) and second evaluations (F-J). Solid line = linear regression of actual severity × estimated severity. Dotted line = perfect agreement (linear regression of actual severity = estimated severity). Brasilia, DF, Brazil, 2018. Figure 3. Bacterial spot (Xanthomonas axonopodis pv. passiflorae) severity on fruits of yellow passion fruit (Passiflora edulis Sims) estimated by experienced raters, without the aid of the standard area diagram set in the first (A-E) and second evaluations (F-J). Solid line = linear regression of actual severity × estimated severity. Dotted line = perfect agreement (linear regression of actual severity = estimated severity). Brasilia, DF, Brazil, 2018. Figure 4. Bacterial spot (Xanthomonas axonopodis pv. passiflorae) severity on fruits of yellow passion fruit (Passiflora edulis Sims) estimated by inexperienced raters, without the aid of the standard area diagram set (SADs) in the first evaluation (A-E) and with the aid of the SADs in the second evaluation (F-J). Solid line = linear regression of actual severity × estimated severity. Dotted line = perfect agreement (linear regression of actual severity = estimated severity). Brasilia, DF, Brazil, 2018. Figure 5. Bacterial spot (Xanthomonas axonopodis pv. passiflorae) severity on fruits of yellow passion fruit (Passiflora edulis Sims) estimated by experienced raters, without the aid of the standard area diagram set (SADs) in the first evaluation (A-E) and with the aid of the SADs in the second evaluation (F-J). Solid line = linear regression of actual severity × estimated severity. Dotted line = perfect agreement (linear regression of actual severity = estimated severity). Brasilia, DF, Brazil, 2018. *Different letters in the same row indicate significant differences (Student's t-test, P ≤ 0.05).
2019-04-03T13:08:33.451Z
2018-11-14T00:00:00.000
{ "year": 2018, "sha1": "3da346aff9679a7028e6167d0378fdf2af979ab3", "oa_license": "CCBY", "oa_url": "http://www.scielo.br/pdf/rbf/v40n6/0100-2945-rbf-40-6-e-039.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "3da346aff9679a7028e6167d0378fdf2af979ab3", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
42211959
pes2o/s2orc
v3-fos-license
The role of microorganisms at different stages of ecosystem development for soil formation Soil formation is the result of a complex network of biological as well as chemical and physical processes. The role of soil microbes is of high interest, since they are responsible for most biological transformations and drive the development of stable and labile pools of carbon (C), nitrogen (N) and other nutrients, which facilitate the subsequent establishment of plant communities. Forefields of receding glaciers provide unique chronosequences of different soil development stages and are ideal ecosystems to study the interaction of bacteria, fungi and archaea with their abiotic environment. In this review we give insights into the role of microbes for soil development. The results presented are based on studies performed within the Collaborative Research Program DFG SFB/TRR 38 (http://www.tu-cottbus.de/ecosystem) and are supplemented by data from other studies. The review focusses on the microbiology of major steps of soil formation. Special attention is given to the development of nutrient cycles, to the formation of biological soil crusts (BSCs) and to the establishment of plant-microbe interactions. Introduction Microbial communities can be considered as architects of soils (Rajendhran and Gunasekaran, 2008), and many ecosystem services that are linked to terrestrial ecosystems, including plant production, safeguarding of drinking water or C sequestration, are closely linked to microbial activities and their functional traits (Torsvik and Øvreås, 2002). Vice versa, the soil matrix as well as chemical and physical properties of soils, like quality and amount of soil organic matter, pH, and redox conditions, have a pronounced influence on the dynamics of microbial community structure and function in soils (Lombard et al., 2011). This close interplay between abiotic conditions and the soil biosphere is one of the most fascinating issues as far as earth sciences are concerned, with huge implications for environmental as well as human health (van Elsas et al., 2008). Due to these complex interactions, it is not surprising that the formation of soils with a high level of fertility is the result of more than hundreds of years of soil "evolution" (Harrison and Strahm, 2008). As a result of global change in general and the loss of soil quality in particular, many soils are threatened. Thus, there is a huge need to develop strategies for a sustainable protection of soils for future generations. In this respect, the knowledge gained from soil chronosequences might help to improve our understanding of the development of biotic-abiotic interplays and to identify factors that drive the formation of soils (Doran, 2002). Studies on the development of abiotic and biotic interactions are very complex. They require consideration of different spatial and temporal scales (Ollivier et al., 2011). Microbes act on a scale of µm³ and form biogeochemical interfaces with the soil matrix, shaping their own environment (Totsche et al., 2010). It remains largely unknown how many interfaces are connected and how many interfaces are needed for the stability of soil. Furthermore, microbes can change their phenotype within minutes, depending on the present environmental conditions at those interfaces, by gene induction or repression (Sharma et al., 2012). The corresponding transcripts often have half-lives in the range between seconds and minutes.
Putting this in the frame of soil formation, which may take centuries, is a highly challenging issue. In addition, the diversity of soil microbes is huge and can still be considered a black box (Simon and Daniel, 2011); consequently, nobody is so far able to give exact numbers on the species or, even more important, on the ecotype richness in one unit of soil. Microbes are also able to easily exchange genetic information, which induces a very fast and ongoing diversification of organisms in natural environments, and the genetic flexibility of the whole soil microbiome can be considered enormous (Monier et al., 2011). Finally, most functional traits, for example the degradation of plant litter or the development of food web structures and closed nutrient cycles, are not the result of a single organism but of microbial communities which closely interact with each other (Aneja et al., 2006). Even the development of symbiotic interactions between plants and microbes in soil (e.g. mycorrhization of plants or legume-rhizobia interactions) is much more complex than described in textbooks, including the involvement of a diverse number of "helper organisms" during the infection phase (Frey-Klett et al., 2007). Timescales for community development and stable microbiomes are therefore still a highly challenging topic of research, and in many cases concepts do not even exist for the formation of microbial communities on the basis of single organisms being present at a certain point in time of ecosystem genesis. Forefields of receding glaciers are ideal field sites to study the initial steps of soil formation, as within a close area of some square kilometres a chronosequence of soils of different development stages can be found. As time is substituted by space, a simultaneous comparison of the formation of organismic interactions and of abiotic-biotic interfaces at different development stages is possible. Since the end of the Little Ice Age around 150 yr ago, most alpine glaciers have been receding at an increasing rate (Paul et al., 2004). A recent survey of 97 Swiss glaciers revealed that today most glaciers show an annual recession of dozens of metres (Paul et al., 2007). A similar trend can be observed in alpine zones dominated by permafrost: permanent permafrost is more and more restricted to deeper soil horizons (Paul et al., 2007). Detailed studies on glacier recession and soil formation have been performed at the Damma Glacier in Central Switzerland (Kobierska et al., 2011). The length of this glacier has been monitored since 1921, and the rate of recession is currently about 10 m per year (http://glaciology.ethz.ch/swiss-glaciers/). The forefield has a north-eastern exposition, an inclination of about 21 %, and is located at 2050 m a.s.l. (http://map.geo.admin.ch/). As depicted in Fig. 1, the glacier forefield is flanked by two lateral moraines, which emerged during the Little Ice Age around 1850. [Fig. 1 caption: (1) the glacier terminus, (2) the glacier stream, (3) the moraine from 1992, (4) the moraine from 1928, (5) the south flanking moraine and (6) the north flanking moraine, the latter two dating back to the end of the Little Ice Age in 1850. The small pictures are close-ups from 10, 50, 70 and 120 yr ice-free soils, where the 10 yr site is situated in the initial part of the glacier forefield (6-13 yr), the 50 and 70 yr sites in the intermediately developed sites (50-80 yr) and the 120 yr site in the most developed part of the forefield (110-160 yr). Photos: A. Zumsteg.]
Two brief advances of the glacier in 1928 and 1992 resulted in two further moraines, which divide the forefield into three parts: initial (6-13 yr), intermediate (50-80 yr) and developed sites (110-150 yr). In this review we summarize our knowledge about the role of microorganisms in soil development, using current data from the chronosequence of the forefield of the Damma Glacier, and how this improves our view of soils as the most important bioreactor on earth. Setting the stage for microbial activity: the role of weathering The formation of fertile soils from inorganic bedrock implies a complex interaction of physical, chemical and biological processes. The rate of soil development is dominated by variables such as climate, bedrock type, topography, time, microorganisms and plants (Paul and Clark, 1996; Egli et al., 2011). Usually hundreds of years are required to convert inorganic precursors to humus and eventually distinct soil horizons. The climate in alpine zones is characterized by high precipitation and pronounced fluctuations of temperature. For example, in the cities of Basel, Bern and Zurich (Switzerland, all at an altitude of < 550 m a.s.l.) the annual precipitation is in the range of 140-160 cm yr⁻¹ (http://www.meteosuisse.admin.ch/web/en.html). In contrast, in the Alps, at an altitude of > 2500 m a.s.l., the precipitation is usually > 250 cm yr⁻¹. Nevertheless, microbes can be subject to dry stress in parts of the glacier forefield which are dominated by rocks and sand with a low water holding capacity. In addition, solar radiation (UV and visible range) increases with altitude due to decreasing optical air masses (Blumenthaler et al., 1997). For UVB, the increase is about 9 % per 1000 m, and in high-altitude glacier forefields severe effects of UVB on microorganisms cannot be excluded. UVB damage results from a direct absorption of radiation by target molecules such as DNA and proteins, and usually microorganisms produce secondary metabolites as a photoprotective mechanism. In the mountains, not only the radiation during daytime but also the back radiation during night-time is higher than in the lowlands. As a consequence, the temperature fluctuations on rock and soil surfaces are very pronounced and can easily exceed 40 °C within 12 h (unpublished data). It is remarkable that the snow cover in winter and spring provides good insulation, and temperatures on snow-covered soils usually do not fall below 0 °C. Exceptions can be time periods in fall when the air temperatures drop significantly below 0 °C and the snow cover only amounts to a few centimetres (Körner et al., 1999). The forefield of a receding glacier consists of very heterogeneous and distinct morphotypes such as moraines, rock fields, floodplains, sand hills, erosion channels and mudslides. From a geomorphological point of view, these features are formed by complex interactions of glacial, periglacial, fluvial and gravitational forces. In order to allow a meaningful analysis of experimental data along the chronosequence, these morphotypes have to be considered and, in addition, geostatistical methods have to be employed. In the past, the mapping of the morphotypes required laborious field work using triangulation, while today it is usually done by lidar, radar interferometry and photogrammetry (Harris et al., 2009).
The weathering of rocks depends on the composition of the bedrock as well as on the environmental conditions. For example, calcareous rocks are mainly subject to chemical weathering (acidic dissolution of the calcium carbonate), whereas siliceous rocks are mainly fractured as a consequence of freezing-thawing cycles. At the Damma Glacier, for example, the bedrock consists of Aare granite composed of quartz, plagioclase, potassium feldspar, biotite and muscovite (Kobierska et al., 2011). The mineralogy of the sand, silt and clay fractions in the glacier forefield reflects the composition of the bedrock. Besides chemical and physical processes, bacteria and fungi can also contribute substantially to the weathering of mineral surfaces at this stage. Several biogeochemical processes have been described which catalyse the decay of minerals and the mobilization of nutrients. The mechanisms include enzymatically catalysed reactions, the local reduction of the pH, the production of complexing agents such as cyanide, oxalate and gluconic acid, and the excretion of transport vehicles such as siderophores (Mavris et al., 2010; Styriakova et al., 2012). However, many reports on the role of bacteria and fungi in weathering mainly cover phenomenological experiments in the laboratory (e.g. using fluorescence staining of biofilms on mineral surfaces or scanning electron microscopy of weathered minerals) and do not necessarily allow the prediction of the rates of biological weathering under field conditions. Direct field observations are rare and include, for example, a study on the role of microorganisms in phosphorus (P) cycling in the forefield of the Damma Glacier (Tamburini et al., 2010), which revealed a shift from substrate-derived P at initial sites to internal P turnover at more developed sites. For the establishment of microbial life in a glacier forefield, both the chemical composition of the bedrock and the physical structure of the weathered fractions are important. Siliceous rocks contain a number of minerals which contain essential elements (e.g. apatite is a source of phosphorus) and thus favour microbial life. In contrast, the weathering of calcareous rocks releases very few elements which facilitate the growth of microorganisms. Some rocks such as serpentinites even release toxic elements (e.g. nickel, cadmium) which prevent plant life (Bratteler et al., 2006) and which may also inhibit microbial activities. Soil aggregation is of utmost importance in controlling microbial structures, functions and plant life. Ideally, the weathered fractions in a glacier forefield include sand (fraction 2 mm to 63 µm), silt (63 to 2 µm) and clay (< 2 µm), and thus allow a good diffusion of gases and bacterial motility as well as high ion exchange and water holding capacities. In particular, clay is often found in glacier forefields (Kobierska et al., 2011; Mavris et al., 2011), and this is essential for aggregate formation and the stabilization of soil organic matter (Paul and Clark, 1996). In the forefield of the Damma Glacier, there is little change in the phyllosilicate clay mineralogy, whereas the amount of poorly crystalline Fe oxides and Al phases increased with soil development, reflecting a growing potential for soil organic C stabilization (Dümig et al., 2011). In contrast to the strongly increasing quantities, only small changes in the composition of the Fe and Al pools were detected during initial pedogenesis. Fe oxides and inorganic Al phases mainly remained poorly crystalline.
Development of initial C and N cycles At the Damma Glacier, important macronutrients such as phosphorus and sulfur (S) are part of the mineral composition. Therefore, microbial activity might accelerate the release of those elements from the bedrock to supply living organisms with P and S. In contrast, C and N are not part of the mineral composition and are scarce in the initial soils, stressing the importance of studying these geochemical cycles from the perspective of microorganisms. The concentrations of the most abundant nutrients are summarized in Table 1. In the initial soils, the total C content fluctuates around 700 µg C g⁻¹. Microbial C reached up to 50 µg C g⁻¹, a value also found in semi-arid grassland soils (Dijkstra et al., 2006), indicating a high turnover rate of C (Table 1). This is in agreement with soil respiration rates, which are in the range of 130 µg C g⁻¹ a⁻¹ (Gülland et al., 2013a). The source of organic C in the initial soils, however, is a matter of controversy. Three different sources potentially contribute C to the initial soil. First, the deposition of allochthonous organic matter, such as plant litter, insects and soot particles, contributes considerable amounts of C (Hodkinson et al., 2003). Measurements of allochthonous C deposition range from 7.5 kg C ha⁻¹ a⁻¹ at the Damma Glacier (Brankatschk et al., 2011) to 34 kg C ha⁻¹ a⁻¹ at Toolik Lake, Alaska (Fahnestock et al., 1998). Second, C inputs from close-by cyanobacterial and algal communities, such as cryoconite holes or patches of snow algae, might contribute C to the forefield (Kaštovská et al., 2007; Sawstrom et al., 2002; Stibal et al., 2008). Third, ancient C might be present in the forefield of the Damma Glacier. During the Holocene, glacier basins were vegetated. By dating the sub-fossil remains of trees and peat, warm periods, e.g. around the years 2000, 3900 and 4900 BP, were identified (Joerin et al., 2006). At those times glacier valleys were covered by peat bogs with birch and willow trees. Similarly, at the Damma Glacier it appears possible that C originating from ancient vegetation is mixed into the initial soil. This hypothesis is supported by ¹⁴C measurements on the carbon dioxide released from the initial soils by respiration. The δ¹⁴C value of −68.1 indicates the degradation of ancient C as the main C source in the initial soils (Gülland et al., 2013b). In addition, the C content at 5-10 cm depth (400-500 µg C g⁻¹) was similar to the C content in the top soil of the most recently deglaciated soils (700-1100 µg C g⁻¹; Bernasconi et al., 2011), indicating that organic material has been blended into the soil. Esperschütz et al. (2011) used ¹³C-labelled litter to study the microbial food web in the initial soils. C flow through the food web was traced using phospholipid fatty acids and phospholipid ether lipids. In the initial soils, archaea, fungi and protozoa were enriched in ¹³C. The community pattern changed only slightly towards the developed soils, where actinomycetes were involved in litter degradation. After twelve weeks of incubation, the litter degradation in the initial soils was comparable to that in the developed soils, again highlighting the activity of heterotrophic microorganisms in the initial soil. Gülland et al. (2013a) studied C losses from the initial soils at the Damma Glacier that were ice-free for 10 yr.
Within the study period of three summer months, 33 g C m⁻² were released via respiration and 2 g C m⁻² leached from the soil. Taking into consideration the total C stocks of 90 g C m⁻² at the Damma Glacier, these data indicate a highly active microbial community degrading the soil organic matter. Similarly, Bardgett and Walker (2004) described a heterotrophic stage of C decomposition at Ödenwinkelkees Glacier, Austria. In contrast to the small changes in clay mineralogy, pronounced shifts in soil organic matter quality with increasing age of the clay fractions were found at the Damma Glacier (Dümig et al., 2012). Clay-bound organic matter from the 15-yr-old soils was mainly inherited organic C, rich in aromatic compounds and in compounds carrying carboxyl groups. With increasing age of the clay fractions (75 and 120 yr), the formation of organo-mineral associations started with the sorption of proteinaceous compounds and microbial-derived carbohydrates on mineral surfaces. In the acidic soils, ferrihydrite (determined as oxalate-soluble iron) was the main provider of mineral surfaces and thus important for the stabilization of organic matter. We assume that sorption is not the only protective mechanism, as poorly crystalline Fe phases also interact with organic matter by coprecipitation or micro-aggregation. These results show that organo-mineral associations already evolve in early stages of soil development, whereby mineral weathering and organic matter accumulation proceed on different timescales (Dümig et al., 2012). Different pools of N are present in the initial soils. As summarized in Table 1, the content of total N ranges around 70 µg N g⁻¹, while the contents of microbial N (6 µg N g⁻¹), nitrate (0.1 µg N g⁻¹) and ammonium (0.03 µg N g⁻¹) are considerably lower. As for C, different N sources contribute N to the glacier forefield: (i) N fixation by microorganisms and (ii) N deposition. As shown in Fig. 2, N fixation is very low in the initial soils at the Damma Glacier. Duc et al. (2009) detected N fixation activity in the range of 2 pmol C₂H₄ g⁻¹ h⁻¹, using the acetylene reduction assay. Another study found N fixation rates below 0.2 pmol N h⁻¹ g⁻¹ in the initial soils, using the stable isotope incorporation method (Brankatschk et al., 2011). The lowest N fixation activity was accompanied by the lowest abundance of the N fixation marker gene nifH (2 × 10⁶ copies g⁻¹) at the initial sites, confirming the presence of few microorganisms capable of N fixation. In contrast to N fixation, the deposition of N is several orders of magnitude higher (Brankatschk et al., 2011). Estimations for the wet deposition of nitrate and ammonium ranged between 7 and 11 kg N ha⁻¹ a⁻¹. The total N deposition is estimated to be 10 to 15 kg N ha⁻¹ a⁻¹. This indicates the importance of N deposition as the primary N source in the forefield and is supported by stable isotope measurements. The N in the initial soils exhibits δ¹⁵N values of −4 to −2, which appear to be typical for initial soils in cold climates and can be explained by the negative δ¹⁵N of atmospheric N deposition. [Table 1 caption: Plant and soil parameters from initial (6-13 yr), transient (60-80 yr) and developed (110-150 yr) sites and a reference site (> 2000 yr) outside the glacier forefield. All nutrient values and the microbial biomass C are given in µg g⁻¹. Data were summarized from Brankatschk et al. (2011), Göransson et al. (2011), Hämmerli et al. (2007) and Noll and Wellinger (2008).]
The quantification of organic detritus on snow revealed that approximately 0.6 kg N ha−1 a−1 is deposited as particulate organic matter. Decomposition and mineralization of this organic matter was suggested to be the dominant N transformation process in the initial soils at the Damma Glacier (Brankatschk et al., 2011), which is designated the initial phase in Fig. 2. Also, the marker genes for the breakdown of organic matter, chitinase (chiA) and protease (aprA), were detected in the initial soils. The abundance of the chiA gene was 7 × 10^5 copies g−1, and the aprA gene was quantified at 6 × 10^6 copies g−1. Activities of other N turnover processes such as nitrification and denitrification were low (< 2 nmol N g−1 h−1) in the initial soils (Brankatschk et al., 2011). The abundance of the marker gene for nitrification, amoA, of ammonia oxidizing bacteria (AOB) was two orders of magnitude higher (2 × 10^6 copies g−1) than that of ammonia oxidizing archaea (AOA) (Brankatschk et al., 2011), which stands in contrast to many other studies (Leininger et al., 2006; Schauss et al., 2009). It might be that the conditions in the initial soils are more favourable for AOB. On the one hand, ammonium is supplied from atmospheric deposition and mineralization of organic matter, while competition with plants for ammonium is low. On the other hand, the low pH of the soil is more favourable for AOB than AOA (Gubry-Rangin et al., 2011). However, at the same time potential nitrification measurements were low (Brankatschk et al., 2011). This can be explained by two scenarios: (i) the AOB community at the initial sites of the glacier forefield is inactive per se, or (ii) the AOB community is adapted to the harsh conditions at the initial sites and is not able to adapt to laboratory conditions, so that turnover rates during potential nitrification measurements are low. Of the analyzed marker genes for denitrification, the nirK gene was the most abundant, with 1.5 × 10^8 copies g−1. Since potential denitrification activity was low, the high nirK gene abundance might indicate the presence of facultative anaerobic bacteria in the initial soils, and would support the hypothesis of the dominance of mineralizing microorganisms that are adapted to temporarily waterlogged conditions, e.g. during heavy rains or snow melt. The established C and N cycle in the glacier forefield leads to the accumulation of protein-rich organic matter in the very early stages of soil development, followed by a stronger accumulation of carbohydrate-rich material in the course of time, both most probably of microbial origin (Dümig et al., 2012). P, next to N, is frequently found to be the limiting nutrient in terrestrial ecosystems. Therefore, the release of P from the minerals in the bedrock was at the centre of studies at the Damma Glacier. The granite bedrock contains approximately 400-600 µg P g−1 soil that is bound to apatite. However, recent field studies in alpine habitats suggest that apatite has a limited impact on microbial structures and function (Ragot et al., 2013). The amounts of organic P and deposited P in the initial soil are low, accounting for 50-60 µg P g−1 (Tamburini et al., 2010; Göransson et al., 2011) and 0.1-1.2 kg P ha−1 a−1 (Binder et al., 2009), respectively. An even smaller proportion of the P, however, is freely available as phosphate (0.8 µg P g−1), as determined by resin bag experiments. Nevertheless, the heterotrophic microorganisms as well as plants from the initial soils were not P limited.
However, the isotopic analysis of the plants' P sources showed that the minerals are the major P source and deposition plays a minor role (Tamburini et al., 2010). This might underline the importance of microbial mineral dissolution during the initial soil formation process. The dissolution of P and other nutrients from the minerals might be accelerated by microorganisms releasing organic acids or chelators. This was investigated in mineral dissolution experiments using bacterial isolates from the Damma forefield (Lapanje et al., 2012). Selected isolates were screened for high mineral dissolution potential; however, the abiotic controls using citric and hydrochloric acid released elements at significantly higher rates than the bacterial isolates did (Lapanje et al., 2012). Future studies need to investigate the P release from the mineral bedrock in the initial soils, as the mechanisms have not been studied in detail but are crucial for the development of the ecosystem. Significant amounts of S are required to maintain the high ecosystem productivity that was measured at the forefield of the Damma Glacier. In the bedrock a total of approximately 5 µg S g−1 is present (Lazzaro et al., 2009; Bernasconi et al., 2011), and concentrations of available sulfate in the initial soils are around 0.25 µg S g−1 (Table 1). In the developed soils, however, the S content is in the range of 300 µg S g−1. Borin et al. (2010) pointed out the importance of S-oxidising bacteria as first colonizers in microbial communities at the Midtre Lovénbreen Glacier, Svalbard. If similar processes prevailed at the Damma Glacier, this would indicate that sulfate is quickly released from the minerals of the initial soil and might therefore be available for plants early on. A study on desulfonating bacteria, i.e. bacteria that release sulfate from organic matter, found a very high diversity at the Damma forefield (Schmalenberger and Noll, 2010). It was therefore hypothesized that S might be a limiting nutrient. As indicated above, the S content is low in the initial soils, and the S stocks in the developed soil cannot be explained by mineral dissolution alone. Therefore, S deposition might contribute significantly to the S budget of the forefield. The deposition of S is estimated to be 200 to 350 mg SOx m−2 a−1 (Nyiri et al., 2009); however, a detailed analysis of S sources in the Damma forefield, e.g. deposition measurements or isotope analysis, is lacking.

Biofilms and soil crusts as hotspots of nutrient turnover

Initial sites of glacier forefields are characterized by sparse vegetation and low nutrient contents. In 2005 the vegetation cover at the Damma Glacier was below 20 % at a distance of 80 m from the glacier front, which corresponds to approximately 13 yr of soil development (Hämmerli et al., 2007). Therefore, it is obvious that the initial processes of soil formation and the input of nutrients rely on the activity of microorganisms. In principle, two main functions can be assigned to the microbes: (i) the biological weathering of the bedrock material and (ii) the formation of interfaces for nutrient turnover at vegetation-free sites. Regarding biological weathering, Frey et al. (2010) demonstrated that isolates from the granitic sand in front of the Damma Glacier were able to effectively dissolve the siliceous bedrock material. The main underlying mechanism for the dissolution is the formation of biofilms on the mineral surface.
Organisms organized in such biofilms exude organic acids like oxalic acid, which on the one hand lead to a ligand-promoted dissolution and on the other hand to a proton-promoted dissolution because of the decreasing pH. The released elements are then captured in the polysaccharide matrix of the biofilm, which therefore represents a nutrient hotspot in the bare substrate. In addition to that mechanism, Büdel et al. (2004) attributed an important role in biological weathering to cryptoendolithic Cyanobacteria, which enhance weathering by alkalization of the substrate during photosynthesis. Several researchers have ascertained that the trophic base at initial sites of glacier forefields is established by first colonizers like Cyanobacteria, green algae, lichens, mosses and fungi, which often conglomerate and form biological soil crusts (BSCs) (Belnap et al., 2001a). The formation of BSCs is strongly linked to the prevailing environmental conditions, as well as to the parental material. As shown in Fig. 3, BSC development at the Damma Glacier is very heterogeneous and strongly depends on the right equilibrium of water availability and water holding capacity of the substrate. These BSCs fulfil different important roles in the ecosystem development. Most BSC-forming organisms are able to perform photosynthesis and/or N fixation, and thus enhance the C and N content of the soils. In this regard, researchers have stated that under optimal conditions the performance per unit ground surface of BSCs is similar to or even higher than that of vascular plants (Yoshitake et al., 2009; Pointing and Belnap, 2012). Moreover, Dickson et al. (2000) showed that the N2-fixation activity of BSCs was already measurable at 3 °C. Thus, the "vegetation period" of BSCs starts much earlier than that of vascular plants and nutrient input is prolonged, which is especially advantageous at glacier forefields. Nutrient acquisition is further supported by the excretion of exopolysaccharides by several Cyanobacteria, which are often coated with clay particles. The negatively charged clay particles in turn associate with positively charged nutrients and prevent those from leaching (Belnap et al., 2001a). At the Damma Glacier the highest numbers of Cyanobacteria have been found at the initial sites, including Lecanoromycetes, to which many lichen-forming species belong (Zumsteg et al., 2012). A second study, which focused on nifH-carrying microbes, revealed a cyanobacterial community comparable to mature BSCs from the Colorado Plateau (Duc et al., 2009; Yeager et al., 2004). The dominant species were Nostoc sp. and Scytonema sp. Both are able to produce pigments, which enable them to withstand high solar radiation. This property is a big advantage at glacier forefields because radiation strongly increases during summertime and day temperatures can reach up to 40 °C at the soil surface, while the thermal radiation at night hampers the storage of heat energy from the day (Landolt, 1992). Thus, poikilohydric organisms, which form BSCs, have a big advantage as they are able to withstand drought and high solar radiation, and as soon as they are rewetted their activity strongly increases. Although crust-forming bacteria have been detected at initial sites of the Damma Glacier, the formation of bacteria-dominated crusts at the sampling site was not observed (Duc et al., 2009).
This might be mainly attributed to the exposed position of some parts of the initial sites relative to the glacier tongue, which leads to regular disturbances of the surface by the glacial stream. However, as soon as sites are better protected against erosion, either due to moraines or their location on hydrologic islands, moss- and lichen-dominated crusts develop (Bernasconi et al., 2011; Fig. 3), which is in accordance with observations in the Negev Desert (Israel) (Zaady et al., 2000). Interestingly, these types of crusts are often associated with vegetation patches (Duc et al., 2009). It is very likely that crusts were there before plants established, as they are known to pave the way for vascular plants: on the one hand BSCs improve soil fertility, and on the other hand they create advantageous micro-environments for plant germination and growth (Belnap et al., 2001b). Regarding soil fertility, data from different BSCs indicate that crusts have a 200 % higher N content than uncrusted soils from the same site (Harper and Belnap, 2001; Rogers and Burns, 1994; Pointing and Belnap, 2012). However, due to increased microbial activities and leaching of N to deeper soil layers (Johnson et al., 2007), nitrogen is still one of the limiting factors in BSCs. Brankatschk et al. (2011) demonstrated that deposited N and C are important drivers for ecosystem development at initial sites; the ability of Cyanobacteria to trap nutrient-rich deposits via their exopolysaccharide sheath even facilitates that effect (Reynolds et al., 2001). Thus, decomposing microbes are also stimulated in BSCs, which in turn supply plants in the vicinity with many different micronutrients and trace elements. As arctic and alpine plant species often have a shallow root system due to the thin soil layers, they can easily acquire the nutrient sources below BSCs (Billings, 1987). BSCs not only enhance the soil nutrient status, but also improve the physical conditions for plant establishment. The rough surface structure and the dark colour of BSCs create beneficial conditions for seedling settlement and germination, because seedlings are sheltered against external disturbances and the temperature is higher compared to bare surrounding soils (Escudero et al., 2007; Breen and Lévesque, 2008; Gold, 1998). Moreover, the stress for pioneering plants is generally reduced, partly because of the already mentioned nutrient conditions and partly because of the volumetric water content being higher in BSCs (Breen and Lévesque, 2008). Altogether, BSCs are able to buffer the diurnal changes in climatic conditions at glacier forefields and therefore reduce stress conditions, which otherwise might hamper plant development. Although much effort is still necessary to directly characterize the BSCs at the forefield of the Damma Glacier, numerous data indirectly underline their pivotal role in ecosystem development. In summary, one can assign several functions to BSCs and biofilms at the forefield of the Damma Glacier. At the beginning of soil formation, microorganisms organized in biofilms contribute to the weathering of bedrock material and the dissolution of trace elements. Afterwards, their main function is the input of nutrients to the bare soil and the stabilization of the surface. This in turn provides the basis for the settlement and growth of plants, which will finally completely replace BSCs.
Role of plants

During the succession at the forefield of the Damma Glacier, the coverage of plants increased with soil age from 20 % to 70 % (Hämmerli et al., 2007). Plants play an important role in the stabilization of the slope (Körner, 2004) and in soil development, as their root exudates and decaying litter material are the main sources of organic matter at initial sites of the glacier forefield (Duc et al., 2009). Although the microbial biomass is much lower at initial sites, the decomposition of plant material over a period of twelve weeks is as fast as at developed sites (Esperschütz et al., 2011). Interestingly, initial sites of the Damma Glacier and other alpine glaciers are dominated by forbs and grasses like Leucanthemopsis alpina, Agrostis gigantea or Cerastium uniflorum, whereas legumes like Lotus alpinus or Trifolium pallescens appear later in succession, although their symbiosis with rhizobia might be an advantage at N-poor initial sites (Göransson et al., 2011; Tscherko et al., 2003). There are two reasons explaining the absence of legumes from the initial sites. First, many legumes like Lotus alpinus form heavier seeds than L. alpina or C. uniflorum (Pluess et al., 2005; Tackenberg and Stöcklin, 2008), and thus dispersal via wind is more difficult. Second, the establishment and maintenance of a rhizobia-legume symbiosis is a very energy-consuming process (Merbach et al., 1999), and the energy demand might therefore be too high under these harsh conditions. However, in contrast to alpine glacier forefields, at Glacier Bay (Alaska) symbiotic N fixers like Dryas drummondii and a single species of alder are already dominant at initial sites. That difference might be mainly attributed to climatic conditions. While at Glacier Bay a mild and maritime climate with small annual and diurnal temperature changes predominates, the forefield of the Damma Glacier is characterized by strong temperature variations and irregularly distributed rainfall during the year, both being unfavourable for effective plant establishment (Bernasconi, 2008; Landolt, 1992; Miniaci et al., 2007). Several studies at the Damma Glacier showed that pioneering plants represent a nutrient hotspot at initial sites (Töwe et al., 2010; Duc et al., 2009; Miniaci et al., 2007). Thus, much higher abundances and activities of microbes were detected in the rhizosphere of pioneering plants. In initial soils this effect even extends up to a distance of 20 cm. The phenomenon of enhanced microbial activity and abundance in the rhizosphere of plants is known as the "rhizosphere effect" (Butler et al., 2003; Hartmann et al., 2008). At the Damma Glacier the rhizosphere effect is generally more pronounced at initial sites than at developed ones (Töwe et al., 2010; Edwards et al., 2006). This observation seems to be a general effect independent of plant species, bedrock material or climatic conditions. For example, Deiglmayr et al. (2006) detected a significant difference in nitrate reductase activity only at the initial sites of the Rotmoosferner Glacier. This activity was 23 times higher in the rhizosphere of Poa alpina than in the bulk soil. The enhancement of microbial activity and abundance in the rhizosphere is mainly attributed to uncontrolled leakage or controlled exudation of organic substances like malate, citrate or oxalate.
Thus, it is unquestionable that plants represent a nutrient hotspot in terms of C, as they provide up to 40 % of the photosynthetically fixed CO2 to the microbes (Paterson and Sim, 2000). In return, microbes supply the plant with N, phosphate or other nutrients and additionally protect it against herbivores or parasites. Interestingly, the highest abundances of N fixers in the bulk soil, together with the highest N fixation activity in the rhizosphere of L. alpina, have been detected at intermediate development stages (the transient phase in Fig. 2), where the N content is still low but plant coverage has already strongly increased, which hints at a competition between microbes and plants for N (Brankatschk et al., 2011; Duc et al., 2009). This theory is further corroborated by results from Töwe et al. (2010), where the abundance of nifH-carrying microbes was highest in the rhizosphere of L. alpina planted in a 10 yr soil, which coincided with the lowest C/N ratios of plant biomass. However, during incubation the N content of L. alpina grown in the 10 yr soil strongly increased while plant and root biomass stayed stable, as shown in Fig. 4. This fits with the assumption that microbes are able to win the competition over a short timescale because of their higher surface-to-volume ratio, higher growth rate and substrate affinity (Hodge et al., 2000). In contrast, plants are more effective over a long time period because of their longer lifespan and their ability to retain the assimilated N (Hodge et al., 2000; Nordin et al., 2004). In addition to the enrichment of N-fixing microbes, heterotrophic mineralizers like chiA-containing microbes are also enhanced (Tscherko et al., 2004; Töwe et al., 2010). The degradation of chitin and proteins has the advantage that low-molecular-weight organic compounds containing C and N are released. The ability of plants to assimilate amino acids, amino sugars or small peptides seems to be ubiquitously distributed among different ecosystems and plays a particular role in cold and wet habitats (Näsholm et al., 2009; Lipson and Monson, 1998; Schimel and Bennett, 2004). In parallel, functional groups leading to N losses, such as nitrifying or denitrifying microbes, were reduced in the rhizosphere, which is in line with the assumption that plants are able to actively influence their rhizosphere community (Singh et al., 2004) by changing their exudation pattern or by actively excreting substances like tannins, polyphenolic substances or monoterpenes (Briones et al., 2003; Kowalchuk and Stephen, 2001; Ward et al., 1997; Cocking, 2003). In this regard, Edwards et al. (2006) detected changes in the exudation pattern of L. alpina along the forefield of the Damma Glacier, mainly attributed to a strong reduction of oxalic and citric acid concentrations. In general, it seems that microbial activity and abundance are indeed much higher in the rhizosphere of pioneering plants than in the bulk soil, but the community composition is strongly driven by the bulk soil community, whereas at developed sites the plant determines the microbial community in the rhizosphere (Duc et al., 2009; Miniaci et al., 2007).
The strength of that phenomenon has been demonstrated for functional groups like nifH-containing microbes as well as for overall bacterial diversity, at different sites and by different methods such as PLFA analysis (Tscherko et al., 2004; Bardgett and Walker, 2004), clone libraries (Duc et al., 2009), pyrosequencing (Knelman et al., 2012) and fingerprinting methods (Deiglmayr et al., 2006; Miniaci et al., 2007). With ongoing succession a shift from competition for N to competition for phosphate takes place (Vitousek and Farrington, 1997). While phosphate is not limited at the beginning of succession, as it is released during weathering of the siliceous bedrock material (Ragot et al., 2013), the concentration of bioavailable phosphate steadily decreases along the glacier chronosequence, thus favouring ecto- and ericoid mycorrhizal associations. A similar trend was found at Glacier Bay (Alaska), where a symbiosis with legumes was already found in very young soils, but the first mycorrhiza-forming plants appear later in succession (Chapin et al., 1994). However, as soon as the vegetation cover is nearly closed, the ecosystem development speeds up so that ecosystem properties become similar to those of mature ecosystems. This includes increasing amounts of organic C and N, the formation of soil horizons, and increasing microbial biomass and enzyme activities even in the bulk soil (Dümig et al., 2011; Brankatschk et al., 2011; Duc et al., 2009; Sigler and Zeyer, 2002). Once a stable plant community has developed, a positive feedback loop becomes established. The plants still provide fixed CO2 via rhizodeposition to the microbial community. Additionally, the high input of dead plant material provides a broad nutrient source for decomposing microbes. Thus, in contrast to sparsely vegetated sites, sufficient amounts of N are released during the mineralization of high-molecular compounds, which are then again available for plants. Moreover, climatic conditions like water content, temperature and radiation are more stable below a closed plant cover, leading to a reduction of environmental stressors, which are strongly pronounced at initially poorly vegetated sites. Finally, one can summarize that plants play a crucial role in ecosystem development, but their function changes during succession. At initial stages their pivotal role in stabilizing the slope and providing C is unquestionable, but at the same time they compete with the microbes for scarce nutrients like N. The succession of plant establishment strongly depends on the external conditions. Thus, at alpine glacier forefields higher plants establish later compared to glacier forefields with a maritime climate, where shrubs and small trees can already be found at very young sites (Bardgett and Walker, 2004).

Lessons learned from the Damma Glacier chronosequence for ecosystem development

1. Initial ecosystems can emerge as a consequence of different events. These systems include, for example, forefields of retreating glaciers, chronosequences related to volcanic eruptions, post-mining areas and permanent mechanical disturbance at coastal and inland dunes. For initial ecosystems the following phases of development can be assumed: (i) the system properties are driven by physical structures and hydrological processes, (ii) the importance of chemical processes increases and (iii) biological communities drive ecosystem properties and development (Schaaf et al., 2011).
However, the individual pace of development strongly depends on external conditions like bedrock material, altitude, meteorological properties, initial C or the degree of mechanical disturbance.

2. The knowledge gained from the Damma Glacier showed that not only the presence of biota but also the interaction of organisms is the key to ecosystem development. Regarding the microbial N cycle, development takes place in three phases. Initially, allochthonous N sources are mineralized. As soon as plant biomass, and with it competition for the scarce N sources, increases, N fixation becomes more important. Finally, when the surface is completely covered with plants, the N-cycling community becomes more complex and N is mostly derived from biomass. It is obvious that biological "hotspots" like BSCs can accelerate the development of initial ecosystems. It has been shown that the development of the nitrogen cycle in the Damma chronosequence and that in the BSCs of inland dunes are characterized by similar phases.

3. This review indicates that the development of initial ecosystems cannot be accelerated by "simple" management strategies like reforestation, irrigation or fertilization. Biotic-abiotic interactions and appropriate soil structures must be established first to pave the way for higher plants. For example, the growth of higher plants in an initial phase can be limited because applied fertilizers will leach into the groundwater and essential mycorrhizal communities are lacking. Therefore, it will be a future challenge to find and establish adequate strategies to accelerate the development of initial ecosystems.
Primary Idiopathic Chylopericardium - Case Report

Keywords: pericardial effusions; chylopericardium, idiopathic, primary.

The accumulation of chyle in the pericardial space, or chylopericardium, is a condition occurring most frequently after trauma, cardiac and thoracic surgery, or in association with tumors, tuberculosis or lymphangiomatosis. When its precise cause cannot be identified, it is called primary or idiopathic chylopericardium. This is a rare clinical entity. We report the case of a surgically treated 20-year-old female patient. A brief review of the literature and comments on the clinical presentation, etiopathogenesis, ancillary diagnostic tests and treatment options are also presented.

Introduction

Isolated chylopericardium was first described by Hasebrock in an autopsy case in 1888 [1]. This is a rare clinical entity in which chyle accumulates in the pericardial cavity [1-3]. It may be caused by surgical trauma, irradiation, tuberculosis, caval obstruction, and primary or metastatic mediastinal tumors [1,3-5]. The pathophysiology common to all these conditions seems to be thoracic duct obstruction without the development of collateral drainage [4]. Congenital lymphangiomatosis and lymphangiectasia may also be causes of chylopericardium [6,7]. However, the precise etiology cannot be established in many cases. In order to describe these cases, Groves and Effler [8], in 1954, introduced the terms "primary" or "idiopathic" chylopericardium. We report a case of surgically treated idiopathic or primary chylopericardium.

Case Report

A 20-year-old female patient from the city of Lençóis Paulista (State of São Paulo) diagnosed with a large pericardial effusion was referred to our hospital on May 10, 2005. History of present illness: mild dyspnea on heavy exertion for six months, progressing to dyspnea on moderate exertion for two months. She denied edema, dizziness, palpitations or chest pain.

Physical examination

Good general state of health; mucous membranes moist and pink; no cyanosis. Pulse = HR = 96 bpm; BP = 110/60 mmHg. No jugular venous distension or lower limb edema. Spleen not palpable, with negative percussion sign. Apical impulse not visible or palpable; no thrills; heart sounds with normal rhythm and slightly muffled; no heart murmurs.

Ancillary tests

Normal blood count, Na, K, Mg, Ca, blood glucose, BUN, creatinine, protein profile, LDH, AST and ALT; negative latex test; normal C-reactive protein; and negative serological tests for hepatitis B, hepatitis C and HIV. Chest radiography: enlarged cardiac silhouette (Figure 1). Echocardiogram: significant pericardial effusion with no signs of restrictive diastolic filling.

Hospital course

On May 17, 2005, the patient underwent pericardial drainage via the subxiphoid approach using Marfan's technique; 700 ml of a thick, milky chylous fluid were drained. A pericardial biopsy was performed and fluid and blood samples were collected for tests. The total volume drained in the postoperative period was approximately 500 ml, and the pericardial catheter was removed on day 4, 48 hours after drainage had ceased. The patient was discharged on postoperative day 7, and lymphoscintigraphy was scheduled to be performed on an outpatient basis.
Lymphoscintigraphy performed 14 days after pericardial drainage showed lower limbs with a normal aspect and abnormal thoracic radiopharmaceutical accumulation, which could represent the presence of lymph in the pericardial or pleural space, secondary to an obstructive process or partial aplasia of the thoracic duct (Figure 2). Twenty days after discharge, the patient was rehospitalized with a significant pericardial effusion and complaining of dyspnea on moderate exertion. She was operated on via midsternal thoracotomy, and approximately 1,000 ml of a thick, milky chylous fluid were removed. Partial pericardiectomy associated with a pericardioperitoneal shunt was performed. The pleural and mediastinal cavities were drained. In the postoperative period, the volume drained remained between 150 and 400 ml/day (mean of 240 ml/day) for 15 days, then decreased further until it ceased completely on day 30.

Discussion

Following Groves and Effler's [8] description in 1954, Dunn [1], in 1975, reported 22 cases of "primary" or "idiopathic" chylopericardium described in the literature. Up to 1992, Akamatsu et al. [6] reported 79 cases and, up to 1997, Yüksel et al. [2] reported 89 cases. From 1997 onwards, we found 25 new cases reported, for a total of 114 cases described up to 2007. In the Brazilian literature we found only one case, reported by Fernandes et al. [3] in 1998. The pathophysiology of primary chylopericardium may be related to an abnormal connection between the thoracic duct and the pericardium, with the presence of fistulas, and to chyle reflux associated with lymphatic hypertension, loss of the valve mechanism and increased permeability of lymphatic vessel walls [3]. Clinical manifestations may vary from the absence of symptoms to signs of cardiac tamponade. The most common symptoms are dyspnea, fatigue and cough [5]. The differential diagnosis includes all causes of pericardial effusion, and in most cases chylopericardium is only confirmed by pericardiocentesis, with the finding of a chylous pericardial fluid containing chylomicrons and high triglyceride levels. When the diagnosis is confirmed, study of the lymphatic vessels using lymphoscintigraphy is indicated, because it helps identify lymphopericardial fistulas, anatomical variations or partial aplasia of the thoracic duct [3,5,9]. Computed tomography may be useful to rule out lymphangiomatosis. Conservative treatment for chylopericardium consists of adopting a low-fat diet with medium-chain triglycerides, which are absorbed via the portal system rather than via the lymphatic vessels, as occurs with long-chain triglycerides [3]. However, in contrast to post-traumatic chylopericardium, the conservative treatment of primary idiopathic chylopericardium is hardly ever successful [5]. In these cases, surgical treatment is indicated, including a pericardioperitoneal window, pericardiectomy, and ligation with resection of the thoracic duct just above the diaphragm [6,7]. Akamatsu et al. [6] reported that of the 79 cases described in the literature up to 1992, conservative treatment was chosen for 10 (13%), and recurrence of the effusion was observed in six (60%) of these patients. The remaining 69 patients (87%) were surgically treated: 21 (27%) using a pericardial window, seven (9%) using ligation plus resection of the thoracic duct, and 41 (52%) using ligation plus resection of the thoracic duct associated with a pericardial window.
Thoracic duct ligation and resection just above the diaphragm, associated with partial pericardiectomy, has been the treatment used in most cases [6]. More recently, this procedure has also been performed by means of thoracoscopy [4]. With the management adopted in the present case, partial pericardiectomy associated with a pericardioperitoneal window, the patient is well after more than two years of follow-up, reports no complaints, and her course has been uneventful except for a small right pleural effusion. Thus, we believe the patient outcome was favorable, despite the inconvenience of the high volume of fluid drained in the first postoperative days and the long hospital stay.

Potential Conflict of Interest

No potential conflict of interest relevant to this article was reported.

Sources of Funding

There were no external funding sources for this study.

Study Association

This study is not associated with any post-graduation program.
Extreme heat and cultural and linguistic minorities in Australia: perceptions of stakeholders

Abstract

Background: Despite acclimatisation to hot weather, many individuals in Australia are adversely affected by extreme heat each summer, placing added pressure on the health sector. In terms of public health, it is therefore important to identify vulnerable groups, particularly in the face of a warming climate. International evidence points to a disparity in heat-susceptibility in certain minority groups, although it is unknown if this occurs in Australia. With cultural diversity increasing, the aim of this study was to explore how migrants from different cultural backgrounds and climate experiences manage periods of extreme heat in Australia.

Methods: A qualitative study was undertaken across three Australian cities, involving interviews and focus groups with key informants including stakeholders involved in multicultural service provision and community members. Thematic analysis and a framework approach were used to analyse the data.

Results: Whilst migrants and refugees generally adapt well upon resettlement, some encounter sociocultural barriers that hinder environmental adaptation to periods of extreme heat in Australia. These barriers include socioeconomic disadvantage and poor housing, language barriers to the access of information, isolation, health issues, cultural factors and lack of acclimatisation. Most often mentioned as being at risk were new arrivals, people in new and emerging communities, and older migrants.

Conclusions: With increasing diversity within populations, it is important that the health sector is aware that during periods of extreme heat there may be disparities in the adaptive capacity of minority groups, underpinned by sociocultural and language-based vulnerabilities in migrants and refugees. These factors need to be considered by policymakers when formulating and disseminating heat health strategies.

Background

Each summer Australia experiences periods of very hot temperatures, and extended heatwaves with maximum temperatures exceeding 35°C for several consecutive days are not uncommon. Despite the population being acclimatised, thermal tolerance can be exceeded when heat extremes occur, and the consequent health impacts can range from marginal increases in morbidity to significant increases in mortality [1-4]. It is well established that the elderly, the young and the sick are disproportionately at risk [3]. However, other subgroups are also vulnerable and, with warmer temperatures imminent, identifying these groups is important for public health authorities in formulating targeted interventions. The number of permanent immigrants to Australia has increased over several decades to the extent that more than one quarter of the nation's population of 23.3 million is overseas-born, and a further one fifth has at least one parent born overseas. Immigrants include those arriving through the Migration Program (including skilled and family stream migrants) and the Humanitarian Program for refugees forced to leave their homeland [5]. Many migrants arrived from South-East Asia in the 1970s, and in recent years the proportion of migrants from Asia, as well as other countries, has increased. According to the 2011 national census, almost half of the 'long-standing migrants' who arrived before 2007, and more than two thirds of the recently arrived, speak languages other than English at home [6].
The impacts of heat on the health of people in migrant and minority groups are not well documented. Some studies conducted in the United States have shown that heat-related deaths can be high in people of African American descent [7-10], and in undocumented immigrants from Mexico entering the United States across borders adjoining the Arizona desert [11,12]. However, in countries outside of the United States the literature on susceptibility and adaptation to heat in culturally diverse groups is scarce. This may be because the issue has not previously been seen as an area of public health concern, or because data have not been readily available for epidemiological studies. Ancestry has not been well recorded in health statistics to date, and people in non-English speaking communities have often been excluded from traditional health research studies [13]. Despite the nation's diversity in cultures, languages and climate experiences, it is unknown if migrants seamlessly adapt to Australia's hot summers or if certain barriers are encountered which could affect wellbeing during bouts of extreme heat. This is a critical gap in public health knowledge, particularly in countries where migration and cultural diversity are increasing and the climate is warming. The aim of this research was to ascertain, using qualitative methods, whether barriers affecting adaptation to extreme heat exist within culturally and linguistically diverse (CALD) communities in Australia and, if so, to identify vulnerable subgroups.

Methods

The study was based in Adelaide, South Australia, and data were also collected in Melbourne, Victoria, and Sydney, New South Wales (Figure 1). Of the three cities, Adelaide has the lowest population (1.2 million) and the warmest climate; Melbourne has a population of 4.1 million and the coolest climate; and Sydney has the largest population (4.6 million) and a more humid climate [5]. The percentage of residents born overseas in the respective states of South Australia, Victoria and New South Wales is 22%, 26% and 27% [5]. Cross-cultural research calls for a degree of flexibility in sampling and recruitment, as standard community sampling techniques can be unduly time-consuming and expensive [14]. Careful consideration was therefore given to a sampling strategy that would adequately answer the research question whilst providing information about issues affecting a range of immigrants of diverse cultural and linguistic backgrounds, ages and lengths of stay in Australia. Hence, using purposive sampling methods, key informants closely associated with a range of migrant groups were identified through a research reference group and a comprehensive internet search of services and support groups. Stakeholders from three main sectors (state and local government; non-government organisations and service providers; and migrant and refugee health services) were contacted by telephone and/or email, and interested persons were provided with further information. On occasion the primary contact declined to participate and suggested a secondary contact more experienced in the research topic. An advantage of our sampling strategy was that stakeholders acted as conduits and were able to speak freely about their observations and experiences of barriers and enablers encountered by clients and community members. Snowball sampling also resulted in a convenience sample of members of an Asian community group and a recently arrived refugee family.
Interviews and group interviews/focus groups (with between two and five participants) were conducted in the warm months between December 2011 and April 2012 in a range of venues. In Adelaide, interviews and focus groups were held at the University of Adelaide or on site at the respondents' organisation. In Melbourne, sessions were held at the respondents' place of employment and at a family home, whilst in Sydney sessions took place at venues in the multicultural inner west/western suburbs. Informed, written consent was provided by respondents prior to the commencement of the interviews and confidentiality was assured. For one community group, information sheets and consent forms were translated and a bilingual speaker assisted with the session. The interview topic guide was informed by a literature review and the research reference group comprising a panel of experts. Questions related to experiences with extreme heat in the communities, factors contributing to vulnerability, adaptive behaviours, and knowledge of heat-health warnings, as detailed previously [15]. Whilst the questions for community members were essentially the same as those for other stakeholders, the wording was modified slightly. Respondents were encouraged to use the questions as a guide only and to expand on points of interest. All interviews except one were digitally recorded and subsequently transcribed by either the researcher or an independent service.

Data analysis

Transcripts were imported into the qualitative data analysis software package NVivo 9 (QSR International, Doncaster, Australia). Data for each city were analysed separately using the framework approach. Described by Ritchie and Spencer (1994), framework analysis uses a systematic approach to data management to provide coherence and structure to qualitative data [16,17]. Passages of text representing repeated themes were identified, assigned headings according to the context, and coded to as many relevant categories as possible to reduce the likelihood of missing key points. The data were then synthesised in a chart format using headings identified from the thematic analysis [16]. This approach enhances the rigour, transparency and validity of the analytic process [18]. Analysis was both deductive, with categories derived from prior knowledge, and inductive, with categories emerging purely from the data [19]. Ethics approval for the study was received from the University of Adelaide, Monash University and the South Australian Department of Health. The study adheres to the 'RATS' qualitative research review guidelines for reporting qualitative studies (www.biomedcentral.com/authors/rats) (Additional file 1).

Results

In total there were 36 respondents across the three cities, with the majority being from Adelaide (Table 1). Most were involved in service provision to, and liaison with, clients in CALD communities. Many were migrants themselves (or descendants of immigrants) from Africa, Asia, Europe or the Middle East. Respondents spoke about the barriers and enablers facing some in migrant and refugee communities during periods of extreme heat in Australia, and also commented that some were relatively unaffected.
In-depth narratives revealed the disparities between the communities regarding their abilities to cope with heat, and one respondent spoke of the migrant population not being considered by authorities:

"Extreme heat, it happens every year but nobody thinks of the migrants and how it affects them, … OK, you survive like everybody else but not everybody is prepared the same way for it and not everybody has the resources to manage that time." Coordinator, Adelaide

Eleven emergent and often inter-linking themes were identified from the narratives: 'Cultural factors', 'Fluid intake', 'Health issues', 'Heat is different', 'Housing', 'Language barriers', 'Isolation', 'Low literacy', 'Power costs', 'SES' and 'Transport'. Displayed in Table 2 are the themes from the Adelaide narratives, some of which were reiterated interstate; the first six of these eleven themes also emerged from the Melbourne narratives, whilst from Sydney there were five ('Cultural factors', 'Health issues', 'Housing', 'Language barriers' and 'Power costs'). Another theme identified in each city was 'Who is vulnerable?'

[Table 2. Themes from the narratives with illustrative observations, including cultural norms around drinking water, air conditioning use and dress (e.g. Ramadan and cultural garments worn in hot weather); the unfamiliar dry heat that may not cool down in the evening; assumptions that people from hot climates cope easily; wariness of authorities; basic, crowded or poorly equipped housing, particularly for new arrivals and asylum seekers; dehydration and unfamiliarity with local water; and the use of shopping centres as cool refuges.]

Cultural factors and norms

Although many new arrivals adapt quickly in Australia, some can be unaware of the need for adults and children to dress lightly during the heat to aid thermoregulation. Additionally, some cultural and religious mores at times dictate the wearing of traditional heavy, dark-coloured garments not ideally suited to hot weather. A culture-specific barrier raised in Melbourne and Adelaide by stakeholders from Africa was that people in visible minorities reportedly do not always feel comfortable "hanging around" in cooled spaces such as shopping centres because "you stand out when you're different". By contrast, a refugee from Bhutan stated that going to shopping centres was a more practical alternative for his community than cooling off at swimming pools, because "95% of Bhutanese they don't know how to swim". Cultural differences surrounding preferences for hot food, and being unable to drink between dawn and sunset during the Islamic month of fasting (Ramadan), can be problematic during hot weather. Respondents also highlighted that, due to previous experience in their home countries, many migrants are wary of officials or people in uniform offering assistance, and that access to culturally appropriate emergency health care can be an issue for some women. The practice of Asian women using sun umbrellas to preserve skin colour and of Muslim women wearing culturally appropriate swimwear were mentioned as examples of cultural adaptation to the hot climate. The strong family connections and social networks of migrant groups can be beneficial during the heat, particularly for older people in CALD communities who are cared for by their families. By contrast, a Sydney respondent mentioned that the cultural norm of elders living with their family may not always reduce vulnerability if there is a disincentive for them to use air conditioning because of the added cost to the household.

Health issues and lack of fluid intake

Some migrants and refugees do not drink enough water for reasons which include a dislike of the taste, a lack of awareness about the need to keep hydrated during hot weather, and recollection of poor water quality in refugee camps. However, one respondent stated he drank more water now than when he was in Africa. Another spoke of people who have built up a "resistance" to lack of water because of past experiences and can "go for hours without water". Health care providers and others also spoke of people having insufficient fluid intake, leading to health issues such as kidney stones, gall stones, headaches and constipation. A manager spoke about refugees preferring soft drinks to water as a "sign of affluence" and of the consequent impact on physical and dental health. Another respondent pointed out that promoting water to migrants as the "standard drink" should be encouraged. It was mentioned that for older people a reluctance to drink water can be related to incontinence issues, and that messages about dehydration for the young as well as older people need to be reinforced:

"But I still think a lot of the key messages in keeping hydrated and what to do when working with young children or caring for young children, some of those messages I don't think are still reaching the communities." Diversity Officer, Melbourne

A physician in Adelaide mentioned that people in new and emerging communities can have a range of comorbidities, nutritional deficiencies and mental health issues which can affect vulnerability.
Also mentioned was the mental anguish that can be experienced during periods of extreme heat by being confined to a hot house. Strong descriptive terms such as "emotionally disturbing" and "tormenting" were used. Health issues were also raised by respondents in Melbourne, who spoke about the effect of the dry heat causing people to "feel exhausted and tired" and noted that chronic health conditions influenced vulnerability. Valuable information about heat and its effect on the health of new arrivals was gained from discussion with a refugee family. When asked if very hot weather affects how people feel, the respondent answered passionately that it was "affecting the total health of the people". He spoke about headaches, feeling lazy, itchy skin rashes and sunburn. The respondent expanded on the lack of acclimatisation and underlying health problems that could be contributing factors:

"They came from refugee background so they never had proper amount of nutrition food in their camp life and lack of light so they lack vitamin D as we too so they don't have a high resistance capacity of all those things …. This community is facing a high problem … in refugee camp was some kind of terrible lack of nutritious food, lack of good, er water … and lack of medical capacities." Community member, Melbourne

Heat is different

Many respondents who were migrants commented that the heat in Adelaide and Melbourne was different to that with which they were familiar. They spoke of the dry heat, that the temperature often does not cool down at night, and that sunburn can be an issue. Moreover, a Sydney community worker said that people in her Asian community were not used to wearing sunscreen, as sunburn was rare in their country. A newly arrived community member said he was not aware of the climate in Melbourne before coming to Australia and, compared to Bhutan, he found it "extremely hot, extremely hot". Furthermore, it was mentioned by more than one respondent that Australians stereotypically make assumptions about people from hot countries and their ability to cope with the heat:

"The problem we have as Africans in the heat is that the sun here you can actually feel it burning your skin where [as] … sun [in Africa] does not burn your skin."… "Most Australians think that, especially Africans … are used to heat… But, as I said before, it is a different type of heat…" Health care worker, Adelaide

Socioeconomic status, housing and power costs

Narratives revealed that when migrants and refugees arrive they are often unable to gain employment and can face financial disadvantage. Poor educational attainment makes this quest more difficult for some. Low socioeconomic status (SES) can be linked to poor housing and difficulty in paying utility bills. One respondent also spoke of the sense of "obligation" felt by people in his community to send monetary support to family in their home country, adding to financial stress. Housing was mentioned by most respondents, who said that rental accommodation for migrants is usually very basic, with no air conditioning and often no fans. Sometimes occupants stay in these properties as they age and their vulnerability increases. Compared to Adelaide, there were some differences in Melbourne, where a lower proportion of homes have air conditioning. A program coordinator said that people once thought they could cope with the heat but now, "because of the changing weather and more hot days, people are installing air-conditioning."
A manager from Sydney said that central air conditioning should be standard in Australian homes, as central heating is in Europe. Another respondent spoke of housing issues for two main groups, older people and new arrivals:

"What we have got is -the two different groups who are impacted, the older people are often in old houses that don't have insulation and … the houses aren't good for … the heat. The newly-arrived are in rental properties and often at the lower end of the market too and … don't necessarily insulate their houses for their tenants." Program Coordinator, Sydney

Adelaide has hot summers and the vast majority of homes are air conditioned. In each of the interview sessions in Adelaide, the high cost of power was mentioned as a major barrier to air conditioner usage. This issue was raised to a lesser extent in Sydney, where a notable difference was the numerous clubs and gambling venues offering a cooled, welcoming environment. These are often frequented when the weather is hot, leading to financial stress for gamblers who are then unable to pay their power bills. Rising utility costs are a concern for many, including older migrants and low income earners in the general population, as well as those in new and emerging communities:

"…This is the community in general … the increasing rising costs of electricity is a huge issue and factor…. People still will make that decision consciously not to put their air-conditioner on because they don't want the stress and the worry about getting that bill."

Language barriers and low literacy

Having poor English proficiency can be a barrier during hot weather, and can increase vulnerability and isolation in people unable to access services, receive information or communicate with others. Language barriers can exist not only for new migrants of non-English speaking backgrounds but also, as reported by respondents, for long-standing elderly migrants who may revert to their first language and culture due to age-related neuro-cognitive conditions. Many older migrants who arrived post World War II, and recent humanitarian entrants, have had minimal if any schooling and cannot read well even in their native language. These low literacy levels can also affect the transfer of information, the uptake of heat-health messages and the ability to read safety signs (e.g. at the beach), as mentioned by one respondent. A service manager in Sydney explained that older people in new and emerging communities find it particularly difficult to learn English. Similarly, an Adelaide respondent said this can lead to limited verbal communication, as younger family members who were born and raised in refugee camps often do not speak the traditional dialects of their elders. Furthermore, a refugee family in Melbourne said that being unable to understand the language was a "real barrier" to accessing information about extreme heat. Additionally, language barriers can hamper access to health care:

"If I don't speak English … for example I had someone sick at home -so even if I find a place to help I don't know how to say it how to describe it, what I need."

Isolation and transport issues

Respondents spoke of strong social and family connections in CALD communities; however, as mentioned above, certain factors can lead to individuals or families becoming linguistically or socially isolated. In Adelaide, accessing cooler places can be a problem for people without transport options, thereby adding to social isolation and vulnerability during extreme heat.
Although asylum seekers, humanitarian entrants and others may lack connections in the community, isolation was mainly spoken about in the context of older people.

"And we do have clients that don't have English and they are living on their own and some cases they are the only ones in the country. We even have clients who don't have any other relatives, so that really isolates them." Service provider, Adelaide

Who is vulnerable?

Respondents mentioned that amongst the vulnerable were people from areas in Africa, Bhutan, the Middle East, and the cool European and Scandinavian countries. Also mentioned were asylum seekers, mothers with babies (particularly single mothers), young children, people with low SES and low income, the homeless, people with poor English and the isolated in CALD communities. People with a disability and their carers, people with mental health problems and multiple chronic illnesses, and those taking certain medications were also vulnerable. Most often mentioned, however, were the newly arrived, low-SES migrants and refugees in new and emerging communities who are not acclimatised to the conditions, and older people in migrant communities, especially those who lack English proficiency, as highlighted by these extracts:

"The older ones are particularly vulnerable because of the language and other cultural issues and … - there is an attitude of: we are going to stick it out and cope with it." Program Coordinator, Melbourne

"So I would suggest that the newly-arrived because they don't understand this environment, … they are at a loss about how to cool themselves." Program Coordinator, Sydney

There are some similarities in risk factors for these two groups, as shown in Table 3, which summarises some of the points previously mentioned. Although some respondents stressed the importance of the issue and concerns for their community members in the heat, others thought that in the context of the complexity of issues facing those in the midst of resettlement, weather is unlikely to rank as a priority:

"It comes last for them to know: oh, okay, the sun is burning me or I have to drink water - who cares if I have to drink water or not if I don't have money to pay my bill, you know, that comes not being essential in a priority." Health care worker, Adelaide

Discussion

This qualitative investigation has given voice to stakeholders and people in cultural and linguistic minorities on the topic of extreme heat in Australia. Whilst the definition of 'culturally and linguistically diverse' in Australia is broad, respondents' narratives related mainly to people, or their descendants, who have migrated from countries with cultural differences to Australia and where the main spoken language is not English; hence 'CALD' is used here in these terms. This study draws on previous research in Adelaide recognising a need to investigate potential heat-susceptibility in non-Australian-born residents [20,21], given the paucity of current literature on this topic [22]. It also builds on international evidence that points to a disparity in the risk of heat-related illness in people of different ethnic/racial backgrounds [10,[23][24][25][26]. Findings have identified a range of multi-factorial issues that may hamper some migrants and refugees in adapting to periods of very high to extreme summer temperatures in Australia.
These relate to cultural factors including wearing garments more suited to cool weather, not drinking enough water, and unfamiliarity with certain aspects of Australian culture, including the use of sunscreen. Health issues, socioeconomic disadvantage and poor quality rental accommodation for low income migrants, social isolation, language and literacy barriers limiting access to heat health warning messages, and lack of acclimatisation to the 'different' heat in south-eastern Australia can also increase the potential risk of harm during heat extremes. The vulnerable individuals in CALD communities were often identified as older people, new arrivals (i.e. those who settled in Australia within the last 5 years), and people in new and emerging communities.

Older people in general can have declining physical and mental health that can increase heat-susceptibility. However, they generally do not consider themselves to be at risk [27] and are reluctant to use cooling systems [21]. Older people in new and emerging communities may be doubly at risk, particularly if they lack English proficiency, which can add to isolation and limit access to harm minimisation information. This is mirrored by other studies reporting that ethnic minority language groups can be vulnerable to extreme heat because of exclusion from access to English-based reports and heat information [28,29]. As a consequence there can be a lower uptake of adaptive behaviour messages [23]. Language barriers apply not only to the recently arrived but also to the ageing post-war European migrants who can become nostalgic later in life and revert to their primary culture and language, as described by Schmid and Keijzer [30].

Stakeholders mentioned a range of physical and psychological conditions affecting humanitarian entrants and older migrants. In a Sydney study of access to health care for recently arrived refugee families, it was found that few owned a house or car, nearly all were unemployed, and most did not have functional English language skills [31]. There were also the disadvantages of low literacy skills, financial handicap, language barriers, lack of transport, not knowing where to seek help, and poor health knowledge [31]. These findings parallel the narratives of respondents in this study and highlight the barriers for resettled refugees that can hinder acculturation.

Not being physiologically and behaviourally acclimatised to the local climatic conditions can influence risk [25,32] and can be a factor in heat-related deaths in Australia [33]. Immigrants with different skin colours and pre-migration climatic experiences commented on the different type of heat in Australia. However, this was not the case in Sydney, where humidity is higher during the summer months [34]. Migration-related factors can influence tolerance and adaptation to extreme heat, and it is understandable that newly arrived migrants may suffer in the uniquely dry heat of south-eastern Australia. Turning on home air conditioners, and using air-conditioned cars to drive to cooler places, as practised by most Australian-born families [35], are options unavailable to the financially disadvantaged. This lack of ability to attain thermal comfort during extreme heat has the potential to increase the risk of adverse heat health outcomes. Conversely, using cooling devices is highly protective [36,37]; however, usage is expensive, and we found the high cost of power to be a common barrier mentioned in narratives from Adelaide and Sydney.
Adelaide has the third-highest household electricity costs in the world, behind Denmark and Germany [38], reportedly as a result of the high power demand caused by air conditioner usage during hot weather [39]. Smarter technologies and improvements to housing design are needed to reduce the health impacts of high temperatures [32] and lower the need for home air conditioning. Publicly cooled spaces can be frequented by people not wishing to incur high energy bills at home. Disturbingly, however, there was evidence that for some people in new and emerging communities the risk of being marginalised in public can influence adaptive behaviour and was a deterrent to retreating to shopping centres. This is supported by another study which claims that in a predominantly Caucasian society, "visibility" due to different skin colour, attire or accent can render refugees and others vulnerable to "street discrimination" [40].

Notwithstanding these issues, migrants, by necessity, can be resilient and have a high adaptive capacity, and certain cultural norms and life experiences can be beneficial to the resettlement process. Enablers of heat adaptation include the strong family structures and social networks that exist within collectivist communities. High social capital and having elders live with the family reduce the likelihood of isolation, which is known to be linked to societal vulnerability and is a risk factor for heat-related mortality [23,41]. Simple harm minimisation behaviours can mitigate the health threat posed by extreme heat, but these are not necessarily intuitive, particularly to those who have not long resided in Australia. Multilingual heat-health advisories could be broadcast via a range of ethnic media outlets and community networks during heatwaves to increase awareness about the health risks of heat exposure, including dehydration, and to inform about behaviours that minimise the risk of harm in the heat [22]. Furthermore, a better understanding and knowledge of effective health promotion measures within collectivist societies, and of the influence of cultural practices and sensitivities on health outcomes, will better inform population health programs and services [42].

This study has several limitations. Sample sizes were relatively small and there were few interviews at the community level. However, this scoping study has laid the foundations for a further study currently being undertaken involving community members. The migrant population of Australia is vastly heterogeneous and findings are not intended to be generalisable beyond the scope of the study. Findings may reflect problems that exist in only a minority of migrants and refugees if recruitment inadvertently resulted in a biased sample. Indeed, among immigrants arriving as part of the skilled migration program, employment rates and English proficiency can be high [43], and it would therefore be less likely that heat risks in this group would differ from those of the equivalent Australian-born population. Nevertheless, this study has given voice to those who have expressed genuine concerns about the potential impact of extreme heat on the disadvantaged with cultural and linguistic vulnerabilities, and an unmet need for access to appropriate information about adaptive behaviours. Further qualitative and quantitative research is required to investigate potential disparities in the impacts of extreme heat on minority groups in Australia.
Successful Thrombolysis of a Thrombosed Mechanical Aortic Valve Prosthesis Using a Slow Intravenous Tenecteplase Infusion

Introduction

Prosthetic valve thrombosis (PVT) is considered a serious complication following mechanical valve replacement. It may result in disabling peripheral thromboembolism and life-threatening deterioration in a patient's hemodynamic status unless dealt with appropriately and promptly. Treatment of PVT includes administration of thrombolytic agents or surgery. Various thrombolytic treatments, including streptokinase, urokinase and recombinant tissue plasminogen activators, have been reported with variable success rates. However, the data on the use of Tenecteplase (a synthetic tissue plasminogen activator) are limited. We report here the case of a 31-year-old male patient with aortic PVT that was successfully treated with a slow intravenous tenecteplase infusion, restoring complete valve function.

Case Report

The patient had undergone mechanical aortic valve replacement for aortic regurgitation and was on an oral anticoagulant (warfarin), maintaining a therapeutic international normalized ratio (INR) during follow-up. He presented 3 years later with progressively worsening dyspnea of New York Heart Association (NYHA) class II of one month's duration. He admitted to not taking his oral anticoagulation for two months prior to presentation. On examination, vital signs were stable, with an absent ejection click and a grade 4/6 systolic murmur. Transthoracic echocardiogram revealed decreased mobility of one leaflet, with peak and mean aortic valve gradients of 140 mm Hg and 90 mm Hg, respectively (Figure 1). Left ventricular function was normal. Fluoroscopy showed severely restricted mobility of the leaflet. His INR was normal.
After discussing various options, we elected to treat him with a slow intravenous infusion of Tenecteplase (Metalyse, Boehringer Ingelheim), 35 mg diluted in 50 ml normal saline over 24 hours, to be repeated according to the patient's clinical and echocardiographic response. No bolus doses were given. Following thrombolysis, an unfractionated heparin infusion and warfarin were commenced until the INR was therapeutic. Low-dose aspirin was added to the patient's therapy. After thrombolysis, the patient subjectively felt relieved within 6 hours, the prosthetic valve click became sharp, and the systolic murmur decreased to grade 1/6. We continued thrombolysis for a further 24 hours. After 48 hours his vitals were stable, the click was well audible and his murmur had disappeared. Two-dimensional echo showed full mobility of the aortic valve leaflets, and the aortic valve peak and mean gradients had decreased to 22 mmHg and 12 mmHg, respectively (Figure 2). Fluoroscopy showed completely mobile prosthetic aortic leaflets. The patient was discharged home with no complication 5 days later.

Discussion

PVT is defined as any obstruction of a prosthesis by non-infective thrombotic material. It has an estimated incidence of 0.03%-4.3% per year [1] and is reported to occur in 0.5%-8% of left-sided prosthetic valves and in up to 20% of tricuspid prostheses. The most common cause of PVT is inadequate anticoagulation therapy [1]. Lengyel et al considered thrombolysis as the first line of treatment for obstructive PVT, independent of NYHA class and thrombus size, if there are no contraindications [3]. On the other hand, in a recent series of 210 patients reported by Roudant, surgical treatment was associated with significantly better long-term results in terms of recurrence and mortality and a lower incidence of embolic complications, which reached 15% in the fibrinolysis group (vs. 0.7% in the surgery group) [4]. The fibrinolytic agents used for treatment of PVT are streptokinase, urokinase and recombinant tissue plasminogen activator (alteplase). The newer fibrinolytic agent Tenecteplase is a synthetically engineered variant of alteplase designed to have increased fibrin specificity, greater efficacy, increased resistance to plasminogen activator inhibitor-1 (PAI-1) and a longer half-life. It has been used extensively in acute myocardial infarction, but there are anecdotal reports of its use in the treatment of mitral and aortic PVT. Charokopos et al were the first to publish a report of Tenecteplase for aortic PVT [5]. Our patient was symptomatic for 1 month. As Tenecteplase is more fibrin specific and easy to administer, and we had previously used it successfully in mitral valve thrombosis [6], we felt it was a good choice. Had thrombolysis failed, re-exploration and redo aortic valve replacement was planned. To our knowledge this is a newly described dose regimen of Tenecteplase for the thrombolysis of a thrombosed prosthetic aortic valve, given without any intravenous bolus.

PVT can be successfully treated with Tenecteplase. More experience with its use and the rate of its administration might establish its role as the thrombolytic of choice in the management of PVT.

Figure 1: Pre-treatment transthoracic echocardiogram showing decreased mobility of one leaflet, with peak and mean aortic valve gradients of 140 and 90 mm Hg, respectively.

Figure 2: Post-treatment transthoracic echocardiography showing full mobility of the aortic valve leaflets, with peak and mean gradients decreased to 22 mmHg and 12 mmHg, respectively.
Factorization for Jet Radius Logarithms in Jet Mass Spectra at the LHC

To predict the jet mass spectrum at a hadron collider it is crucial to account for the resummation of logarithms between the transverse momentum of the jet and its invariant mass $m_J$. For small jet areas there are additional large logarithms of the jet radius $R$, which affect the convergence of the perturbative series. We present an analytic framework for exclusive jet production at the LHC which gives a complete description of the jet mass spectrum including realistic jet algorithms and jet vetoes. It factorizes the scales associated with $m_J$, $R$, and the jet veto, enabling in addition the systematic resummation of jet radius logarithms in the jet mass spectrum beyond leading logarithmic order. We discuss the factorization formulae for the peak and tail region of the jet mass spectrum and for small and large $R$, and the relations between the different regimes and how to combine them. Regions of experimental interest are classified which do not involve large nonglobal logarithms. We also present universal results for nonperturbative effects and discuss various jet vetoes.

Introduction

The field of jet substructure has continued to expand over the past few years, providing valuable tools to study processes in the challenging environment at the LHC [1][2][3]. This is e.g. due to the fact that massive resonances (top quarks, W bosons, etc.) which may be part of a new physics signal are often boosted, and the discrimination of their collimated decay products from QCD jets relies critically on jet substructure techniques. This field has flourished due to the excellent performance of the ATLAS and CMS detectors and the development of new substructure techniques. Theoretically one has to predict the dynamics and distribution of radiation inside jets produced by different particles. Most theoretical studies still rely strongly on Monte Carlo parton showers, which are limited in their precision. However, there has been a recent push toward developing analytic frameworks which provide theoretical uncertainties and put predictions on a firmer footing. Such calculations may also suggest ways to improve observables, see e.g. refs. [4][5][6]. While the description of jets originating from the decay of highly boosted massive particles (e.g. for $pp \to Z(\to \ell\bar\ell)\, Z(\to 1~\mathrm{jet})$) can be carried out to high precision with standard methods (see e.g. ref. [7]), the associated process with the jet originating from color-correlated emissions (e.g. for $pp \to Z(\to \ell\bar\ell) + 1$ jet) is much more difficult to handle analytically. A basic and important benchmark observable for studying the radiation inside a jet is the invariant mass $m_J$ of a jet, given by the square of the total four-momentum of the jet constituents, $m_J^2 = \big(\sum_{i \in J} p_i^\mu\big)^2$. The jet mass spectrum provides key information about the influence of Sudakov double logarithms and soft radiation in a hadronic environment, and in particular probes the dependence on the jet algorithm and jet size $R$, color flow, initial and final state partonic channels, hadronization, and underlying event. The best sensitivity to these effects comes from studying jets in their primal state, without using jet-grooming techniques to change the nature of the jet constituents. While useful for tagging studies, jet grooming fundamentally changes the nature of the jet mass observable, and is known to reduce its utility as a probe of these physical effects [4,8,9].
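As a minimal numerical illustration of this definition (ours, not from the paper; the function name is hypothetical), the jet mass is just the Minkowski norm of the summed constituent four-momenta:

```python
import math

def jet_mass(constituents):
    """Invariant jet mass m_J from constituent four-momenta (E, px, py, pz),
    m_J^2 = (sum_i p_i)^2 with the mostly-minus metric."""
    E = sum(p[0] for p in constituents)
    px = sum(p[1] for p in constituents)
    py = sum(p[2] for p in constituents)
    pz = sum(p[3] for p in constituents)
    m2 = E**2 - px**2 - py**2 - pz**2
    return math.sqrt(max(m2, 0.0))  # guard against tiny negative m2 from rounding

# two massless constituents with a small opening angle -> small jet mass
p1 = (100.0, 0.0, 0.0, 100.0)
p2 = (50.0, 5.0, 0.0, math.sqrt(50.0**2 - 25.0))
print(jet_mass([p1, p2]))
```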
In the past few years, several analytic ungroomed jet mass calculations for hadron colliders have been carried out [10][11][12][13][14][15]. In ref. [10], the inclusive jet mass spectrum in $pp \to 2$ jets and $Z + 1$ jet was calculated at next-to-leading-logarithmic (NLL) order. In ref. [11], next-to-next-to-leading-logarithmic (NNLL) order results were obtained for $pp \to \gamma + 1$ jet, by examining the jet mass spectrum while expanding around a threshold limit. A similar setup was used in ref. [14] to obtain the jet mass spectrum for $pp \to$ dijets. In ref. [12], the jet mass spectrum was directly calculated for $pp \to H + 1$ jet at NNLL order, where a veto on additional jets was imposed to obtain an exclusive 1-jet sample. The utility of the first moment of the jet mass spectrum as a mechanism to disentangle different sources of soft radiation underlying the hard interaction was discussed in ref. [13]. Recently, in ref. [15] the study of jet mass was extended to angularities for $pp \to 2$ jets at NLL$'$, with an exclusive 2-jet sample without a veto beyond a certain rapidity cut.

In this paper, we improve the analytic description of jet mass spectra at the LHC by systematically taking the effects of realistic jet algorithms into account with factorization formulae. In particular, for small jet sizes the exclusive $N$-jet cross section contains Sudakov double logarithms of the jet radius $R$, in conjunction with logarithms of the jet mass and jet veto, and our results enable their resummation at any perturbative order. This allows in particular for NNLL resummation using known anomalous dimensions and the relations provided here. This factorization in the small-$R$ regime is our main focus. We also consider the tail of the jet mass spectrum, where the $R$ dependence is important because of the kinematic bound $m_J \lesssim p_T^J R$, where $p_T^J$ is the transverse momentum of the jet. For definiteness, we consider the jet mass spectrum for $pp \to L + 1$ jet, where $L$ is a hard color-singlet state (e.g. $\gamma$, $W$, $Z$, $H$) recoiling against the jet. The jet region is determined by a factorization-friendly jet algorithm like anti-$k_T$ clustering [16] or the $N$-jettiness partitioning used in XCone [17,18], with a jet radius parameter $R$ controlling its size. The hard signal jet of interest is uniquely identified by imposing a veto on additional jets, for which we consider a range of possibilities, including beam thrust [19] and the standard $p_T$ jet veto. Although jet mass measurements typically use $R \approx 1$, see e.g. refs. [20,21], we will find that the $\mathcal{O}(\alpha_s)$ corrections for $m_J \ll p_T^J R$ are still well approximated by the small-$R$ result, such that the actual expansion parameter is rather $(R/R_0)^2$ with $R_0 \simeq 2$. Throughout the paper we will often leave the factors of $R_0$ implicit when indicating that there are power corrections of $\mathcal{O}(R^2)$ and logarithms $\ln R$.

To treat the small-$R$ effects, we build on the recent work of ref. [22], which discussed the systematic resummation of jet radius logarithms for $e^+e^- \to 2$ cone jets with an energy veto on the radiation outside the jets. This process was also studied in ref. [23] using a similar SCET framework. It was found that the resummation of jet-radius logarithms requires an extension of Soft-Collinear Effective Theory (SCET) [24][25][26][27], most often called SCET$_+$, which contains additional modes that are simultaneously collinear and soft [28][29][30][31]. Recently, in ref. [15] the $\ln R$ resummation of ref. [22] was extended to $pp \to$ dijets away from the endpoint of the angularity distribution.
Note that the resummation of jet radius logarithms at leading logarithmic order was also developed earlier in ref. [32] for several types of jet observables, including the inclusive jet spectrum. However, for these observables the structure of logarithms is different than for the jet mass measurements considered here, since no Sudakov double logarithms of the jet radius (of the identified hard jet) arise. For the inclusive jet spectrum the small-$R$ expansion also works well for $R \lesssim 1$, as recently discussed in ref. [33].

To organize our discussion, we divide the treatment of jet mass and jet radius into several distinct cases. As illustrated in fig. 1, one can distinguish four different regimes with different hierarchies between $R$ and $R_0$ and the scales $m_J$ and $p_T^J R$:

1. $m_J \ll p_T^J R$ with $R \sim R_0$ (large-$R$ jets, bulk of the spectrum),
2. $m_J \ll p_T^J R$ with $R \ll R_0$ (small-$R$ jets, bulk of the spectrum),
3. $m_J \sim p_T^J R$ with $R \ll R_0$ (small-$R$ jets, tail of the spectrum),
4. $m_J \sim p_T^J R$ with $R \sim R_0$ (large-$R$ jets, tail of the spectrum).

All of these require distinct factorization formulae to resum the corresponding large logarithms. Specifically, in regimes 1 and 2 these are logarithms of $m_J/p_T^J$, and in regimes 2 and 3 logarithms of $R/R_0$. We also discuss how to appropriately combine these regimes to obtain a complete description for any value of $m_J$ and $R$.

In carrying out jet mass resummation, an additional complication is that the restrictions on the radiation inside and outside the jet imposed by the measurement lead to nonglobal (NG) structures. If the kinematic scales related to these constraints are widely separated, the nonglobal contributions can contain parametrically large nonglobal logarithms (NGLs) [34]. In ref. [10] the NGLs were resummed in the large-$N_c$ approximation and found to be significant in the peak region for the inclusive jet calculation considered there. Although NGLs were not resummed in ref. [11], their estimated size agreed with ref. [10]. In contrast, if a veto on additional hard jets is imposed, it changes the structure of the nonglobal terms, providing regions of phase space where NGLs are not large and other regions where they are [12]. (The mitigation of NGLs through additional measurements was first addressed in ref. [35].) The NGLs may still have a sizeable relative impact for unnormalized spectra, but in the factorization framework their effects on the small-$m_J$ spectrum are tamed by having the same Sudakov suppression as all other terms. For normalized spectra the dependence on the jet veto largely drops out and the effects due to NG structures remain moderate. In particular, in regime 1 with $R \sim R_0$ and a range of jet-veto scales there are no large NGLs over the majority of the jet-mass spectrum [12]. We will see that in regime 2 with $R \ll R_0$ large NGLs can similarly be avoided. However, the associated parametric condition on the jet veto cannot be satisfied over the full jet mass spectrum including the far tail of the spectrum corresponding to regime 3. On the other hand, we will demonstrate that in regimes 2 and 3 the leading NGLs are simply those of hemisphere soft functions, which have been studied extensively in the literature, see e.g. refs. [36][37][38]. This has also been seen in the explicit $\mathcal{O}(\alpha_s^2)$ computation for jet shapes in the small-$R$ limit in ref. [39]. Approaches for their resummation beyond the large-$N_c$ leading logarithms [40] have been developed recently, see refs. [30,[41][42][43][44], and can be directly applied to our case.

The outline of the paper is as follows: In sec. 2, we present the factorized cross sections relevant for regimes 1, 2, and 3, focusing on the case of a global generalized beam thrust jet veto.
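To make the regime classification above concrete, here is a small sketch (ours, with illustrative numerical thresholds that are not taken from the paper) mapping a point $(m_J, p_T^J, R)$ to one of the four regimes:

```python
def classify_regime(mJ, pTJ, R, R0=2.0, ratio_cut=0.3):
    """Rough classification of the jet mass regimes discussed in the text.
    'ratio_cut' and 'R0' are illustrative choices: 'small' means the ratio is
    well below one, otherwise the two scales are treated as comparable."""
    small_mass = mJ < ratio_cut * pTJ * R   # m_J << pT^J * R
    small_R = R < ratio_cut * R0            # R << R0
    if small_mass and not small_R:
        return 1   # m_J << pT^J R, R ~ R0
    if small_mass and small_R:
        return 2   # m_J << pT^J R, R << R0
    if not small_mass and small_R:
        return 3   # m_J ~ pT^J R, R << R0
    return 4       # m_J ~ pT^J R, R ~ R0

print(classify_regime(mJ=30.0, pTJ=400.0, R=0.4))  # -> 2
```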
We also discuss the relations among the regimes and their combination, and briefly comment on regime 4. The definitions and one-loop expressions of the relevant ingredients are discussed in sec. 3, with calculational details relegated to app. B. In sec. 3, we also validate the relations between the factorization formulae, discuss the leading nonperturbative effects, and compare the predictions of our factorization framework with earlier jet mass calculations. We discuss the extension to transverse energy/momentum vetoes and jet-based vetoes in sec. 4, including a study of the small-$R$ expansion of the fixed-order cross section at $\mathcal{O}(\alpha_s)$, and conclude in sec. 5. Consistency of RG running is exploited in app. A to determine the anomalous dimensions which allow for the NNLL resummation of jet mass, jet radius, and jet veto logarithms.

Factorization for jet mass with jet radius effects

To study the jet radius dependence in a jet mass spectrum we consider exclusive $pp \to L + 1$ jet processes with the hard jet recoiling against a generic color-singlet state $L$. We first summarize the basic setup and kinematics of the process in sec. 2.1. We then discuss the modes of the relevant EFT setup and present the associated factorization formulae for each regime in turn. In sec. 2.2, we review the jet mass spectrum for $m_J \ll p_T^J R$ and large-$R$ jets [12] (regime 1), which can be described with standard SCET. In sec. 2.3, we discuss regime 2, where still $m_J \ll p_T^J R$ but now with narrow jets, $R \ll R_0$, which is described using SCET$_+$. The region where the jet mass spectrum turns off, i.e. $m_J \sim p_T^J R$, is discussed for small-$R$ jets (regime 3) in sec. 2.4, and briefly for large-$R$ jets (regime 4) in sec. 2.5. In sec. 2.6, we show how the theories for these different hierarchies are related to each other, the relations this implies between the ingredients of the factorization formulae, and how to systematically combine the latter including all relevant kinematic power corrections. The modes and corresponding logarithms appearing for regimes 1, 2, and 3 are summarized in table 1, and their relations and scaling are illustrated in fig. 2.

Kinematics and measurements

The hard (Born) kinematics of the exclusive $pp \to L + 1$ jet process is characterized by five independent variables, which we choose to be the jet transverse momentum $p_T^J$, jet pseudorapidity $\eta_J$, azimuthal angle $\phi_J$ of the jet, and the rapidity $Y_L$ and total invariant mass $q_L^2$ of the recoiling color-singlet state $L$. The Born-level momentum conservation (corresponding to the label-momentum conservation in the EFT) is given by
$$\omega_a\, \frac{n_a^\mu}{2} + \omega_b\, \frac{n_b^\mu}{2} = q_L^\mu + p_J^\mu\,, \qquad \omega_{a,b} = x_{a,b}\, E_{\rm cm}\,,$$
where $E_{\rm cm}$ is the hadronic center-of-mass energy, and the directions of beams $a$ and $b$ are denoted as $n_a^\mu = (1, \hat z)$ and $n_b^\mu = (1, -\hat z)$. In terms of the hard kinematic variables, the momentum components can be written as
$$p_J^\mu = p_T^J\, (\cosh\eta_J,\ \cos\phi_J,\ \sin\phi_J,\ \sinh\eta_J)\,, \qquad \omega_{a,b} = p_T^J\, e^{\pm\eta_J} + \sqrt{q_L^2 + (p_T^J)^2}\; e^{\pm Y_L}\,.$$
Here, $Q = \sqrt{\omega_a \omega_b}$ is the invariant mass of the $L$+jet system and is a derived quantity with our choice of independent variables. Note that in the hard kinematics the jet (label) momentum is represented by a massless four-vector $p_J^\mu$. For future convenience, we introduce the shorthand $\mathrm{d}\Phi$ for the hard phase space measure over these five variables. In the following, we always assume that the jet is hard and not too forward, i.e. $p_T^J \sim Q$ and $e^{|\eta_J|} \sim 1$. The factorization in the case $p_T^J \ll Q$, where the jet is soft or close to one of the beams, can be performed using SCET$_+$ as in refs. [28,30,31,45] for large-$R$ jets, and could be extended to narrow jets by combining it with the setup discussed in this paper.
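A quick numerical check of the Born kinematics reconstructed above (the coordinate conventions and function names are ours): building $p_J^\mu$ and $q_L^\mu$ from the five hard variables and verifying $Q^2 = (q_L + p_J)^2 = \omega_a \omega_b$ when the transverse momenta balance:

```python
import math

def four_vec_jet(pT, eta, phi):
    """Massless jet four-momentum (E, px, py, pz) from (pT, eta, phi)."""
    return (pT * math.cosh(eta), pT * math.cos(phi), pT * math.sin(phi), pT * math.sinh(eta))

def four_vec_L(pT, YL, qL2):
    """Color-singlet recoil with invariant mass squared qL2, balancing the jet pT."""
    mT = math.sqrt(qL2 + pT**2)   # transverse mass of L
    return (mT * math.cosh(YL), -pT, 0.0, mT * math.sinh(YL))

pT, eta, YL, qL2 = 400.0, 1.2, 0.3, 91.2**2
pJ, qL = four_vec_jet(pT, eta, 0.0), four_vec_L(pT, YL, qL2)
tot = [a + b for a, b in zip(pJ, qL)]
wa, wb = tot[0] + tot[3], tot[0] - tot[3]   # omega_a = E + pz, omega_b = E - pz
Q2 = tot[0]**2 - tot[1]**2 - tot[2]**2 - tot[3]**2
print(wa * wb, Q2)   # the two agree, since the total transverse momentum vanishes
```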
We assume that the shape of the jet region is determined by a jet algorithm which clusters collinear radiation first before assigning soft radiation to either the jet or the beam region, with a jet radius parameter $R$ controlling its size. This includes both the anti-$k_T$ algorithm as well as XCone [17,18], based on $N$-jettiness minimization [46,47]. For these jet algorithms, narrow jets are all roughly circular, and deviations are power suppressed in $R$. We will present results for a jet radius defined in $(\eta, \phi)$ coordinates.

The jet mass measurement is encoded in the observable $\mathcal{T}_J$, which at leading power is related to the jet mass by $m_J^2 = \omega_J\, \mathcal{T}_J$ with $\omega_J = 2\, p_T^J \cosh\eta_J$. In the following we write the 1-jet cross section with additional kinematic constraints $X$ (e.g. in terms of bins in $p_T^J$ and $\eta_J$, and with cuts on the final color-singlet state $L$) as $\mathrm{d}\sigma(X)$. The sum over the partonic channels $\kappa = \{\kappa_a, \kappa_b; \kappa_J\}$ runs over all flavors of the colliding partons and the energetic parton initiating the jet. We can write the full cross section in terms of the resummed leading power ("singular") cross sections in SCET, denoted by $\mathrm{d}\sigma_{1,2,3}$ in regimes 1, 2, and 3, and their respective power-suppressed ("nonsingular") corrections. In each regime, we will present a factorization formula for the singular part of the cross section and give the parametric size of the associated nonsingular corrections. The singular cross sections can be easily rewritten to be differential in $m_J$ rather than $\mathcal{T}_J$ by taking into account a simple Jacobian factor,
$$\frac{\mathrm{d}\sigma}{\mathrm{d}m_J} = \frac{2\, m_J}{\omega_J}\; \frac{\mathrm{d}\sigma}{\mathrm{d}\mathcal{T}_J}\bigg|_{\mathcal{T}_J = m_J^2/\omega_J}\,. \tag{2.14}$$

Regime 1: Large-$R$ jets with $m_J \ll p_T^J R$

In regime 1, the final-state $n_J$-collinear radiation that builds up the jet mass is constrained by the $\mathcal{T}_J$ measurement,
$$n_J\text{-collinear:} \quad (n_J\!\cdot\! p,\ \bar n_J\!\cdot\! p,\ p_{\perp J}) \sim \omega_J\, \Big(\frac{\mathcal{T}_J}{\omega_J},\ 1,\ \sqrt{\frac{\mathcal{T}_J}{\omega_J}}\Big)\,.$$
Similarly, the scaling of the collinear initial-state radiation is fixed by the hard momentum $Q \sim p_T^J$ it carries and the measurement constraint from $\mathcal{T}_B$. In terms of light-cone coordinates along the beam axis,
$$n_a\text{-collinear:} \quad (n_a\!\cdot\! p,\ \bar n_a\!\cdot\! p,\ p_\perp) \sim Q\, \Big(\frac{\mathcal{T}_B}{Q},\ 1,\ \sqrt{\frac{\mathcal{T}_B}{Q}}\Big)\,,$$
and analogously for the $n_b$-collinear mode. The soft radiation is isotropic and communicates between the collinear radiation along the beams and the jet. Its momentum scaling is determined by the fact that it is constrained by either the $\mathcal{T}_J$ measurement in the jet region or the jet veto in the beam region,
$$\text{soft:} \quad p^\mu \sim \mathcal{T}_J \sim \mathcal{T}_B\,.$$

Table 1. Summary of the EFT mode setup, the resummed logarithms, and the potentially large nonglobal logarithms for the different regimes. For all regimes we take $\mathcal{T}_B \ll p_T^J$. By default, we consider the situation where the listed NGLs are not large logarithms in regimes 1 and 2. In a situation where these logarithms become large, the corresponding soft and $n_J$-csoft modes split into multiple modes, as indicated in fig. 2.

The nonglobal structure is governed by the ratio $\mathcal{T}_B/\mathcal{T}_J$, and to derive a factorization formula we must assume a power counting for $\mathcal{T}_J$ relative to $\mathcal{T}_B$. Phenomenologically the most important hierarchy is $\mathcal{T}_B \sim \mathcal{T}_J$, as it can be applied to a large region of parameter space, and hence we will focus on this case. In this situation there is a single soft mode and the NGLs are not larger than other nonglobal contributions, all of which are fully captured by the soft function. Large NGLs appear when $\mathcal{T}_B/\mathcal{T}_J \gg 1$ or $\mathcal{T}_B/\mathcal{T}_J \ll 1$, arising from the sensitivity to two parametrically different soft scales (which are conceptually more difficult than the case we treat). Going through the usual steps, where the hard scattering interaction is integrated out when matching onto SCET$_{\rm I}$ and the modes are subsequently decoupled in the Lagrangian, leads to the following factorization formula for the singular part of the cross section [12,19,46]:
$$\frac{\mathrm{d}\sigma_1(X)}{\mathrm{d}\mathcal{T}_J\, \mathrm{d}\mathcal{T}_B} = \sum_\kappa \int \mathrm{d}\Phi\; H_\kappa(\Phi, \mu) \int \mathrm{d}s_a\, \mathrm{d}s_b\, \mathrm{d}s_J\; B_{\kappa_a}(s_a, x_a, \mu)\, B_{\kappa_b}(s_b, x_b, \mu)\, J_{\kappa_J}(s_J, \mu)\; S_\kappa\Big(\mathcal{T}_J - \frac{s_J}{\omega_J},\ \mathcal{T}_B - \frac{s_a}{\omega_a} - \frac{s_b}{\omega_b},\ \eta_J, R, \mu\Big) \Big[1 + \mathcal{O}\Big(\frac{\mathcal{T}_J}{p_T^J}, \frac{\mathcal{T}_B}{p_T^J}\Big)\Big]\,. \tag{2.17}$$
For active-parton scattering this factorization formula does not include contributions from perturbative Glauber gluon exchange that start at $\mathcal{O}(\alpha_s^4)$ [53,54]. These terms can be simply calculated and included using the Glauber operator framework of Ref.
[55], which will modify the structure of the product of beam functions. The $\mathcal{O}(\mathcal{T}_J/p_T^J, \mathcal{T}_B/p_T^J)$ terms indicated on the last line are nonsingular corrections, which may be included with fixed-order perturbation theory or by connecting to a factorization formula in regime 4.

The hard function $H_\kappa$ in eq. (2.17) contains the short-distance matrix element for producing the nonhadronic $L$ plus a jet and depends on the hard kinematic phase space $\Phi$. The beam functions describe the process of extracting a parton out of the proton and the formation of an initial-state jet characterized by the scale $s_{a,b} \sim Q\, \mathcal{T}_B$. The inclusive jet function describes the invariant mass contribution $s_J \sim m_J^2$ of the final-state collinear radiation to the jet mass and is not sensitive to the jet boundary since $m_J \ll p_T^J R$. Finally, the soft function $S_\kappa$ captures the soft radiation effects and depends on the angles between the collinear directions (and thus the pseudorapidity of the jet $\eta_J$), the jet boundary determined by the jet algorithm and jet radius $R$, as well as the jet and beam measurements with the jet veto specified by $f_B(\eta)$ in eq. (2.10).

The factorization formula enables the resummation of the logarithms of $\mathcal{T}_J/p_T^J$ and $\mathcal{T}_B/p_T^J$ corresponding to the ratios between the hard, beam, and soft scales. Each function only involves a single parametric scale, corresponding to the typical virtuality of that mode. By evaluating each function at its natural scale and evolving them to a common scale $\mu$ using the RG evolution, the logarithms are resummed, i.e. schematically
$$F(\mu) = U_F(\mu_F, \mu) \otimes F(\mu_F) \quad \text{for} \quad F \in \{H, B_a, B_b, J, S\}\,. \tag{2.18}$$
The evolution factors $U_F$ for the individual functions are the solutions of the renormalization group equations, which read e.g. for the soft function
$$\mu\, \frac{\mathrm{d}}{\mathrm{d}\mu}\, S_\kappa(\mathcal{T}_J, \mathcal{T}_B, \mu) = \int \mathrm{d}\mathcal{T}_J'\, \mathrm{d}\mathcal{T}_B'\; \gamma_S^\kappa(\mathcal{T}_J - \mathcal{T}_J', \mathcal{T}_B - \mathcal{T}_B', \mu)\, S_\kappa(\mathcal{T}_J', \mathcal{T}_B', \mu)\,. \tag{2.19}$$
The explicit expressions for the evolution factors and anomalous dimensions can be found in the appendix of Ref. [12]. Using eq. (2.18) with the factorized cross section in eq. (2.17), the logarithms $\ln(\mu_H/\mu_B)$, $\ln(\mu_H/\mu_J)$, $\ln(\mu_B/\mu_S)$ and $\ln(\mu_J/\mu_S)$ are resummed, and the dependence on the final renormalization scale $\mu$ cancels exactly at any resummed order due to consistency of RG running.

By RGE consistency of the factorization formula the anomalous dimension for the soft function factorizes, as discussed in Refs. [12,56]. For the case considered here, this consistency gives (schematically) $\gamma_H^\kappa + \gamma_{B_a}^\kappa + \gamma_{B_b}^\kappa + \gamma_J^\kappa + \gamma_S^\kappa = 0$. This uniquely assigns the $\mathcal{T}_B$- and $\mathcal{T}_J$-dependent cusp terms in the anomalous dimension to the beam and jet. The remaining $\delta(\mathcal{T}_J)\, \delta(\mathcal{T}_B)$ noncusp terms can also be factorized, but the precise division requires more care, as discussed below. Together this yields
$$\gamma_S^\kappa(\mathcal{T}_J, \mathcal{T}_B, \mu) = \gamma_{S^{(J)}}^\kappa(\mathcal{T}_J, \mu)\, \delta(\mathcal{T}_B) + \gamma_{S^{(B)}}^\kappa(\mathcal{T}_B, \mu)\, \delta(\mathcal{T}_J)\,. \tag{2.22}$$
Here $\gamma^\kappa_{S^{(B)}}$ and $\gamma^\kappa_{S^{(J)}}$ each depend on the jet boundary and jet radius $R$, but this dependence cancels in the sum. Solving the RGE with these factorized anomalous dimensions allows us to factorize the soft function together with its evolution as in eq. (2.23). Here we have decomposed the full soft function as
$$S_\kappa = S_\kappa^{(J)} \otimes S_\kappa^{(B)} \otimes S_\kappa^{(\mathrm{NG})}\,, \tag{2.24}$$
with $S_\kappa^{(J)}$ being analogous to the regional soft function in ref. [11] (where it was applied to cone jets). This fixes the ambiguity in splitting the noncusp one-loop anomalous dimension $\gamma_S^{\kappa(\delta)}$ into distinct contributions to $\gamma^\kappa_{S^{(J)}}$ and $\gamma^\kappa_{S^{(B)}}$ in eq. (2.22). Here $\gamma^\kappa_{S^{(J)}}$ and $\gamma^\kappa_{S^{(B)}}$ can be given in terms of $R$-dependent integrals for generic jet algorithms. Starting at two loops, there are correlated real emissions into both the jet and beam region, which are thus constrained by both the jet and beam measurements, and these lead to nonglobal structures. In eq. (2.24) they are absorbed into the $\mu$-independent factor $S_\kappa^{(\mathrm{NG})}$.
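As a numerical toy for the evolution factors just described (ours; the one-loop running and the LL cusp normalization for a quark are standard, but the scale choices are illustrative), the double-logarithmic part of a kernel $K(\mu_0, \mu) = \int_{\mu_0}^{\mu} \frac{\mathrm{d}\mu'}{\mu'}\, \Gamma_{\rm cusp}[\alpha_s(\mu')]\, \ln\frac{\mu'}{\mu_0}$ can be evaluated by simple quadrature:

```python
import math

BETA0 = 11 - 2 * 5 / 3.0        # one-loop beta coefficient, nf = 5
ALPHA_MZ, MZ = 0.118, 91.1876

def alpha_s(mu):
    """One-loop running coupling from alpha_s(mZ)."""
    return ALPHA_MZ / (1 + ALPHA_MZ * BETA0 / (2 * math.pi) * math.log(mu / MZ))

def K_cusp(mu0, mu, n=2000):
    """K(mu0, mu) with the LL quark cusp Gamma_cusp = C_F * alpha_s / pi,
    integrated with the midpoint rule in ln(mu')."""
    CF = 4.0 / 3.0
    lo, hi = math.log(mu0), math.log(mu)
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        t = lo + (i + 0.5) * h
        total += CF * alpha_s(math.exp(t)) / math.pi * (t - lo) * h
    return total

# double-log exponent between a soft scale of 10 GeV and a hard scale of 400 GeV
print(K_cusp(10.0, 400.0))
```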
At this order, the decomposition in eq. (2.24) becomes ambiguous without additional input, since correlated emissions must be considered simultaneously with single-region emissions when defining $S_\kappa^{(J)}$ and $S_\kappa^{(B)}$, which is known for the double hemisphere case [36,37]. In regime 2 for $R \ll 1$, we can use symmetry arguments to constrain the small-$R$ terms of $S_\kappa^{(J)}$. Large NGLs arise for $\mathcal{T}_B \gg \mathcal{T}_J$ or $\mathcal{T}_J \gg \mathcal{T}_B$, and the resummation for these cases requires techniques other than the renormalization group evolution described above. The refactorization in eqs. (2.23) and (2.24) is essential to avoid introducing "fake" NGLs $\sim \alpha_s^n \ln^{2n}(\mu_S)$ at leading logarithmic order [12]. After this refactorization, the canonical relationships between the scales in regime 1 are given by
$$\mu_{S^{(J)}} \simeq \mathcal{T}_J\,, \qquad \mu_{S^{(B)}} \simeq \mathcal{T}_B\,, \qquad \mu_J^2 \simeq \omega_J\, \mathcal{T}_J \simeq m_J^2\,, \qquad \mu_B^2 \simeq Q\, \mathcal{T}_B\,.$$
These relations together with the scaling relation $\mu_H \simeq p_T^J$ determine the full canonical scaling which allows all large logarithms to be summed in regime 1, at any desired order in perturbation theory. (For cone jets an analytic expression for $\gamma^\kappa_{S^{(J)}}$ at one loop was found in ref. [11]. For anti-$k_T$ jets, $\gamma^\kappa_{S^{(J)}}$ can be evaluated analytically in an expansion in terms of $R$, which has been done at one loop up to $\mathcal{O}(R^2)$ in ref. [14] for $pp \to$ dijets.)

The factorization in eq. (2.17) is limited to large jet radii $R \sim 1$, such that $R$ does not introduce additional scales or modes. In many LHC measurements smaller values of $R$ are employed, leading to a hierarchy of scales within the soft sector and associated large double logarithms of $R$ in the soft function $S_\kappa$. We will discuss how to treat these next.

Regime 2: Small-$R$ jets with $m_J \ll p_T^J R$

For narrow jets, the jet radius introduces an additional hierarchy $R \ll R_0$. The mode setup for the associated EFT, which is a version of SCET$_+$, is shown in the middle of table 1 and fig. 2. It is closely related to the one discussed in ref. [22], which considers it for cone jets at $e^+e^-$ colliders. For $m_J \ll p_T^J R$ the energetic radiation in the jet has a characteristic angular size $\sim m_J/p_T^J \ll R$ and is thus still collimated enough to be insensitive to jet boundary effects. The collinear radiation along the beam directions is still determined by the measurement of $\mathcal{T}_B$. Hence, the collinear modes are the same as for regime 1. Wide-angle soft radiation is now only constrained by the $\mathcal{T}_B$ measurement,
$$\text{soft:} \quad p^\mu \sim \mathcal{T}_B\,. \tag{2.27}$$
It cannot resolve the narrow jet and is thus not constrained by the jet measurement. Therefore, to have a complete description of the infrared structure of QCD for this regime, additional modes are required which have the relative scaling $\sim (R^2, 1, R)$ with respect to the jet direction, in $(n_J\!\cdot\! p,\ \bar n_J\!\cdot\! p,\ p_{\perp J})$ components. The scaling of these modes is uniquely fixed by the requirement that they are restricted by the jet or beam measurement, respectively,
$$\text{collinear-soft:} \quad p^\mu \sim \frac{\mathcal{T}_J}{R^2}\, (R^2, 1, R)\,, \tag{2.28}$$
$$\text{soft-collinear:} \quad p^\mu \sim \mathcal{T}_B\, (R^2, 1, R)\,. \tag{2.29}$$
This nomenclature for the modes follows ref. [22]. To derive a factorization formula we must choose their parametric relation to be either $\mathcal{T}_B \sim \mathcal{T}_J/R^2$, $\mathcal{T}_B \gg \mathcal{T}_J/R^2$, or $\mathcal{T}_B \ll \mathcal{T}_J/R^2$. We take $\mathcal{T}_B \sim \mathcal{T}_J/R^2$, in which case the scalings in eqs. (2.28) and (2.29) become degenerate, so there is only a single mode describing these momenta. We will refer to this common intermediate mode as csoft,
$$n_J\text{-csoft:} \quad p^\mu \sim \mathcal{T}_B\, (R^2, 1, R)\,. \tag{2.30}$$
(We denote the associated theory here SCET$_+$. Its close connection to the original SCET$_+$ setup for nearby jets ("ninja") in ref. [28] becomes obvious by boosting to the frame where the jet region becomes a full hemisphere. In this frame, the soft mode in eq. (2.27) becomes the ninja csoft mode and the csoft mode in eq. (2.30) becomes the overall soft mode.) If on the other hand their energies differ parametrically, large NGLs of the ratio $\mathcal{T}_B R^2/\mathcal{T}_J$ arise, in analogy to the situation for the ratio $\mathcal{T}_B/\mathcal{T}_J$ for soft radiation in sec. 2.2.
We remark that different hierarchies between the (wide-angle) soft scale $\mathcal{T}_B$ and the jet scale $(p_T^J\, \mathcal{T}_J)^{1/2}$ are possible. In the following no specific relation between these scales needs to be assumed to obtain the factorization formula. In particular, the jet axis is determined only from the recoil-free measurement inside the jet region, which avoids nontrivial convolutions between the perpendicular momentum components of the $n_J$-collinear and soft modes [57] (which appear e.g. when measuring jet broadening with the thrust axis [58]). Going through the factorization analysis in SCET$_+$ leads to the factorization formula in eq. (2.31); the power-suppressed terms indicated on its last line are nonsingular corrections, which can be included with fixed-order perturbation theory or by connecting to the factorization formula in regimes 1 or 3. Once again we neglect Glauber interactions here.

Deriving the factorization in eq. (2.31) involves a matching onto SCET$_+$ and the decoupling of modes in the Lagrangian. The structure of the relevant operators in SCET$_+$ can be obtained by applying the BPS decoupling [27,59], either by matching onto SCET$_+$ in two steps as was done in ref. [28], or alternatively by matching in one step and using collinear, csoft, and soft gauge invariance and tree-level calculations as in ref. [29]. In addition, eq. (2.31) requires the factorization of the measurement into contributions from the individual modes. Here, $p^{(n_a)}$, $p^{(n_b)}$, $p^{(n_J)}$ denote the momenta of the collinear radiation in the $n_a$, $n_b$, and $n_J$ directions, the labels "in" and "out" denote the csoft momentum inside or outside the jet, and $\mathcal{T}_B^{(s)}$ is the contribution of soft radiation to the jet veto $\mathcal{T}_B$. (It is convenient to perform these decoupling steps in the boosted ninja frame where the jet region becomes a hemisphere; see the remark above. Following ref. [28] one then has to first decouple the soft modes in the ninja frame (corresponding to the csoft modes in the lab frame) from the collinear modes before decoupling the csoft modes in the ninja frame (corresponding to the soft modes in the lab frame) from the collinear ones. This also makes it clear which zero-bin subtractions [60] arise between these modes.) For the csoft modes we used that their contributions to the measurements can be expressed through the rescaled arguments $k_J$ and $k_B$ introduced below.

Compared to eq. (2.17), the same hard, beam, and jet functions appear in eq. (2.31), while the soft function has now been factorized into two functions $S_{B,\kappa}$ and $S_{R,\kappa_J}$. The soft function $S_{B,\kappa}$ encodes the interactions of the wide-angle soft modes. It contains three soft Wilson lines corresponding to the partons participating in the hard collision, but only contributes to the measurement of $\mathcal{T}_B$, as the associated soft modes no longer resolve the jet. The csoft function $S_{R,\kappa_J}$ consists of two back-to-back csoft Wilson lines in the representation of the parton that initiates the jet, and contributes to both the $\mathcal{T}_B$ and $\mathcal{T}_J$ measurements, as csoft modes resolve the jet boundary. For convenience, we have chosen the arguments $k_B$ and $k_J$ of the csoft function so as to scale out the dependence on the size of the jets, which allows us to identify the csoft function with the well-known double hemisphere soft function, see eq. (3.9). This will be discussed more extensively in sec. 3, where we also give the precise definitions and the one-loop expressions of the soft functions. The csoft and soft functions are RG evolved from their natural scales $\mu_{S_R}$ and $\mu_{S_B}$. We give the anomalous dimensions for $S_{B,\kappa}$ derived from RG consistency in app. A.1. The solutions for the evolution factors are in direct analogy to the well-known ones appearing in eq. (2.18).
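For orientation, the measurement factorization referred to above takes the following schematic form (our sketch, showing only which mode contributes to which observable; the precise rescalings of the csoft contributions are what define $k_J$ and $k_B$):
$$\mathcal{T}_J = \mathcal{T}_J^{(n_J)} + \mathcal{T}_J^{(\mathrm{cs,in})}\,, \qquad \mathcal{T}_B = \mathcal{T}_B^{(n_a)} + \mathcal{T}_B^{(n_b)} + \mathcal{T}_B^{(\mathrm{cs,out})} + \mathcal{T}_B^{(s)}\,.$$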
Compared to eq. (2.19) for $R \sim R_0$, there is in total one additional evolution factor, allowing for the resummation of $\ln(\mu_{S_R}/\mu_{S_B}) \sim \ln R$. As in eq. (2.22) for regime 1, it is convenient to refactorize the csoft function to avoid spurious nonglobal Sudakov logarithms involving $\ln(k_B/k_J) \sim \ln(\mathcal{T}_B R^2/\mathcal{T}_J)$. This is achieved by factorizing the anomalous dimension for this hemisphere csoft function,
$$\gamma_{S_R}^{\kappa_J}(k_J, k_B, \mu) = \gamma_{S_R^{(J)}}^{\kappa_J}(k_J, \mu)\, \delta(k_B) + \gamma_{S_R^{(B)}}^{\kappa_J}(k_B, \mu)\, \delta(k_J)\,,$$
which allows us to factorize its evolution. The csoft function $S_{R,\kappa_J}$ contains two scales and can be written as
$$S_{R,\kappa_J}(k_J, k_B, \mu) = S_{R,\kappa_J}^{(J)}(k_J, \mu) \otimes S_{R,\kappa_J}^{(B)}(k_B, \mu) \otimes S_{R,\kappa_J}^{(\mathrm{NG})}\,, \tag{2.38}$$
with natural scales $\mu_{S_R^{(J)}} \sim k_J$ and $\mu_{S_R^{(B)}} \sim k_B$. Due to the symmetric nature of the double hemisphere csoft function $S_{R,\kappa_J}$ it is natural to define these factors to be equal, as in eq. (2.39). The contribution $S_{R,\kappa_J}^{(\mathrm{NG})}$ in eq. (2.38) captures nonglobal correlations, and starts at two loops, where it contains double and single logarithms as well as nonlogarithmic terms, computed in refs. [36,37]. Starting at two loops, the function in eq. (2.39) is a priori not well defined and depends on which $\mu$-independent terms are kept in $S_{R,\kappa_J}^{(\mathrm{NG})}$. One proposal for the decomposition of the double hemisphere soft function to all orders in perturbation theory leading to eq. (2.38) was discussed in ref. [37]. Just as for eq. (2.24), the factorization of scales in eq. (2.38) is essential to avoid introducing "fake" NGLs at leading logarithmic order. After this refactorization, the canonical relationships between the scales in this region are given by
$$\mu_{S_R^{(J)}} \simeq \frac{\mathcal{T}_J}{R}\,, \qquad \mu_{S_R^{(B)}} \simeq \mathcal{T}_B\, R\,, \qquad \mu_{S_B} \simeq \mathcal{T}_B\,,$$
which together with the scale choices of regime 1 for $\mu_H$, $\mu_B$, and $\mu_J$ determine the full canonical scaling, implying e.g. $\mu_{S_R^{(J)}} \sim \mu_{S_R^{(B)}}$ for $\mathcal{T}_B \sim \mathcal{T}_J/R^2$. This allows for the resummation of all large logarithms in regime 2.

The NGLs become unavoidable in the region where $\mathcal{T}_B R^2$ and $\mathcal{T}_J$ are parametrically different. This is the hierarchy explicitly discussed in Ref. [22], which also does not attempt to resum NGLs. The NGLs arise because the soft-collinear radiation resolves each individual collinear-soft emission, obstructing a simple factorization approach. In particular, each real collinear-soft emission requires an additional soft-collinear Wilson line to describe its interactions with the soft-collinear radiation. The NGLs in the double hemisphere soft function are well studied, and various new approaches systematically capturing their dominant effects have been explored recently [30,[41][42][43][44], which can be directly applied to our context due to the equivalence between our csoft function and the double hemisphere soft function.

Regime 3: Small-$R$ jets with $m_J \sim p_T^J R$

Next we discuss the jet mass spectrum of a narrow jet for $\mathcal{T}_J \sim p_T^J R^2$, corresponding to the far tail of the jet mass spectrum. The relevant mode setup in SCET$_+$ is shown on the right in table 1 and fig. 2. The beam-collinear and wide-angle soft modes are, as in regime 2, only constrained by the $\mathcal{T}_B$ measurement. The collinear radiation in the jet now resolves the jet boundary, since its momentum scales as
$$n_J\text{-collinear:} \quad p^\mu \sim p_T^J\, (R^2, 1, R)\,,$$
implying that the collinear-soft mode in eq. (2.28) cannot be distinguished from the collinear mode anymore, and the two become degenerate. As in sec. 2.3, the wide-angle soft radiation does not resolve the narrow jet, such that a soft-collinear mode related to the beam measurement with the scaling in eq. (2.29) is still present,
$$\text{soft-collinear:} \quad p^\mu \sim \mathcal{T}_B\, (R^2, 1, R)\,.$$
Assuming a jet veto with $\mathcal{T}_B \ll p_T^J \sim Q$, this mode has a parametrically different energy compared to the $n_J$-collinear mode but the same angular resolution, which makes the appearance of large NGLs of $\mathcal{T}_B/p_T^J$ unavoidable.
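The scale hierarchies above can be tabulated numerically. A small sketch (ours; the $\mathcal{O}(1)$ conventions are illustrative choices, not taken from the paper):

```python
def canonical_scales(pTJ, mJ, R, TB, cosh_etaJ=1.0):
    """Illustrative canonical scales (in GeV) for the small-R regime."""
    wJ = 2 * pTJ * cosh_etaJ      # large light-cone component of the jet
    TJ = mJ**2 / wJ               # jet-mass observable at leading power
    return {
        "mu_H": pTJ,              # hard scale
        "mu_J": mJ,               # jet scale, mu_J^2 ~ wJ * TJ = mJ^2
        "mu_SB": TB,              # wide-angle soft scale (jet veto)
        "mu_SR_B": TB * R,        # csoft scale for the veto, ~ TB * R
        "mu_SR_J": TJ / R,        # csoft scale for the jet mass, ~ TJ / R
    }

print(canonical_scales(pTJ=400.0, mJ=40.0, R=0.4, TB=25.0))
```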
Completely disentangling the mode fluctuations at the different scales thus requires one to marginalize over all configurations of $n_J$-collinear emissions (which can individually be resolved by a proper low-energy measurement [30]), each leading to soft-collinear matrix elements involving a different number of Wilson lines with different directions, see for example [23, 30, 41-43, 61, 62]. Here we do not attempt to entirely carry out this procedure, but instead only disentangle the corrections between the hard, beam-collinear, wide-angle soft, and a global $n_J$-collinear sector (which is not fully factorized). (If the jet veto is removed, i.e. $\mathcal{T}_B \sim p_T^J$, large NGLs do not appear in this regime, but this would give rise to large NGLs for $\mathcal{T}_J \ll p_T^J R^2$. For the production of a massive boson with a soft jet the relation $\mathcal{T}_B \sim p_T^J \ll Q$ could be satisfied, but this regime is of limited relevance for jet mass measurements and presents challenges of its own.) For this case the cross section can be written in a factorized form as in eq. (2.45); the $\mathcal{O}(\mathcal{T}_B/p_T^J, R^2)$ terms indicated on its last line are nonsingular corrections, which can be included with fixed-order perturbation theory. The hard function $H_\kappa$, beam functions $B_{\kappa_{a,b}}$, and soft function $S_{B,\kappa}$ are the same as in eq. (2.31) for regime 2. The collinear function $\mathcal{J}_{\kappa_J}$ encodes the interactions of both soft-collinear and collinear modes. It depends both on the jet invariant mass $s_J$ and the scale $p_T^J R$, which reflects the sensitivity to the jet boundary, and it also contributes to the measurement of $\mathcal{T}_B$. Without any additional refactorization, the collinear function $\mathcal{J}_{\kappa_J}$ contains large unresummed Sudakov double logarithms $\sim \alpha_s^n \ln^{2n}(\mathcal{T}_B/p_T^J)$. To resum the leading double logarithms, we can decompose it as
$$\mathcal{J}_{\kappa_J} = J_{R,\kappa_J} \otimes S_{R,\kappa_J}^{(B)} + \sum_{n \geq 1} \mathcal{J}_{\kappa_J}^{(n)}\,. \tag{2.46}$$
The sum over $n$ in the first equality indicates a dressed parton expansion like in ref. [30] (with different soft-collinear matrix elements for a different number of resolved collinear emissions and associated directions), and the factor $J_{R,\kappa_J} \otimes S_{R,\kappa_J}^{(B)}$ contains the $n = 0$ term in this expansion. The jet function $J_{R,\kappa_J}$ mainly describes corrections from the energetic $n_J$-collinear modes and depends on the details of the jet algorithm. These types of jet functions were introduced and calculated at one loop for cone jets and the $k_T$ family of jet algorithms in ref. [63]. We give the one-loop results for the latter explicitly in sec. 3.4. The function $S_R^{(B)}$ can be taken to be the same function as in eq. (2.38), and mainly describes corrections from soft-collinear modes. The $\mu$-dependence factorizes between $J_{R,\kappa_J}$ and $S_R^{(B)}$, allowing for a separate evolution of these functions, eq. (2.47). We derive the form of the anomalous dimensions from RG consistency in app. A.1. The canonical scales are given by
$$\mu_{J_R} \simeq p_T^J\, R\,, \qquad \mu_{S_R^{(B)}} \simeq \mathcal{T}_B\, R\,.$$
Note that the evolution of the jet function $J_R$ is local, i.e. it does not involve a convolution, and is identical to the one for the "unmeasured" jet function [22]. Compared to a single evolution of $\mathcal{J}$, the two separate evolutions in eq. (2.47) resum logarithms arising from collinear and soft-collinear emissions which are uncorrelated between these two, including in particular the Sudakov double logarithms. However, starting at $\mathcal{O}(\alpha_s^2)$ there are also NGLs of the form $\alpha_s^n \ln^k(\mathcal{T}_B/p_T^J)$. Depending on the desired accuracy they may be treated as fixed-order corrections (multiplying the overall evolution factors) as indicated in eq. (2.46), or (partially) summed using more steps in a dressed parton expansion in close analogy to ref. [30].
In fact, the leading NGLs relevant for NLL accuracy arise from a strongly ordered limit (of consecutively less energetic emissions) and can be expected to be the same as for the hemisphere soft function discussed in sec. 2.3. This has been seen explicitly at $\mathcal{O}(\alpha_s^2)$ for the related case of jet shapes in $e^+e^-$ collisions for small $R$ in ref. [39]. As mentioned at the end of sec. 2.3, recent approaches for a resummation of NGLs have been applied to this prototypical case. The canonical relationships between the different scales in regime 3 are then
$$\mu_{J_R} \simeq p_T^J\, R\,, \qquad \mu_{S_R^{(B)}} \simeq \mathcal{T}_B\, R\,, \qquad \mu_B^2 \simeq Q\, \mathcal{T}_B\,.$$
Together with the choices $\mu_H \simeq p_T^J$ and $\mu_{S_B} \simeq \mathcal{T}_B$ they determine the full canonical scaling required to resum all logarithms $\ln R$ and a subset of logarithms $\ln(p_T^J/\mathcal{T}_B)$, as discussed above.

Regime 4: Large-$R$ jets with $m_J \sim p_T^J$

The situation for large-$R$ jets in the far tail of the spectrum, corresponding to $\mathcal{T}_J \sim m_J \sim p_T^J \sim Q$, is also an interesting conceptual hierarchy to consider. In this regime there are no resolved final-state collinear modes and the jet consists only of hard wide-angle emissions. As in regime 3, parametrically large NGLs of $\mathcal{T}_B/p_T^J$ appear, due to the fact that soft wide-angle radiation resolves the number of the hard wide-angle emissions in the jet region. One can expect that the additional corrections with respect to the narrow jet case $R \ll R_0$ for typically applied jet radii are quite small in the far tail, so that for phenomenological applications it is most likely sufficient to include them in fixed-order QCD, unless one is interested in the precise behavior at the endpoint of the spectrum. In analogy to eq. (2.46) for the global collinear function $\mathcal{J}$ in regime 3, one can also resum Sudakov logarithms $\ln(\mathcal{T}_B/p_T^J)$ in regime 4 by refactorizing the associated global hard function $H$ into jet radius and algorithm dependent hard and soft functions.

Relations between the different hierarchies

We have investigated the mode setup and factorization for large and small $R$ jets across the jet mass spectrum. The main features are summarized in table 1, including the logarithms the factorization formulae resum. When $\mathcal{T}_J \sim \mathcal{T}_B R^2$ the nonglobal correlations do not result in large NGLs, but this condition cannot be satisfied for $\mathcal{T}_J \sim p_T^J R^2$ (regime 3) without also removing the jet veto. We now discuss in more detail how the different EFTs are related to each other, as illustrated in fig. 2, and how the associated factorized cross sections can be combined.

The factorized cross section in eq. (2.31) for regime 2, describing the hierarchy $\mathcal{T}_J \ll p_T^J R^2$ for narrow jets, can be obtained from the result in eq. (2.17) for regime 1, describing broad jets, by taking the limit $R \ll R_0$ and carrying out an associated factorization of the soft sector,
$$S_\kappa = S_{B,\kappa} \otimes S_{R,\kappa_J}\, \big[1 + \mathcal{O}(R^2)\big]\,. \tag{2.50}$$
This enables the resummation of logarithms of $R$, and goes hand in hand with the expansion of the corrections in $R$ indicated in eq. (2.50). To obtain a combined description valid for regimes 1 and 2, the $\mathcal{O}(R^2)$ corrections in eq. (2.50) need to be included and combined with the resummation of jet radius logarithms in regime 2. By including the fixed-order matching corrections for the soft functions (or in general for all functions appearing in the factorized cross section) to the same order as the noncusp terms in the anomalous dimension, corresponding to the often utilized N$^k$LL$'$ order counting, this can be conveniently obtained by turning off the resummation in the relevant scale hierarchy.
Thus, the cross section for $\mathcal{T}_J \ll p_T^J R^2$ with $\ln(m_J/p_T^J)$ and $\ln R$ resummation and including nonsingular corrections with the full $R$ dependence can be written as
$$\mathrm{d}\sigma_{1+2} = \mathrm{d}\sigma_2 + \Big(\mathrm{d}\sigma_1 - \mathrm{d}\sigma_2\big|_{\mu_{S_R} = \mu_{S_B} = \mu_S}\Big)\,. \tag{2.52}$$
The scale choices in the third term indicate that the jet radius logarithms are included at fixed order only, to cancel the corresponding terms in $\mathrm{d}\sigma_1$. Therefore for $R \ll R_0$ the cross section $\mathrm{d}\sigma_{1+2}$ corresponds to the singular resummed cross section from regime 2 plus nonsingular power corrections starting at $\mathcal{O}(R^2)$ that are determined by the terms in parentheses. At the same time, the scales $\mu_{S_R}$ and $\mu_{S_B}$ in the first term are chosen using suitable profile scales [64,65] such that in the regime 1 limit $R \sim R_0$ the $\ln R$ resummation is turned off and the two terms involving $\mathrm{d}\sigma_2$ in eq. (2.52) exactly cancel, leaving just the resummed result from regime 1.

Similarly, regime 2 is obtained from regime 3 in the limit $\mathcal{T}_J \ll p_T^J R^2$ with an associated factorization of the collinear sector, eqs. (2.53) and (2.54). In ref. [22], the first relation has been explicitly demonstrated at one loop and exploited to obtain two-loop corrections to the "unmeasured" jet function. Therefore, one can combine regimes 2 and 3 to obtain a description of the cross section for small-$R$ jets over the whole spectrum with $\ln(m_J/p_T^J)$ and $\ln R$ resummation and including all nonsingular corrections in $m_J/(p_T^J R)$ as follows:
$$\mathrm{d}\sigma_{2+3} = \mathrm{d}\sigma_2 + \Big(\mathrm{d}\sigma_3 - \mathrm{d}\sigma_2\big|_{\mu^{(J)}_{S_R} = \mu_J}\Big)\,. \tag{2.55}$$
As in eq. (2.52), this requires the use of primed counting for $\mathrm{d}\sigma_2$, and $\mu^{(J)}_{S_R}$ has to be chosen as a suitable profile scale that smoothly merges with $\mu_J$ as the endpoint $m_J \sim p_T^J R$ is approached. In the last term of eq. (2.55) the resummation of logarithms of $m_J/(p_T^J R)$ is turned off. Finally, the full cross section including all fixed-order nonsingular corrections is given by
$$\mathrm{d}\sigma = \mathrm{d}\sigma_{\rm sing} + \Big(\mathrm{d}\sigma_{\rm FO} - \mathrm{d}\sigma_{\rm sing}\big|_{\mu = \mu_{\rm FO}}\Big)\,,$$
where $\mathrm{d}\sigma_{\rm FO}$ denotes the fixed-order cross section computed in full QCD at the scale $\mu = \mu_{\rm FO}$, and the terms from the singular regimes are combined via eq. (2.57).

Comparison to earlier calculations

We conclude this section by identifying which jet radius logarithms were accounted for in earlier jet mass calculations. In the jet mass calculation of Ref. [11] for $pp \to \gamma$ + jet, with an expansion around the kinematic threshold, the soft function was refactorized in order to resum Sudakov logarithms between the soft scales. As discussed below eq. (2.24), their regional soft function corresponds to $S_\kappa^{(J)}$ for a cone jet. Due to eq. (2.51) this could encode the correct small-$R$ dependence, and they obtain the correct one-loop anomalous dimension $\gamma^\kappa_{S^{(J)}}$. However, their regional soft function does not contain the required $\alpha_s \ln^2 R$ term, and it is not clear whether the scale they obtain from a numerical minimization procedure satisfies $\mu_S^{(J)} \sim \mathcal{T}_J/R$ for $R \ll 1$, as required for $\ln R$ resummation at LL accuracy. (Since there are three physical low scales to be accounted for in the small-$R$ limit, namely $\mathcal{T}_J/R$, …, $S(R = 0.5) \approx 3$, which differs from the value of $5/3$ that is required for a correct scaling with $R$.) In ref. [14] a similar approach was taken for $pp \to 2$ jets. They do obtain the correct one-loop expressions for $S_\kappa^{(J)}$ in the small-$R$ limit, but it is again unclear whether they obtain the correct scale from their numerical minimization. Since the jet radius logarithms that multiply the two-loop cusp anomalous dimension are not included, they can at best achieve LL accuracy. In Ref. [12], the refactorization of the soft function was based on the structure of the anomalous dimension and on identifying the correct scale choice $\mu_S^{(J)} \sim \mathcal{T}_J/R$.
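The profile-scale merging described above can be sketched numerically (ours; the tanh interpolation and its parameters are ad-hoc illustrative choices, not the profiles of refs. [64,65]):

```python
import math

def mu_SR_profile(mJ, pTJ, R, mu_J):
    """Toy profile for mu_SR^(J): canonical ~ mJ^2 / (2 pTJ R) at small mJ,
    smoothly merging into the jet scale mu_J as mJ -> pTJ * R (the endpoint)."""
    canonical = mJ**2 / (2 * pTJ * R)
    x = mJ / (pTJ * R)                            # -> 1 at the spectrum endpoint
    w = 0.5 * (1 + math.tanh((x - 0.7) / 0.1))    # switch centered at x = 0.7
    return (1 - w) * canonical + w * mu_J

for mJ in (20.0, 80.0, 150.0):
    print(mJ, mu_SR_profile(mJ, pTJ=400.0, R=0.4, mu_J=mJ))
```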
This accounts for the LL resummation of the jet radius logarithms in the normalized spectrum. But this choice alone is not sufficient beyond LL. Ref. [10] considers the inclusive jet mass spectrum without a jet veto, only probing radiation in the jet. This allows for a resummation of ln R at LL in the normalized spectrum, and even NLL once the R dependence of the NGLs ∼ ln(p J T R 2 /T J ) are taken into account. Their final expression resums only logarithms of the ratio m 2 J /(p J T R) 2 , implying that a hard scale of p J T R rather than p J T was used. They employ a framework tailored to obtain the NLL result, making it difficult to directly compare the functions from our factorization theorem with results from their calculation. None of the above approaches accounted for the jet radius logarithms in the normalization of the cross section for each individual partonic channel, which requires an additional factorization for the soft out-of-jet corrections (corresponding to the first line in eq. (2.51)). This is crucial for determining the relative contribution of the different partonic channels. Thus, when summing over different partonic channels to obtain the final physical spectrum, the ln R resummation is not accounted for systematically even at LL. Our factorization theorem presented in regime 2 allows for ln R resummation in the jet mass cross section at any order in resummed perturbation theory for which the corresponding anomalous dimensions are known. While this work was being prepared ref. [15] appeared, which also builds on ref. [22] and discusses dijet angularities for pp → dijets at small R, addressing the nontrivial color space. They achieve NLL precision for a resummation of logarithms associated with both R and the measurement of angularities, one of which is the jet mass. They use a jet-based transverse momentum veto within a certain rapidity range |η| < η cut and no restrictions beyond. For phenomenologically relevant values of η cut , their setup does not seem to properly account for the resummation of rapidity logarithms ln(p cut T /p J T ) because it does not take into account the effect of the jet veto on the beam-collinear radiation. Their study focuses on the equivalent of our regime 2, and therefore does not include nonsingular corrections from the regime T J ∼ p J T R or perturbative power corrections of O(R 2 ). The latter points can be addressed in a straightforward manner by combining their results with the framework presented here. Jet and soft functions In this section we give the definitions and relevant one-loop expressions for the various jet and soft functions that enter the factorization formulae in sec. 2. In secs. 3.1 and 3.2 we discuss the wide-angle soft functions for large-and small-R jets appearing in regime 1 (S κ ) and regimes 2 and 3 (S B,κ ), respectively. The results for S B,κ are new. The csoft function S R,κ J (together with its refactorized form) is given in sec. 3.3. In sec. 3.4 we collect the results for the known jet functions. The RG consistency of the factorization formulae allows us extract the remaining anomalous dimensions needed for NNLL resummation of the logarithms, as discussed in app. A. We verify the relations between the different EFTs given by eqs. (2.50) and (2.53) in sec. 3.5 and discuss nonperturbative effects in sec. 3.6. 
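Before turning to the individual one-loop expressions, we recall the distributional notation in which they are written (standard SCET conventions, which we assume coincide with the paper's; cf. the combination 1/µ L_0(ℓ_B/µ) δ(ℓ_J) appearing in footnote 15 below):

L_n(x) ≡ [θ(x) ln^n(x)/x]_+ ,  with  ∫_0^1 dx L_n(x) = 0 ,

so that, for example, the cumulant of 1/µ L_0(ℓ/µ) over 0 ≤ ℓ ≤ ℓ^max is ln(ℓ^max/µ).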
Wide-angle soft function for large-R jets S κ (regime 1) For the large-R jets in regime 1 there is a single soft function S κ that describes the contribution of soft radiation to the jet mass and jet veto. For example, for the partonic channel κ = {q,q; g} ≡ {qq → g} the matrix element is defined as where Y na and Y n b are soft Wilson lines in the fundamental representation along the lightlike directions n a and n b , and Y n J is a Wilson line in the adjoint representation along n J . The trace runs over color, T (T ) denotes (anti)time ordering, andˆ J andˆ B encode the measurements in the jet and beam regions, i.e., Here, η i and p T i are the rapidity and transverse momentum of particle i with respect to the beam axis. The representation of the Wilson lines and the overall normalization needs to be appropriately modified for other channels. The one-loop result of the soft function for N -jettiness jets has been computed in Ref. [66] (and for N -jettiness with generic angularities in ref. [67]). This procedure can be extended to generic jet algorithms, jet vetoes, and jet measurements at hadron colliders, which will be discussed in detail in a forthcoming paper [68]. In general, the soft function up to one-loop order can be written as where s ab,1 (R) = 2/π × πR 2 = 2R 2 [13] is proportional the jet area in the η-φ plane. 12 The s ab,δ and s aJ,δ depend on the algorithm determining the jet region and the beam measurement. We give the analytic results for the coefficients s ab,δ (R, η J ), s aJ,B (R, η J ), s aJ,J (R, η J ) and s aJ,δ (R, η J ) in the small R limit in sec. 3.5, and compare them to the full numerical results for anti-k T jets as a function of R. In eq. (3.3) T a , T b , T J denote the color charges of the respective hard partons entering the hard interaction. Wide-angle soft function for small-R jets S B (regimes 2 & 3) In eqs. (2.31) and (2.45) the soft function S B describes the interactions of the wide-angle soft modes, which do not resolve the jet. For the partonic channel κ = {q,q; g} this matrix element is defined as In contrast to eq. (3.2), the sum on i now runs over all particles, since the momentum scaling of particles present in the soft state |X s implies that this real radiation cannot resolve the jet area. S B depends on the choice of jet veto and thus on the function f B (η), for which we consider the two choices in eq. (2.11). The one-loop computation can be carried out in close correspondence to the calculation for an energy veto [63] and is discussed in app. B. The result for the C-parameter veto reads For the beam thrust veto we find The anomalous dimension of S B can be obtained at higher orders by exploiting RG consistency in eq. (2.50), see app. A. Csoft function S R (regime 2) Next, we discuss the csoft function S R in eq. (2.31) describing the interactions of the csoft modes that are a combination of collinear-soft and soft-collinear modes. For a quark jet (i.e. κ J = q) it is defined as The Wilson lines X n J and Xn J are the csoft (i.e. boosted soft) analogs of the (u)soft Wilson line Y n J and Yn J and the momentum operatorsk in andk out pick out the momentum inside and outside the jet. For a gluon jet the Wilson lines are in the adjoint representation and the overall factor changes from 1/N c to 1/(N 2 c − 1). Since the jet is defined through the beam coordinates η, φ, the angular size of the jet region is R/cosh η J . 
A boost along the jet axis by ln[R/(2 cosh η J )] turns the jet region into a hemisphere (ignoring O(R 2 ) corrections) while leaving these Wilson lines invariant. This is most easily seen by using reparametrization invariance (RPI-III) [69] to rescale the jet directions via n J → n J = n J β,n J →n J = n J /β with β = R/(2 cosh η J ). This boost invariance of the two-direction soft function has been exploited before in Refs. [7,70,71]. From this transformation we see that S R is just the hemisphere soft function, and with our choice of variables, is independent of R, Herek R (k L ) picks out the momentum going into the right (left) hemisphere with respect to the jet direction, i.e. for n J ·k <n J ·k (n J ·k >n J ·k). Thus up to one-loop order [56,72] where the color charge T 2 J is equal to C F for quark jets and C A for gluon jets. The refactorization in eq. (2.38) is trivial at one-loop order, since only one parton contributes to either the beam or jet region. As these regions correspond to hemispheres after the boost the collinear-soft and soft-collinear function are thus given by the same one-loop function Jet functions (regimes 1, 2 & 3) The inclusive jet functions in eqs. (2.17) and (2.31) measuring the invariant mass of the collinear radiation are well known and given by a vacuum correlator of two jet fields. Up to one-loop order they are given by [73][74][75] J q (s, The jet function J R , obtained from the collinear function J in eq. (2.45) after the decomposition in eq. (2.46), encodes the fact that the energetic n J -collinear radiation is constrained to lie within the jet region and explicitly depends on the jet algorithm, as discussed in refs. [63,76]. Following eq. (2.54), we write J R as where the term ∆J alg κ J (s, p J T R, µ) contains the algorithm dependent terms, which are power suppressed in regime 2 where T J p J T R 2 . ∆J alg has been computed at one loop for different jet algorithms in e + e − -colliders in refs. [63,76]. Adapting their expressions to the hadron collider case, the one-loop result for k T -type clustering algorithms reads (3.14) with The anomalous dimension of J R can be obtained at higher orders by exploiting RG consistency of eq. (3.13), as discussed in app. A. The jet function J R is related to the algorithm-dependent jet function J alg. κ J in refs. [22,63] via Thus, refs. [22,63] effectively combine the algorithm-dependent fixed-order corrections in regime 3 (m J ∼ p J T R) with the inclusive jet function, thereby including nonsingular correction in the regime-2 limit m J p J T R in a definite way. In our description of regime 3 in eqs. (2.45) and (2.46), the single function J R,κ J encodes the contributions of the energetic collinear radiation to the jet measurement (corresponding to the fact that collinear and collinear-soft modes present in regime 2 become degenerate in regime 3). 13 13 The direct computation of J alg. κ J in [63,76] required nontrivial (collinear-)soft zero bin subtractions on the nJ -collinear modes. In our mode setup for regime 3 with a single energetic nJ -collinear mode these subtractions do not appear. Thus our JR,κ J differs from J alg. κ J by these zero-bin subtractions, which correspond exactly to our collinear-soft function S (J) R . This was also observed in Ref. [77] in a related context. Verification of the relation between different regimes Using the perturbative results in secs. 3.1 -3.4 we can explicitly verify that the relations between the different EFTs hold at the one-loop level. First, eqs. 
(2.54) and (3.13) imply that the algorithm dependent correction ∆J alg κ J needs to vanish when m J p J T R, i.e. by taking which can be verified directly at one loop using eq. (3.14). Next, the relation in eq. (2.50) between the small-R and large-R jets for m J p J T R implies that at one-loop order 14 Exploiting color conservation, this requires the coefficients of the wide-angle soft function S κ in eq. (3.3) to satisfy 20) in the small-R limit. Here we encounter logarithms (in particular also Sudakov double logarithms) of the jet radius which are not resummed without the factorization of the soft function in regime 2. Furthermore, we remark that consistency of the anomalous dimensions implies that any choice of SCET I -type veto only alters the coefficient of the local terms in momentum space proportional to δ( J ) δ( B ) and thus gives the same results for s aJ,B and s aJ,J . 15 By performing appropriate expansions of the integral expressions for the coefficients of S κ , one can confirm analytically that these relations are indeed satisfied [68]. In fig. 3, we show the full numerical results for the coefficients together with the small R result in eq. (3.20) for the C-parameter veto. We also display the coefficients when including corrections at O(R 2 ) in a small-R expansion, which can be calculated analytically and will be given explicitly in ref. [68]. One can see that the small-R results approximate the full coefficients very well for R R 0 . We have verified that this holds also for the beam thrust veto and an arbitrary jet rapidity. Including O(R 2 ) corrections one obtains an excellent approximation of the full result even for R 1. This suggests that the small-R limit (including terms at O(R 2 )) is a good approximation for phenomenological jet mass studies at the LHC. 16 Such an expansion has been applied in [10,14] for the inclusive jet mass spectrum with the result that O(R 4 ) corrections have a negligible impact for phenomenologically relevant values of R. We see from fig. 3 that the expansions are valid up to jet radii R ∼ 2 implying that R 0 2 is a more appropriate radius of convergence than R 0 1. 17 Leading nonperturbative effects We conclude this section by discussing the leading nonperturbative effects on the jet mass spectrum. The leading nonperturbative effects are in particular relevant in the peak and tail an approximation to the full result for |ηJ | < R. 15 Additional terms in the combination 1/µ L0( B /µ) δ( J ) − 1/µ L0( J /µ) δ( B ) do not affect this consistency and in fact appear in general for large R jets. However, these are only related to algorithm dependent deviations of the jet region (and not to the employed beam measurement) which are power suppressed in the small R limit. 16 For the beam-beam dipole the O(R 2 ) corrections are typically larger and can be quite sizable also for smaller values of the jet radius R ∼ 0.5. 17 For central jets with a cone radius R cone 0 = π/2 ≈ 1.6 the jet region becomes a full hemisphere, which is a naive estimate for the radius of convergence. Using a radius in the η − φ plane instead implies a significantly smaller jet area and a wider range of convergence, so that a value R0 2 is plausible. region where p J T R 2 T J Λ QCD and thus affect the factorization formulae in secs. 2.2 and 2.3. Nonperturbative corrections to the jet veto are ignored, since their effect is negligible for normalized spectra, which are measured experimentally. We start by briefly summarizing the findings of ref. 
[13] for large-R jets, before moving on to small-R jets. The wide-angle soft function can be decomposed into a perturbative component S pert κ and a nonperturbative function F κ [64,78,79], Expanding in Λ QCD J , one obtains Thus the leading nonperturbative effect leads to a shift in the jet mass, In ref. [13] it was shown that Ω κ depends only on the jet radius R and channel κ but not on the jet rapidity η J , and that for small jet radii where as indicated, the R-independent nonperturbative parameter Ω κ J depends only on whether the jet is initiated by a quark or a gluon. Here Ω q is the nonperturbative correction for thrust in deep-inelastic scattering (DIS) [71], and Ω g is its analog for gluons. Technically, once hadron mass effects are accounted for the function F κ and parameter Ω κ also have renormalization group evolution between the hadronic and soft scales, and there is another matching coefficient at the soft scale [80]. This does not change the universality discussion above, and hence this complication is suppressed for simplicity. We now show that the same conclusion follows directly from the factorization formula for small R in eq. (2.17). The leading nonperturbative effects come from the csoft function S R , which is identical to the (DIS) double hemisphere soft function, as argued in sec. 3.3. The leading nonperturbative correction is therefore This correspond to a shift in the perturbative jet mass spectrum given by in agreement with eqs. (3.23) and (3.24). In addition to the above nonperturbative effects, which are associated with hadronization, the jet mass spectrum is also affected by underlying-event contributions associated with multiple partonic interactions, which has perturbative and nonperturbative components. These effects scale like R 4 [81] and are thus not very relevant at small R. Note that contributions from primary soft radiation, which share some underlying-event characteristics in that they also scale as R 4 , are fully captured by the soft function(s) [13]. Application to SCET II and jet-based vetoes In this section, we consider other classes of jet vetoes, focussing our attention on regime 2 in sec. 2.3, which has the largest number of hierarchies, T J p J T R 2 and R R 0 . Specifically, we discuss the transverse energy veto as an example of a SCET II -type beam measurement, as well as jet-based vetoes. Transverse energy veto Here we discuss the mode setup and factorization formula for a veto on the transverse energy outside the jet, the jet boundary and are defined by the restrictions due to the measured jet mass and imposed jet veto are n J -collinear-soft: We assume that T J ∼ E T R 2 , such that the modes are degenerate, and large NGLs are avoided as in sec. 2.3. This leads to the following factorized cross section (4.8) Once again, the indicated O(E T /p J T , T J /(p J T R 2 ), R 2 ) nonsingular corrections can be obtained by considering the correspondence with other regimes or fixed-order calculations. Compared to eq. (2.31) the same hard, jet, and collinear-soft functions appear, while the beam functions and soft functions are different and depend also on an additional rapidity renormalization scale ν [83,84]. The natural scales for the beam functions are µ B ∼ E T and ν B ∼ ω a,b ∼ p J T . The natural scales for the soft function S B are µ S ∼ ν S ∼ E T . Since the rapidity regulator breaks boost invariance, S B still depends on η J . 
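Schematically, and assuming the standard conventions of the η-regulator formalism cited below, the additional rapidity evolution has the form

ν d/dν ln S_B(µ, ν) = γ_ν^{S_B}(µ) ,  with  γ_ν^{S_B} = −(γ_ν^{B_a} + γ_ν^{B_b})

by ν-independence of the cross section, so that evolving between ν_S ∼ E_T and ν_B ∼ p_T^J resums the rapidity logarithms ln(E_T/p_T^J).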
At one loop, the matching coefficients in the beam functions encode only up to one real emission and therefore correspond to the transverse-momentum dependent beam functions in refs. [84][85][86][87]. We calculate the one-loop correction for the soft function S B in app. B using the η-regulator in refs. [83,84]. The result reads We verify in app. A that this result is in agreement with the RG consistency of the factorized cross section. Jet-based vetoes In this section we consider the corresponding jet-based versions of the global SCET I and SCET II jet vetoes, as discussed e.g. in refs. [48,82]. These local jet-veto variables are based on identifying additional jets j(R veto ) using a jet algorithm with radius R veto in the beam region and considering the largest contribution from a single jet. (The jet algorithms and radii for the identification of the hard signal jet and for the vetoing of additional jets can in principle be different.) We consider the jet vetoes T cut B and p cut T defined through max The clustering effects due to the jet veto affect both collinear initial-state radiation as well as soft and csoft radiation (outside the identified jet), introducing a dependence on R veto in the beam and soft functions. For a small value of R veto , the jet clustering of collinear and soft radiation is power-suppressed by O(R 2 veto ) [82,88] so that the veto on additional jets is separately imposed on the collinear initial-state radiation and soft radiation. One can also argue that the clustering of soft and csoft modes is predominantly performed within each sector for R veto 1, such that the measurement also factorizes between these sectors. The price to pay for this factorization is the appearance of clustering logarithms ln R veto (closely related to NGLs) starting at O(α 2 s ), whose systematic resummation is beyond the scope of this paper. In the following we consider only the resummation of the jet radius logarithms ln R related to the observed jet. The EFT mode setup for the jet-based vetoes is identical to that for the corresponding global veto. For the T cut B veto the modes are as for the generalized beam thrust veto in sec. 2 and summarized in table 1, with the identification T B → T cut B , leading to the factorized cross section As in previous cases, the nonsingular corrections indicated in the last line can be obtained by using the correspondence with other regimes and fixed-order calculations. At one loop, the beam functions B, the soft function S B and the collinear soft function S R describe a single emission, such that the clustering algorithm in the beam region does not play any role and their expressions are the cumulant of the matrix elements in sec. 2.3. We emphasize that the structure of the renormalization differs between the global and local jet-based vetoes. Starting at two loops, the analytic structure of the expressions changes, accounting now for the jet clustering as indicated by the additional dependence on R veto . The renormalization is multiplicative in the arguments associated with the jet veto, as required by the structure of the factorization theorem [48,82]. For example, the renormalization of the csoft function S R is multiplicative in k cut B but involves a convolution in k J as can be seen from the associated RG equation Next, we consider the jet-based transverse momentum veto, p cut T , which is the standard choice used by the experiments. This combines the features discussed above with the mode setup for the SCET II veto in sec. 
4.1 (with E T → p cut T ) and leads to the factorization formula (4.13) The one-loop correction to the wide-angle soft function S B reads in direct analogy to eq. (4.9) (4.14) To demonstrate explicitly which logarithms are resummed by eq. (4.13) at higher orders, we give the jet radius and jet mass dependent logarithmic terms predicted by it at NNLO in app. C. With the analogous relation to eq. (2.50) the results in eqs. (3.10) and (4.14) allow us also to write the one-loop expression for the associated unfactorized soft function S κ (encoding the contributions from all soft modes) as (with L p T ≡ ln(p cut T /µ)) where s ab,1 (R) = 2R 2 , ∆s p T ab,δ (R, η J ), ∆s aJ (R, η J ) and ∆s p T aJ,δ (R, η J ) will be corrections that start at O(R 2 ) and are given in ref. [68]. A related soft function has been also computed for small R in ref. [89] in the context of an exclusive H + 1 jet analysis without an explicit measurement of the jet mass. The associated result corresponds to the combination of the one-loop soft and soft-collinear corrections (where the latter are encoded in the S (B) R,κ J component of the csoft function) to the jet veto measurement using eqs. (3.11) and (4.14). This result agrees with the computation in ref. [89] (see eq. (20) therein). Their result is expressed in terms of two-dimensional integrals, which numerically agree with our analytic expression in eq. (4.16) up to O(R 2 ) terms. Fixed-order cross section for small-R jets In sec. 3.5, we showed numerical results for the one-loop soft function, demonstrating consistency between regimes 1 and 2, and finding that the small-R results provide a good approximation even up to rather large values of R. To lend more credence to this conclusion, we show numerical results for the cross section in this section, comparing the results of regime 2 with regime 1. The comparison is performed at NLO and thus only tests the validity of the small R expansion at fixed order and not the effect due to ln R resummation. A detailed phenomenological study of the effects due to resummation of ln R terms will be presented in the future. To investigate the range where the small R expansion is valid, we show results for the spectrum and its cumulative distribution The (N)LO cross section is obtained by expanding the factorization formula for regime i = 1, 2 to this order and taking all scales equal to µ = p J T . In the ratio of jet mass spectra most ingredients drop out, e.g. for i = 2 because only for a single real emission radiated into the jet region one does obtain a nonvanishing spectrum at NLO. The ratio in eq. (4.18) is in particular independent of the jet veto and hard process, and only depends on the partonic channel, the jet radius R, the ratio T J /(p J T R 2 ), which we take to be 1/15 for our plots. This value corresponds for example to T J = 5 GeV and p J T = 300 GeV for a jet radius R ∼ 0.5, which would satisfy the requirement T J ∼ p cut T R 2 /2 for avoiding large NGLs with a jet veto p cut T = 30 GeV. The results are shown in fig. 4 for anti-k T jets with the full R dependence (red solid) from regime 1 and the leading small-R result (green dotted) from regime 2. Furthermore, we display the small-R result including the O(R 2 ) correction arising from soft initialstate radiation (blue dashed), which corresponds to including the s ab,1 = 2R 2 term in eq. (3.3), and including all analytic corrections to O(R 2 ) (black dot-dashed), which will be given in [68]. 
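As a quick consistency check of this benchmark point (our arithmetic, using only the quoted inputs): with p_T^J = 300 GeV and R = 0.5 one has T_J/(p_T^J R^2) = 1/15 precisely for T_J = 300 × 0.25/15 GeV = 5 GeV, which corresponds to a jet mass of order (T_J p_T^J)^{1/2} ≈ 39 GeV up to convention-dependent O(1) factors, while p_T^cut R^2/2 = 30 × 0.25/2 GeV = 3.75 GeV is indeed of the same order as T_J, as required to avoid large NGLs.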
The small-R approximation works quite well for R 0.5, and its range of validity is considerably extended by including the soft ISR correction. This is not surprising, because the contribution of soft ISR to the jet mass only starts at O(R 2 ), whereas other O(R 2 ) corrections only account for deviations in the shape of the jet region and are comparably small. Including also all remaining corrections at O(R 2 ) coming from soft ISR-FSR interference the full result for anti-k T jets is almost exactly approximated even for a jet radius R ∼ 2. This confirms the statement that the effective expansion parameter is R/R 0 with R 0 2. For the κ = {q,q; g} channel the soft ISR correction appears with a numerically small color factor C F − C A /2 = −1/6, compared to C A /2 = 3/2 for the other channels, as pointed out in ref. [13], so that already the leading result of the small-R expansion gives a good approximation even for large values of the jet radius. In fig. 5, we show the jet radius dependence of the cumulative distribution for pp → H +1 jet (left panel) and pp → Z +1 jet (right panel), using the second line of eq. (4.17). We employ the jet-based transverse momentum veto discussed in sec. 4.2, and use T J /(p J T R 2 ) = 1/15, p cut T = 30 GeV, p J T = 300 GeV, η J = 0, Y L = 0, E cm = 13 TeV. For simplicity, we consider the production of on-shell EW bosons without any subsequent decay. Compared to the differential spectrum, the small-R approximation seems to work over an even larger range. Once again, including the soft ISR correction greatly extends the range where the small-R approximation works well. (Also for pp → Z+jet the soft ISR correction gives the dominant O(R 2 ) effect, since the contribution from the {q,q; g}-channel is small compared to the one from the {g, q; q}-channel, where the soft ISR correction is large.) The fact that the full result is almost exactly reproduced by including the full set of O(R 2 ) corrections is somewhat specific to anti-k T jets with a p cut T -veto. For different jet algorithms and vetoes there is in general some visible difference toward large R between the full result and the one containing the corrections to O(R 2 ), see for example the R-dependence for the C-parameter in the right panel of fig. 3. Conclusions We presented a factorization framework to provide a complete description of jet mass spectra in hadronic collisions including realistic jet algorithms and jet vetoes. It allows to systematically treat jet radius effects in the jet mass spectrum, including the resummation of jet radius logarithms, the jet boundary effects that cut off the spectrum at m J p J T R, and the inclusion of O(R 2 )-suppressed power corrections. This description is based on SCET + , which is an extension of standard Soft-Collinear Effective Theory with additional modes that are simultaneously soft and collinear. We utilized this theory for the jet mass measurement in the process pp → L + 1 jet and discussed the factorization formulae and all relevant ingredients allowing for the systematic higher-order resummation of logarithms of the jet mass, jet radius, and jet veto at NNLL for global vetoes and NLL for jet based vetoes, and beyond once the relevant ingredients become known. In the phenomenologically important peak and tail region of the jet mass spectrum with m J p J T R, and for appropriate jet veto scales determined by a definite power counting, nonglobal structures do not contain large logarithms and can thus be included at fixed order. 
In the far tail region, m J ∼ p J T R, recent progress in the resummation of NGLs can be directly applied to incorporate their dominant effect. Comparing the perturbative soft corrections at one loop, we found that an expansion in terms of small R gives a good approximation in the peak and tail region for typically adopted jet radii R 1. A detailed phenomenological study for experimentally measured jet mass spectra at the LHC including the effects due to ln R resummation and the relevant power corrections as well as the associated uncertainties will be presented in the future. through the Investigator grant 327942. This work is part of the D-ITP consortium, a program of the Netherlands Organization for Scientific Research (NWO) that is funded by the Dutch Ministry of Education, Culture and Science (OCW). This work was also supported in part by a Global MISTI Collaboration Grant from MIT. A.1 Anomalous dimensions for the generalized beam thrust veto The anomalous dimensions of the matrix elements for a global SCET I beam measurement are defined in analogy to eq. (2.20). For the jet and soft functions appearing in the factorization formula of sec. 2.2 the anomalous dimensions do not depend on the jet radius R and have the structure where in the second line we assumed Casimir scaling for the cusp anomalous dimension which holds at least up to three loops. The cusp and the noncusp anomalous dimension are known at least up to three and two loops, respectively. Analytic expressions using the same notation can be found for example in the appendices of refs. [12,22,49,50]. Here we only infer the structure for the anomalous dimensions of the remaining jet and soft functions involved in the factorization formulae in (2.31) and (2.45). The relation in eq. (2.50) implies that S R is the double hemisphere soft function, for which the µ-dependence factorizes, i.e., where γ κ J hemi (α s ) is half of the noncusp anomalous dimension of the standard double hemisphere soft function. Thus, the anomalous dimension for the function S B,κ reads Using the one-loop cusp anomalous dimension, Γ cusp [α s (µ)] = α s /π + O(α 2 s ), and the fact that the soft noncusp dimensions vanish at this order, this is in agreement with the µ-dependence of eqs. (3.6) and (3.7). The relation in eq. (2.53) implies that such that the anomalous dimension of the jet function J R in analogy to the result for the "unmeasured" jet function in ref. [22]. A.2 Anomalous dimensions for the transverse energy veto We now determine the structure of the anomalous dimension for S B for the transverse energy veto, and check that this is consistent with the one-loop result in eq. (4.9). The beam function ν and µ anomalous dimensions read [82] Here T 2 B = T 2 a,b and κ B = κ a,b encode the flavor of the colliding parton coming from the respective beam, and ω B = ω a,b its large momentum component. This directly leads to the ν-anomalous dimension of S B , Using the one-loop cusp anomalous dimension, Γ cusp [α s (µ)] = α s /π+O(α 2 s ), and γ κ ν [α s (µ)] = O(α 2 s ) this is in agreement with the ν-dependence of eq. (4.9). To check the µ-dependence we give also the hard function anomalous dimension, with ω J = 2p J T cosh η J , where the quoted form again assumes Casimir scaling of the cusp anomalous dimension. The consistency relation leading to the structure of the µ-anomalous dimension γ κ S B reads Using color conservation, i.e. 
T J = −T a − T b , and noting that the noncusp anomalous dimensions cancel each other at one loop, it is straightforward to check that eq. (4.9) is consistent with this relation. B Calculation of the soft function S B Here we outline the main steps for the one-loop computation of the wide-angle soft function for narrow jets (R R 0 ) with a general jet veto. S B was defined in sec. 3.2 and results for various jet vetoes were given in eqs. (3.6), (3.7) and (4.9). Due to the fact that the jet region is not resolved by the wide angle soft modes, the contribution from the beam-beam dipole with the color factor T a · T b is just given by the result without any jet which is known for common measurements like (beam) thrust, C-parameter or the transverse momentum for back-to-back configurations [56,84,90]. The computation for the real radiation correction from the jet-beam dipoles can be performed similarly to the corresponding contribution for an energy veto in ref. [63] summarized in their appendix B.1. It is convenient to take advantage of their results and calculate only the difference correction between the employed jet veto and the energy veto explicitly, which both have common soft IR divergences. For definiteness we consider the correction with the color structure T a · T J , which can be written as where cos θ denotes the angle between the gluon momentum and the beam direction n a , i.e. cos θ = n a · k/| k| = tanh η, andf (cos θ) = f B (η) is defined in terms of the vetodependent function f B in eq. (2.10). When f B (η) → 1 for η → ±∞ this leads to rapidity divergences, which is for example the case for the transverse energy veto discussed in sec. 4.1. To regulate these divergences we employ a factor ν η /|2 n a · k| η arising from a modified version of the η-regulator in refs. [83,84]. 18 Furthermore, we rescaled µ 2 → µ 2 e γ E /4π in eq. (B.1) anticipating MS renormalization. The unrenormalized result for the correction with an energy veto S E can be read off from eq. (5.12) in ref. [63], The remaining correction ∆S T B implementing the difference to the actual jet veto can be written as an integral over the angle cos θ, where the function F denotes the integrand for an energy veto (given in eq. (B.2) of [63]), which also contains implicitly the dependence on the angle between beam and jet with cos θ aJ ≡ 1 − n a · n J ≡ n = tanh η J , and the function G encodes the additional factor for the specifically applied jet veto, (B.4) 18 The rapidity regularization factor for the soft function needs to satisfy ν η /(na·k) η when the momentum k becomes collinear to the beam direction na in order to use the common result for the beam function matching coefficients (where precisely this factor is used for regularization). For all jet vetoes discussed in this paper G reads explicitly where we dropped the rapidity regulator for SCET I -type measurements. To compute the integral eq. (B.3) to O( 0 , η 0 ) it is convenient to split it into two integration regions −1 ≤ cos θ ≤ 1 − δ and 1 − δ ≤ cos θ ≤ 1 with 1 − δ > cos θ aJ , such that collinear divergences appear either in the jet or beam direction. Otherwise the choice of the cutoff parameter δ is irrelevant. For simplicity we take δ 1 and also expand in this parameter. We start with the contribution from the first integration domain, ∆S T B ,1 . The integrand F is decomposed into a product of two functions F J andF J , where F J has a power-like behavior for θ → θ aJ (i.e. 
for radiation close to the jet) andF J encodes the finite remainder, as discussed in ref. [63] above and below eq. (B.6). We can then write 19 for ∆S T B ,1 where the integrands in the last two lines can be expanded in before the integration and the other integral can easily be carried out in d dimensions. For the correction ∆S T B ,2 from the integration domain 1 − δ ≤ cos θ ≤ 1, we perform a similar decomposition for both of the functions F and G, such that F a and G a contain the power behavior for cos θ → 1 (i.e. for radiation close to beam a) andF a andG a contain the remainder. This leads to the integral for ∆S T B ,2 , where both integrals can be carried out analytically without any additional expansions in or η. Adding the contributions from the two integration regions, the dependence on δ drops out. After expanding in η and , we obtain for the C-parameter veto, separating out the contribution containing the jet mass and jet radius logarithmŝ The rest contains corrections from the hard function H κ = H κak (x a ) +Ĩ The remaining constants appearing in eq. (C.6) are given by 21 j (1) q = (7 − π 2 )C F , j (1) g = 4 3 − π 2 C A + 5 3 β 0 , s (1) = π 2 6 . (C.10) The functionsp (0) ij are directly related to the splitting functions at O(α s ) and given bỹ p (1) qq (z) = C F 2L 0 (1 − z) − θ(1 − z)(1 + z) , p (1) qg (z) = T F θ(1 − z) (1 − 2z + 2z 2 ) , The matching functionsĨ ij encoding collinear initial state radiation effects are given bỹ We also display the logarithmic dependence of the two loop result. Here we only show explicitly the terms associated with either jet mass or jet radius logarithms. These read forσ jet 2T cut J p cut T R 2 + (terms involving only L B and ln R veto ) . (C.13) Here the term S (NG,2) hemi (x) encodes the nonglobal structures and can be directly read off from refs. [36,37], |ln x| + (nonlogarithmic terms) . (C.14) The nonlogarithmic terms in eq. (C.14) must be kept when including this term, since they are of the same size as the logarithms for the regions we consider. The required anomalous
Row contractions annihilated by interpolating vanishing ideals

We study similarity classes of commuting row contractions annihilated by what we call higher order vanishing ideals of interpolating sequences. Our main result exhibits a Jordan-type direct sum decomposition for these row contractions. We illustrate how the family of ideals to which our theorem applies is very rich, especially in several variables. We also give two applications of the main result. First, we obtain a purely operator theoretic characterization of interpolating sequences. Second, we classify certain classes of cyclic commuting row contractions up to quasi-similarity in terms of their annihilating ideals. This refines some of our recent work on the topic. We show how this classification is sharp: in general quasi-similarity cannot be improved to similarity. The obstruction to doing so is the existence, or lack thereof, of norm-controlled similarities between commuting tuples of nilpotent matrices, and we investigate this question in detail.

Introduction

Since the appearance of the seminal work of Sz.-Nagy and Foias in the original edition of [31], the rich interplay between complex function theory on the unit disc and the theory of Hilbert space contractions has been fruitfully exploited. A wealth of structural results about Hilbert space contractions was uncovered, based on a careful analysis of the compressions of the standard isometric unilateral shift to its coinvariant subspaces. At the root of this strategy is an important fact saying that "almost coisometric" contractions (i.e. pure contractions with one-dimensional defect spaces) are always unitarily equivalent to such compressions. Furthermore, the appropriate coinvariant subspace can be identified explicitly, and it encodes the ideal of holomorphic relations constraining the contraction. Unfortunately, this condition on the contraction T being almost coisometric is very rigid, and it is desirable to replace it with a more flexible one. A natural replacement is that T should admit a cyclic vector. Perhaps surprisingly, such contractions can still be classified using compressions of the unilateral shift to a coinvariant subspace reflecting the holomorphic constraints satisfied by T. The compromise here is that the relationship between T and its classifying model is weaker than unitary equivalence (or even similarity); it is usually referred to as quasi-similarity. Despite this apparent weakness, several key pieces of information about T can still be extracted using this scheme. In fact, this approach can be greatly expanded to move past the setting of cyclic vectors, and a genuine analogue of the Jordan canonical form of a matrix can be constructed (see [6] for a comprehensive treatment). The first author was partially supported by an NSERC Discovery Grant.
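For orientation, we recall the classical construction referenced above in its simplest scalar-valued form (standard material, not restated explicitly in this paper): for an inner function θ on the unit disc, the model space and model operator are

K_θ = H^2 ⊖ θH^2 ,   S_θ = P_{K_θ} S|_{K_θ} ,

where S denotes the unilateral shift on the Hardy space H^2 and P_{K_θ} the orthogonal projection onto K_θ. It is this operator S_θ that appears below as the "standard model operator".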
In contrast, the structure of commuting row contractions turns out to be more elusive. Nevertheless, building upon the groundwork laid in [24] and [3], a coherent theory has emerged in the last two decades, leveraging function theory on the so-called Drury-Arveson space to infer information about general commuting row contractions. By way of analogy with the familiar univariate setting, it is then natural to wonder whether commuting row contractions can be classified using compressions of the Drury-Arveson shift to coinvariant subspaces. Classification up to unitary equivalence was achieved in [3], under the necessary condition that the defect space of the row contraction be one-dimensional. Recently, the authors have showed that the aforementioned more flexible quasi-similarity classification also has a satisfactory multivariate counterpart [13]. So far, we described two different classifications for commuting row contractions: one up to unitary equivalence which requires strong conditions to be satisfied, and another up to quasi-similarity that is more widely applicable. There is another commonly used equivalence relation on linear operators that we have seemingly overlooked: similarity. The motivation behind this paper is thus the following question: what kind of commuting row contractions can be classified up to similarity using compressions of the Drury-Arveson shift to coinvariant subspaces? We note that this question has been considered in the single-variable case in [8], [9], [10]. Interestingly, there are obstructions to similarity even in otherwise transparent cases. Indeed, for a pure cyclic contraction T annihilated by a Blaschke product θ with distinct roots, it was shown in [8] that similarity between T and the standard model operator S θ is equivalent to the roots of θ forming a so-called interpolating sequence. Roughly speaking, the condition on the sequence being interpolating allows for the construction of enough commuting idempotents in the commutant of T , which can in turn be used to diagonalize T up to similarity. Achieving diagonalization in our multivariate context is one of the main objectives of this paper, where the natural replacements for Blaschke products are vanishing ideals of zero sets, and germs thereof. In turn, we use the information we obtain on diagonalization to connect the similarity question for commuting row contractions to various function theoretic properties of the zero set. In particular, we characterize the property of a sequence being interpolating in purely operator theoretic terms. As another application, we refine the work done in [13] in some special cases. This refinement, and the limitations of it which we identify, lead us to a careful analysis of similarities between commuting tuples of nilpotent matrices. Controlling the norm of these similarities is the salient feature of this endeavour, and as we illustrate, this task is much more complicated than what was witnessed in [8]. Let us now turn to describing the structure of the paper. In Section 2 we introduce the necessary background material and notation, and gather some necessary preliminary results. In Section 3, we study what we call higher order vanishing ideals of interpolating sequences and we show in Theorem 3.4 that this class of ideals is very rich, much more so in fact that its single-variable counterpart. This helps frame the main result of the paper on the existence of Jordan-type decompositions, which we prove in Section 4. 
In simplified terms, our main result reads as follows (see Theorem 4.6 for the complete statement). Theorem 1.1. Let T = (T 1 , . . . , T d ) be an absolutely continuous, commuting row contraction which is annihilated by all multipliers that vanish on some interpolating sequence Λ ⊂ B d , up to some fixed order. Then, for each λ ∈ Λ there is a commuting nilpotent d-tuple N (λ) such that T is jointly similar to the d-tuple ⊕ λ∈Λ (λI + N (λ) ). Furthermore, the polynomials annihilating a given d-tuple N (λ) depend explicitly on the local behaviour of the annihilating ideal of T at the point λ. In the univariate situation, zero sets can be completely understood in terms of Blaschke products. No such tools are available in the multivariate world however. As a replacement tool, we perform a detailed analysis of germs of multipliers and of their polynomial representatives. This information is used crucially in the proof of Theorem 1.1. The rest of the paper is devoted to two applications of our main result above. First, in Section 5, we apply it to obtain the following purely operator theoretic characterization of interpolating sequences (Theorem 5.6). This is a close multivariate analogue of [8,Theorem 4.4]. Recall that a sequence (λ n ) ⊂ B d is strongly separated if there is ε > 0 such that for every n ∈ N there is a contractive multiplier ω n such that |ω n (λ n )| ≥ ε and ω n (λ m ) = 0 for every m = n. Theorem 1.2. Let Λ = {λ n : n ∈ N} ⊂ B d be a sequence and let a denote its vanishing ideal of multipliers. Consider the following statements. (i) The sequence Λ is interpolating. (ii) The row contraction Z a is similar to D = ∞ n=1 λ n , where Z a is the compression of the Arveson d-shift to the orthogonal complement of a. (iii) Every absolutely continuous commuting row contraction T annihilated by a is similar to D = ⊕ ∞ n=1 λ n . (iv) The sequence Λ is strongly separated. (v) The sequence Λ is strongly separated by partially isometric multipliers. In one variable, the preceding statements are all equivalent, as a consequence of Carleson's classical characterization of interpolating sequences [1,Chapter 9]. In several variables, the equivalence of (iv) and (v) appears to be new. We also mention that the equivalence of (i) and (ii) is already apparent from results in [1,Ch. 9]. Our second application of Theorem 1.1 is a classification of certain pairs of commuting row contractions using their annihilating ideals. The following result (Theorem 6.3) gives a two-sided improvement of [13,Corollary 3.7] in our special case of interest. Specifically, this result says that S and T are quasi-similar, which is a two-sided version of the notion of quasi-affine transform that appears in [13,Corollary 3.7]. Theorem 1.3. Let T = (T 1 , · · · , T d ) and S = (S 1 , · · · , S d ) be absolutely continuous, cyclic, commuting row contractions with common annihilating ideal a. Assume that a contains some higher order vanishing ideal of an interpolating sequence. Then, there are injective operators X and Y with dense range such that XT k = S k X and Y S k = T k Y for 1 ≤ k ≤ d. We show in Example 4 that the previous theorem is sharp in the sense that quasi-similarity typically cannot be improved to similarity. Perhaps surprisingly, the obstruction lies in the structure of similarity classes of commuting tuples of nilpotent matrices. Elucidating this structure is a classical and notoriously difficult problem (see for instance [17], [16]). 
This difficulty stands in sharp contrast with the case of a single cyclic nilpotent matrix, which of course is always similar to a Jordan block of appropriate size. Our point of view here is different however. In our case of interest, the existence of a similarity is easily established; it is the size of this similarity that is crucial. In Section 7, we tackle this problem and obtain necessary and sufficient condition for the existence of norm-controlled similarities between a given nilpotent tuple and the corresponding functional model for a homogeneous annihilating ideal (see Theorems 7.2 and 7.7). Preliminaries Throughout the paper, we fix a positive integer d ≥ 1. We let H denote a complex Hilbert space and B(H) will denote the C * -algebra of bounded linear operators on it. Likewise, we will denote by B(H, K) the Banach space of bounded linear operators from H into another Hilbert space K. Given a subset S ⊂ B(H), we denote its commutant by S ′ . We also set [SH] = ran S = span{Aξ : A ∈ S, ξ ∈ H}. A d-tuple T = (T 1 , . . . , T d ) of operators on H is said to be cyclic if there is a vector ξ such that [A T ξ] = H, where A T denotes the unital operator algebra generated by T 1 , . . . , T d . Given z = (z 1 , · · · , z d ) ∈ C d and A ∈ B(H), we put zA = (z 1 A, · · · , z d A). Likewise, we set In particular, given another d-tuple S = (S 1 , . . . , S d ) acting on some Hilbert space K, we say that S and T are quasi-similar if there are injective bounded linear operators X : H → K, Y : K → H with dense ranges such that XT = SX and Y S = T Y . If either X or Y is also surjective, then S and T are similar. The associated reproducing kernel Hilbert space on B d is called the Drury-Arveson space and it is denoted by H 2 d . We will encounter vector valued versions of this space as well, which we identify with H 2 d ⊗ H, for some Hilbert space H. Given two Hilbert spaces H and K, a function Every multiplier Φ gives rise to a multiplication operator Using this identification with H = K = C, we can view the algebra of multipliers as a weak- * closed subalgebra M d ⊂ B(H 2 d ). One important property of M d is that it coincides with its commutant, M d = M ′ d . We will often go back and forth between the interpretation of a multiplier as a function and as a multiplication operator. In particular, this identification allows us to define the multiplier norm as for every multiplier Φ. While the inequality holds for every multiplier Φ, the two norms are not comparable in general. A multiplier Φ is inner if M Φ is a partial isometry. Much like in Beurling's classical description of invariant subspaces for the unilateral shift on the Hardy space of the unit disc, inner multipliers and ideals in M d are connected with multiplier invariant subspaces in the Drury-Arveson space. We summarize the main features that we will need in the following theorem. and we use the standard multi-index notations It can be verified that The reader should consult [1] for more details on these topics. One basic property of the multiplier algebra M d that we will need repeatedly is that it admits a solution to the so-called "Gleason problem", as the following result shows. Theorem 2.2. Let ϕ ∈ M d and let z ∈ B d . Let N ≥ 1 be an integer. For each α ∈ N d such that |α| = N there is ψ α ∈ M d with the property that Proof. This can easily be inferred from [19,Cor. 4.2]. Let Λ = {λ n : n ≥ 1} be a countable subset of B d . 
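Before analyzing such sets Λ, we record some standard facts about the Drury-Arveson space that are used implicitly above and repeatedly below (these are well known; see [1]): the reproducing kernel of H^2_d is

k(z, w) = 1/(1 − ⟨z, w⟩) ,   z, w ∈ B_d ,

the multiplier norm is the operator norm ‖Φ‖_{M_d} = ‖M_Φ‖_{B(H^2_d)} of the associated multiplication operator, the inequality ‖Φ‖_∞ ≤ ‖Φ‖_{M_d} always holds, and M_Φ^* k_w equals the complex conjugate of Φ(w) times k_w for every w ∈ B_d.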
Much of the developments in the paper are based on an analysis of the restrictions of multipliers to Λ, through the following properties. We say that Λ is (1) separated if for every n ≥ 1 there is ϕ n ∈ M d such that ϕ n (λ n ) = 1 and ϕ n (λ m ) = 0, m = n, (2) strongly separated if there is ε > 0 such that for every n ≥ 1 there is ω n ∈ M d with ω n M d ≤ 1 such that |ω n (λ n )| ≥ ε and ω n (λ m ) = 0, m = n. (3) strongly separated by inner multipliers if there is ε > 0 such that for every n ≥ 1 there is a Hilbert space H n and an inner multiplier Ω n : B d → B(H n , C) with Ω n (λ n ) ≥ ε and Ω n (λ m ) = 0, m = n, (4) an interpolating sequence for M d if for every bounded sequence (a n ) there is ψ ∈ M d such that ψ(λ n ) = a n , n ≥ 1. Strongly separated sequences are obviously separated, and it is a consequence of the open mapping theorem that interpolating sequences are strongly separated. In one variable, the classical interpolation theorem of Carleson [7] implies conversely that strongly separated sequences are strongly separated by inner functions, and in fact interpolating. However, this last implication fails for other reproducing kernel Hilbert spaces on the unit disc such as the Dirichlet space; this observation seems to be due to Bishop and Marshall-Sundberg [21]. In general, interpolating sequences can be characterized by another separation condition, along with a socalled Carleson measure condition. This important result can be found in [2], and it settled a long-standing open problem. Proof. We may clearly suppose that w / ∈ Λ. It is clear that {w} ∪ Λ satisfies property (b) in Theorem 2.3, so it suffices to check that {w} ∪ Λ also satisfies property (a) therein. Assume otherwise, so that there is a subsequence (λ n ) of Λ with the property that lim n→∞ |k(λ n , w)| 2 k λn 2 k w 2 = 1. Upon passing to a further subsequence if necessary, we may assume that (λ n ) converges to some z ∈ B d and that the sequence of unit vectors so by the Cauchy-Schwarz inequality there is ζ ∈ C such that |ζ| = 1 and Thus, we find whence z = w < 1. Thus, the sequence Λ has an accumulation point in the open unit ball B d . Since subsequences of interpolating sequences are interpolating themselves, and since functions in M d are continuous on B d , this is readily seen to be impossible. 2.2. Multivariate operator theory and functional calculi. Let T 1 , . . . , T d ∈ B(H) be commuting operators, and let T = (T 1 , . . . , T d ). We will denote by σ(T ) ⊂ C d the Taylor spectrum of T . See [32] or [25] for comprehensive treatments of the Taylor spectrum. One important tool we will need is the so-called Taylor functional calculus. Given an open subset U ⊂ C d , we let O(U ) denote the ring of functions that are holomorphic on U . Then, there is a constant C > 0 and a unital algebra homomorphism [32,Theorem III.9.9]). The following summarizes the properties of the Taylor functional calculus that we will need. (ii) Let R = (R 1 , . . . , R d ) be a commuting d-tuple of operators on a Hilbert space K with σ(R) ⊂ U . Let X ∈ B(H, K) be such that XT j = R j X for j = 1, . . . , d. Then (iii) Let K 1 and K 2 be disjoint non-empty compact subsets of C d , and suppose U 1 and U 2 are open disjoint neighbourhoods of K 1 and K 2 , respectively. Let χ denote the characteristic function of the set U 1 , and set P = τ T,U1∪U2 (χ). Then, P is a non-zero idempotent operator commuting with T which satisfies σ(T | ran P ) = K 1 and σ(T | ran(I−P ) ) = K 2 . 
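Property (iii) is the multivariate analogue of the classical Riesz functional calculus: for a single operator (d = 1) the idempotent P is the familiar Riesz projection

P = (1/2πi) ∮_Γ (z − T)^{−1} dz ,

with Γ a contour in U_1 enclosing K_1 and separating it from K_2 (a standard fact, recalled here for orientation). Idempotents of this kind are the basic tool behind the diagonalization results described in the introduction.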
Our attention will be focused on the subclass of commuting d-tuples T = (T 1 , . . . , T d ) which are row contractions in the sense that It is readily verified that the constant function 1 ∈ H 2 d is a cyclic vector for M x , and that P Ha 1 is a cyclic vector for Z a . One reason which explains the importance of M x is that it plays a certain universal role among commuting row contractions, as we describe next. Let T = (T 1 , . . . , T d ) be a commuting row contraction on some Hilbert space H. By [3, Theorem 8.1], there is a unital completely contractive homomorphism The commuting row contraction T is said to be absolutely continuous (or AC for short) if α T extends to a weak- * continuous unital algebra homomorphism on M d . It follows from [12,Theorem 3.3] (see also [11,Theorem 2.4]) that T is AC if and only if the sequence (α T (ϕ n )) converges to 0 in the weak- * topology of B(H) whenever (ϕ n ) is a bounded sequence in A d converging to 0 pointwise on B d . The latter two conditions are in fact equivalent to the sequence (ϕ n ) converging to 0 in the weak- * topology of M d . Since the polynomial multipliers are weak- * dense in M d , if T is AC then the weak- * continuous extension of α T is unique and we denote it by The annihilating ideal of T is defined to be Next, we show that the functional calculus just defined is compatible with the Taylor functional calculus. This is folklore, but we provide the details for the reader's convenience. (ii) Assume that σ(T ) ⊂ B d . Then, for every ϕ ∈ A d we have If in addition T is AC, then for every ψ ∈ M d we have On the other hand, since the function f is holomorphic on a neighbourhood of the closed unit ball, upon expanding it as a convergent power series centred at the origin we find a sequence of polynomials (p n ) that converge uniformly on a neighbourhood of B d to f . Consequently, by the continuity property of τ Mx,U , we find that the sequence of operators τ Mx,U (p n ) = M pn , n ≥ 1 converges in norm to τ Mx,U (f ) = M ψ . This forces ψ ∈ A d and since the multiplier norm dominates the supremum norm over B d , we also find that (p n (z)) converges to ψ(z) for every z ∈ B d . In particular, we infer that ψ = ϕ so indeed ϕ ∈ A d . Finally, using the continuity property of α T and of τ T,U we find that the sequence of operators converges in norm to both α T (ϕ) and to τ T,U (f ), whence (ii) A standard polynomial approximation argument similar to the one used above shows that α T (ϕ) = τ T,B d (ϕ). for every ϕ ∈ A d . Assume now that T is AC and fix ψ ∈ M d . For each n ≥ 1, we define U n to be the open ball of radius (1 − 1/n) −1 centred at the origin, and we let for every n ≥ 1 by (i). Since σ(T ) ⊂ B d , we may invoke Theorem 2.5 to see that for every n ≥ 1. Furthermore, we note that by the uniform continuity of ψ on σ(T ), we have that the sequence (ψ n ) converges uniformly on σ(T ) to ψ, whence the sequence (τ T,B d (ψ n )) converges in norm to τ T,B d (ψ) by the continuity property of τ T,B d . On the other hand, it is well-known that (M ψn ) n converges to M ψ in the weak- * topology [30,Theorem 3.5.5], so that the sequence ( α T (ψ n )) converges in the weak - * topology to α T (ψ). Hence, the two limits must coincide and we conclude that In view of this result, we may unambiguously use the notation ϕ(T ) to denote the functional calculus associated to a commuting row contraction T and applied to a function ϕ, provided that this makes sense to begin with. 
We will do so henceforth, and will not distinguish between the various functional calculi.
Next, fix z ∈ C^d. We define an equivalence relation on the set of functions that are holomorphic in a neighbourhood of z: two such functions f_1 and f_2 are identified whenever they agree on some neighbourhood of z. In this case, we write f_1 ∼_z f_2; the relation ∼_z is an equivalence relation, and we denote by [f]_z the equivalence class of a function f holomorphic on a neighbourhood of z. We call [f]_z the germ of f at z, and we denote by O(z) the ring of all germs of holomorphic functions at z. In the following result, given a subset S of a ring we denote by ⟨S⟩ the ideal generated by S. Given an open subset U ⊂ C^d and a subset F ⊂ O(U), the common zero set Z_U(F) = {z ∈ U : f(z) = 0 for every f ∈ F} is an analytic variety in U. In fact, for each z ∈ Z_U(F) there is an open subset W ⊂ U containing z along with finitely many functions f_1, ..., f_k ∈ F such that Z_U(F) ∩ W = {w ∈ W : f_1(w) = ... = f_k(w) = 0}.
Proof. This follows from [20, Theorem II.E.3] and its proof.
We also write m_z = {[f]_z : f(z) = 0}, which is the maximal ideal in O(z). The next sequence of lemmas shows, for some purposes, that polynomials, holomorphic functions and multipliers can be used interchangeably when studying germs. First, we deal with a density question.
Lemma 2.8. Let c be an ideal in M_d, and let c̄ denote its weak-* closure. Let z ∈ B_d. Assume that there is a positive integer µ such that m_z^µ ⊂ ⟨[f]_z : f ∈ c⟩. Then, we have that ⟨[f]_z : f ∈ c̄⟩ = ⟨[f]_z : f ∈ c⟩.
Proof. Set a = ⟨[f]_z : f ∈ c̄⟩ and b = ⟨[f]_z : f ∈ c⟩; only the inclusion a ⊂ b requires proof. Let N be the cardinality of the set {α ∈ N^d : |α| ≤ µ − 1} and consider the surjective linear map ∆ : M_d → C^N given by ∆(ψ) = ((∂^α ψ/∂x^α)(z))_{|α| ≤ µ−1} for every ψ ∈ M_d. By virtue of Equation (1), we see that ∆ is weak-* continuous, so that ∆(c̄) = ∆(c). Fix f ∈ c̄. By the previous equality, we find g ∈ c with the property that ∆(f) = ∆(g). Finally, we find that [f]_z = [g]_z + [f − g]_z, where [f − g]_z ∈ m_z^µ ⊂ b since all derivatives of f − g of order at most µ − 1 vanish at z. We conclude that a ⊂ b as desired.
Next, we introduce a mechanism to move between ideals of germs of holomorphic functions and ideals of polynomials. Given an open subset U ⊂ C^d, a subset of functions F ⊂ O(U) and a point z ∈ U, we define the polynomial ideal determined by F at z to be p(F, z) = {p ∈ C[x_1, ..., x_d] : [p]_z ∈ ⟨[f]_z : f ∈ F⟩}. We now verify that this construction yields nothing new if we start with an ideal of polynomials a ⊂ C[x_1, ..., x_d], provided that a contains some power of the maximal ideal m_z.
Lemma 2.9. Let a ⊂ C[x_1, ..., x_d] be an ideal of polynomials, and let z ∈ C^d. Assume that there is a positive integer µ such that m_z^µ ⊂ a. Then, we have that p(a, z) = a.
Proof. We trivially have that a ⊂ p(a, z). To prove the reverse inclusion, we fix p ∈ p(a, z). By definition, this means that there are p_1, ..., p_m ∈ a and functions f_1, ..., f_m holomorphic near z such that [p]_z = Σ_{j=1}^m [p_j]_z [f_j]_z. Upon writing each f_j as a power series convergent around z, we see that there is another function g_j holomorphic near z such that [g_j]_z ∈ m_z^µ and a polynomial r_j such that f_j = r_j + g_j near z. In particular, we see that p_jr_j ∈ a for every 1 ≤ j ≤ m and thus Σ_{j=1}^m p_jr_j ∈ a. On the other hand, we see that [p − Σ_{j=1}^m p_jr_j]_z = Σ_{j=1}^m [p_j]_z[g_j]_z, and since p − Σ_{j=1}^m p_jr_j is a polynomial, this means that p − Σ_{j=1}^m p_jr_j ∈ m_z^µ ⊂ a. Finally, we see that p = (p − Σ_{j=1}^m p_jr_j) + Σ_{j=1}^m p_jr_j ∈ a.
The next result (Lemma 2.10) shows that if z is an isolated point of Z_U(F), then the polynomial ideal determined by F at z contains all the relevant information about F, in the sense that ⟨[f]_z : f ∈ F⟩ = ⟨[p]_z : p ∈ p(F, z)⟩. Furthermore, the radical of p(F, z) is m_z, and there is a positive integer µ such that m_z^µ ⊂ p(F, z).
Proof. For convenience, throughout the proof we let a = ⟨[f]_z : f ∈ F⟩. The inclusion ⟨[p]_z : p ∈ p(F, z)⟩ ⊂ a holds by the definition of p(F, z). To establish the converse, we first make a preliminary observation. Because z is an isolated point of Z_U(F), it follows from the Nullstellensatz for O(z) (see [20, Theorems II.E.20 and III.A.7]) that the radical of a is m_z. In particular, for each 1 ≤ j ≤ d there is a positive integer µ_j such that [(x_j − z_j)^{µ_j}]_z ∈ a. This immediately implies that the radical of p(F, z) contains m_z, and hence is equal to m_z by maximality. Fixing now [f]_z ∈ a, we will show that [f]_z ∈ ⟨[p]_z : p ∈ p(F, z)⟩, thus completing the proof.
Upon writing f as a power series convergent around z, we see that there is another function g holomorphic near z such that [g]_z ∈ ⟨[q]_z : q ∈ m_z^µ⟩ and a polynomial r such that f = r + g near z. It thus suffices to show that [r]_z and [g]_z belong to ⟨[p]_z : p ∈ p(F, z)⟩. Using that m_z^µ ⊂ p(F, z), we see at once that [g]_z ∈ ⟨[q]_z : q ∈ m_z^µ⟩ ⊂ ⟨[p]_z : p ∈ p(F, z)⟩; in particular [g]_z ∈ a, so that [r]_z = [f]_z − [g]_z ∈ a, whence r ∈ p(F, z) by definition and [r]_z ∈ ⟨[p]_z : p ∈ p(F, z)⟩ as well.
The previous lemma allows us to make an important definition that we require later. Let U ⊂ C^d be an open subset and let z ∈ U. Let F ⊂ O(U) be a subset with the property that z is an isolated point of Z_U(F). By Lemma 2.10, there is a positive integer µ such that m_z^µ ⊂ p(F, z). We may thus define the polynomial order of F at z to be the smallest positive integer κ such that m_z^{κ+1} ⊂ p(F, z). In the special case where Z_U(F) is a discrete subset of U, then F has a well-defined polynomial order at every z ∈ Z_U(F).
3. Higher order vanishing ideals
In this section, we consider higher order vanishing ideals of interpolating sequences. In subsequent sections, these objects will form the basis of various operator theoretic problems. For now, we construct such ideals, and show that the multivariate setting supports a wealth of drastically different behaviours, making it much richer and more complicated than the familiar univariate situation. In turn, this provides motivation for the work appearing in later sections. Throughout this section, Λ ⊂ B_d will be a countable set. For each non-negative integer κ, we define the vanishing ideal of Λ of order κ, denoted by v_κ(Λ), to be the collection of functions ψ ∈ M_d such that (∂^α ψ/∂x^α)(z) = 0 for every z ∈ Λ and every α ∈ N^d such that |α| ≤ κ. It follows from Equation (1) that v_κ(Λ) is a weak-* closed ideal of M_d. Our attention will be mostly devoted to the case where Λ is an interpolating sequence (see Subsection 2.1). In this case, we can identify the zero set of v_κ(Λ).
Theorem 3.1. Let Λ ⊂ B_d be an interpolating sequence and let κ be a non-negative integer. Then, we have that Z_{B_d}(v_κ(Λ)) = Λ.
The class of ideals we will be interested in for the rest of the paper are those that contain v_κ(Λ) for some κ; we will typically refer to them as higher order vanishing ideals. One useful property that such ideals possess is that they have uniformly bounded polynomial order at every point in their zero set.
Lemma 3.2. Let Λ ⊂ B_d be an interpolating sequence and let κ be a non-negative integer. Let a ⊂ M_d be an ideal such that a ⊃ v_κ(Λ). Then, for every z ∈ Z_{B_d}(a), the polynomial order of a at z is at most κ.
Proof. Fix z ∈ Λ and let θ ∈ M_d be a multiplier vanishing on Λ \ {z} and such that θ(z) = 1. Let α ∈ N^d be such that |α| > κ. Then, it is readily verified that (x − z)^α θ^{κ+1} ∈ v_κ(Λ) ⊂ a; since [θ^{κ+1}]_z is invertible in O(z), it follows that (x − z)^α ∈ p(a, z), whence m_z^{κ+1} ⊂ p(a, z).
Next, we aim to elucidate the structure of higher order vanishing ideals. For this purpose, it is useful to first consider the single-variable case.
Example 1. Let Λ ⊂ B_1 be an interpolating sequence, let κ be a non-negative integer and let a ⊂ M_1 be a weak-* closed ideal containing v_κ(Λ). Let θ ∈ M_1 denote the Blaschke product with a simple zero at every point of Λ. Using the classical inner-outer factorization along with the factorization of inner functions as Blaschke products and singular inner functions, it is readily seen that the fact that a contains v_κ(Λ) is equivalent to θ^{κ+1} ∈ a. There is an inner function τ ∈ M_1 such that a = τM_1. Then, we see that τ divides θ^{κ+1}, so that τ is itself a Blaschke product whose zeros lie in Λ, where the multiplicity n_z of each zero z is a positive integer at most κ + 1.
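In display form, the objects of this example can be sketched as follows. The precise shape of the product below, and of the identity labelled (2) which the next lines refer back to, are reconstructions from the surrounding discussion rather than quotations from the original.

```latex
% the generator of a is a Blaschke product with zeros in \Lambda of bounded multiplicity:
\tau \;=\; \prod_{z \in \Lambda_0} b_z^{\,n_z}, \qquad \Lambda_0 \subset \Lambda, \quad 1 \le n_z \le \kappa+1,
% where b_z denotes the Blaschke factor with zero z; consequently, at each zero,
p(a, z) \;=\; \mathfrak{m}_z^{\,n_z}, \qquad z \in \Lambda_0. \tag{2}
```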
It is then readily verified that the equalities displayed above hold.
Our next task is to show that the simple behaviour witnessed in the previous example is special to the univariate setting. Some preparation is required.
Lemma 3.3. Let Λ ⊂ B_d be an interpolating sequence and let κ be a non-negative integer. For each subset Ω ⊂ Λ, there is a multiplier θ_Ω ∈ M_d with the following properties: (i) θ_Ω ∈ v_κ(Λ \ Ω); (ii) 1 − θ_Ω ∈ v_κ(Ω); (iii) the constant function 1 belongs to the weak-* closure of v_κ(Λ) + span{θ_{{z}} : z ∈ Λ}.
Proof. Since Λ is an interpolating sequence, an application of the open mapping theorem yields the existence of a constant C > 0 such that for every bounded function f on Λ, there is a corresponding multiplier in M_d whose restriction to Λ coincides with f and whose norm is at most C sup_{z∈Λ}|f(z)|. In particular, for every Ω ⊂ Λ there is ϕ_Ω ∈ M_d whose restriction to Λ agrees with the characteristic function of Ω and such that ‖ϕ_Ω‖ ≤ C. For each Ω ⊂ Λ, we put θ_Ω = (1 − (1 − ϕ_Ω)^{κ+1})^{κ+1}. It is readily checked that these functions satisfy properties (i) and (ii). To establish (iii), let (Ω_n) be an increasing sequence of finite subsets of Λ such that ∪_{n=1}^∞ Ω_n = Λ. For each positive integer n, we claim that the multiplier ψ_n = θ_{Ω_n} − Σ_{z∈Ω_n} θ_{{z}} belongs to v_κ(Λ). To see this, first note that for every Ω ⊂ Λ we have θ_Ω ∈ v_κ(Λ \ Ω) and 1 − θ_Ω ∈ v_κ(Ω), which implies in particular that
(3) (∂^α θ_Ω/∂x^α)(w) = 0 for every w ∈ Λ and every α ∈ N^d such that |α| ≥ 1 and |α| ≤ κ.
Fix w ∈ Λ and n ≥ 1. If w ∈ Ω_n, then we have θ_{Ω_n}(w) = 1 = Σ_{z∈Ω_n} θ_{{z}}(w), while if w ∉ Ω_n we have θ_{Ω_n}(w) = 0 = Σ_{z∈Ω_n} θ_{{z}}(w); combined with (3), this shows that (∂^α ψ_n/∂x^α)(w) = 0 for every w ∈ Λ and every α ∈ N^d with |α| ≤ κ. This establishes the claim that ψ_n ∈ v_κ(Λ). Now, the sequence (θ_{Ω_n}) is bounded, and hence it has a weak-* limit point τ ∈ M_d. Note that τ is the weak-* limit of a subnet of (ψ_n + Σ_{z∈Ω_n} θ_{{z}}). On the other hand, for each z ∈ Λ, there is N ≥ 1 such that z ∈ Ω_n for every n ≥ N. We thus find τ(z) = lim_{n→∞} θ_{Ω_n}(z) = 1, and using Equation (1) we obtain (∂^α τ/∂x^α)(z) = 0 for every z ∈ Λ and 1 ≤ |α| ≤ κ, whence 1 − τ ∈ v_κ(Λ).
We now arrive at the main result of this section, showing that higher order vanishing ideals are plentiful.
Theorem 3.4. Let Λ ⊂ B_d be an interpolating sequence, let κ be a non-negative integer, and suppose that for each z ∈ Λ we are given an ideal a_z ⊂ C[x_1, ..., x_d] with m_z^{κ+1} ⊂ a_z. Then, there is a weak-* closed ideal b ⊂ M_d containing v_κ(Λ) such that p(b, z) = a_z for every z ∈ Λ.
Proof. Fix z ∈ Λ and write θ_z = θ_{{z}}. We claim the following. Indeed, for every w ∈ Λ, w ≠ z, and every α ∈ N^d with |α| ≤ κ, we note that (∂^α θ_z/∂x^α)(w) = 0. On the other hand, we know by assumption that m_w^{κ+1} ⊂ a_w for every w ∈ Λ, w ≠ z, and therefore the relevant germs belong to the corresponding ideals. Since θ_z(z) = 1, we conclude that [θ_z]_z is invertible in O(z). Consequently, we find the stated inclusion of germ ideals, where the last inclusion follows from Equation (4). The claim is established. Next, we always have the trivial inclusion, and thus the two germ ideals coincide. By Lemma 2.8, it follows that the weak-* closure does not alter the germ ideal at z, and so p(b, z) = p(a_z, z). Finally, we may invoke Lemma 2.9 to get p(b, z) = a_z.
This theorem says in particular that the higher order vanishing behaviour of multipliers in several variables is more complicated than what is visible on the unit disc. We illustrate this in the following example, leveraging the fact that we can prescribe the polynomial ideals determined by such an ideal at every point.
Example 2. Let Λ = {z_n : n ∈ N} ⊂ B_2 be an interpolating sequence. For each n ∈ N, write z_n = (z_{n,1}, z_{n,2}) and consider the ideal b_n = m²_{z_n} + ⟨x_1 − z_{n,1}⟩. It is readily verified that b_n is not a power of the maximal ideal m_{z_n}. By Theorem 3.4, there are weak-* closed ideals a, b ⊂ M_2 both containing v_2(Λ) and satisfying p(a, z_n) = m²_{z_n} and p(b, z_n) = b_n for every n ≥ 1. This type of phenomenon does not occur in one variable; the reader may wish to compare the preceding equalities with Equation (2) in Example 1.
4. Jordan-type decompositions
In this section, we investigate AC commuting row contractions annihilated by some ideal of multipliers.
Our main goal is to show that if the annihilating ideal is a higher order vanishing ideal for some interpolating sequence, then the corresponding commuting row contraction is similar to a block diagonal tuple, where each block is a nilpotent tuple translated by a scalar multiple of the identity. We view this as an infinite-dimensional, multivariate version of the classical Jordan decomposition of a matrix. Several preliminary lemmas are required before we can prove the existence of such a decomposition. We start with a technical fact.
Lemma 4.1. Let T be an AC commuting row contraction on H and let S ⊂ M_d be a subset such that the constant function 1 lies in the weak-* closure of S + Ann(T). Then R = ⋁_{ϕ∈S} ran ϕ(T) equals H.
Proof. We note that R^⊥ = ⋂_{ϕ∈S} ker ϕ(T)*. By assumption, the constant multiplier 1 is in the weak-* closure of S + Ann(T). Thus there is a net (ψ_j)_{j∈J} in Ann(T) and a net (ϕ_j)_{j∈J} in S such that (ϕ_j + ψ_j)_{j∈J} converges in the weak-* topology of M_d to 1. On the other hand, since ψ_j(T) = 0 for every j ∈ J and since T is AC, we have that the net (ϕ_j(T))_{j∈J} converges to I in the weak-* topology of B(H). For ξ ∈ ⋂_{ϕ∈S} ker ϕ(T)*, we find ⟨ξ, ζ⟩ = lim_j ⟨ϕ_j(T)*ξ, ζ⟩ = 0 for every ζ ∈ H, whence ξ = 0 and R = H.
We now clarify the relationship between the Taylor spectrum and the zero sets of annihilating ideals.
Theorem 4.2. Let T = (T_1, ..., T_d) be an AC commuting row contraction and let Λ ⊂ B_d be an interpolating sequence. Assume that v_κ(Λ) ⊂ Ann(T) for some non-negative integer κ. Then, we have that σ(T) ∩ B_d = Z_{B_d}(Ann(T)).
Proof. Fix z ∈ B_d \ Z_{B_d}(Ann(T)) and choose ϕ ∈ Ann(T) such that ϕ(z) = 1. By Theorem 2.2 there are ψ_1, ..., ψ_d ∈ M_d such that 1 − ϕ = Σ_{j=1}^d (x_j − z_j)ψ_j. Applying the functional calculus to the previous equality yields I = Σ_{j=1}^d (T_j − z_jI)ψ_j(T), whence z ∉ σ(T). Assume next that z ∈ Λ is such that θ_{{z}}(T) ≠ 0 and set R = T|_{ran θ_{{z}}(T)}. Note that θ_{{z}}² − θ_{{z}} ∈ v_κ(Λ) ⊂ Ann(T), and thus θ_{{z}}(T) is an idempotent. Likewise, the d-tuple ((R_1 − z_1)^{κ+1}, ..., (R_d − z_d)^{κ+1}) is simply the zero d-tuple, and hence its Taylor spectrum is the origin in C^d. By the spectral mapping theorem [25, Corollary IV.30.11], we must have σ(R) = {z}. However, it follows from [32, Lemma III.13.4] that σ(R) ⊂ σ(T), so z ∈ σ(T). This shows that θ_{{z}} ∈ Ann(T) for every z ∈ Λ \ σ(T). In particular, combining Lemma 3.3 (iii) with Lemma 4.1, we note that
(5) H = ⋁_{z∈Λ} ran θ_{{z}}(T).
For ϕ ∈ v_κ(σ(T) ∩ Λ), we then have ran θ_{{z}}(T) ⊂ ker ϕ(T), z ∈ Λ, and therefore ϕ(T) = 0 in light of Equation (5). We conclude that v_κ(σ(T) ∩ Λ) ⊂ Ann(T). Since v_κ(Λ) ⊂ Ann(T), it follows from Theorem 3.1 (applied to the interpolating sequence σ(T) ∩ Λ) that Z_{B_d}(Ann(T)) ⊂ σ(T) ∩ Λ. We conclude that σ(T) ∩ B_d = Z_{B_d}(Ann(T)).
A more thorough exploration of the relationship between the Taylor spectrum and annihilating ideals will be undertaken in an upcoming paper. For now, we turn to elucidating the structure of AC commuting row contractions whose Taylor spectrum is a singleton. As motivation, we first consider the univariate situation. Let T ∈ B(H) be an AC contraction with non-trivial annihilating ideal and with σ(T) = {λ} for some λ ∈ B_1. It then follows from [6, Theorem 4.11] that Ann(T) is generated by some power of the Blaschke factor with root λ, and in particular T − λI is a nilpotent operator. As the next result shows, similar statements hold true for AC commuting row contractions under a topological assumption on the zero set of the annihilating ideal. We say that a commuting d-tuple T = (T_1, ..., T_d) is nilpotent if for each 1 ≤ j ≤ d there is a positive integer n_j such that T_j^{n_j} = 0.
Theorem 4.3. Let T be an AC commuting row contraction such that σ(T) = {z} for some z ∈ B_d which is an isolated point of Z_{B_d}(Ann(T)). Then T − zI is nilpotent, and Ann(T) is the weak-* closure of p(Ann(T), z) in M_d.
Proof. Using the fact that O(z) is Noetherian, we can find ψ_1, ..., ψ_m ∈ Ann(T) such that ⟨[ψ]_z : ψ ∈ Ann(T)⟩ = ⟨[ψ_1]_z, ..., [ψ_m]_z⟩. On the other hand, it follows from Lemma 2.10 that
(6) ⟨[ψ]_z : ψ ∈ Ann(T)⟩ = ⟨[p]_z : p ∈ p(Ann(T), z)⟩.
Let p ∈ p(Ann(T), z). There are functions g_1, ..., g_m analytic on a neighborhood of z such that [p]_z = Σ_{j=1}^m [ψ_j]_z [g_j]_z. In particular, there is a small open ball B centred at z on which the functions g_1, ..., g_m are defined and holomorphic, and are such that p = Σ_{j=1}^m ψ_jg_j everywhere on B. Applying the functional calculus to this equality and invoking Theorem 2.6, we find p(T) = Σ_{j=1}^m ψ_j(T)g_j(T) = 0 since ψ_1, ..., ψ_m ∈ Ann(T).
Thus, p ∈ Ann(T). We conclude that p(Ann(T), z) ⊂ Ann(T). It remains only to show that p(Ann(T), z) is weak-* dense in Ann(T). Fix ψ ∈ Ann(T). Using Equation (6), there are polynomials q_1, ..., q_m ∈ p(Ann(T), z) and functions f_1, ..., f_m holomorphic on a neighbourhood of z such that [ψ]_z = Σ_{j=1}^m [q_j]_z [f_j]_z. For each 1 ≤ j ≤ m, upon writing f_j as a power series convergent around z, we see that there is another function g_j holomorphic near z such that [g_j]_z ∈ m_z^κ and a polynomial r_j such that f_j = r_j + g_j near z. Setting p = Σ_{j=1}^m q_jr_j ∈ p(Ann(T), z), in particular we infer that (∂^α/∂x^α)(ψ − p)(z) = 0 for every α ∈ N^d such that |α| ≤ κ − 1. By virtue of Theorem 2.2, for each α ∈ N^d with |α| = κ there is a multiplier ϕ_α ∈ M_d such that ψ − p = Σ_{|α|=κ} (x − z)^α ϕ_α. In particular, this means that ψ − p belongs to the weak-* closure of m_z^κ in M_d. Invoking (7), we see that ψ = (ψ − p) + p belongs to the weak-* closure of p(Ann(T), z). We conclude that p(Ann(T), z) is weak-* dense in Ann(T).
We remark here that the zero set of the annihilating ideal of a single AC contraction is a Blaschke sequence, all the points of which are isolated. Thus, the topological assumption on the zero set in the previous result is automatically satisfied in one variable. We will need to apply Theorem 4.3 when σ(T) is discrete but contains more than a single point. For this purpose, we introduce the following procedure which allows us to isolate points in the spectrum.
Lemma 4.4. Let T be an AC commuting row contraction with σ(T) ⊂ B_d, let z be an isolated point of σ(T), and let B ⊂ B_d be an open ball centred at z whose closure meets σ(T) only in {z}. Denote by χ_B the characteristic function of B and put M = ran χ_B(T). Then M = ⋂{ker p(T) : p ∈ p(Ann(T), z)}.
Proof. To show the reverse inclusion, we put K = ⋂{ker p(T) : p ∈ p(Ann(T), z)} and let R = T|_K. It follows from Lemma 2.10 that there is a positive integer κ such that m_z^κ ⊂ p(Ann(T), z) ⊂ Ann(R). In particular, we see that the d-tuple ((R_1 − z_1)^κ, ..., (R_d − z_d)^κ) is simply the zero d-tuple, and hence its Taylor spectrum is the origin in C^d. By the spectral mapping theorem [25, Corollary IV.30.11], we must have σ(R) = {z}. But then χ_B is identically 1 on a neighbourhood of σ(R), and it follows that χ_B(R) = I. Let X : K → H be the inclusion map, and note that XR_j = T_jX for 1 ≤ j ≤ d. It follows from Theorem 2.5 that χ_B(T)X = Xχ_B(R) = X. Thus, K = ran X ⊂ ran χ_B(T) = M and the proof is complete.
Theorem 4.3 and Lemma 4.4 taken together hint at a possible approach to construct Jordan-type decompositions. However, to deal with infinite spectra this procedure would need to be applied inductively infinitely many times, thus causing significant problems regarding convergence for instance. Whenever the zero set of the annihilating ideal forms an interpolating sequence, these difficulties can be circumvented, as the next developments showcase.
Lemma 4.5. Let T = (T_1, ..., T_d) be an AC commuting row contraction such that Λ = Z_{B_d}(Ann(T)) is an interpolating sequence. Assume that there is a non-negative integer κ such that v_κ(Λ) ⊂ Ann(T). For each z ∈ Λ, let K_z = ⋂{ker p(T) : p ∈ p(Ann(T), z)}. Then, the following statements hold.
(i) Let z ∈ Λ and let θ ∈ v_κ(Λ \ {z}) be such that 1 − θ ∈ v_κ({z}). Then, the subspace K_z is non-zero and coincides with ran θ(T).
(ii) We have that H = ⋁_{z∈Λ} K_z.
(iii) For every z ∈ Λ, the ideal Ann(T|_{K_z}) coincides with the weak-* closure of p(Ann(T), z) in M_d.
Proof. (i) Fix z ∈ Λ. Since Λ is an interpolating sequence, z is an isolated point of Λ, and hence of Z_{B_d}(Ann(T)) by assumption. Furthermore, we see that z ∈ σ(T) ∩ B_d by Theorem 4.2. The fact that K_z is non-zero then follows immediately from Lemma 4.4. Fix now p ∈ p(Ann(T), z). By definition, there are multipliers ϕ_1, ..., ϕ_m ∈ Ann(T) and functions f_1, ..., f_m holomorphic on a neighbourhood of z such that [p]_z = Σ_{j=1}^m [ϕ_j]_z [f_j]_z. For each 1 ≤ j ≤ m, upon writing f_j as a power series convergent around z, we see that there is another function g_j holomorphic near z such that [g_j]_z ∈ m_z^{κ+1} and a polynomial r_j such that f_j = r_j + g_j near z. Set ϕ = Σ_{j=1}^m r_jϕ_j ∈ Ann(T). Thus, there is [g]_z ∈ m_z^{κ+1} such that [p − ϕ]_z = [g]_z. In particular, we infer that p − ϕ ∈ v_κ({z}), whence (p − ϕ)θ ∈ v_κ(Λ) ⊂ Ann(T), and therefore pθ ∈ Ann(T).
Consequently, we find p(T)θ(T) = 0, so that ran θ(T) ⊂ ker p(T). This shows that ran θ(T) ⊂ K_z. Conversely, we note that by Lemma 3.2 we have m_z^{κ+1} ⊂ p(Ann(T), z), so that K_z ⊂ ⋂{ker q(T) : q ∈ m_z^{κ+1}}. Now, we have 1 − θ ∈ v_κ({z}), so that by Theorem 2.2 for each α ∈ N^d with |α| = κ + 1 there is ψ_α ∈ M_d such that 1 − θ = Σ_{|α|=κ+1} (x − z)^α ψ_α. We conclude that K_z ⊂ ker(I − θ(T)); moreover θ − θ² ∈ v_κ(Λ) ⊂ Ann(T), so that θ(T) is idempotent and ker(I − θ(T)) = ran θ(T). We conclude that K_z ⊂ ran θ(T), and statement (i) is established.
(ii) Apply Lemma 3.3 to find, for every z ∈ Λ, a multiplier θ_z ∈ v_κ(Λ \ {z}) such that 1 − θ_z ∈ v_κ({z}), and with the property that the constant function 1 lies in the weak-* closure of v_κ(Λ) + span{θ_z : z ∈ Λ}. By Lemma 4.1, we infer that H = ⋁_{z∈Λ} ran θ_z(T) = ⋁_{z∈Λ} K_z, where the last equality follows from statement (i). Thus, statement (ii) holds.
(iii) Let z ∈ Λ. It follows immediately from the definition of K_z that p(Ann(T), z) ⊂ Ann(T|_{K_z}). Let now ϕ ∈ Ann(T|_{K_z}). By Theorem 2.2 there is a polynomial p and a multiplier ψ in the ideal generated by m_z^{κ+1} such that ϕ = p + ψ. As noted in the proof of (i), we have m_z^{κ+1} ⊂ p(Ann(T), z), so in fact ψ belongs to the weak-* closure of p(Ann(T), z) in M_d. Since T is AC, we can then infer that K_z ⊂ ker ψ(T). Applying statement (i) to the function θ_z ∈ M_d defined in the proof of (ii) above, we find that pθ_z ∈ Ann(T), and therefore [pθ_z]_z ∈ ⟨[τ]_z : τ ∈ Ann(T)⟩. Because θ_z(z) = 1, it follows that [θ_z]_z is invertible in O(z), hence [p]_z ∈ ⟨[τ]_z : τ ∈ Ann(T)⟩ and therefore p ∈ p(Ann(T), z). Hence, ϕ = p + ψ belongs to the weak-* closure of p(Ann(T), z) in M_d and statement (iii) follows.
Finally, we arrive at the main result of this section, which is also the central result of the paper. Therein, we obtain a Jordan-type decomposition for AC commuting row contractions whose annihilating ideal is a higher order vanishing ideal of some interpolating sequence.
Theorem 4.6. Let T = (T_1, ..., T_d) be an AC commuting row contraction. Let Λ ⊂ B_d be an interpolating sequence and let κ be a non-negative integer such that v_κ(Λ) ⊂ Ann(T). Then, for each z ∈ Z_{B_d}(Ann(T)) there is a commuting nilpotent d-tuple N^{(z)} such that zI + N^{(z)} is an AC commuting row contraction whose annihilating ideal is generated by p(Ann(T), z). Furthermore, T is similar to ⊕_{z∈Z_{B_d}(Ann(T))} (zI + N^{(z)}).
Proof. Put Λ_0 = Z_{B_d}(Ann(T)); since v_κ(Λ) ⊂ Ann(T), we have Λ_0 ⊂ Z_{B_d}(v_κ(Λ)) = Λ, where the last equality follows from Theorem 3.1. In particular, Λ_0 is also an interpolating sequence. Now, Theorem 4.2 implies that Λ_0 = σ(T) ∩ B_d and v_κ(Λ_0) ⊂ Ann(T), so upon replacing Λ by Λ_0 if necessary, it is no loss of generality to assume that Λ = Z_{B_d}(Ann(T)). For each z ∈ Λ, we put K_z = ⋂{ker p(T) : p ∈ p(Ann(T), z)}. Applying Lemma 4.5, we see that H = ⋁_{z∈Λ} K_z. Moreover, for every z ∈ Λ we have that K_z = ran θ_{{z}}(T) is a non-zero invariant subspace for T and that Ann(T|_{K_z}) is the weak-* closure of p(Ann(T), z). Thus YK_z is orthogonal to YK_w whenever z, w ∈ Λ are distinct, where H denotes the Hilbert space on which T acts and Y is the invertible operator obtained above. Since every K_z is invariant for T, we may consider the blocks T|_{K_z}. Let X : H → ⊕_{z∈Λ} K_z be the linear map induced by this decomposition. Then X is a boundedly invertible linear map with the property that XT_jX^{-1} = ⊕_{z∈Λ} T_j|_{K_z} for 1 ≤ j ≤ d. In turn, use Theorem 2.5 to find σ(T|_{K_z}) = {z}. Since Ann(T) ⊂ Ann(T|_{K_z}), we conclude that Z_{B_d}(Ann(T|_{K_z})) ⊂ Λ, so that z is an isolated point of Z_{B_d}(Ann(T|_{K_z})). We may apply Theorem 4.3 to see that T|_{K_z} = zI + N^{(z)} for some nilpotent d-tuple N^{(z)} acting on K_z whose annihilating ideal is generated by p(Ann(T), z) (by Lemma 4.5). We mention that the previous theorem generalizes [8, Corollary 3.3] in two ways: it extends it to the multivariate setting, and it allows for a wider range of annihilating ideals. The reader may also wish to compare with [8, Theorem 5.7]. We close this section with a reformulation of Theorem 4.6 in the case where κ = 0.
Corollary 4.7. Let T = (T_1, ..., T_d) be an AC commuting row contraction. Let Λ ⊂ B_d be an interpolating sequence such that v_0(Λ) ⊂ Ann(T). Then, T is similar to ⊕_{z∈Z_{B_d}(Ann T)} zI. If in addition we assume that T has a cyclic vector, then T is similar to ⊕_{z∈Z_{B_d}(Ann T)} z.
Proof. By virtue of Theorem 4.6, for each z ∈ Z_{B_d}(Ann(T)) there is a commuting nilpotent d-tuple N^{(z)} such that zI + N^{(z)} is an AC commuting row contraction whose annihilating ideal is generated by p(Ann(T), z). Furthermore, T is similar to ⊕_{z∈Z_{B_d}(Ann T)} (zI + N^{(z)}). It is readily verified that m_z = p(v_0(Λ), z), whence m_z ⊂ p(Ann(T), z) ⊂ Ann(zI + N^{(z)}), and so N^{(z)} = 0. Hence T is in fact similar to ⊕_{z∈Z_{B_d}(Ann T)} zI. Finally, if T has a cyclic vector, then so does ⊕_{z∈Z_{B_d}(Ann T)} zI, which forces the identity operators appearing in this decomposition to act on one-dimensional spaces, whence T is indeed similar to ⊕_{z∈Z_{B_d}(Ann T)} z.
5. Application: an operator theoretic characterization of interpolating sequences
As a first application of Theorem 4.6, in this section we explore a characterization of interpolating sequences phrased purely in operator theoretic terms. More precisely, we seek to obtain a multivariate version of [8, Theorem 4.4]. We begin by recording a simple observation.
Lemma 5.1. Let a ⊂ M_d be a weak-* closed ideal and let Ω = (ω_1, ω_2, ...) be an inner multiplier with ran M_Ω = [aH²_d]. Then, a is the weak-* closed ideal generated by {ω_m : m ∈ N}.
Another elementary fact we single out relates to compressions of partial isometries.
Lemma 5.2. The following statements hold.
(i) Let {V_n : n ∈ N} be a family of contractions on some Hilbert space. Assume that the row operator V = (V_1, V_2, ...) is a partial isometry. Let M be a closed subspace which is coinvariant for V_n for each n ∈ N and such that M^⊥ ⊂ ran V. Then, P_MVP_M is a partial isometry.
(ii) Let a ⊂ M_d be a weak-* closed ideal. Let H be a Hilbert space and let Ω : B_d → B(H, C) be an inner multiplier such that [aH²_d] ⊂ ran M_Ω. Then, P_{H_a}M_Ω|_{H_a⊗H} is a partial isometry.
Proof. (i) Since M^⊥ ⊂ ran V, we see that VV*ξ = ξ for every ξ ∈ M^⊥. Using the fact that M is coinvariant for each V_n, we find that V*P_M = (P_M ⊗ I)V*P_M; a direct computation using these two facts then shows that W = P_MV(P_M ⊗ I) satisfies WW*W = W. We conclude that P_MVP_M is a partial isometry.
(ii) This follows immediately from (i).
We remark that statement (ii) in the previous result is analogous to a classical fact [6, Problem III.1.11], which says that if θ ∈ H^∞ is an inner function and ω is an inner divisor of θ, then ω(S_θ) is a partial isometry. Here, S_θ denotes the one-variable model operator. Next, we obtain a sort of converse to Theorem 4.6. Roughly speaking, it says that a sequence can be determined to be strongly separated (see Subsection 2.1) if there exists a certain Jordan-type decomposition. By Theorem 2.1, there is a Hilbert space X such that for each λ ∈ Λ there is an inner multiplier Ω_λ : B_d → B(X, C) with ran M_{Ω_λ} = [a_λH²_d]; in particular, we have that [aH²_d] ⊂ ran M_{Ω_λ}. In light of Lemma 5.2, we infer that the row operator Ω_λ(Z_a) = P_{H_a}M_{Ω_λ}|_{H_a⊗X} : H_a ⊗ X → H_a is a partial isometry for every λ ∈ Λ. Consider now the row operator Ω_λ(D), where D denotes the diagonal tuple from the statement. Fix λ ∈ Λ. Let h ∈ ran Ω_λ(D). It is then readily verified that Xh lies in the range of Ω_λ(Z_a). Hence, we may choose f ∈ H_a ⊗ X such that Ω_λ(Z_a)f = Xh and ‖f‖ = ‖Xh‖. Let ω_1, ω_2, ... be contractive multipliers such that Ω_λ(z) = (ω_1(z) ω_2(z) ⋯), z ∈ B_d. By Lemma 5.1, a_λ is the weak-* closed ideal generated by {ω_m : m ∈ N}. Since ϕ_λ ∈ a_λ and since D is absolutely continuous, we conclude that ϕ_λ(D) lies in the weak-* closure of the ideal in B(H) generated by {ω_m(D) : m ∈ N}. Since e_λ = ϕ_λ(D)e_λ, it follows that e_λ lies in the closed span of the spaces ran ω_m(D), m ∈ N, and ran ω_m(D) ⊂ ran Ω_λ(D) for every m.
We conclude that e_λ ∈ ran Ω_λ(D), and thus the proof is complete.
One consequence of the previous theorem is that the sequence Λ is both strongly separated and strongly separated by inner multipliers (see Subsection 2.1). This is no coincidence; these notions actually coincide. The proof of this fact requires the following technical tool.
Theorem 5.4. Let λ ∈ B_d and let a ⊂ M_d be a weak-* closed ideal. Let δ > 0. The following statements are equivalent.
Proof. Choose c_1, c_2, ..., c_N ∈ C such that Σ_{n=1}^N |c_n|² ≤ 1 and |Σ_{n=1}^N c_nω_n(λ)| > δ. Set ω = Σ_{n=1}^N c_nω_n ∈ a. Since Σ_{n=1}^N |c_n|² ≤ 1 and the ω_n are the entries of an inner multiplier, we see that ‖M_ω‖ ≤ 1. Finally, we find |ω(λ)| > δ.
We can now show that the notions of strong separation and of strong separation by inner multipliers coincide.
Corollary 5.5. Let Λ = {λ_n : n ∈ N} ⊂ B_d be a sequence and let δ > 0. Then, the following statements are equivalent.
(i) For every n ∈ N, there is a contractive multiplier ω_n ∈ M_d with |ω_n(λ_n)| > δ and such that ω_n(λ_m) = 0 for every m ≠ n.
(ii) For every n ∈ N, there is a separable Hilbert space H_n and an inner multiplier Ω_n : B_d → B(H_n, C) with ‖Ω_n(λ_n)‖ > δ and such that Ω_n(λ_m) = 0 for every m ≠ n.
Proof. For every n ∈ N, let a_n = v_0(Λ \ {λ_n}). Assume that (i) holds and fix n ∈ N. Then, we see that ω_n ∈ a_n, so by Theorem 5.4 there is a separable Hilbert space H_n and an inner multiplier Ω_n : B_d → B(H_n, C) such that ran M_{Ω_n} = [a_nH²_d] and ‖Ω_n(λ_n)‖ > δ. Now, there are contractive multipliers {θ_k : k ≥ 1} such that Ω_n(z) = (θ_1(z) θ_2(z) ⋯), z ∈ B_d. Lemma 5.1 implies that θ_k ∈ a_n for every k ∈ N. In particular, for every k ∈ N and every m ≠ n we have θ_k(λ_m) = 0. Thus, Ω_n(λ_m) = 0 if m ≠ n. We conclude that (ii) holds. Conversely, assume that (ii) holds and fix n ∈ N. There are contractive multipliers {θ_k : k ≥ 1} such that Ω_n(z) = (θ_1(z) θ_2(z) ⋯), z ∈ B_d. By assumption, we see that θ_k ∈ a_n for every k ∈ N, so that ran M_{Ω_n} ⊂ [a_nH²_d]. Consider the weak-* closed ideal c_n = {ϕ ∈ M_d : ran M_ϕ ⊂ ran M_{Ω_n}}. By [15, Theorem 2.4] we infer that [c_nH²_d] = ran M_{Ω_n} ⊂ [a_nH²_d] and thus c_n ⊂ a_n. Apply now Theorem 5.4 to find a contractive multiplier ω_n ∈ c_n ⊂ a_n satisfying |ω_n(λ_n)| > δ. By definition of a_n, we see that ω_n(λ_m) = 0 for every m ≠ n.
Theorem 5.6. Let Λ = {λ_n : n ∈ N} ⊂ B_d be a sequence. Consider the following statements. (i) The sequence Λ is interpolating. (v) The sequence Λ is strongly separated by inner multipliers.
(ii) ⇒ (i): Let (a_n)_{n=1}^∞ be a bounded sequence and consider the operator A = ⊕_{n=1}^∞ a_n, which clearly commutes with D. Put a = v_0(Λ). By [13, Lemma 2.10], we see that Ann(Z_a) = a. Thus, applying (iii) to Z_a, there is an invertible operator X such that D = XZ_aX^{-1}. Hence, X^{-1}AX commutes with Z_a. By [5, Theorem 5.1], we find ϕ ∈ M_d such that X^{-1}AX = ϕ(Z_a), and thus A = Xϕ(Z_a)X^{-1} = ϕ(D). This is easily seen to imply that ϕ(λ_n) = a_n for every n ∈ N, whence Λ is an interpolating sequence.
The reader will notice that in the univariate setting of [8, Theorem 4.4], all five statements from the previous theorem are equivalent. In the multivariate world however, it appears to be unknown whether strongly separated sequences are necessarily interpolating. In fact, this implication is known to fail in the setting of the Dirichlet space on the disc (see [21], [2]).
6. Application: quasi-similarity of certain commuting row contractions
In this section, we give another application of Theorem 4.6. Indeed, we wish to use the Jordan-type decomposition obtained therein to classify certain cyclic AC commuting row contractions up to "quasi-similarity" by means of their annihilating ideals.
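Since the arguments of this section repeatedly invoke the decomposition of Theorem 4.6, it is convenient to record its conclusion in display form; this is a restatement, with the direct-sum notation made explicit as in Corollary 4.7.

```latex
T \;\sim\; \bigoplus_{z \in Z_{\mathbb{B}_d}(\operatorname{Ann}(T))} \bigl( zI + N^{(z)} \bigr),
% where each N^{(z)} is a commuting nilpotent d-tuple, "\sim" denotes similarity,
% and the annihilating ideal of zI + N^{(z)} is generated by p(\operatorname{Ann}(T), z).
```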
Recall that given an ideal a ⊂ M_d, we put H_a = H²_d ⊖ [aH²_d] and Z_a = P_{H_a}M_x|_{H_a}. Then, Z_a is an AC commuting row contraction with cyclic vector P_{H_a}1. Our first task is to record an elementary criterion for similarity to Z_a.
Lemma 6.1. Let N = (N_1, ..., N_d) be a commuting nilpotent d-tuple and let z ∈ C^d. Let a_0 ⊂ C[x_1, ..., x_d] denote the ideal of polynomials that annihilate zI + N, and let a ⊂ M_d denote the ideal generated by a_0. Assume that zI + N is cyclic. Then, zI + N is similar to Z_a.
Proof. Assume that N acts on the Hilbert space H. Because zI + N is cyclic and N is nilpotent, it follows that H and H_a are finite dimensional. If ξ ∈ H is a cyclic vector for zI + N, then H = {p(zI + N)ξ : p ∈ C[x_1, ..., x_d]}. Let q be a polynomial. Then, we have that q(zI + N)ξ = 0 if and only if q(zI + N)p(zI + N)ξ = 0 for every p ∈ C[x_1, ..., x_d]. Therefore, q(zI + N)ξ = 0 if and only if q ∈ a_0. Likewise, q(Z_a)P_{H_a}1 = 0 if and only if q(Z_a) = 0, which is in turn equivalent to q ∈ a_0 via an application of Theorem 2.2. We conclude that q(zI + N)ξ = 0 if and only if q(Z_a)P_{H_a}1 = 0. Furthermore, the linear map X : H → H_a defined as Xp(zI + N)ξ = p(Z_a)P_{H_a}1, p ∈ C[x_1, ..., x_d], is well-defined and injective, and thus necessarily invertible. It is readily verified that X(zI + N) = Z_aX.
Before stating the quasi-similarity theorem we are after, we record another well-known fact.
Lemma 6.2. For each positive integer n, let S_n and T_n be commuting d-tuples such that S_n is similar to T_n, and assume that the direct sums ⊕_{n=1}^∞ S_n and ⊕_{n=1}^∞ T_n are bounded. Then ⊕_{n=1}^∞ S_n is quasi-similar to ⊕_{n=1}^∞ T_n.
Proof. By assumption, for each positive integer n there is an invertible operator X_n with the property that X_nS_n = T_nX_n. It is then readily verified that the operators A = ⊕_{n=1}^∞ ‖X_n‖^{-1}X_n and B = ⊕_{n=1}^∞ ‖X_n^{-1}‖^{-1}X_n^{-1} are bounded, injective and they have dense ranges. Moreover, A(⊕_{n=1}^∞ S_n) = (⊕_{n=1}^∞ T_n)A and B(⊕_{n=1}^∞ T_n) = (⊕_{n=1}^∞ S_n)B.
We can now prove the main result of this section, which is an application of Theorem 4.6.
Theorem 6.3. Let S = (S_1, ..., S_d) and T = (T_1, ..., T_d) be AC commuting row contractions which are both cyclic. Let Λ ⊂ B_d be an interpolating sequence and let κ be a non-negative integer. Assume that v_κ(Λ) ⊂ Ann(S) = Ann(T). Then, S is quasi-similar to T.
Proof. By Theorem 4.6, S is similar to ⊕_{z∈Λ_0}(zI + A^{(z)}) and T is similar to ⊕_{z∈Λ_0}(zI + B^{(z)}), where Λ_0 = Z_{B_d}(Ann(S)) = Z_{B_d}(Ann(T)) and the annihilating ideals of zI + A^{(z)} and zI + B^{(z)} are both generated by p(Ann(S), z) = p(Ann(T), z). We conclude that Ann(zI + A^{(z)}) = Ann(zI + B^{(z)}). In particular, if we denote the ideals of polynomials annihilating zI + A^{(z)} and zI + B^{(z)} by a_z and b_z respectively, then a_z = b_z for every z ∈ Λ. Next, projecting any cyclic vector of ⊕_{z∈Λ_0}(zI + A^{(z)}) onto the appropriate component yields a cyclic vector for each d-tuple zI + A^{(z)}, z ∈ Λ_0. Likewise, the d-tuple zI + B^{(z)} is cyclic for every z ∈ Λ_0. We may thus invoke Lemma 6.1 to see that zI + A^{(z)} and zI + B^{(z)} are similar for every z ∈ Λ_0; indeed, they are both similar to Z_{a_z} = Z_{b_z}. Finally, an application of Lemma 6.2 shows that ⊕_{z∈Λ_0}(zI + A^{(z)}) is quasi-similar to ⊕_{z∈Λ_0}(zI + B^{(z)}), whence S is quasi-similar to T.
It is easily verified that if two AC commuting row contractions S and T are quasi-similar, then Ann(S) = Ann(T) (see for instance [13, Lemma 2.12]). Furthermore, we mention that in the univariate situation, the previous theorem holds without any restriction on the annihilating ideals [6, Theorem 2.3]. A multivariate version of this single variable theorem can be found in [13, Corollary 3.7]. It should be noted however that at present, [13, Corollary 3.7] only yields a certain one-sided version of quasi-similarity. The appeal of Theorem 6.3 is precisely that it fixes this shortcoming, at the cost of being more restrictive in its assumptions. As a byproduct of the ongoing discussion, we remark that higher order vanishing ideals of a given interpolating sequence Λ are determined by their polynomial ideals, in the following precise sense.
Corollary 6.4. Let Λ ⊂ B_d be an interpolating sequence.
Let a and b be weak-* closed ideals in M_d both containing v_κ(Λ) for some non-negative integer κ, and suppose that both ideals are contained in v_0(Λ). If p(a, z) = p(b, z) for every z ∈ Λ, then a = b.
Proof. Let p_z = p(a, z) = p(b, z) for z ∈ Λ. Put Λ_0 = Z_{B_d}(Ann(Z_a)) ⊂ Λ. By Theorem 4.6, Z_a is similar to ⊕_{z∈Λ_0}(zI + N^{(z)}) and Z_b is similar to ⊕_{z∈Λ_0}(zI + R^{(z)}) for some nilpotent d-tuples N^{(z)} and R^{(z)}. These d-tuples satisfy Ann(zI + N^{(z)}) = Ann(zI + R^{(z)}), both equal to the weak-* closure of the ideal generated by p_z, and both zI + N^{(z)} and zI + R^{(z)} are cyclic since Z_a and Z_b are. Therefore zI + N^{(z)} and zI + R^{(z)} are similar for each z ∈ Λ_0 by Lemma 6.1. We conclude from Lemma 6.2 that Z_a is quasi-similar to Z_b, whence a = Ann(Z_a) = Ann(Z_b) = b.
Naturally, one may now wonder whether Theorem 6.3 can be improved to produce similarity between the row contractions. For vanishing ideals of order zero, this is indeed the case.
Theorem 6.5. Let S = (S_1, ..., S_d) and T = (T_1, ..., T_d) be AC commuting row contractions which are both cyclic. Let Λ ⊂ B_d be an interpolating sequence and assume that v_0(Λ) ⊂ Ann(S) = Ann(T). Then, S is similar to T.
Proof. This is an immediate consequence of Corollary 4.7.
For higher order vanishing ideals however, similarity cannot be achieved in general, even in the single variable setting. The following example illustrates this fact, and incidentally also shows that the closed range assumption found in [8, Theorem 5.7] cannot simply be removed.
Example 3. Let Λ = {λ_n : n ∈ N} be an infinite interpolating sequence in B_1. For each positive integer n ≥ 1, let 0 < ε_n < 1 and consider contractions S_n and T_n acting on C². It is readily verified that S_n and T_n are AC contractions acting on C², with spectrum {λ_n}, and such that ξ = (0, 1) ∈ C² is a cyclic vector. If we let H = ⊕_{n=1}^∞ C², S = ⊕_{n=1}^∞ S_n, T = ⊕_{n=1}^∞ T_n and Ξ = ⊕_{n=1}^∞ 2^{-n}ξ, we obtain AC contractions with v_1(Λ) ⊂ Ann(S) = Ann(T). We now claim that S and T are cyclic. To see this, invoke Lemma 3.3 to find, for each positive integer n, a multiplier θ_n ∈ v_1(Λ \ {λ_n}) such that 1 − θ_n ∈ v_1({λ_n}). Thus, we find that θ_n(S) = θ_n(T) is the orthogonal projection of H onto the n-th summand. If p is a polynomial and n ≥ 1, then we see that (pθ_n)(S)Ξ = 2^{-n}p(S_n)ξ and (pθ_n)(T)Ξ = 2^{-n}p(T_n)ξ. Using that ξ is cyclic for S_n and T_n for every n ≥ 1, we infer that Ξ is cyclic for S and T. Thus, S and T are quasi-similar by Theorem 6.3. Suppose that there is an invertible X ∈ B(H) such that XT = SX. In particular, for every n ≥ 1 we have Xθ_n(T) = θ_n(S)X. But θ_n(S) = θ_n(T) for every n ≥ 1, and the collection {θ_n(T)}_{n=1}^∞ consists of pairwise orthogonal projections summing to I, so we see that X = ⊕_{n=1}^∞ X_n, where X_n = X|_{ran θ_n(T)} for every n ≥ 1. We conclude that X_nT_n = S_nX_n for every n ≥ 1. A routine calculation reveals that this forces X_n to be of the form
X_n = [ a_n  b_n ; 0  ε_na_n ]
for some complex numbers a_n, b_n. Since X_n is invertible, we see that a_n ≠ 0. Furthermore, ‖X_n‖ ‖X_n^{-1}‖ ≥ ε_n^{-1} for every n ≥ 1. Thus, if we choose the sequence (ε_n) to tend to zero, then X cannot be bounded.
Examples of this type can also be manufactured in several variables. Although the argument is not much different, we provide the details so as to show how to construct AC commuting row contractions with certain prescribed annihilating ideals. First we record a few technical facts relating to automorphisms of the ball that may be of independent interest.
Lemma 6.6. Let T = (T_1, ..., T_d) be an AC commuting row contraction with cyclic vector ξ and such that Ann(T) = v_1({0}). Let z ∈ B_d and let Γ : B_d → B_d be an automorphism such that Γ(0) = z. Then, Γ(T) is an AC commuting row contraction with cyclic vector ξ and such that Ann(Γ(T)) = v_1({z}).
Proof.
As noted in Subsection 2.1, the components of Γ lie in A_d and they form a commuting row contraction on H²_d. Hence, because the A_d functional calculus is completely contractive, we see that Γ(T) is a commuting row contraction. We note that if (ϕ_n) is a bounded sequence in A_d converging pointwise to 0 on B_d, then the sequence (ϕ_n ∘ Γ) has the same properties. This shows that Γ(T) is AC if and only if T is. Next, we have Ann(Γ(T)) = {ϕ ∈ M_d : ϕ ∘ Γ ∈ Ann(T)} = v_1({z}). Finally, using that Γ is invertible, we see that the norm closed unital algebra generated by T_1, ..., T_d coincides with that generated by the components of Γ(T). Therefore, ξ is cyclic for T if and only if it is cyclic for Γ(T).
We can now give a multivariate example showing that the conclusion of Theorem 6.3 cannot be improved to similarity in general.
Example 4. Consider a commuting pair N = (N_1, N_2) of operators on C³ with N_1² = N_1N_2 = N_2² = 0. It is readily verified that the commuting pair N = (N_1, N_2) is an AC row contraction with Ann(N) generated by {x_1², x_1x_2, x_2²}, since I, N_1, N_2 are linearly independent. Therefore, we have Ann(N) = v_1({0}). For each t > 0 we consider operators M_1(t) and M_2(t) on C³ together with a function f(t) for which M_1(t)M_1(t)* + M_2(t)M_2(t)* ≤ f(t)²I; consequently, setting R_j(t) = f(t)^{-1}M_j(t) for j = 1, 2 yields a commuting row contraction R(t) = (R_1(t), R_2(t)). The pair R(t) is nilpotent and hence AC. In fact, one readily checks that I, R_1(t), R_2(t) are linearly independent, so Ann(R(t)) = v_1({0}) as above. We also note that both N and R(t) have ξ = (1, 0, 0) as a cyclic vector. Next, let Λ = {z_n : n ∈ N} ⊂ B_2 be an infinite interpolating sequence and let (ε_n) be a sequence of positive numbers converging to 0. For each positive integer n, let Γ_n : B_2 → B_2 be an automorphism such that Γ_n(0) = z_n. Let S = ⊕_{n=1}^∞ Γ_n(N) and T = ⊕_{n=1}^∞ Γ_n(R(ε_n)), both acting on ⊕_{n=1}^∞ C³. By Lemma 6.6, for every n ≥ 1 we see that Γ_n(N) and Γ_n(R(ε_n)) are AC commuting row contractions with cyclic vector ξ and such that Ann(Γ_n(N)) = Ann(Γ_n(R(ε_n))) = v_1({z_n}). Thus, S and T are AC commuting row contractions such that Ann(T) = Ann(S) = v_1(Λ). Using Lemma 3.3 and arguing exactly as in Example 3, we see that S and T are cyclic, and thus S and T are quasi-similar by Theorem 6.3. Suppose that there is an invertible X ∈ B(H) such that XT = SX. As in Example 3, we see that X_nΓ_n(N) = Γ_n(R(ε_n))X_n and in particular X_nN = R(ε_n)X_n for every n ≥ 1. A routine calculation reveals that this forces X_n to be of the form
X_n = [ a_n  0  0 ; b_n  a_nf(ε_n)^{-1}  a_nf(ε_n)^{-1} ; c_n  0  a_nε_nf(ε_n)^{-1} ]
for some complex numbers a_n, b_n, c_n. Since X_n is invertible, we see that a_n ≠ 0. We compute that ‖X_n‖ ‖X_n^{-1}‖ ≥ f(ε_n)ε_n^{-1} for every n ≥ 1. Finally, we note that lim_{n→∞} f(ε_n) = √2, so that the previous inequality contradicts X being boundedly invertible.
7. Similarity of Nilpotent Tuples
Example 4 in the previous section showed that in general the conclusion of Theorem 6.3 cannot be improved to similarity. Examining the construction in the example, we see that the technical difficulties boil down to obtaining norm-controlled similarities between commuting nilpotent tuples. We investigate this question in this section. To begin, we analyze a concrete model for these tuples. We first collect some known facts in the following lemma.
Lemma 7.1. Let a ⊂ M_d be a proper ideal. Then, the following statements hold.
(i) Assume that a is generated by homogeneous polynomials. Then, for every α, β ∈ N^d with |α| ≠ |β| we have ⟨(Z_a)^α1, (Z_a)^β1⟩ = 0.
(ii) Assume that a is generated by monomials. Then, we have x^α ∈ H_a and (|α|!/α!)‖(Z_a)^α1‖² = 1 for every α ∈ N^d such that x^α ∉ a.
Proof. (i) Fix 0 ≤ t ≤ 2π.
As explained in [30, Section 3.5], there is a unitary U_t : H²_d → H²_d determined by (U_tf)(z) = f(e^{it}z) for f ∈ H²_d and z ∈ B_d. Since a is generated by homogeneous polynomials, we see that U_taU_t* = a. In particular, we obtain that U_tH_a = H_a. Hence, the operator W_t = U_t|_{H_a} : H_a → H_a is unitary as well. Now, we note that 1 ∈ H_a since a is proper and generated by homogeneous polynomials, and therefore ⟨(Z_a)^α1, (Z_a)^β1⟩ = ⟨W_t(Z_a)^α1, W_t(Z_a)^β1⟩ = e^{i(|α|−|β|)t}⟨(Z_a)^α1, (Z_a)^β1⟩ for every t, which forces the inner product to vanish when |α| ≠ |β|.
(ii) There is a subset F ⊂ N^d such that a is generated by {x^β : β ∈ F}. Let β ∈ F. Since the monomials form an orthogonal basis for H²_d, it is readily seen that ⟨x^α, x^βf⟩ = 0 for all f ∈ H²_d unless there is γ ∈ N^d such that α = β + γ, which in turn implies that x^α ∈ a. We conclude that x^α ∈ H_a whenever x^α ∉ a. Thus, if x^α ∉ a we find (Z_a)^α1 = x^α, and thus (|α|!/α!)‖(Z_a)^α1‖² = 1.
We note that property (ii) of the previous result fails without the condition that a be generated by monomials. Indeed, let a ⊂ M_d be the weak-* closed ideal generated by x_1 + x_2. Then, we see that x_1 ∉ a, yet x_1 ∉ H_a and ‖P_{H_a}x_1‖ < 1.
Next, we show that Lemma 7.1 imposes the following necessary conditions (Theorem 7.2) on an arbitrary commuting nilpotent tuple to be similar to the model.
(i) There is a unit vector ξ ∈ H which is cyclic for N.
Our next objective is to show that conditions (i), (ii) and (iii) from the previous theorem are in fact sufficient for a nilpotent commuting row contraction to be similar to the model via a similarity with controlled norm. Proving this result requires several technical lemmas. First, we show how a norm condition can be used to control the angle between certain vectors.
Lemma 7.3. Let T = (T_1, ..., T_d) be a commuting row contraction on some Hilbert space H. Let ξ ∈ H be a unit vector. Let α, β ∈ N^d with |α| = |β| = ℓ, α ≠ β, and let ε > 0. Assume that (ℓ!/α!)‖T^αξ‖² ≥ 1 − ε and (ℓ!/β!)‖T^βξ‖² ≥ 1 − ε. Then (ℓ!/α!)^{1/2}(ℓ!/β!)^{1/2}|⟨T^αξ, T^βξ⟩| ≤ ε.
Proof. Assume that |α| = |β| = ℓ and choose ζ ∈ C with |ζ| = 1 such that ζ⟨T^αξ, T^βξ⟩ = |⟨T^αξ, T^βξ⟩|. The map Ψ_T : B(H) → B(H) defined as Ψ_T(A) = Σ_{j=1}^d T_jAT_j* is completely positive and contractive, since T is a row contraction. Two applications of this map yield two inequalities, and combining them gives the conclusion.
The next step is a key estimate.
Lemma 7.4. Let T = (T_1, ..., T_d) be a commuting row contraction on some Hilbert space H. Let ξ ∈ H be a unit vector, let ℓ ∈ N and let ε > 0. Assume that (|α|!/α!)‖T^αξ‖² ≥ 1 − ε for every α ∈ N^d such that |α| = ℓ. For every α ∈ N^d with |α| = ℓ, let c_α ∈ C. Then, we have that
‖Σ_{|α|=ℓ} c_α(ℓ!/α!)^{1/2}T^αξ‖² ≥ (1 − ε card S) Σ_{|α|=ℓ} |c_α|²,
where S = {α ∈ N^d : |α| = ℓ}. Proof. Let G be the Gram matrix of the vectors (ℓ!/α!)^{1/2}T^αξ, α ∈ S. Put D = diag((ℓ!/α!)‖T^αξ‖² : α ∈ S) and A = G − D. By assumption, we see that D ≥ (1 − ε)I. Furthermore, we may invoke Lemma 7.3 to see that every entry of A has modulus at most ε. Because A has zero diagonal, it follows that ‖A‖ ≤ ε(card S − 1). Therefore, we obtain G ≥ D − ‖A‖I ≥ (1 − ε card S)I, which immediately implies the desired statement.
The previous norm estimate only applies to vectors that can be obtained as linear combinations of images of powers of T with the same length. In order to move past this restriction, we need the following tool (Lemma 7.5).
Proof. First, note that Y_tT^αY_t^{-1} = e^{i|α|t}T^α for every α ∈ N^d. Fix ℓ ∈ N. We obtain the desired conclusion by averaging over t ∈ [0, 2π].
Gathering all our previous observations, we obtain our main technical tool.
Lemma 7.6. Assume that the following properties hold.
(a) There is a unit vector ξ ∈ H which is cyclic for N.
For each α ∈ Ξ, let c_α ∈ C and put h = Σ_{α∈Ξ} c_α(|α|!/α!)^{1/2}N^αξ. Then, ‖h‖² admits two-sided bounds in terms of Σ_{α∈Ξ}|c_α|².
Proof. Invoking Lemma 7.4, for every ℓ ∈ N we obtain the lower estimate for the length-ℓ part of h. Combining this inequality with Lemma 7.5, we obtain the lower bound for h itself. Conversely, using that N is a row contraction and arguing as in the proof of Lemma 7.3, we see that Σ_{|α|=ℓ} (ℓ!/α!)N^αN^{*α} ≤ I for every ℓ ∈ N. Hence, applying the contractive row operator Γ with entries (ℓ!/α!)^{1/2}N^α, |α| = ℓ, yields the reverse estimate.
We can now state the main result of this section, which shows that conditions (i), (ii) and (iii) from Theorem 7.2 are in fact sufficient for the existence of a norm-controlled similarity to the model.
Theorem 7.7. Let N = (N_1, ..., N_d) be a nilpotent commuting row contraction on some Hilbert space H. Let a = Ann(N) and assume that it is generated by monomials. Let Ξ = {α ∈ N^d : N^α ≠ 0} and let L ∈ N satisfy Ξ ⊂ {α ∈ N^d : |α| ≤ L}. Assume that conditions (i), (ii) and (iii) of Theorem 7.2 hold. Then, there is an invertible operator X, with norm controlled in terms of the data above, such that XNX^{-1} = Z_a.
Theorem 7.7 can be used to improve the conclusion of Theorem 6.3 to similarity in some special cases. However, we omit the resulting statement, as the required assumptions make it unwieldy, and leave the details to the interested reader. Moreover, we mention that it would be interesting to obtain a refinement of Theorem 4.6 in the cyclic context, in the spirit of [8, Theorem 5.7 and Corollary 5.8]. Theorem 7.7 could provide the basis of such a refinement, but at present the required technical assumptions once again blur the picture. This may be a reflection of the fact that the world of multivariate nilpotence is much richer than its univariate counterpart. Indeed, even in only two variables the annihilating ideals ⟨x_1², x_2²⟩ and ⟨x_1², x_1x_2, x_2²⟩ can support drastically different operator theoretic properties (see [13, Examples 4 and 5]). This stands in contrast with the relative simplicity of the single-variable nilpotent case, where Theorem 7.7 has a much sharper (and simpler) analogue [8, Proposition 5.6].
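To close, the quantitative identities driving this section can be put side by side in display form. This restates Lemma 7.1(ii), the hypothesis of Lemma 7.4, and the Gram-matrix bound obtained in its proof.

```latex
% model identity (Lemma 7.1(ii)), when a is generated by monomials and x^\alpha \notin a:
\frac{|\alpha|!}{\alpha!}\,\bigl\|(Z_a)^{\alpha} 1\bigr\|^{2} \;=\; 1,
% hypothesis of Lemma 7.4, for a unit vector \xi and all |\alpha| = \ell:
\frac{|\alpha|!}{\alpha!}\,\bigl\|T^{\alpha}\xi\bigr\|^{2} \;\ge\; 1-\varepsilon,
% which yields the Gram-matrix lower bound used in the proof:
G \;\ge\; D - \|A\|\,I \;\ge\; \bigl(1 - \varepsilon\,\operatorname{card} S\bigr)\,I.
```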
2019-02-18T22:43:43.000Z
2019-02-18T00:00:00.000
{ "year": 2020, "sha1": "63cc2a262077fb5b73de8b348ef17140dba39d1d", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1902.06826", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "63cc2a262077fb5b73de8b348ef17140dba39d1d", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
234525556
pes2o/s2orc
v3-fos-license
Language Acquisition of Children Age 4-5 Years Old in TK Dhinukum Zholtan Deli Serdang
This study aims to determine the language acquisition of children aged 4-5 years. The research was conducted at TK Dhinukum Zholtan Medan on March 4, 2020 with 4 students: 2 students aged 4 years and 2 students aged 5 years. The researchers used observation and a questionnaire to collect data. Based on the findings, it can be concluded that children aged 5 and 4 years have different levels of mastery. In addition to clearer pronunciation, 5-year-olds have mastered more vocabulary and are able to describe things. Meanwhile, 4-year-olds have less vocabulary, are not yet able to describe something, and are not able to pronounce words clearly. This difference occurs because the 5-year-olds have already taken part in learning, unlike the 4-year-olds, so they are better able to communicate well, having often interacted with teachers and peers at school.
I. Introduction
Language is one of the most important things in the life of every human being; no one is separable from language. A child first acquires the language heard directly from the father or mother when the child is born into this world. Then, as time goes by and the child grows, they acquire languages other than the one taught by the mother and father, whether a second, third, or foreign language, and so on. This is called language acquisition, and it depends on the social environment and on the cognitive level these children possess, developed through the learning process in their environment. The acquisition of language is a remarkable phenomenon, especially the process by which a child acquires a first language without any special instruction in the language. A toddler, for example, will simply respond to utterances frequently heard from the surrounding environment, especially words from the mother, which the child hears very often, or from someone who is always nearby. In line with this, Chaer (2003: 167) states that language acquisition is a process that takes place in the child's brain when acquiring the first language or mother tongue.
Psycholinguistics is a word formed from the word psychology and the word linguistics, two different fields of science. Psycholinguistics is a branch of linguistics that combines linguistics and psychology, with a focus on a person's use of language. This is explained by Umera-Okeke and Umera-Okeke: psycholinguistics is a combined science of psychology and linguistics, developed in the 1960s as a response to the intellectual excitement around Chomsky's work. As a new science within cognitive science, psycholinguistics studies the thought and mental processes that cover understanding, production, and acquisition of language (Umera-Okeke and Umera-Okeke, 2012: 8). From this explanation, it can be seen that psycholinguistics has three main objects of study, namely language understanding, language production, and language acquisition.
This is reinforced by Dardjowidjojo (2016: 7), who states that the four main topics in psycholinguistics are: a) comprehension, namely the mental processes humans go through so that they can grasp what people say and understand what is meant; b) production, namely the mental processes in us that enable us to speak as we speak; c) the biological and neurological basis that enables humans to speak; and d) language acquisition, namely how children acquire their language. What Dardjowidjojo conveys confirms that language understanding, language production, and language acquisition are included in psycholinguistic studies. The object of study in this research is the acquisition of language, so this research falls within the psycholinguistic area. Language acquisition itself focuses on mastering one's mother tongue. Usually, language acquisition is equated with the term language learning. In theory, linguists distinguish the two, among them Sumarlam. The term language acquisition differs from language learning in terms of mastery. As summarized in the General Linguistic Dictionary, acquisition is used for first language acquisition, which occurs without awareness, to master the rules of the language naturally without being given special lessons. The term learning is used for conscious mastery of a second language (Sumarlam, 2017: 97).
In this study, the data sources were 4 children at the kindergarten level, aged 4 years (2 children) and 5 years (2 children), at TK Dhinukum Zholtan Deli Serdang. Therefore, the focus of this study is language acquisition in children under five. The reason this study focuses on language acquisition in children under five is that many people still do not understand the development of the language these children acquire, especially their language skills. This indirectly affects the children's language ability, including their language acquisition. Their language acquisition is also influenced by things outside themselves, such as their family, social, and educational environment. Language acquisition in this study focuses on the acquisition of basic vocabulary based on the Swadesh list. The Swadesh list contains 200 basic vocabulary items and has become a reference in various countries for measuring the acquisition of basic vocabulary. It is hoped that this research will broaden insight into psycholinguistics and contribute to future research.
II. Review of Literatures
In line with what Chaer said above, Kridalaksana stated that psycholinguistics is the study of the relationship between language and human behavior and reason; an interdisciplinary science of linguistics and psychology. Jean Caron, in his book An Introduction to Psycholinguistics, defines psycholinguistics as "… the experimental study of the psychological processes through which a human subject acquires and implements the system of a natural language." This means that psycholinguistics is an experimental science that studies the psychological processes of how a person acquires and implements a natural linguistic system. Psycholinguistics is concerned with the language processes that occur in a person's brain, both the speaker's brain and the listener's brain.
Thus psycholinguistics produces a description of the language that is processed in a person involved in communication, including how the language processing occurs, how its units work, what meaning they contain, and how the process of understanding the language unfolds. In other words, psycholinguistics discusses the language process in relation to abstract aspects, namely the linguistic system embodied in symbols and the rules that govern them, and physical aspects, namely the corpus of discourse produced by the speaker in particular situations. Theoretically, the main goal of psycholinguistics is to find a theory of language that is linguistically acceptable and psychologically able to explain the nature of language and its acquisition. Thus the scope of psycholinguistics is: 1. the relationship between language and the brain, logic and thought; 2. language processes in communication: perception, production, and comprehension; 3. problems of meaning; 4. perception of speech and cognition; 5. language behavior patterns; 6. acquisition of first and second languages; 7. language processes in abnormal individuals.
Children's Language Acquisition Theory
a. Theory of Behaviorism
Behaviorism theory highlights linguistic behavior that can be observed directly and the relationship between stimulus and reaction (response). Effective language behavior is making appropriate reactions to stimuli. This reaction becomes a habit if it is reinforced. For example, if a child mispronounces a word such as "maybe," the child will surely be corrected by the mother or anyone who hears the word. If one day the child pronounces it correctly, he will not be criticized, because the pronunciation is correct. Such a situation, in which the child makes the appropriate reaction to stimuli, is central to the acquisition of the first language.
b. Chomsky's Theory of Nativism
This theory belongs to the school of nativism. According to Chomsky, language can only be mastered by humans; animals cannot possibly master human language. Chomsky's opinion is based on several assumptions. First, language behavior is something that is inherited (genetic), each language has the same developmental pattern (it is something universal), and the environment has only a small role in the process of language maturation. Second, language can be mastered in a relatively short time. Third, the child's language environment cannot provide sufficient data for the mastery of complex adult grammar. According to this school, language is complex and intricate, so it is impossible to master in a short time through "imitation."
c. Cognitivism Theory
The emergence of this theory was pioneered by Jean Piaget (1954), who said that language is one of several abilities that originate from cognitive maturity. Thus, the sequence of cognitive development determines the sequence of language development.
d. Interactionism Theory
The theory of interactionism assumes that language acquisition is the result of the interaction between the mental abilities of the learner and the language environment. This is evidenced by various findings, such as those made by Howard Gardner. He said that from birth, children are equipped with various intelligences. One of the intelligences in question is language intelligence.
However, it cannot be forgotten that the environment is also a factor that affects a child's language skills.
III. Research Methods
This research is qualitative, because the data obtained are essentially words, not numbers. The data obtained in this study were responses to the 200 basic Swadesh words. The research strategy is a case study. A case study is a series of scientific activities carried out intensively, in detail, and in depth about a program, event, or activity, whether at the level of an individual, a group of people, an institution, or an organization, in order to obtain in-depth knowledge of the event. Usually, the selected events, hereinafter referred to as cases, are real-life events that are ongoing, not something that has passed (Rahardjo, 2017: 3). This research was conducted at TK Dhinukum Zholtan Medan on March 4, 2020 with 4 students: 2 students aged 4 years and 2 students aged 5 years. The researchers used observation and a questionnaire to collect data.
IV. Results and Discussion
The results of this study are based on the full list of 200 basic Swadesh vocabulary items. The list covers many word classes: adjectives, verbs, nouns, pronouns, numerals, adverbs, and function words. Measured against these 200 basic Swadesh items, the language acquisition data for children aged 5 and 4 years show different levels of vocabulary. Based on the research that has been done, the 5-year-old children, Azura and Gusti, understood more of the 200 basic Swadesh vocabulary items and pronounced them clearly. Meanwhile, the 4-year-old children, Adit and Riko, understood fewer basic Swadesh items, with rudimentary pronunciation. The basic vocabulary understood by each child is shown in Table 1.
Overall, based on Tables 1 and 2, the 32 basic vocabulary items that Azura mastered are words related to family, animals, numerals, colors, and body parts. In addition, Azura was also able to pronounce them clearly and answer the questions she was given quickly; for instance, when asked about her plan to go to school, Azura was able to describe it: "I will not go to elementary school here miss, I will go to school there, between buya every morning, I won't go back later, Zura." Similarly, from the questions the researcher gave, Gusti mastered 31 basic Swadesh vocabulary items, also related to family, animals, numerals, colors, and body parts. In addition to fairly clear pronunciation, Gusti was also able to describe how to make sweet tea. Initially, the researcher only asked about the taste of food: What does chili sauce taste like? How about the taste of sugar? What usually tastes salty? Then, what do you use for tea? In the end, he explained by himself how to make sweet tea. According to the account of the TK Dhinukum Zholtan teacher, this level of basic vocabulary mastery shows that the teacher's teaching and learning approaches have been quite successful. This is because Azura and Gusti are students who have studied in kindergarten for almost one year, so they are able to answer questions correctly and to describe things. Meanwhile, the basic Swadesh vocabulary mastery of the 4-year-old children is quite different, as shown in Tables 3 and 4 below. Overall, based on Tables 3 and 4, Adit's 22 basic vocabulary items are words related to family, numerals, and body parts.
Based on the research results, Adit had not mastered the names of colors and animals; in addition, Adit was not able to pronounce the vocabulary perfectly, as letters were still missing from the words he spoke. Similarly, Riko used 22 basic Swadesh vocabulary items overall, relating to family, colors, and body parts. Riko also has the same ability as Adit: she is not able to pronounce vocabulary perfectly, and her pronunciation is still inaccurate and unclear. According to the Dhinukum Zholtan kindergarten teacher, this level of basic vocabulary mastery shows that the learning and teaching approaches have not yet been able to influence these students. This is because Adit and Riko have been learning in the kindergarten for only 2 months, so they are not able to describe colors or answer deeper questions; further learning is therefore needed. V. Conclusion Based on the description above, it can be concluded that children aged 5 and 4 years differ in mastery. Besides clearer pronunciation, 5-year-olds master more vocabulary and are able to describe things, whereas 4-year-olds have less vocabulary, are not yet able to describe things, and cannot pronounce words clearly. This difference occurs because the 5-year-olds have taken part in learning for longer than the 4-year-olds, so they are better able to communicate well, having often interacted with teachers and peers at school.
2021-04-16T03:54:21.394Z
2020-12-19T00:00:00.000
{ "year": 2020, "sha1": "ee0d87a46ae16dc7d9b7f35aea0b843b35ce2778", "oa_license": "CCBYSA", "oa_url": "http://biarjournal.com/index.php/linglit/article/download/347/376", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "ee0d87a46ae16dc7d9b7f35aea0b843b35ce2778", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Psychology" ] }
243846384
pes2o/s2orc
v3-fos-license
Aesthetics sets patients ‘free’ to recover during hospitalization with a neurological disease. A qualitative study ABSTRACT Background Patients with neurological symptoms are particularly sensitive to the quality of the sensory impressions to which they are exposed during hospitalization. Aim To understand the meaning of aesthetic experiences to patients afflicted with neurological diseases during hospitalization on a neurological unit. Method Fifteen patients were invited to “walk and talk”, supplemented by semi-structured interviews conducted in newly established aesthetic tableaus within the neurology unit. Data analysis was inspired by the hermeneutic phenomenological methodology of van Manen. Result The data analysis identified three overarching themes that unfolded in the patients’ experiences of a more aesthetic environment. The themes were: 1) A safe place to avoid noisiness, 2) An invitation to homey activities, 3) A thoughtful consideration for being ill. Conclusion Aesthetic elements can enable a thoughtful and needed consideration that offers momentary imaginative and hopeful experiences to patients in a vulnerable situation. Thus, aesthetics, together with peace and quietness, can set vulnerable patients free to retreat and recover from the symptoms of neurological diseases. Introduction A characteristic of most neurological diseases is that they challenge the human senses. It is also evident that hospitalization on a neurological unit has an existential impact on people; hence, patients can experience loneliness during their stay at the hospital (Beck et al., 2016, 2019, 2020). Recent studies have shown how patients become nomads, wandering around to find breathing spaces when they are not offered a calm and familiar environment (Beck et al., 2020). However, the value of introducing aesthetics as a supportive act for patients hospitalized with neurological symptoms remains uncertain. Thus, wondering how an aesthetic experiment would be experienced by patients in the neurological unit paved the way to develop, test and evaluate an intervention in clinical practice. Neurological patients are challenged on their senses The word "neurology" is originally Greek, combining 'nerve' with the study of it. In medical terms, patients suffering from a neurological disease are diagnosed with a variety of alterations within the nerve system's structure, functions or congenital diseases, as well as certain muscular diseases. Many of the diseases have a significant mortality; hence stroke figures as the third leading cause of death after heart disease and cancer. This often leads to permanent mental or physical disabilities, also in younger age groups of patients (Feigin et al., 2017). The interface of the neurological diagnoses is extensive and affects humans on different levels; hence patients suffering from a neurological disease are often afflicted not only physically, but also experience challenges mentally, socially and/or financially (Feigin et al., 2017; Ganesh et al., 2017; Klinke et al., 2015; Low et al., 1999; O'Connell et al., 2020; Penner & Paul, 2017; Shaw, 2015; Simon et al., 2018; Tanner, 2001; Wolf et al., 2020). Examples of neurological diseases are migraines, Parkinson's, epilepsy, strokes, tumours, and various types of dementia, such as Alzheimer's disease, or other subcategories of diseases.
A characteristic of most neurological diseases is that they challenge the human senses (McGough et al., 2018). For example, neurological patients may be affected in their ability to maintain an overview (executive functions) and be unable to sort through the impressions they get from their surroundings (visuospatial functions). Patients with neurological symptoms may be quickly disturbed by external stimuli and therefore benefit from a calm and manageable environment (Applebaum et al., 2016; Digby & Bloomer, 2014; McGough et al., 2018). The consequences of neurological disease can be extensive and therefore require healthcare professionals to be aware of the importance of the sensory impression to best meet the patients' needs and wishes (Beck et al., 2020). Scientifically, we know that hospitalized patients with neurological symptoms are particularly sensitive to sensory disturbances, from which experiences of loneliness and discomfort, or of homeliness, can occur (Beck et al., 2019, 2020). Aesthetic as an ancient virtue In western history, specifically in Greek antiquity, aesthetics was important to human life and ways of living (Birkelund, 2013). Temples of health in Greece and Turkey, for instance, were built in exquisite terrain offering a view; sunny, airy, and beautiful buildings facilitated patients' holistic treatment (Birkelund, 2017). Nursing has a long tradition of focusing on environmental factors in relation to illness. Already in 1860 the founder of modern nursing, Florence Nightingale, effected many important changes in patient care by increasing focus on how aesthetics were important to patients when healing from sickness (Nightingale, 2007). Today, it is difficult to document that Nightingale's thoughts on the importance of aesthetics when being sick influenced the well-being and survival of patients in her own time. However, much of what she advised in the mid-to-late 1800s still holds true today (Fontaine et al., 2001). Most often remembered for her pioneering work in improving hospital sanitation, Nightingale was one of the first to address topics such as lighting, noise, and sensory stimulation in hospitals. A central tenet woven throughout Nightingale's writings is the idea that only nature cures, and that nursing's role is to put the patient in the best situation for nature to act (Dossey, 2000). However, as pointed out by Fontaine et al. (2001), extending this philosophy to the 21st century includes making every effort to optimize the patient's environment by reducing stimuli and enhancing factors that promote a sense of well-being, relaxation, and sleep. Achieving this goal is as daunting a task now as it was in Nightingale's era, especially with the advent of advanced technology, a predominant feature of modern critical care units (Fontaine et al., 2001). Aesthetic in healthcare Researchers have focused on how aesthetics is closely related to creating a therapeutic environment (homelike, attractive) and identified that aesthetics is important in enhancing the hospital's public image (Caspari et al., 2011). An aesthetic environment contributes to improved staff morale and patient care (ibid.). Research has highlighted the need for stimulation in the physical hospital environment during hospitalization (Anåker et al., 2018; Rosbergen et al., 2017, 2019; Shannon et al., 2019). For example, Anåker et al.
(2018, 2019) disclose how patients experience the physical environment at a newly built stroke unit as lonely and suggest undertaking activities systematically in order to support patients' mental well-being during hospitalization. Other studies highlight how patients' activity is impacted by the environment and point out the importance of creating activity within the physical environment, since such activities are evaluated as significant to patients' experience of being embedded in an enriched hospital environment (Rosbergen et al., 2017; Shannon et al., 2019). Other studies have shown how the hospital environment plays a significant role in patients' and their family members' overall satisfaction with the hospital experience (Harris et al., 2002). Further, Trochelman et al. (2012) investigate how already implemented evidence-based design features affect patients' satisfaction with the hospital environment. They conclude that within nursing, the physical environment needs to be recognized as a major influence on care delivery that ultimately impacts patient safety, satisfaction and quality of care (Trochelman et al., 2012). The notion of aesthetics, in particular the impact of art during hospitalization, has been investigated empirically (Nielsen, 2017a; Nielsen et al., 2017; Trevisani et al., 2010). The effect of art in the hospital has been assessed in relation to patients' feelings and emotions (Trevisani et al., 2010) and has the potential to influence patients' mood and well-being (Moss, 2014; Ulrich et al., 2011; Zhang et al., 2019). According to Nielsen et al. (2017), the aesthetic value in art contributes to creating an atmosphere where patients feel safe, socialize, maintain a connection to the world outside the hospital and support their identity. Thus, the quality of aesthetics contributes to health outcomes by improving patient satisfaction (Nielsen et al., 2017). Theory on aesthetics Through the lenses of philosophical phenomenology, and in particular Løgstrup's thinking (1976, 1983, 1997), this study brings attention to the significance of aesthetics defined as "the sensory attuned impression" (Løgstrup, 1997). According to Løgstrup (1983), the attunement in our surroundings impacts our minds through our senses and, in this way, gives nourishment to sovereign manifestations of life. However, the attunement can also be of such a nature as to give rise to the opposite of these expressions of life, such as mistrust and hopelessness (Løgstrup, 1976). Introducing aesthetic tableaus - An intervention This study was based on an intervention, called Hospitality, that aimed at introducing aesthetic tableaus using visible technical means to provide a change of (physical) scenery in the neurology unit. Characteristics of the setting at the neurological unit are illuminated in Beck et al. (2020). With the support of external grants, three outside agencies were contracted to introduce tableaus that would best support the health and well-being of hospitalized neurological patients. The hospital had no written strategy for hospital decoration and no "in-house" hospital interior designers. To create a positive atmosphere, uplifting colours and screens for shielding were introduced within the tableaus. Also, positive distractions, such as pictures with simple nature motives (e.g., a boat) aligning with the season of the year, were chosen to be part of the tableaus.
Within the tableaus, the use of existing and new furniture, accessories and lamps, which would allow the patients more independent control over their environment, was prioritized. Realistic plastic flowers that aligned with the season of the year were also incorporated. The tableaus were introduced in December 2018 and included 1) a hallway area called "Café - Appetit for mind and body", 2) an aisle between two wards called "The Garden Room", and 3) a patient comfort room called the "Dwelling Room". Large posters were placed strategically in the unit, welcoming patients and relatives to the aesthetic environment. The following video shows the process: https://www.youtube.com/watch?v=SGdtJz6dA_E&t=75s Aim To understand the meaning of aesthetic experiences to patients afflicted with neurological diseases during hospitalization on a neurological unit. Design This study had a hermeneutic-phenomenological approach (van Manen, 2014), as we considered how aesthetic aspects would impact the lived experiences of hospitalized patients afflicted by a neurological disease. Following the methodology of van Manen (2014), a phenomenological descriptive sensitivity was combined with an interpretive understanding of the patients' lived experience and how it was given meaning (van Manen, 2014). The methodology was well suited to explore day-to-day practice and to reveal unknown or sensitive aspects of the depths and subtleties of neurological patients' experiences (ibid.). Participants Fifteen patients were asked and agreed to be interviewed about the changed environment. Taking into consideration that the participants were vulnerable and challenged on their senses, the first author spent some time at the unit from which the participants were recruited. This was to get acquainted with the setting and routines, trying to find the best possible time at which it would be appropriate to ask the participants whether they would be interested in participating in the study. To obtain information-rich data (Malterud, 2011), the participants were selected in collaboration with the staff, with special consideration to ensure that the chosen participants had used the aesthetic environment and were able to reflect on and express themselves about the meaningfulness of the newly established aesthetic tableaus in the neurological environment. Qualitative research strives for the greatest possible variation in the sampling of informants (Polit & Beck, 2018). A sample of 9 females and 6 males, age range 21-82, with a variety of severity of illness and length of stay, was selected (Table I). The participants suffered from different disabilities, e.g., dizziness, fatigue, physical impairment, pain, numbness, or sensitivity to light. The inclusion criteria included participants who spoke and understood (anonymous) and did not have other competing life-threatening disorders. Further, all participants should be adults (age 18+) and competent to provide experiences of the aesthetic environment. Exclusion criteria included participants with severe cognitive deficit or impressive/expressive aphasia, or participants in an acute state of depressive suffering. Data collection To describe and verbalize what aesthetics are, how a sensory impression is experienced, as well as the significance of specific spaces (Birkelund, 2013; Van Manen, 2016), the first author performed "walk-along" interviews to capture the participants' experiences of the aesthetic environment in the moment.
Researchers argue for the strengths of "walk-along" interviews, especially for research studies that examine the importance of the environment in relation to health and well-being (Carpiano, 2009; Flick et al., 2019; King & Woodroffe, 2019; Kusenbach, 2006; Stiegler, 2020). Using "walk-along" interviews in this study (Carpiano, 2009; Flick et al., 2019; King & Woodroffe, 2019; Kusenbach, 2006; Stiegler, 2020) harmonized with the hermeneutic-phenomenological approach, in which the participants' immediate moments within the hospital environment, together with their reflections on the significance of aesthetics, allowed us to come to understand and explore parts of the participants' lifeworld (Van Manen, 2016). Thus, by exposing participants to the immediate, complex and subtle meaning of the aesthetic environment, the "walk-along" interviews addressed the research question with rich, nuanced and phenomenological sensitivity (Carpiano, 2009; Van Manen, 2017). In practice, we gathered the empirical data by inviting the participants for a walk around the environment in which they were admitted. During the "walk-along" interviews, we explored themes related to their experiences. Neurological patients, due to their diagnosis (Parkinson's, Alzheimer's, stroke, etc.), may be physically challenged. For this reason, participants who had difficulty moving independently were offered a wheelchair during the "walk-along" interviews. Six (or however many) of the 15 patients in the study preferred (or required) using the wheelchair over walking. The "walk-along" interviews were addressed pedagogically by walking slowly and patiently, taking pauses, and listening carefully to the participants, who spoke softly. The "walk-along" interviews started with an open question, e.g., "Show me a place in this unit that has made an impression on you" or "Is there a certain place that you would like to show me in here?" In order to gather the overall themes of the "walk-along" interviews and to reflect on the significance of the environment, all "walk-alongs" ended in a supplementary interview. These interviews were conducted in one of the tableaus, sitting, and with focus on not being disturbed. A semi-structured interview inspired by Max van Manen's theory of existentials (van Manen, 2014) was used. The guide contained suggested questions to help generate spontaneous and rich descriptions of the environment (Kvale, 2011b). The interviews were not "free" narration, but were structured with open-ended questions related to the environmental impression (ibid.). The "walk-along" interviews were recorded with a small recorder, the size of a pen, placed in the patient's shirt, not inhibiting their movement in any way. The length of the "walk-along" interviews varied and was on average 50 minutes. Ethics Research among vulnerable participants may lead to ethical dilemmas and requires the researcher to be an ethical, knowledgeable and sensitive human being (Angel & Vatne, 2017; Kvale & Brinkmann, 2018). Therefore, the researchers in this study were guided by ethical principles to protect the study participants and ensure that the study was based on justice, beneficence and respect for human dignity (Damsgaard et al., 2020). The act of conducting "walk-along" interviews in this study was a moral practice wherein the interviewer was aware of the asymmetric relationship between the interviewer and the participant.
Thus, to ensure that the participants felt confident to "walk along", attentiveness to the surroundings was imperative to assure safety for all (Angel, 2013). This attitude was achieved by creating a relaxed atmosphere using friendly and approachable body language (Fog, 2007). Senses and intuition were used when "walking" together with the participant to decide when to ask them to elaborate on their statements, when to ask follow-up questions, or when to let silence and pauses take over (Angel, 2013). However, we were worried that walking interviews might be stressful for the participants given their neurological diseases. Therefore, we applied strategies for easing conversation with people who live with cognitive and language impairment, as described by Kirkevold and Bergland (2007); e.g., the interviewer was especially attentive to facial expressions, gestures and body language in general. Also, unlimited time was allotted for the interviews, attempting to create the best conditions for getting rich data despite the participants' cognitive challenges. The moment a participant showed any sign of exhaustion, the interview was ended. The study was performed in accordance with the ethical guidelines of the Nordic Nurses Federation and the Helsinki Declaration. Thus, written and verbal information about the study was given to all participants and informed consent was obtained. Participants were assured that their names and other personal information would be anonymized to maintain confidentiality. They were reassured that they could withdraw from the study at any time without any consequences for their treatment and care in the unit. According to (Anonymous) law, approval from the Regional Committee for Medical Research was not required because of the non-biomedical character of the study. The study was approved by the (Anonymous) Data Protection Agency, which requires safeguarding of any personal information and securing the anonymity of participants. Data analysis The analytic steps in this study were guided by the thematic analysis described by van Manen (2014). In practice, all interviews were read several times, purposefully attending to the meanings embedded in the "walk-along" interviews. Transcripts were read, and the researchers searched for descriptions to answer the question, "What is the experience of an aesthetic environment to patients afflicted with neurological diseases?" The interview data were approached with an open-minded attitude about the real-life experiences embedded in the overall sense of "what was going on" (Van Manen, 2006; Van Manen, 1997). Afterwards, the text was clustered and analysed in order to identify understanding and meaning in the material as a whole. The clusters were analysed and interpreted in the context of the overall understanding of the phenomenon by continuously going back and forth between clusters of meaning and the data material as a whole. Clusters were grouped into tentative themes to capture the phenomenon of interest (van Manen, 2014). These were presented to co-authors (EE & RB) to validate the preliminary interpretation and the arguments for clustering (Van Manen, 2017). During the analysis process, thematic statements were formulated as figures of meaning in concert with the above analytic reflective method to help point to possible eidetic meaning aspects of the phenomenon (van Manen, 2014). Eidetic refers to invariant patterns of meaning that may make a phenomenon distinct (van Manen, 2014). These thematic statements were used to structure the presentation of the text.
Results An overall finding was that the aesthetic environment was visible to the participants, and an awareness of the quality of the physical environment made them thankful for the attempt to meet their needs. This provided a sense of acknowledgement and belonging during their stay at the hospital. The participants interpreted the changes to the environment as a rewarding gesture that made the environment feel more patient-friendly. Despite the positive feedback on the aesthetic environment, participants expressed disdain for the disturbing noise they encountered during their stay. Unfortunately, the noisy environment overshadowed the important calming potentials gained from the aesthetic environment. The data analysis identified three overarching themes that unfolded in the participants' experiences of a more aesthetic environment. These themes shed light on how an improved physical aesthetic environment in the hospital is intertwined with disruptive noisiness. The themes are: 1) A safe place to avoid noisiness, 2) An invitation to homey activities, 3) A thoughtful consideration for being ill. A safe place to avoid noisiness The participants described an overwhelming sense of how noisy sounds dominated the neurological setting. This meant that, regardless of any other (positive) sensory impression, the many distracting sounds from the environment "drowned out" the possible positive aesthetic experience that might have occurred because of the environmental changes. The participants explained that the noise within the neurological environment was both diverse and constant. Some examples of "noisy" activities were: people talking, visual and auditory TV activity, telephones ringing, sounds of cleaning machines, health professionals walking back and forth, other patients suffering (e.g., moaning, screaming or crying), the sight of worn-down furniture, the movement of doctors' rounds, physical training, and the persistent yet erratic sound of the calling system. The combination of these many activities produced a buzz, which the participants experienced as transgressive and stressful, as it interfered with their experience of having peace of mind. Well, the staff, they run back and forth. There is a lot of pace. 'Please go in and talk quietly with the patients and ask how are you' [A bell sounds] And THAT sounds right there. It's special at night when someone needs help. It is very annoying and disturbing (P13). I can't stand listening to the television, but it does increase the huge bangs coming from the hallway … and I do not get so scared when they [the staff] come running in and make quick movements (P8). The data identified that participants experienced their being in the neurological setting as challenging, especially the noise, which they needed to escape from. The participants needed to process "something" in relation to their admission to the neurological unit. For example, for some participants it was challenging to encounter sensory impressions such as light or sound, or to be able to concentrate. For others, one poor health report from a health professional made them worried or upset. These challenges potentiated the disruption that noise played in their recovery. The noise shaded the experience of being ill with comfortlessness and overrode the participants' need for peace and quietness. A woman disclosed how this impaired code of conduct contributed to a less peaceful and pleasant environment.
She felt the environment ideally should sound like: There is no code in relation to peace and quietness in here. We talk on our phones and watch TV; the television … It's just on all the time. Well, you might say that there are no possibilities to be sick. Once, I experienced the alarm constantly ringing. It was loud and gave the feeling of 'do I need to ring that bell too, or?' It made me very confused (P9). The participants described the aesthetic tableaus as a way to escape the many noisy impressions that they dealt with in the hospital environment. In that sense, the established aesthetic tableaus in the neurological environment were places where patients could avoid the tumultuous (P6) environment in the unit and the noisiness could step into the background, while patient needs were more in the foreground: Here [the dwelling room] you can withdraw into yourself. It's a kind of a safe place. You are not completely gone from the everyday life … if a doctor should come around. You are comfortable at a distance … from the busyness (P2). I have sat down and looked at the two pictures [motives of a forest]. There's a calmness in here … that means something, because you can't rest in any other places (P8). Here you can allow your imagination to take over; your mind can wander off. You go for a walk mentally (P4). An invitation to homey activities In a neurological setting, recognizable furniture reminding one of a past time or of one's home generated an appreciated feeling of homeliness. The participants described how they used the "new living room" for homey activities such as talks with their significant others, drinking a cup of coffee, reading the newspaper, or simply just sitting and "being" for a while. Further, the room was used as a calm place to process new information about the course of their disease. Yesterday I received bad news. I know that I asked for it, but nevertheless I got sad, and it is difficult to be sad among 4 fellow patients. So, my husband and I have been sitting in the Dwelling room, instead of escaping to the parking lot as we used to (P10). Participants expressed that the nature pictures provided an opportunity to lose oneself in the rural scenes and "block out" the noisy surroundings. Enjoying the pictures and furniture often required that the participants be able to focus strictly on these elements, since hospital equipment (e.g., oxygen devices, walls filled with gloves, hand gel, or screens) was not experienced as inspiring at all: I like them [pointing at the nature pictures with autumn colours]. They have beautiful colours and make me calm. If you look over there [at the opposite wall where respiratory devices are placed] … that is a whole different story. I try to avoid looking that way and concentrate on looking at the pictures instead (P7). The "dwelling room" could also be experienced as a welcoming change of scenery, where participants could escape the reality of the hospital and their disease: One needs to change spaces. Yesterday, I was stressed because we had been sitting here [in the Dwelling room] talking until 9.45 p.m. and I thought that maybe they [the staff] had forgotten me and my medication. I felt I had almost been away. That was a nice feeling (P14). A thoughtful consideration for being ill The participants described ways in which the traditional environment was uncomfortable, such as the furniture being hard and unsuitable for ill people: I noticed the chairs, they are wooden chairs; hard chairs.
Ill people are not supposed to sit on chairs like these (P1). Also, the lack of screen enclosures was repeatedly pointed to as a condition making it difficult to have privacy, both visual and auditory. The aesthetic environment contrasted favourably with their traditional experience of the neurological environment. In the aesthetic tableaus, a more patient-friendly consciousness of the participants' being ill was materialized within the decoration of the room. The participants explained how the environment invited them to sit down and relax. Here they had fewer sensory disturbances, and the candles, lighting, and pictures with nature motives contributed to a calming atmosphere generating relaxation. A woman elaborates on how the aesthetic environment was a new place where she could recover: So, when you need to recover, you need the television to be off … And no music, talk and stuff like that … Put simply, you should be able to relax and be calm when you are in the hospital. And here … [the 'Café - Appetit for mind and body'] … it is just like that (P4). The aesthetics of the tableaus made an overall positive impression on the participants. Several participants suggested that the interviews be conducted in the aesthetic environment, because they felt comfortable there. They described how the noisiness was better tolerated because the peaceful setting invited peace of mind. In that sense, the interior was interpreted by the participants as having more than just a practical function; it became a safe place where they could "be ill" during their hospitalization. Being able to find peace of mind was particularly important to the participants, since it promoted comfort and a needed moment to "collect" themselves during illness. A woman illuminates this, while sitting in the "Dwelling Room": I've been calmer now. And the pictures - they are beautiful. I have looked at the pictures. Trying to focus. I have trouble sleeping at night. So I went down here [dwelling room]. I just got more peace in here. In our patient room there was a lot going on. Someone who snores. Another is constantly checked by staff. Here, it is nice with all that stuff [pointing at the fake candles and at the pictures]. It's about having something to look at. Just to calm down. That picture has some warmth and depth. That feeling is contagious (P3). Discussion This study provides an overall understanding of what an aesthetic environment means to patients afflicted with neurological diseases. It elaborates how aesthetics can reduce patients' vulnerability during hospitalization by providing uplifting distraction that enables imaginative and hopeful experiences. In this study, participants shared their experiences of needing positive and thoughtful experiences to gain peace of mind. Such experiences provided calmness in a vulnerable situation; hence, aesthetics could fill an existential void. Nevertheless, our study also showed how an improved physical aesthetic environment in the hospital is intertwined with disruptive noisiness. Noisy sounds "owned" the neurological setting, and participants needed to protect themselves and be shielded from clinical and disease-related sounds. Thus, the aesthetic places became concrete places in which the participants could distance themselves from fellow patients and avoid the intrusive noise.
In our recent study (Beck et al., 2020) we showed how hospitalized patients with neurological symptoms become nomads during hospitalization in order to find places to endure being present in the hospital environment. That study (Beck et al., 2020) paved the way for the present study, in which we conducted an aesthetic experiment in clinical practice. This study provides new knowledge on how aesthetics, in the form of tableaus, offer a helpful escape from the noisy environments from which patients wish to be protected. However, our study also illuminates how an escape from the environment can have negative consequences for the sense of community with other people hospitalized with a neurological disease. Our study illuminates how patients with neurological symptoms can be vulnerable to environmental stimuli. Purdy (2004) distinguishes between being vulnerable and being in a vulnerable situation. Purdy defines vulnerability as "a highly dynamic process of openness to circumstances that positively or negatively influence outcome" (ibid.). This definition stresses that vulnerability is not created by the individual human being, but by the context in which the individual exists. Our study adds to an understanding of how the environmental impression on patients depends on how health professionals are able to handle aesthetics in the hospital context, ensuring that sensory impressions do not have a negative impact on patients' experiences of hospitalization. Despite the impact of a noisy environment, our study showed positive effects of how the nature pictures and homey artefacts created an inner peace for the participants. Throughout history, there has been one quality that great leaders, policymakers, artists, and fighters have shared. Philosophers call it "stillness": the ability to be steady, focused and calm in a constantly busy world (Bollnow, 2011). In our study, the participants' need for "stillness" was related to experiences that provided calmness and hope. The participants shared how they interpreted "stillness" as a momentary freedom from distress and worry. In philosophical usage, the term "stillness" denotes "a state of freedom from emotional disturbance and anxiety" (ibid.). However, even though our study highlights the significance of aesthetics to patients, their experience of "stillness" depended on whether unnecessary, tiresome sounds from devices or fellow patients were controlled in the environment. This study emphasizes that aesthetic elements may be experienced as a considerate, homey invitation that offers imaginative moments in which the inherent vulnerability can be met (Purdy, 2004). In other words, patients may benefit from aesthetics in the hospital environment, because it provides the experience of stillness, feeding into a greater ambition to find relief within happy and peaceful moments during chronic sickness. However, controlling noisiness holds the key to the success of these aesthetic efforts. Thus, we recommend that existing hospitals be renovated with consideration for aesthetics, and that new hospitals be built with the same considerations, together with special attention to quiet tableaus. In this way the environment can set patients free to recover and prevent patients from fading away in noisiness. The evidence is clear: noisiness has extensive consequences for ill people (Applebaum et al., 2016; Delaney et al., 2019; Garside et al., 2018; Laursen et al., 2014; Oleksy & Schlesinger, 2019).
Our study supports previous studies on how a noisy environment is intertwined with a patient's possibility of experiencing inner peace during hospitalization. The World Health Organization (WHO) has defined guidelines for sound environments in public spaces in order to make them less harmful (Jarosińska et al., 2018). However, there is a lack of guidelines and systematic interventions for creating peaceful hospital environments. This contrasts with the current knowledge that hospitals are typically noisy, fast-paced and create an overall disturbance to a person's well-being (Beck et al., 2016; Fillary et al., 2015; Konkani & Oakley, 2012). Strengths and limitations Within this study, trustworthiness was pursued through the criteria of credibility, transferability, dependability, and confirmability (Lincoln & Guba, 1986; Nowell et al., 2017; Tobin & Begley, 2004). Credibility addresses the "fit" between respondents' views and the researcher's representation of them (Nowell et al., 2017). We conducted persistent data collection within the context of a neurology unit, even though the design of "walk-alongs" was challenging. Further, we used international peer debriefing in order to validate the results of the study. We tried to facilitate future transferability of our findings to other settings by using pictures from the unit and a careful description of the intervention. We kept records of raw data and maintained a reflexive journal during the study in order to systematize, relate and cross-reference data. This helped ease the reporting of the research process and is aligned with the dependability criterion. Since credibility, transferability and dependability were attained in this study, confirmability, according to Lincoln and Guba (1986), was established. However, striving to fulfill the need for trustworthiness, this study had some limitations. One such limitation was that some of the participants lived with language impairments due to their neurological disease. These participants, however, contributed important and valuable experiences, yet not as "rich" as those of participants who were able to truly articulate their experiences. Omitting these interviews would have given a narrower picture of the phenomenon under investigation (Kirkevold & Bergland, 2007). We anticipated that the "walk-along" interviews would be an appropriate approach when asking participants about how the hospital environment was meaningful to them, and that discussions of the environmental factors influencing the participants' "being" during hospitalization would be facilitated by indirect "talk as you walk" (Carpiano, 2009; King & Woodroffe, 2019; Stiegler, 2020). However, in practice, we experienced that in the daytime these walks were interrupted, disturbed or cancelled by health professionals wanting to do rounds or medication passes, or by other clinical staff eager to contribute to the investigation or care of the patient. Thus, seemingly, the changes to the environment in this study were not comprehensive enough to counter the noise in the unit. Therefore, the full potential of the aesthetic environment may not have been achieved. We then changed our walks to be conducted during evenings or weekends. This may have affected the data material, since the participants were interviewed in a calmer setting compared to the hectic dayshifts. The many noisy interruptions during data collection served as valuable illustrations and vivid data on the explicit need for quietness during hospitalization.
In that sense, the methodological considerations were aligned with the purpose of the study (Polit & Beck, 2006, 2018; Whittemore & Melkus, 2008). Conclusion This study sheds light on the importance of aesthetic elements within the hospital environment to patients in the neurological unit. Aesthetic elements have a great impact on patients because they facilitate experiences of being at home and safe, which patients want in their attempt to find relief during hospitalization. Our study focused on the quality of aesthetics but did, however, identify how patients demanded peace and quietness within the environment in order to enjoy its impact. In this way, peace and quietness emerged as a significant factor in how aesthetic elements (e.g., tableaus) can be experienced positively. Thus, aesthetic elements, together with peace and quietness, can set vulnerable patients free, which means that they can, to a greater extent, retreat and recover from neurological illnesses. Hence, aesthetic elements within the hospital environment that focus on silence can decrease further contextual vulnerability for patients with neurological diseases and, in that sense, meet these patients' need for stillness. Relevance to clinical practice The relevance of the study lies in its potential to inform hospital managers and staff members about how hospital environments play an important role in patient wellness and overall satisfaction with care. It could be beneficial to patients afflicted with neurological diseases if health care professionals were determined to decrease the level of noise in the wards. Our study may serve as a reminder to slow down and harness the restorative wonders of serenity within the hospital walls. Furthermore, our study sheds light on the sustainable idea that, in order to move forward and develop clinical practice in a more patient-friendly way, clinicians could benefit from learning to be still in some ways during everyday life at the hospital. Hence, stillness within the nursing discipline facilitates new ways of thinking about and caring for persons while respecting their individual dignity and perspectives.
2021-11-09T06:22:35.549Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "940c77e2bb90c6aceabc53a4ed720571d079b556", "oa_license": "CCBY", "oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/17482631.2021.1992843?needAccess=true", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "30d0c23b74923b0e7c57cb835ce56ea439d6280b", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
207966784
pes2o/s2orc
v3-fos-license
Low Bone Mineral Density of the Forearm and Femur among Postmenopausal Women with Metaphyseal Comminuted Fracture of the Distal Radius Introduction Osteoporosis is a skeletal disorder characterized by compromised bone strength that predisposes patients to an increased risk of fractures (Kanis 1984). Postmenopausal osteoporosis leads to an increased risk of fragility fractures, such as femoral neck fractures, humeral neck fractures, vertebral fractures, and distal radius fractures. A distal radius fracture is a classic fragility fracture that is typically caused by a fall onto an outstretched hand, and often occurs as an early fracture in osteoporosis (Mallmin and Ljunghall 1994; Sontag and Krege 2010). Bone mineral density (BMD) refers to the amount of mineral contained in bone tissue and is measured by dual-energy x-ray absorptiometry. It is indicative of the severity of osteoporosis; indeed, osteoporosis with low BMD is a risk factor for distal radius fractures (Hegeman et al. 2004; Itoh et al. 2004; Hung et al. 2005; Bahari et al. 2007; Harness et al. 2012; Xu et al. 2017). Distal radius fractures accompanying osteoporosis may precede future secondary fractures, such as femoral and vertebral fractures (Cuddihy et al. 1999; Bozkurt et al. 2018). Thus, the prevention and treatment of osteoporosis are important for the prevention of fractures. Moreover, a distal radius fracture occurring in a patient with low BMD may present as an even more severe fragility fracture (Lill et al. 2003; Sakai et al. 2008; Clayton et al. 2009). According to the AO Foundation and Orthopedic Trauma Association (AO/OTA) classification (Müller et al. 1990), severity is classified based on the degree of articular surface comminution and the presence or absence of metaphyseal comminution. To date, no reports have investigated the relationship between articular surface and metaphyseal comminution in the distal radius and BMD of the forearm and femur in postmenopausal women. Therefore, we investigated whether there was a correlation between BMD and the degree of articular surface and metaphyseal comminution. BMD was measured in the contralateral healthy forearm and the right femoral neck. We hypothesized that, both for the articular surface and for the metaphysis, BMD would be significantly lower in cases of severe comminution. If our hypothesis were proven true, high BMD would not only prevent bone fractures but would also reduce the severity of distal radius fractures. Furthermore, this would suggest that, because patients with severe comminution have lower femoral neck BMD, the prevention of secondary femoral fractures is more important in cases of severe comminution than in cases with no comminution. Patients and Methods The study protocol was approved by the Ethics Committee of Seirei Hamamatsu General Hospital (approval number: 2071). The methods were carried out in accordance with the relevant guidelines and regulations, and informed consent was obtained from all participants.
Of the distal radius fracture patients who visited our hospital from 2011 to 2017, we targeted postmenopausal women older than 50 years. We recruited patients who had sustained low-energy trauma from falls on flat ground and excluded those with high-energy trauma from falls and traffic accidents. Evaluation of comminution was performed by computed tomography at the time of injury, and articular surface comminution and metaphyseal comminution were evaluated separately. First, to investigate whether articular surface comminution is related to the BMD of the forearm and femur, we classified all subjects into the following three groups: 1) the extra-articular fracture (Ea) group; 2) the articular simple (As) group, who had intra-articular simple fractures with only a single fracture line; and 3) the articular multifragmentary (Am) group, who had intra-articular comminuted fractures with multiple fracture lines (Fig. 1). Second, to investigate the relationship of metaphyseal comminution with the BMD of the forearm and femur, we performed another classification of all subjects into three groups: 1) the metaphyseal simple (Ms) group, who had no comminution; 2) the metaphyseal monocortical comminution (Mm) group, who had comminution on either the palmar or the dorsal side; and 3) the metaphyseal bicortical comminution (Mb) group, who had comminution on both the palmar and dorsal sides (Fig. 2). BMD of the distal third of the contralateral healthy forearm and of the right femoral neck was measured using dual-energy x-ray absorptiometry (Hologic Discovery; Marlborough, MA, USA). In total, 165 cases were investigated. The patients' mean age was 69.8 ± 0.73 years (range, 50-89 years), mean height was 153.7 ± 0.48 cm (range, 137-170 cm), mean weight was 52.0 ± 0.82 kg (range, 31-75 kg), and mean body mass index (BMI) was 22.0 ± 0.31 kg/m2 (range, 14.0-37.5 kg/m2). Three patients had rheumatoid arthritis and 16 had diabetes mellitus; no patients had renal failure. Three patients had a history of vertebral fractures, and one patient had a history of proximal femoral fracture; no patients had a history of humeral neck fractures. Patients received the following medical therapies for osteoporosis: bisphosphonate (n = 5), vitamin D (n = 6), calcium (n = 2), and vitamin K and calcitonin (n = 1). In the classification of articular surface comminution, there were 43 cases in the Ea group, 91 cases in the As group, and 31 cases in the Am group. In the classification of metaphyseal comminution, there were 58 cases in the Ms group, 82 cases in the Mm group, and 25 cases in the Mb group. Because some patients declined one of the measurements, 10 of the 165 patients underwent femoral BMD measurement only and three underwent forearm BMD measurement only. Therefore, we examined 155 cases of forearm BMD and 162 cases of femoral BMD (Fig. 3). Among the 155 cases with forearm BMD measurements, the patients' mean age was 69.7 ± 0.75 years (range, 50-89 years); mean height was 153.9 ± 0.49 cm (range, 137-170 cm); mean weight was 51.7 ± 0.82 kg (range, 31-106 kg); and mean BMI was 21.8 ± 0.30 kg/m2 (range, 14.0-37.1 kg/m2). Among the 162 cases with femoral BMD measurements, the patients' mean age was 69.8 ± 0.74 years (range, 50-89 years); mean height was 153.6 ± 0.48 cm (range, 137-170 cm); mean weight was 52.0 ± 0.83 kg (range, 31-106 kg); and mean BMI was 22.0 ± 0.32 kg/m2 (range, 14.0-37.5 kg/m2) (Table 1). In these groups, we analyzed whether fracture type was associated with BMD in the forearm or femoral neck.
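To make the two grouping rules concrete, the following is a minimal sketch of the classification logic as described above. It is illustrative only and not taken from the paper; the type and field names (e.g., articular_fracture_lines, palmar_comminution) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class DistalRadiusFracture:
    # CT findings for one fracture (field names are illustrative only)
    articular_fracture_lines: int   # 0 = extra-articular
    palmar_comminution: bool        # metaphyseal comminution, palmar cortex
    dorsal_comminution: bool        # metaphyseal comminution, dorsal cortex

def articular_group(fx: DistalRadiusFracture) -> str:
    """Ea / As / Am grouping by articular-surface involvement."""
    if fx.articular_fracture_lines == 0:
        return "Ea"   # extra-articular
    if fx.articular_fracture_lines == 1:
        return "As"   # intra-articular, single fracture line
    return "Am"       # intra-articular, multiple fracture lines

def metaphyseal_group(fx: DistalRadiusFracture) -> str:
    """Ms / Mm / Mb grouping by metaphyseal cortical comminution."""
    n = int(fx.palmar_comminution) + int(fx.dorsal_comminution)
    return {0: "Ms", 1: "Mm", 2: "Mb"}[n]

fx = DistalRadiusFracture(2, palmar_comminution=True, dorsal_comminution=True)
print(articular_group(fx), metaphyseal_group(fx))  # -> Am Mb
```

Note that the two classifications are independent, so every case receives both an articular label (Ea/As/Am) and a metaphyseal label (Ms/Mm/Mb).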
We then investigated, in all 165 cases, the relationship between BMI and intra-articular and metaphyseal comminution. We also investigated the correlation between forearm and femoral BMD in the 152 patients with both forearm and femoral BMD measurements. Statistical analysis Results are expressed as mean ± standard deviation. Differences in average BMD between the groups were tested using the Tukey-Kramer method, and p < 0.05 was considered significantly different. The correlation coefficient between forearm and femoral BMD was calculated using Pearson correlations. The software used for statistical analysis was SPSS version 24 (IBM Corp., Armonk, NY, USA). Relation to BMI In the classification of intra-articular comminution, there were 43 cases in the Ea group, 91 cases in the As group, and 31 cases in the Am group (Table 4). Relationship between forearm and femoral neck BMD In the 152 cases with both forearm and femoral neck BMD measurements, the BMD of the forearm and the femoral neck showed a significant positive correlation (r = 0.666) (Fig. 6). Discussion In the current study, there was no association between articular surface comminution and BMD of the forearm and femoral neck. We also found that in the metaphysis, forearm and femoral neck BMD were significantly lower in cases with bicortical comminution. We did not find any association between intra-articular comminution and BMI, or between metaphyseal comminution and BMI. We also found a strong correlation between forearm and femoral neck BMD. Previous reports have described the relationship between the severity of distal radius fractures and BMD. However, no reports have investigated the relationship between metaphyseal comminution of the distal radius and BMD of the forearm and proximal femur. Sakai et al. (2008) reported that the degree of deformity of the distal radius fracture, palmar tilt, radial inclination, and ulnar variance were related to lumbar spine BMD. In a cadaver study, Lill et al. (2003) reported a correlation between AO classification, Cooney classification (Cooney 1993), and forearm BMD. Clayton et al. (2009) reported that hip joint BMD correlates with early instability of a distal radius fracture, carpal malalignment, and the occurrence of nonunion. While these reports described the relationship between distal radius fractures and BMD of the hip or spine, they did not discuss metaphyseal comminution of the distal radius.

Table 3. Comparison of femoral neck BMD (g/cm2).
Classification of intra-articular comminution: Ea, 0.530 ± 0.019; As, 0.562 ± 0.010; Am, 0.553 ± 0.020.
Classification of metaphyseal comminution: Ms, 0.573 ± 0.016; Mm, 0.554 ± 0.010; Mb, 0.497 ± 0.019.
Am, articular multifragmentary; As, articular simple; BMD, bone mineral density; Ea, extra-articular; Mb, metaphyseal bicortical comminution; Mm, metaphyseal monocortical comminution; Ms, metaphyseal simple.

In the present study, we did not find any association between articular surface comminution and BMD. However, we found that in the metaphysis, BMD was significantly lower in cases of comminution on both the palmar and dorsal sides. As there is a high degree of metaphyseal comminution in cases with low BMD, it is conceivable that the visible displacement on plain X-rays is also increased. Furthermore, since deformity is thought likely to occur after reduction, it can be concluded that these findings are consistent with those of previous studies (Sakai et al. 2008; Clayton et al. 2009; Lill et al. 2003).
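As a companion to the Statistical analysis subsection above, the sketch below reproduces the reported analysis pipeline (Tukey-Kramer group comparison and Pearson correlation) in Python rather than SPSS. The data are synthetic placeholders, with group means set near the Table 3 values, so the printed numbers will not match the study's results.

```python
import numpy as np
from scipy.stats import pearsonr
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)

# Synthetic femoral-neck BMD values (g/cm^2) for the three metaphyseal
# groups; group means approximate Table 3 (Ms 0.573, Mm 0.554, Mb 0.497).
bmd = np.concatenate([
    rng.normal(0.573, 0.08, 58),   # Ms, n = 58
    rng.normal(0.554, 0.08, 82),   # Mm, n = 82
    rng.normal(0.497, 0.08, 25),   # Mb, n = 25
])
group = np.repeat(["Ms", "Mm", "Mb"], [58, 82, 25])

# Tukey(-Kramer) pairwise comparisons; statsmodels applies the Kramer
# adjustment for unequal group sizes automatically.
print(pairwise_tukeyhsd(bmd, group, alpha=0.05))

# Pearson correlation between forearm and femoral-neck BMD,
# using a synthetic pairing of 152 cases with both measurements.
forearm = 0.8 * bmd[:152] + rng.normal(0, 0.05, 152)
r, p = pearsonr(forearm, bmd[:152])
print(f"r = {r:.3f}, p = {p:.3g}")  # the study reported r = 0.666
```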
The results of the present study did not confirm an influence of BMI on fracture comminution, even though greater body weight might be expected to increase the severity of comminution due to the stronger forces applied to the bones. One report indicated that BMI affects the severity of distal radius fractures (Xu et al. 2017), but another study reported that there was no correlation between fracture type according to the AO classification and BMI (Acosta-Olivo et al. 2017). Among fragility fractures (such as femoral neck, vertebral, humeral neck, and distal radius fractures), distal radius fractures tend to occur as the initial fracture (Sontag and Krege 2010). Although the incidence of femoral neck and vertebral fractures increases rapidly with age, the incidence of distal radius fractures increases more gradually with age. Concerning osteoporotic fractures, Sakuma et al. (2008, 2014) reported that the age-related incidence of distal radius fractures differs from the incidence of other osteoporotic fractures. In cases of distal radius fracture, the patient should be examined for osteoporosis and the risk of secondary fractures should be taken into consideration. Treatment for osteoporosis should be given once a diagnosis is confirmed. Based on the results of the present study, it is highly likely that in cases of metaphyseal comminution on both the palmar and dorsal sides, the femoral neck also has lower BMD. Webber et al. (2015) examined the association between the thickness of the cortical bone of the distal radius and femoral BMD. The authors found that the thinner the cortex, the lower the femoral BMD. Furthermore, Shin et al. (2016) reported that hip joint BMD influenced the risk of distal radius fracture. The results of the present study confirmed a strong correlation between forearm and femoral neck BMD. Iba et al. (2018) reported that in patients with femoral neck and distal radius fractures, orthopedic surgeons did not properly intervene and treat the patients' osteoporosis after the fracture. Bougioukli (2019) also described inappropriate treatment of osteoporosis after fragility fractures. Distal radius fractures provide an important opportunity to identify osteoporosis, which should not be missed. Cuddihy et al. (1999) reported that forearm fractures are predictive of osteoporotic fractures, and Bozkurt et al. (2018) stated that vertebral and distal radius fractures are precursors to femoral neck fractures. A diagnosis of osteoporosis following a distal radius fracture is therefore also essential for preventing femoral neck fractures, which can occur as secondary fractures. Johnell et al. (2005) reported that femoral neck BMD is a strong predictor of hip fractures, and in the present study we showed that patients with distal radius fractures with metaphyseal bicortical comminution had low femoral neck BMD. Thus, such patients may be more likely to suffer secondary hip fractures. Orthopedic surgeons should treat these patients as osteoporotic to prevent potential future hip fractures. A limitation of this study is the small number of cases; hence, it will be necessary to increase the sample size in further investigations. Additionally, we should have measured the grip strength of the contralateral healthy hand in all cases, as it may have affected the BMD of the forearm and femur. Moreover, healthy controls were not investigated, and it was not possible to confirm the presence or absence of an actual secondary fracture following the distal radius fractures.
Future studies should conduct long-term follow ups to assess the occurrence of secondary fractures in patients with distal radius fractures.
2019-11-14T14:14:50.303Z
2019-11-01T00:00:00.000
{ "year": 2019, "sha1": "a48da9f3a190026f6db5ffde75b8acd24bf0e784", "oa_license": null, "oa_url": "https://www.jstage.jst.go.jp/article/tjem/249/3/249_147/_pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "02a92dcfb0bd12a987b7bdfe51f9dbd8f77e9382", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
10320869
pes2o/s2orc
v3-fos-license
Inhibition of Translation Initiation by Protein 169: A Vaccinia Virus Strategy to Suppress Innate and Adaptive Immunity and Alter Virus Virulence Vaccinia virus (VACV) is the prototypic orthopoxvirus and the vaccine used to eradicate smallpox. Here we show that VACV strain Western Reserve protein 169 is a cytoplasmic polypeptide expressed early during infection that is excluded from virus factories and inhibits the initiation of cap-dependent and cap-independent translation. Ectopic expression of protein 169 causes the accumulation of 80S ribosomes, a reduction of polysomes, and inhibition of protein expression deriving from activation of multiple innate immune signaling pathways. A virus lacking 169 (vΔ169) replicates and spreads normally in cell culture but is more virulent than parental and revertant control viruses in intranasal and intradermal murine models of infection. Intranasal infection by vΔ169 caused increased pro-inflammatory cytokines and chemokines, infiltration of pulmonary leukocytes, and lung weight. These alterations in innate immunity resulted in a stronger CD8+ T-cell memory response and better protection against virus challenge. This work illustrates how inhibition of host protein synthesis can be a strategy for virus suppression of innate and adaptive immunity. Introduction The study of virus-host interactions continues to provide valuable information about the complex relationships between cells and pathogens. Large DNA viruses, in particular, encode many proteins that modify the intracellular environment to promote viral survival, replication and spread. Vaccinia virus (VACV) is the prototypic Orthopoxvirus of the Poxviridae and is the vaccine used to eradicate smallpox [1]. VACV replicates in the cytoplasm and encodes about 200 proteins that are required for viral transcription and replication [2,3], alteration of cell metabolism [4][5][6][7], and immune evasion [8]. Between one-third and one-half of VACV proteins are devoted to evasion of innate immunity, and these immunevasins may function inside or outside the infected cell. Intracellular immunevasins include those that inhibit innate immune signaling pathways leading to activation of nuclear factor kappa-light-chain-enhancer of activated B cells (NF-κB), interferon (IFN) regulatory factor (IRF)-3 and Janus kinase (JAK) / signal transducer and activator of transcription (STAT) signaling. Other intracellular immunevasins suppress apoptosis or the antiviral activity of IFN-stimulated gene products. Additional immunevasins are secreted from infected cells to bind complement factors, IFNs, cytokines or chemokines extracellularly and inhibit their activity. An interesting aspect of these immune evasion strategies is the apparent redundancy, with several proteins targeting the same activation pathway. For instance, there are at least 10 intracellular inhibitors of NF-κB encoded by VACV [9][10][11][12][13][14][15][16][17][18] and a VACV strain lacking all these factors still inhibits NF-κB [19]. VACV, like all viruses, relies on host ribosomes for virus protein synthesis. To ensure efficient translation of virus proteins, VACV shuts off host protein synthesis and re-directs the cellular translational machinery to the synthesis of viral proteins [20][21][22][23][24][25][26][27].
VACV mRNAs are translated by a cap-dependent mechanism facilitated by the eukaryotic initiation factor (eIF)4F complex that recognizes the 5'-methylated cap, and translation is initiated by interaction of the cap with eIF4E, a cap-binding protein [28]. VACV encodes capping [29] and methylating enzymes [30] that produce viral mRNAs that mimic cellular mRNAs and so evade detection by host pattern recognition receptors. VACV protein synthesis occurs in virus factories [21,27,31], and to ensure preferential translation of virus mRNAs, VACV expresses de-capping enzymes D9 and D10 that remove the cap from both cellular and viral mRNAs [25,32,33]. The abundance of viral transcripts ensures translation of viral mRNA continues despite this de-capping activity, which also promotes turnover of viral mRNAs and thereby aids the transition between the early, intermediate and late stages of viral gene expression. The importance of protein D10 for the virus replication cycle is illustrated by a D10 deletion mutant that has a smaller plaque phenotype and produces reduced yields of virus in cell culture [26]. Moreover, mutant viruses with a stop codon introduced into the D10 open reading frame (ORF) or with amino acid alterations in the D10 catalytic site have an attenuated phenotype in vivo [34]. D9 and D10 also reduce dsRNA accumulation and the consequential activation of host responses [35]. A similar outcome was observed after VACV infection of cells lacking the host exonuclease Xrn1 [36]. This report presents a functional characterization of VACV strain Western Reserve (WR) protein 169, a previously uncharacterized protein that is expressed by some, but not all VACV strains and orthopoxviruses. Protein 169 is an inhibitor of cap-dependent and cap-independent translational initiation. Protein 169 localizes in cytoplasmic puncta and is largely excluded from virus factories, enabling preferential inhibition of host mRNA translation. Consistent with this, protein 169 does not affect virus replication or spread in cell culture, but is a potent inhibitor of translation in cells in which it is expressed ectopically. Consequently, protein 169 blocks expression of host proteins that are induced following activation of diverse innate immune signaling pathways, and, in two in vivo models of VACV infection, a virus lacking 169 (vΔ169) induces a more severe primary infection than control viruses. The altered disease severity is not due to changes in viral replication, but instead is associated with increased production of pro-inflammatory cytokines and chemokines, and increased recruitment of immune cells at the site of infection. This altered response also affects the adaptive memory response and causes increased CD8 + T-cell memory and better protection against virus challenge. Collectively, these results indicate that virus inhibition of host protein synthesis can be a strategy to suppress innate and adaptive immunity, rather than primarily a means to aid virus replication as considered hitherto. Results Characterization of the VACV 169 protein VACV strain WR gene 169R encodes a small, charged protein of 78 amino acid residues. The protein lacks a nuclear localisation signal and a hydrophobic transmembrane sequence suggesting that protein 169 is likely to be cytosolic. The ORF is conserved in VACV strains modified vaccinia virus Ankara (MVA), Lister, Duke, Acambis 3000 and rabbitpox virus, and other orthopoxviruses such as camelpox virus, taterapox virus, cowpox virus and monkeypox virus (S1 Fig). 
However, the ORF is truncated in multiple variola virus strains (the cause of smallpox) after codon 38 and in ectromelia virus (ECTV) after codon 41. In cowpox virus and monkeypox virus there are minor changes in amino acid length and composition, but the protein is identical in the VACV strains shown and in taterapox virus (S1 Fig). The truncation of this ORF in VACV strain Copenhagen and in other orthopoxviruses indicates that the 78 amino acid protein is non-essential for orthopoxvirus replication. The expression of protein 169 by several VACV strains was investigated by immunoblotting using a rabbit polyclonal antibody raised against VACV WR protein 169 that had been expressed in and purified from E. coli (Methods). This detected a 13-kDa polypeptide in cells infected with VACV strains WR, MVA, Lister, rabbitpox, International Health Department (IHD)-J and Tian Tan, and cowpox virus strain Brighton Red, but not VACV strain Copenhagen, or in mock-infected cells (Fig 1A). VACV infection was confirmed by immunoblotting with a mAb that recognizes the VACV structural protein D8 [37], although this mAb did not detect the D8 protein made by MVA (Fig 1A). Immunoblotting for α-tubulin demonstrated equal loading of samples. Protein 169 is expressed early and localizes in the cytoplasm The time of expression and localization of protein 169 during infection were investigated by immunoblotting (Fig 1B and 1C) and immunofluorescence microscopy ( Fig 1D). HeLa cells were infected with v169 (a plaque purified, wild-type virus that expresses protein 169) in the presence or absence of cytosine arabinoside (AraC), a DNA replication inhibitor that blocks intermediate and late VACV gene expression. The anti-169 antiserum detected a 13-kDa protein from 2 h p.i. that was also present following addition of AraC, showing expression prior to DNA replication (Fig 1B). Similar expression kinetics were observed for early VACV protein C16 [38]. In contrast, the VACV late protein D8 [39] was expressed only in the absence of AraC. The localization of protein 169 was investigated by biochemical fractionation of infected cells. Immunoblotting of lysates from cells infected with v169, vΔ169 (a deletion mutant lacking the 169R gene) and v169-rev (a revertant virus in which the 169R gene was reinserted at its natural locus into vΔ169) showed that protein 169 is expressed from v169 and v169-rev, but not vΔ169, and that it localizes predominantly in the cytoplasm. Satisfactory separation of cytoplasmic and nuclear fractions was confirmed by blotting for α-tubulin and lamin (Fig 1C). Analysis by immunofluorescence using purified anti-169 antibody (Methods) detected protein 169 from 4 h p.i. in cytoplasmic puncta (Fig 1D). VACV factories were also detected from 4 h p.i. by DAPI staining, but protein 169 was excluded from these structures. To determine if protein 169 co-localized with specific cytoplasmic organelles, infected cells were stained with antibodies that detected the endoplasmic reticulum, mitochondria, Golgi apparatus, clathrin-containing vesicles and endosomes but no clear co-localization was observed (Fig 2A). Partial co-localization with 40S ribosomes was noted, although the abundance of 40S ribosomes makes a clear correlation uncertain. Staining with DAPI confirmed that protein 169 was excluded from virus factories ( Fig 2B). 
Protein 169 is not required for viral replication and spread in vitro

The contribution of protein 169 to virus replication and spread was investigated using recombinant VACVs v169, vΔ169, and v169-rev that were constructed by transient dominant selection [40] (Methods). These three viruses formed plaques of indistinguishable size in African monkey fibroblasts (BSC-1) and also in rabbit kidney (RK)-13 cells and human TK-143 cells (Fig 3A-3C). Similarly, the yields of intracellular and extracellular vΔ169 were unaltered compared to control viruses after high (10 PFU/cell) or low (0.05 PFU/cell) multiplicity of infection in BSC-1 cells (Fig 3D-3G). Therefore, the 169 protein is non-essential for virus replication and spread in cell culture.

Protein 169 inhibits various signaling pathways at the protein level

The 169R gene is located in a terminal variable region of the VACV genome, is expressed early during infection and is non-essential for virus replication in cell culture. These properties are characteristic of VACV genes encoding immunevasins, such as the type I IFN binding protein [41,42], the 3-β-hydroxysteroid dehydrogenase [43,44] and the intracellular inhibitors of NF-κB activation [9,10,[12][13][14][15][16][17][18]. Therefore, we hypothesized that protein 169 might be an immunevasin and this was tested by reporter gene assays. A plasmid in which firefly luciferase expression is driven by either an NF-κB, IRF-3 (ISG56.1), or interferon-stimulated response element (ISRE) responsive promoter was transfected separately into HEK 293T cells together with TK renilla luciferase (internal control), and plasmids expressing 169, FLAG-tagged 169 (FLAG-169) or other control proteins. The controls chosen were (i) VACV strain WR protein B14 that inhibits NF-κB signaling by binding to IKKβ [15], (ii) VACV protein C6 that inhibits IRF-3 signaling by binding to TBK-1 adaptors [45], and (iii) paramyxovirus protein PiV5-V that inhibits type I IFN-induced signaling by degrading STAT1 [46]. Luciferase activity was measured by luminescence after stimulation with TNF-α (NF-κB Luc), IFN-α (ISRE Luc) or after transfection with poly (I:C) (IRF-3 Luc). Protein 169 and FLAG-169 inhibited NF-κB, IRF-3 and ISRE pathways as well as, or better than, known inhibitors of these pathways (Fig 4A-4C). The inhibition of all these pathways was surprising, and contrasted with the controls that generally inhibit specific pathways only. Interestingly, protein 169 also caused reduced expression of TK renilla, suggesting a general reduction in protein expression. To investigate this further, the levels of the chemokine CXCL10 were measured by ELISA. HEK 293T cells were transfected with plasmids expressing GFP, VACV B14, C6, 169 or Δ12A49 and then infected with Sendai virus (SeV). VACV protein A49 inhibits NF-κB signaling by binding to the E3 ubiquitin ligase β-TrCP but deletion of the first 12 amino acids abolishes this function [17] and so Δ12A49 served as a negative control. After 24 h, CXCL10 in the supernatant was measured by ELISA (Fig 4D). CXCL10 expression is induced by both NF-κB and IRF-3, and so levels of CXCL10 were lower in cells expressing either B14 or C6, but not in cells expressing Δ12A49, as expected. However, protein 169 also reduced CXCL10 levels, consistent with results of the reporter gene assays.
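The double normalization used to report these reporter assays (firefly to the renilla internal control, then stimulated to unstimulated) reduces to a simple ratio calculation. A minimal sketch, with placeholder luminescence counts rather than measured values:

```python
# Fold induction for a dual-luciferase reporter assay: firefly counts are
# first normalized to the renilla internal control (transfection efficiency),
# then to the unstimulated wells. The numbers below are placeholders.
def fold_induction(ff_stim, ren_stim, ff_unstim, ren_unstim):
    stimulated = ff_stim / ren_stim
    unstimulated = ff_unstim / ren_unstim
    return stimulated / unstimulated    # fold increase over baseline

print(fold_induction(ff_stim=9.0e5, ren_stim=3.0e4,
                     ff_unstim=1.2e5, ren_unstim=2.8e4))  # ~7-fold induction
```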
To test whether 169 mediates its inhibitory activity by blocking transcription, the levels of specific mRNAs were measured. A549 cells were transfected with plasmids expressing GFP, VACV B14, C6, or 169 and were stimulated 24 h later with TNF-α. mRNA levels of NF-κB-inducible genes such as intercellular adhesion molecule 1 (ICAM-1), IL-6, and NFκBia were measured by reverse transcription quantitative-PCR (RT-q-PCR) and normalized to the housekeeping gene hypoxanthine-guanine phosphoribosyltransferase (HPRT) (Fig 4E-4G). Levels of all three mRNAs were similar in cells expressing 169, GFP, or C6 following stimulation with TNF-α. Conversely, as expected, lower levels of these NF-κB-inducible mRNAs were detected in cells expressing the NF-κB inhibitor B14. No difference was detected in HPRT mRNA levels, confirming that the 169-mediated inhibition of multiple immune signaling pathways was not due to a general inhibition of transcription. Therefore, it was likely that protein 169 inhibited gene expression either by blocking mRNA transport to the cytoplasm, or by blocking protein synthesis. The former possibility was unlikely given that protein 169 is cytoplasmic, but was addressed by measuring the levels of cytoplasmic and nuclear mRNAs. HEK 293T cells were co-transfected with plasmids expressing NEMO fused with renilla luciferase (NEMO-Luc) and protein 169. A plasmid expressing protein A49 and an empty vector were included as negative controls and cycloheximide was added as an inhibitor of translation. The levels of luciferase-tagged proteins were determined by luminescence (S2A Fig) and mRNA levels of NEMO-Luc were determined by RT-q-PCR (S2B Fig). In parallel, cytoplasmic and nuclear mRNAs were extracted and mRNA levels of NEMO-Luc, HPRT and TATA box-binding protein were compared in these fractions (S2D-S2F Fig). As before, only low levels of NEMO-Luc were detected in cells expressing 169 or treated with cycloheximide. Slightly lower cytoplasmic mRNA levels of NEMO-Luc were found in cells expressing 169, but this slight decrease could not explain the profound (~10-fold) reduction of NEMO-Luc. There was also a slight reduction in NEMO-Luc mRNA in the cytoplasm in cycloheximide-treated cells, suggesting such reduction might derive from a general inhibition of protein synthesis. Lastly, no decrease in endogenous mRNAs was observed in the presence of protein 169. Collectively these data indicate that mRNA transcription and export are not inhibited by protein 169 and therefore its inhibitory effect is downstream.

Fig 2 legend (fragment): ...(green), DAPI (blue) and with antibodies against markers of different intracellular compartments (red): protein disulphide isomerase (PDI) for endoplasmic reticulum (ER), GM130 for Golgi apparatus (Golgi), clathrin for some membrane vesicles, transferrin for endosomes and protein S6 for 40S ribosomes. Mitochondria were stained by addition of mitotracker to live cells that were then fixed and stained with anti-169 and with DAPI. White lines are scale bars (10 μm).

Protein 169 inhibits protein synthesis

To investigate if protein 169 inhibits protein synthesis, HeLa cells were co-transfected with plasmids expressing GFP together with VACV N1, 169, FLAG-169 or empty vector. VACV N1 is another inhibitor of NF-κB signaling [47,48] and served as a negative control.

Fig 4 legend (fragment): (A) ...triplicate with an NF-κB reporter plasmid, TK-renilla luciferase and plasmids for expression of the indicated proteins. After 1 d the cells were stimulated with 75 ng/ml of TNF-α for 7 h or treated with the same medium lacking TNF-α. The luminescence of cell lysates was measured using a luminometer.
These data are from one representative experiment (n = 3) and results are presented as the fold increase in luciferase expression. Firefly luciferase was normalized to renilla luciferase (internal control) and further normalized to the unstimulated samples ± SD. Statistical analysis was performed using a two-tailed Student's t-test with Welch's correction where necessary, * p < 0.05, ** p < 0.01, *** p < 0.001, **** p < 0.0001. (B) Performed as in (A) but using an ISG56.1 reporter plasmid (responsive to IRF-3) and cells were transfected with poly I:C or Lipofectamine only. (C) Performed as in (A) but using an ISRE reporter plasmid. The cells were stimulated with 100 U/ml of IFN-α for 7 h.

GFP levels were determined by immunoblotting and GFP mRNAs were measured by RT-q-PCR (Fig 5A and 5B). Cycloheximide, 169 and FLAG-169 reduced GFP levels greatly compared with N1 or empty vector. In contrast, GFP mRNA levels were similar in all cells and were higher in cells treated with cycloheximide. These experiments reveal that protein 169 inhibited protein synthesis and that this is generic rather than being specific to proteins functioning in innate immunity. VACV inhibits cap-dependent translation of host mRNAs by the de-capping enzymes D9 and D10, but these do not affect cap-independent translation [25]. To determine if protein 169 has similar or different specificity, its ability to inhibit cap-dependent and internal ribosome entry site (IRES)-dependent translation was evaluated. A plasmid encoding a bicistronic gene in which firefly luciferase is translated in a cap-dependent manner and renilla luciferase is translated in a foot and mouth disease virus (FMDV) IRES-dependent manner was transfected into HEK 293T cells together with 169, FLAG-169 or 169-AAG. The latter plasmid has the 169 initiation codon and the fourth codon mutated from AUG to AAG to prevent translation and distinguish between inhibition mediated by 169 mRNA or 169 protein. Luciferase levels were determined by luminescence (Fig 5C and 5D), mRNA levels were determined by RT-q-PCR (Fig 5E and 5F) and protein expression was also measured by immunoblotting (Fig 5G). Low levels of both firefly and renilla luciferase were found in the presence of cycloheximide, 169 and FLAG-169, but not 169-AAG, confirming that the inhibitory effect of 169 on translation requires protein 169. In contrast, luciferase levels were unaffected by proteins N1 or A49. Similar mRNA levels of renilla luciferase were found in all samples. These data show that protein 169 inhibits both cap-dependent and FMDV IRES-dependent translation. To evaluate the influence of protein 169 on protein synthesis in uninfected cells and during VACV infection, nascent proteins were analysed using surface sensing of translation (SUnSET) [49]. SUnSET is a non-radioactive method for monitoring protein synthesis that uses incorporation of puromycin into nascent polypeptide chains and causes termination of elongation. Puromycin-tagged polypeptides are then detected by immunoblotting with anti-puromycin antibody. In HEK 293 Trex cells expressing protein 169, protein synthesis was inhibited increasingly from 8 h post induction (Fig 6A). In contrast, in a control cell line expressing C6.TAP inducibly [50] no such inhibition was seen (Fig 6B). The effect of 169 on protein synthesis during VACV infection was tested next. HeLa cells were infected with v169 or vΔ169, and puromycin was added at different times p.i. (Fig 6C). Host protein synthesis was inhibited by 6 h p.i.
and more profoundly thereafter, but no difference was detected between v169 and vΔ169. This could be due to both viruses expressing the de-capping enzymes D9 and D10 that have profound effects on virus protein synthesis [25,26,32] and might mask effects of protein 169. This result is consistent with the observations that protein 169 is absent from virus factories (Figs 1D and 2), and does not affect virus replication.

Fig 4 legend (continued): (D) HEK 293T cells were transfected with plasmids for expression of the indicated proteins in triplicate, and the following day the cells were mock-infected or infected with SeV for 24 h. CXCL10 in the supernatant was measured by ELISA. Data shown are from one representative experiment (n = 2) and results are expressed as concentration of CXCL10, estimated from a nonlinear standard curve, ± SD. Statistical comparison to GFP control used a two-tailed Student's t-test with Welch's correction where necessary, *** p < 0.001, **** p < 0.0001. (E, F, G) A549 cells were transfected with plasmids for expression of the indicated proteins in triplicate and, after 24 h, cells were mock-stimulated or stimulated with 50 ng/ml of TNF-α for 7 h. Then mRNAs were extracted, cDNAs were prepared and RT-q-PCR was performed using the ViiA 7 Real-Time PCR System (Life Technologies) using primers specific for IL-6 (E), NFκBia (F) and ICAM-1 (G). Data shown are from one representative experiment (n = 2) and results are expressed as cycle threshold (CT) values compared to HPRT levels ± SD. Statistical analysis was performed using a two-tailed Student's t-test with Welch's correction where necessary, * p < 0.05, ** p < 0.01.

Protein 169 inhibits translation initiation

To determine at which stage of protein synthesis protein 169 might be acting, polysomes were profiled in HEK 293 Trex 169 cells with or without protein 169 expression (Fig 7A and 7B). Cytoplasmic extracts were prepared in the presence of cycloheximide to retain intact monosomes and polysomes and these were analyzed by sucrose density gradient centrifugation. The RNA and protein composition of the gradient was measured by absorbance (A254 nm) and immunoblotting, respectively. Protein 169 expression caused an increase in 80S ribosomes and a decrease in polysomes (Fig 7B), indicating an inhibition of translational initiation. Immunoblotting of gradient fractions revealed that protein 169 co-purified partially with the 40S ribosomal fraction (Fig 7B), consistent with immunofluorescence data (Fig 2A). For comparison, HEK 293 Trex C6.TAP cells were analyzed in parallel and protein C6 expression caused no such alterations to polysomes or 80S monosomes (Fig 8A and 8B). To investigate whether the 80S ribosomes accumulating in the presence of protein 169 contain mRNA, polysome profiling was repeated in a higher salt buffer (400 mM KCl), conditions in which 80S ribosomes lacking mRNA dissociate into constituent subunits. However, in the presence of protein 169, the 80S peak remained stable in high salt (Fig 9B), indicating that the 80S ribosomes are associated with mRNA. Increasing the concentration of salt in the sucrose density gradient reduced the sharpness of the peaks obtained. To confirm that this effect was due to the high salt concentration, the polysome profile of uninduced HEK 293 Trex 169 cytoplasmic cell lysates was examined on sucrose gradients (Fig 9C and 9D). Again, the high salt condition affects the overall sharpness of polysomal fractions independently of the expression of protein 169.
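Polysome profiles of this kind are often made quantitative by integrating the A254 trace over the monosome and polysome regions and reporting a polysome-to-monosome ratio. The sketch below is illustrative only: the trace is synthetic and the region boundaries are hypothetical, since in practice they are read off the 40S/60S/80S peaks of each gradient.

```python
# Polysome:monosome (P/M) ratio from an A254 gradient trace.
import numpy as np

def trap(y, x):
    # trapezoidal area under the trace (avoids version-specific numpy helpers)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def pm_ratio(position, a254, mono_window, poly_window):
    mono = (position >= mono_window[0]) & (position <= mono_window[1])
    poly = (position >= poly_window[0]) & (position <= poly_window[1])
    return trap(a254[poly], position[poly]) / trap(a254[mono], position[mono])

# Synthetic trace: a sharp 80S peak and a broad polysome region
x = np.linspace(0.0, 1.0, 500)                 # normalized gradient depth
trace = np.exp(-((x - 0.35) / 0.03) ** 2) + 0.6 * np.exp(-((x - 0.70) / 0.10) ** 2)
print(pm_ratio(x, trace, mono_window=(0.30, 0.45), poly_window=(0.55, 0.95)))
```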
Since protein 169 inhibited cap-dependent and FMDV IRES-dependent translation, both of which require the concerted action of multiple eIFs, we tested whether protein 169 could affect translation from the cricket paralysis virus (CrPV) intergenic region (IGR) IRES. This IRES uses an unusual mechanism of translation initiation, binding directly to the 40S subunit and initiating from the A-site without the requirement for initiation factors [51]. A plasmid encoding a bicistronic gene in which renilla luciferase is translated in a cap-dependent manner and firefly luciferase is translated in a CrPV IRES-dependent manner was transfected into HEK 293T cells together with A49.TAP, 169 or empty vector control. Luciferase levels were measured by luminescence (Fig 10A and 10B) and protein expression was determined by immunoblotting (Fig 10C). Low levels of both firefly and renilla luciferase were found in cycloheximide-treated cells as well as in cells expressing protein 169, indicating that protein 169 can inhibit translation from an IRES that does not require the activity of any initiation factors.

Fig 5 legend (fragment): ...mRNAs were extracted from cells, cDNAs were synthesized and GFP mRNA levels were measured by RT-q-PCR. Results are expressed as CT values compared to HPRT mRNA levels ± SD. Statistical analysis was performed using a two-tailed Student's t-test with Welch's correction where necessary, *** p < 0.001. Data shown are from one representative experiment (n = 3). (C, D) HEK 293T cells were transfected with a plasmid encoding a bicistronic RNA expressing firefly luciferase (FLuc, C) by cap-dependent translation and renilla luciferase (RLuc, D) by foot and mouth disease virus (FMDV) IRES-dependent translation, together with plasmids for expression of the indicated proteins or EV control in quadruplicate. After 16 h the cells were treated with CHX (1 μg/ml) for 7 h. The relative amount of FLuc and RLuc was determined by luminescence and results are presented as the fold increase in luciferase expression normalized to the EV control ± SD. Data shown are from one representative experiment (n = 4). (E, F) Performed as in (C), but in triplicate. mRNAs were extracted, cDNAs were prepared and mRNA levels of 169 and RLuc were determined by RT-q-PCR. Results are expressed as CT values compared to GAPDH levels ± SD. Data shown are from one representative experiment (n = 4). Statistical analysis was performed using a two-tailed Student's t-test with Welch's correction where necessary, ** p < 0.01, *** p < 0.001, **** p < 0.0001. (G) Cell lysates from (A) were resolved by SDS-PAGE followed by immunoblotting with the indicated antibodies. A shorter exposure of FLAG is shown to improve visualization of N1.TAP. The positions of molecular size markers in kDa are indicated on the left.

Taken together these data indicate that protein 169 inhibits the initiation of translation causing accumulation of 80S ribosomes and that this applies to both cap-dependent and IRES-dependent translation. This generic shut-down of host protein synthesis, while virus protein synthesis remains largely unaffected, affects the expression of many proteins induced by activation of innate immune sensing pathways and results in inhibition of innate immunity within infected cells. Such a strategy would be predicted to affect the outcome of infection in vivo and therefore this hypothesis was investigated.

Protein 169 modulates virus virulence

The contribution of 169 to virus virulence was examined using two murine models of infection. The intranasal (i.n.)
model represents a systemic infection, where the virus replicates in the lungs and spreads to other organs. Virus virulence is assessed by measuring weight loss, virus titers and signs of illness [52,53]. In the intradermal (i.d.) model, mice are inoculated by intradermal injection into the ear pinna, which results in a localized infection, and virulence is determined by measuring lesion size and healing time [54,55]. In the i.n. model, infection with vΔ169 resulted in significantly greater weight loss from day 5 onwards and more severe signs of illness than control viruses (Fig 11A). To investigate the basis for these differences, the levels of cytokines and chemokines in broncho-alveolar lavage (BAL) fluids were measured early (24 h) p.i. This showed that there were enhanced levels of IL-2, IL-6, TNF-α, CCL11, CXCL9 and CXCL10 following infection with vΔ169 compared to both control viruses, whereas the levels of CCL2, CCL9, IL-12 and IL-15 were unchanged (Fig 11B and 11C). Furthermore, infection with vΔ169 caused increased lung weights and numbers of cells in BAL fluids on days 4 and 7 p.i. compared to control viruses (Fig 11D and 11E). Measurement of lung virus titers showed that all three viruses had replicated to the same extent on days 2 and 4 p.i., but by day 7 the titer of vΔ169 had decreased more than controls, indicating more rapid clearance (Fig 11F). These observations show that infection with vΔ169 caused a greater inflammatory response, with elevated synthesis of several cytokines and chemokines, enhanced recruitment of cells into BAL fluids and more rapid virus clearance. To analyze the nature of cells recruited into BAL fluids, the cells were stained with monoclonal antibodies and quantified by flow cytometry. The majority of inflammatory cells recruited during infection were macrophages (Fig 12A) and lymphocytes (Fig 12C) including CD4+ and CD8+ T-cells (Fig 12F, 12G and 12H), with fewer neutrophils (Fig 12B), NK cells (Fig 12D) and B cells (Fig 12E). Notably, on days 4 and 7 p.i., the recruitment of macrophages, total lymphocytes, T-cells, and CD4+ and CD8+ T-cells was increased following infection with vΔ169 compared to controls, and these differences may explain the more rapid clearance of this virus. In contrast, neutrophils (Fig 12B), NK cells (Fig 12D), and B cells (Fig 12E) showed no difference between the viruses. Changes in the inflammatory response to primary infection can alter the adaptive response and subsequent protection against virus challenge. This has been observed with VACV mutants that either have increased virulence, such as the VACV WR strain lacking the soluble chemokine binding protein A41 [56][57][58], or decreased virulence, such as the inhibitor of IRF-3 activation C6 [45,59,60] and the inhibitor of apoptosis and NF-κB activation N1 [47,61,62]. A more severe primary infection can also lead to better protection [57], and to test whether the enhanced immune response generated by vΔ169 is advantageous and would lead to better protection, the potency of vΔ169 as a vaccine was evaluated. Mice were immunized via the i.n. route with v169, vΔ169 or v169-rev and then were challenged with wild type virus i.n. at day 28 (Fig 13A). In this model, vΔ169 induced better protection against challenge as shown by reduced weight loss compared to controls (Fig 13A).
To investigate the basis for this, the levels of VACV neutralizing antibodies were determined by plaque reduction neutralization assay ( Fig 13B) and the cytotoxicity of NK cells on uninfected YAC-1 cells (Fig 13C) and CD8 + splenic T-lymphocytes (Fig 13D) on VACV-infected P815 cells was measured by chromium release assay. At day 28 p.i. all groups of immunized mice had high serum antibody titers that did not differ between the groups (Fig 13B). Similarly, the cytotoxicity of splenic NK cells on YAC-1 cell targets did not differ between the groups (Fig 13C). However, the lysis of target cells by splenic CD8 + T-cells within the total splenocyte population from mice infected with vΔ169 was significantly greater than lysis by cells from mice infected by control viruses ( Fig 13D). Collectively, these data show that immunization with vΔ169 generates stronger CD8 + Tcell immunological memory and better protection against challenge. The virulence and immunogenicity of vΔ169 was also assessed after intradermal (i.d.) infection (Fig 14). vΔ169 caused a statistically significant increase in lesion size and duration compared to control viruses ( Fig 14A). Furthermore, as observed in the i.n. model, viral titers in the ears showed that all viruses replicated to a similar extent initially (day 3 and 6), but thereafter (days 10 and 14) viral titers were lower for vΔ169 compared to controls (Fig 14B). Additionally, mice immunized via the i.d. route with v169, vΔ169 or v169-rev were challenged with wild type virus i.n. at day 28 ( Fig 14C). As observed for i.n. model, vΔ169 induced better protection against challenge as shown by reduced weight loss of mice immunized with vΔ169 compared to controls. Discussion A functional study of VACV WR protein 169 is presented. This small, highly charged protein is expressed early during VACV infection, localizes in cytoplasmic puncta but is excluded from virus factories, and inhibits the initiation of cap-dependent and cap-independent protein synthesis. Thereby, protein 169 reduces production of host inflammatory mediators induced by activation of multiple innate immune signaling pathways. Protein 169 is conserved in many VACV strains and orthopoxviruses but nonetheless is non-essential for virus replication or spread in tissue culture. Instead, it affects the outcome of infection in vivo by decreasing the recruitment of inflammatory leukocytes, delaying clearance of virus, reducing the memory CD8 + T-cell response and diminishing protection against subsequent virus challenge. The ability of protein 169 to inhibit the innate immune response, while not affecting virus replication in cell culture or in vivo, is characteristic of many VACV immunevasins [8]. However, a striking difference between many of the immunevasins characterized hitherto and protein 169 is that the former are often inhibitors of an individual innate immune signaling pathway (or sometimes two pathways) by binding to one or two specific host proteins. In contrast, protein 169 is a general inhibitor of protein synthesis and targets multiple pathways that require nascent protein synthesis. Thus, by blocking the translation of host mRNAs that are transcribed, for instance, following activation of NF-κB, IRF-3 or the JAK/STAT signaling pathways, there is a decreased production of many inflammatory mediators and consequential reduced recruitment of leukocytes to the site of infection. 
The inhibition of host protein synthesis by viruses is widespread, but hitherto has been considered largely a strategy by which viruses subvert host metabolism to increase virus protein synthesis and production of virions. VACV protein 169 illustrates another purpose, namely, the decrease of host protein synthesis without a concomitant increase in production of virus proteins or infectious virus particles, but with the consequence of restricting the host innate immune response to infection, so aiding virus escape and diminishing immunological memory. Protein 169 is well adapted to this purpose for it is excluded from virus factories, the site of virus protein synthesis, and so targets host translation preferentially, and its loss does not affect virus replication in cell culture or in vivo. Protein 169 is also unusual in that it targets both cap-dependent and cap-independent translation. Many RNA viruses exploit IRES-dependent translation to manufacture their proteins while disabling cap-dependent translation of host mRNAs by targeting the eIF4F complex. Popular strategies are (i) cleavage of eIF4G by viral proteases [63][64][65], (ii) cleavage of poly A-binding protein [66,67], and (iii) decreasing phosphorylation of cap-binding protein eIF4E [68,69]. In contrast, DNA viruses use mostly cap-dependent translation and stimulate eIF4F formation. Herpes simplex virus type 1 (HSV-1) protein ICP0 promotes phosphorylation of eIF4E and 4E-binding protein 1 (4E-BP1) that leads to degradation of 4E-BP1 and stimulation of formation of the eIF4F complex [70]. Also, HSV-1 protein ICP6 binds eIF4G to enhance eIF4F assembly [71]. VACV stimulates eIF4F complex formation through hyper-phosphorylation of 4E-BP1 enabling interaction between eIF4E and eIF4G [27]. Protein 169 acts differently, but has some similarity with the modulation of protein synthesis by hepatitis C virus (HCV) in that it leads to alterations in innate immunity. HCV relies mainly on IRES-dependent translation and causes stimulation of protein kinase R that leads to translation inhibition through phosphorylation of eIF2α to inhibit production of IFN stimulated genes (ISGs) [72]. However, the factors responsible for these changes and their mechanism of action remain unknown. Protein 169 is the third VACV polypeptide shown to inhibit protein synthesis, the others being the de-capping enzymes D9 and D10 [25,32,33]. These enzymes are made either early or late during infection and de-cap both host and viral mRNAs, although some preferential affinity for different cap structures have been shown [33]. Since the viral mRNAs are synthesized in greater abundance, these soon become predominant and so virus proteins are made while host protein synthesis declines. Rapid mRNA turnover is also important for progression between early, intermediate and late stages of VACV gene expression. The importance of decapping for virus replication is illustrated by the loss or mutation of protein D10 that results in a smaller plaque phenotype, accumulation of early transcripts, lower virus yield [26] and attenuation in vivo [34]. Recently a VACV strain expressing catalytically dead versions of D9 and D10 was shown to induce large amounts of dsRNA. This activates pathways leading to inhibition of protein synthesis and consequently reduces virus production and results in severe attenuation in vivo [35]. 
In contrast, loss of protein 169 has no effect on virus replication in vitro (Fig 3) or in vivo (Figs 11 and 14) and its loss causes an increase in virulence in both i.n. and i. d. models of infection (Figs 11 and 14). In the i.n. model, infection by vΔ169 caused enhanced production of several cytokines (IL-2, IL-6 and TNF-α) and chemokines (CCL11, CXCL9 and CXCL10) within 1 day p.i. (Fig 11) and subsequent greater recruitment of macrophages and CD4 + and CD8 + T cells (Fig 12) and increased lung weight (Fig 11D). Later, this greater recruitment of inflammatory cells leads to more rapid virus clearance and recovery (Fig 11). Similarly, in the i.d. model the greater inflammatory response is reflected in a greater lesion size, but again this is followed by more rapid virus clearance and recovery (Fig 14). The early expression of protein 169 is consistent with prior RNA analysis of the VACV genome that showed early transcription of this ORF and 169 mRNAs were detected from 1 h p.i. [73,74]. Sometimes viruses that induce exacerbated immune responses are more virulent and in this regard it is notable that orthopoxviruses lacking ORF 169 are generally of high virulence. For instance, all sequenced variola viruses and ectromelia virus lack ORF 169 and these viruses are highly virulent in man or mice, causing smallpox and mousepox, respectively. Similarly, VACV strain Copenhagen lacks ORF 169 and caused a higher frequency of post-vaccination complications in man than the more widely used VACV strains Lister and New York City Board of Heath (Wyeth) [1]. VACV strain Copenhagen also caused larger lesion sizes in the mouse intradermal model in comparison to other VACV strains used as smallpox vaccines in man [55]. However, VACV strain Copenhagen and all variola virus strains also lack another factor that diminishes virulence, namely the soluble IL-1β binding protein encoded by gene B15R of VACV strain WR [53,75], and the causes of enhanced virulence are probably multifactorial. The increased virulence seen by loss of gene 169R has a few parallels in orthopoxvirus biology. In addition to deletion of the soluble IL-1β receptor encoded by VACV WR mentioned above [53], deletion of the chemokine binding protein A41 [56], and the B13 serine protease inhibitor [55] each caused an increase in virulence in either the i.n. or i.d. model, and in some cases also induced a stronger immunological memory response that resulted in better protection against virus challenge [57,76]. Infection with vΔ169 generated a stronger innate response (Figs 11 and 12), that led to a stronger memory CD8 + T cell response and better protection to virus challenge (Figs 13 and 14). Increased immunological memory responses and better protection against challenge have also been observed with VACV mutants with diminished virulence, such as viruses lacking the C6 or N1 proteins [60,77]. Protein 169 localizes mainly in the cytoplasm of infected cells throughout the course of infection. The punctate pattern observed might suggest co-localization of 169 with some specific organelles, but only some partial overlap with 40S ribosomes was observed. The precise mechanism by which protein 169 inhibits translation remains to be determined, but the polysome profiling experiments described (Figs 7-9) reveal that protein 169 expression leads to an accumulation of 80S monosomes and reduction of polysomes, particularly of heavier polysomes. 
This pattern is consistent with a reduced rate of translation initiation, and the stability of the 80S monosomes in high salt indicates that the 80S ribosomes are mRNA-associated, rather than present in a free pool [78]. Reducing a pool of free ribosomes is a strategy used by cardiovirus protein 2A that, in contrast to protein 169, causes accumulation of monosomes free of mRNA [79]. A direct interaction between protein 169 and either the mRNA cap or 40S subunit was not observed, nor was an effect of protein 169 on translation in vitro using rabbit reticulocyte lysate. However, we cannot be sure whether the prepared fraction of protein 169 is functional under the conditions tested. Nonetheless, the capacity of protein 169 to block FMDV IRES-directed translation initiation is consistent with an eIF4E-independent inhibitory mechanism. In addition, the inhibition of CrPV IRES-dependent translation by protein 169 suggests that its inhibitory activity is not mediated by interference with other eIFs. In summary, 169 is an inhibitor of cap-dependent and cap-independent translation; it affects virus virulence and contributes to VACV immunogenicity by diminishing the innate and adaptive immune response. This study illustrates that viral inhibition of protein synthesis can be an immune evasion strategy rather than a mechanism to increase yields of virus from infected cells. Ethics statement This work was carried out in accordance with regulations of The Animals (Scientific Procedures) Act 1986. All procedures were approved by the United Kingdom Home Office and carried out under the Home Office project licence PPL 70/7116. Plasmids The sequence of the VACV WR 169R gene was codon optimized by GENEART for expression in mammalian cells. 169R was then sub-cloned into mammalian expression vectors pcDNA 3.1 or pcDNA4 TO (Invitrogen) without a tag or with an N-terminal FLAG tag. E. coli expression plasmid pOPINE was engineered to express a 169R wild type sequence with a C-terminal His tag (169-His) and plasmid pGEX-6p-1 was engineered to express a 169R wild type sequence with an N-terminal glutathione S-transferase (GST) tag (GST-169). Plasmid Z11-Δ169 was used to construct the VACV mutant lacking gene 169R and contained flanking regions of the 169R gene locus cloned into plasmid Z11 that contains the E. coli guanine phosphoribosyltransferase (Ecogpt) fused with enhanced green fluorescent protein (EGFP) driven by an early/late VACV promoter as described [45]. Plasmid Z11-169-rev was used to construct the revertant virus v169-rev and contains the 169R gene and flanking sequences inserted into the Z11 plasmid. A plasmid encoding a bicistronic gene expressing firefly luciferase in a cap-dependent manner and renilla luciferase in an FMDV IRES-dependent manner was a kind gift from Prof. Ian Goodfellow, Department of Pathology, University of Cambridge. A plasmid encoding a bicistronic reporter gene expressing firefly luciferase in a cricket paralysis virus (CrPV) IRES-dependent manner and renilla luciferase in a cap-dependent manner was a kind gift from Dr. Eric Jan, Department of Biochemistry and Molecular Biology, University of British Columbia, Canada. NF-κB-Luc, ISRE-Luc and TK renilla were obtained from Dr. Andrew Bowie (Trinity College, Dublin, Ireland), ISG56.1 Luc was from Ganes Sen (Lerner Research Institute, Ohio), and M5P Luciferase-NEMO (Luc-NEMO) and M5P GFP-FLAG were obtained from Dr. Felix Randow (MRC Laboratory of Molecular Biology, Cambridge, United Kingdom). C6.TAP, N1.
TAP, B14.FLAG and A49.TAP were described previously [15,17,45,47]. V5-PiV5-V was provided by Jennifer H. Stuart (Department of Pathology, University of Cambridge, UK). Construction of recombinant VACVs VACV vΔ169 was constructed by transfecting plasmid Z11-Δ169 into VACV WR-infected CV-1 cells using FuGENE 6, and a recombinant VACV was isolated by transient dominant selection [40] as described for other VACV deletion mutants [12,80]. Plaque purified wild type 169 (v169) and deletion 169 (vΔ169) viruses were isolated from the same intermediate virus and were genotyped using PCR and primers amplifying the flanking regions of the 169R locus. The revertant 169 virus (v169-rev) was constructed by transfection of plasmid Z11-169-rev into vΔ169-infected CV-1 cells following the same procedure as described above. Genomic DNA isolated from the recombinant VACVs (v169, vΔ169 and v169-rev) was compared to parental VACV WR virus using restriction endonuclease digestion with HindIII or SphI, and virus DNA was visualized after pulsed field gel electrophoresis. 169 polyclonal serum production E. coli BL21(DE3) R3 pRARE cells (kind gift from SGC Oxford), where R3 denotes a derivative of BL21(DE3) resistant to a strain of T1 bacteriophage (SGC Oxford) and the pRARE plasmid originates from the Rosetta strain (Novagen) and supplies tRNAs for rare codons, were transformed with the 169-His expression plasmid. The bacteria were grown in terrific broth and the expression of 169-His was induced by 1 mM IPTG at 37°C for 6 h. Bacteria were collected by centrifugation, lysed and disrupted by sonication. 169-His was purified from the soluble fraction by immobilized metal affinity chromatography (IMAC) using a His-Trap HP column followed by ion exchange chromatography (IEX) using a MonoQ GL column. Three and a half mg of 169-His was used to inoculate two rabbits (Eurogentec, Seraing, Belgium) to obtain polyclonal sera. Two rabbits (number 770, 771) were immunized at day 0, 14, 28 and 56 with Freund's complete adjuvant at day 0 and with incomplete Freund's adjuvant for the boosts, with a dose of 400 μg of 169-His. Sera prepared from venous blood drawn before immunization and at day 66 were tested for recognition of protein 169 expressed during VACV infection. Serum from rabbit 771 was sensitive enough to detect protein 169 from VACV-infected cells. This serum was used for immunoblotting analysis throughout this study (further referred to as anti-169). Serum from rabbit 770 was further purified against GST-169 using an AminoLink immobilization kit. GST-169 protein was produced in BL21(DE3) E. coli bacteria (Merck Millipore) transformed with the pGEX-6p-1 GST-169 plasmid. Bacteria were grown in LB and expression of GST-169 was induced by 1 mM IPTG at 37°C for 6 h. Bacteria were collected by centrifugation, lysed and disrupted by sonication. GST-169 was purified from the soluble fraction using glutathione-sepharose 4B and size exclusion chromatography (SEC) using a Superdex 75 10/300 GL column. Two mg of GST-169 was used for polyclonal serum purification using the AminoLink immobilization kit following the manufacturer's instructions for the pH 7 protocol. Protein 169-specific purified IgG (further referred to as anti-169 purified antibody) were used for immunofluorescence studies. Virus growth properties For analysis of virus single step growth properties, BSC-1 cells were infected at 10 PFU/cell for 12 or 24 h.
Extracellular virus in the clarified growth medium (after centrifugation at 500 x g for 10 min) was titrated by plaque assay on BSC-1 cells. Cell-associated virus was measured by scraping cells from the plastic flask, combining these with the debris from the supernatant and collection by centrifugation as above. Cells were then disrupted by three rounds of freeze-thawing and sonication and the virus was titrated by plaque assay on BSC-1 cells. For analysis of multiple step growth properties, BSC-1 cells were infected at 0.05 PFU/cell for 24 and 48 h. The extracellular and cell-associated viral titers were determined as described above. Plaque size assay BSC-1, RK-13 and TK-143 cells were infected with the indicated VACVs at 50 PFU/well of a 6-well plate. The radius of plaques was measured after 72 h using Axiovision 4.8.2 software on an Axiovert.A1 microscope (Zeiss) with Axiocam MRc. In each condition 20 plaques per virus were measured in three independent experiments. Murine intranasal and intradermal models of infection For the intranasal (i.n.) model of infection, BALB/c mice (6-8 weeks old) were inoculated with VACVs, which had been purified by sedimentation twice through a sucrose cushion (5 × 10^3 PFU into each nostril), and monitored daily for weight loss and scored for signs of illness as follows: hair ruffling, back arching, reduced mobility and pneumonia [52,53]. For the intradermal (i.d.) model of infection, female C57BL/6 mice (6-8 weeks old) were inoculated with purified VACVs (10^4 PFU) in both ear pinnae and the diameter of the lesion was measured daily using a micrometer [54]. The administered dose was confirmed by plaque assay. For challenge experiments, immunized animals were challenged i.n. 28 d p.i. with 5 × 10^6 PFU of v169 and weighed daily thereafter. Bronchial alveolar lavage (BAL) fluids were prepared on the indicated days. These were centrifuged at 1500 g to obtain cells for flow cytometry and the clarified supernatant was used for ELISA. Live cells collected from BAL fluids were counted using a haemocytometer following staining with trypan blue. For determination of viral titers in lung and ear tissues, the lung and ear tissues were homogenized and washed through a 70 μm nylon mesh using DMEM and 10% FBS. Cells were then frozen and thawed three times, and sonicated thoroughly to liberate intracellular virus. Infectious virus was titrated in duplicate by plaque assay on BSC-1 cell monolayers. For the chromium-release cytotoxicity assay, NK cell cytotoxicity and VACV-specific cytotoxic T lymphocyte (CTL) activity within total splenocyte populations was assayed with a standard 51Cr-release assay as described [77]. NK-mediated lysis was tested on uninfected YAC-1 cells, while VACV-infected P815 cells (H-2d, mastocytoma) were used as targets for VACV-specific CTL lysis. The percentage of specific 51Cr release was calculated as specific lysis = [(experimental release − spontaneous release)/(total detergent release − spontaneous release)] × 100. The spontaneous release values were always < 10% of total lysis.
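The specific-lysis formula above is a direct transform of the three counter readings; a minimal sketch with placeholder counts (the check mirrors the <10% spontaneous-release quality threshold stated in the text):

```python
# Percent specific lysis for a standard 51Cr-release assay.
# The counts below are placeholders, not measured values.
def specific_lysis(experimental, spontaneous, total):
    # quality check: spontaneous release should stay below ~10% of total
    assert spontaneous < 0.10 * total, "spontaneous release too high"
    return 100.0 * (experimental - spontaneous) / (total - spontaneous)

print(specific_lysis(experimental=1800.0, spontaneous=400.0, total=5200.0))  # ~29%
```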
Cell fractionation HeLa cells were either mock-infected or infected at 10 PFU/cell for 7 h. The cells were fractionated using a Cell fractionation kit (Thermo Scientific) according to the manufacturer's instructions. Immunofluorescence HeLa cells were seeded on glass coverslips and were either mock-infected or infected at 10 PFU/cell (or 2 PFU/cell in the case of the 16 h time point). At the indicated times, the cells were washed twice with PBS and fixed with 4% paraformaldehyde in PBS containing 250 mM HEPES. The cells were permeabilized with 0.1% triton X-100 followed by blocking with 10% FBS in PBS (blocking buffer) for 0.5 h. Coverslips were incubated with primary antibodies for 1 h in a moist chamber followed by three 10 min washes with 10% FBS. Coverslips were incubated with secondary antibody (Alexa Fluor 488 goat anti-rabbit IgG (H+L), Alexa Fluor 546 donkey anti-mouse IgG (H+L), diluted 1:750 in blocking buffer) for 30 min in a moist chamber followed by three 5 min washes with 10% FBS and PBS only. The coverslips were washed with water and mounted in Mowiol 4-88 containing DAPI. Coverslips were allowed to set and stored at 4°C. Cells were visualized by an Axio Observer Z1 confocal microscope (Zeiss) with a 63x oil objective. Reporter gene assay Reporter gene assays were performed in HEK 293T cells in 96-well dishes as described [45]. Cells were transfected in triplicate with 60 ng of firefly reporter plasmid (NF-κB, ISG56.1 or ISRE), 10 ng of TK renilla (as an internal control) and 100 ng of expression plasmid or empty vector control using TransIT-LT1 according to the manufacturer's instructions. The following day cells were stimulated: (i) with 75 ng/ml of TNF-α for 7 h (NF-κB Luc), (ii) by transfection with 200 ng/well of poly I:C for 24 h (ISG56.1 Luc) using Lipofectamine, or (iii) with 100 U/ml of IFN-α for 7 h (ISRE Luc). Cells were lysed using passive lysis buffer (Promega) and firefly luciferase activity was normalized to the renilla luciferase activity, and these data were further normalized to the un-stimulated controls of each test plasmid. RT-q-PCR A549 cells were transfected in triplicate with GFP.FLAG, B14.FLAG, C6.TAP and 169 using Lipofectamine LTX Plus (Life Technologies). The following day cells were stimulated with 50 ng/ml of TNF-α for 7 h. RNA was extracted using the RNeasy Mini Kit (Qiagen) and converted to cDNA using SuperScript reverse transcriptase. ICAM-1, IL-6 and NF-κBia mRNA were quantified in comparison to hypoxanthine-guanine phosphoribosyltransferase (HPRT) using SYBR green master mix. HeLa or HEK 293T cells were transfected with the indicated plasmids for 24 h. RNA was extracted using the RNeasy Mini Kit (Qiagen) and converted to cDNA using SuperScript reverse transcriptase. GFP, luciferase or 169 mRNA were quantified and compared to HPRT or glyceraldehyde 3-phosphate dehydrogenase (GAPDH). For analysis of cytoplasmic and nuclear mRNA, HEK 293T cells were transfected with empty vector control, A49.TAP and 169 together with NEMO-Luc. After 4 h the cells were treated with CHX (1 μg/ml) for 16 h. Cells were lysed in RLN buffer (50 mM Tris HCl pH 8.0, 140 mM NaCl, 1.5 mM MgCl2, 0.5% (v/v) Nonidet P-40, 1 mM DTT, 500 U/ml RNaseOUT), scraped and incubated for 5 min on ice. Nuclei were sedimented by centrifugation at 1000 g for 3 min. The supernatant (cytoplasmic fraction) was taken and mRNA was extracted according to the manufacturer's instructions (Qiagen). RLT buffer was added to the pellet (nuclear fraction), which was forced through a 25G needle ten times. Further steps followed the manufacturer's instructions (Qiagen). cDNA was prepared using SuperScript reverse transcriptase. Luc-NEMO, HPRT and TATA box-binding protein mRNA were quantified in comparison to GAPDH using SYBR green master mix.
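Reporting a target mRNA "in comparison to" a housekeeping gene corresponds to the standard delta-Ct calculation. A minimal sketch of the 2^-ddCt form, which assumes roughly 100% primer efficiency and uses placeholder Ct values:

```python
# Relative mRNA quantification by the 2^-ddCt method, normalizing a gene of
# interest (e.g., IL-6) to a housekeeping gene (e.g., HPRT). Ct values below
# are placeholders, not measurements from this study.
def fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    d_ct_sample = ct_target - ct_ref              # delta-Ct, stimulated sample
    d_ct_control = ct_target_ctrl - ct_ref_ctrl   # delta-Ct, unstimulated control
    return 2.0 ** -(d_ct_sample - d_ct_control)   # 2^-(ddCt)

# e.g., target induced ~16-fold over the unstimulated control
print(fold_change(ct_target=24.0, ct_ref=20.0, ct_target_ctrl=28.0, ct_ref_ctrl=20.0))
```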
ELISA HEK 293T cells were transfected in triplicate with GFP.FLAG, B14.FLAG, C6.TAP, 169 and Δ12A49.TAP. The following day cells were either mock-infected or infected with SeV for 24 h. The amount of CXCL10 in the supernatant was determined using a human CXCL10 Quantikine ELISA Kit (R&D Systems). The results were analyzed using nonlinear standard curves for ELISA (GraphPad PRISM). HEK 293 Trex 169 inducible cell line construction HEK 293 Trex (Invitrogen) empty cells were transfected with 169 (pcDNA4 TO) using TransIT-LT1. Transfected cells were selected in the presence of zeocin and were serially diluted to obtain individual clones. Expression of protein 169 within these clones was analyzed by immunoblotting and immunofluorescence. In the chosen clone at least 90% of cells were expressing protein 169. Puromycin labeling HEK 293 Trex 169 or C6.TAP [50] cells were treated with 1 μg/ml of DOXY for the indicated times to express protein 169 or C6. Cells were treated with 5 μg/ml of puromycin for 25 min and harvested for analysis by immunoblotting [49]. HeLa cells or BSC-1 cells were mock-infected or infected with VACVs at 5 PFU/cell for the indicated times. Cells were treated with puromycin as described above. Polysome profiling analysis HEK 293 Trex 169 or C6.TAP [50] cells were left uninduced or were induced with 1 μg/ml of DOXY for 16 h. Thirty min prior to harvesting, the cells were treated with CHX (1 μg/ml). Cells were washed and lysed in lysis buffer (20 mM Tris HCl pH 7.5, 100 mM KCl, 5 mM MgCl2, 1 mM CHX, 1 mM DTT, 0.1 mM EDTA) supplemented with protease inhibitors (cOmplete, Mini, EDTA-free, Roche; 1 tablet in 10 ml of lysis buffer), DNase I and NP-40 (0.03%), followed by trituration with a 25G needle. Cleared (19,000 g for 5 min at 4°C) cytoplasmic lysates were layered on top of a sucrose density gradient (10-50% sucrose in lysis buffer) prepared by a Gradient Master (Biocomp) and resolved by centrifugation at 200,000 g for 90 min at 4°C. The absorbance (254 nm) profile of the gradient was measured during fractionation at 4°C using an Isco fractionator. Proteins from these fractions were extracted using methanol-chloroform extraction and subjected to immunoblotting analysis. Polysome profiling in the higher salt condition was carried out with HEK 293 Trex 169 cells as described above except that the lysis buffer and sucrose density gradient contained 400 mM KCl. Statistical analysis Statistical analysis was performed using Student's two-tailed t-test unless otherwise stated.

S2 Fig legend (fragment): ...(1 μg/ml) for 16 h. The relative amount of RLuc was determined by luminescence and the results are expressed as luciferase fold induction normalized to the EV control ± SD. (B) Performed as in (A) except that mRNAs were extracted, cDNAs were prepared and the level of mRNA for RLuc was determined by RT-q-PCR. Results are expressed as CT values compared to GAPDH levels ± SD. (C) Performed as in (A) except that the cytoplasmic fraction was prepared from lysed cells and proteins were separated and resolved by SDS-PAGE followed by immunoblotting with the indicated antibodies. The positions of molecular size markers in kDa are indicated on the left. Nuclear cell lysates prepared from cell fractionation from mock-infected HeLa cells serve as a positive control for lamin staining. (D, E, F) Performed as in (B) except that mRNAs were extracted from cytoplasmic and nuclear fractions separately (Methods), and the mRNA levels of RLuc, HPRT and TBP were determined by RT-q-PCR. Results are expressed as CT values compared to GAPDH levels ± SD. Data shown are from one representative experiment (n = 3).
Statistical analysis was performed using a two-tailed Student's t-test with Welch's correction where necessary; * p < 0.05, ** p < 0.01, *** p < 0.001, **** p < 0.0001. (TIF)
2018-04-03T00:11:14.124Z
2015-09-01T00:00:00.000
{ "year": 2015, "sha1": "0608989eec4a4135b604df320d50172b53e0d2b2", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plospathogens/article/file?id=10.1371/journal.ppat.1005151&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0608989eec4a4135b604df320d50172b53e0d2b2", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
249847925
pes2o/s2orc
v3-fos-license
Confronting relativistic ab initio equations of state with universal relations of neutron stars
Starting from relativistic realistic nucleon-nucleon (NN) interactions, a newly developed relativistic ab initio method, i.e., the relativistic Brueckner-Hartree-Fock (RBHF) theory in the full Dirac space, is employed to study neutron star properties. First, the one-to-one correspondence between gravitational redshift and mass is established and used to infer the masses of isolated neutron stars in combination with gravitational redshift measurements. Next, the ratio of the moment of inertia I to MR² is obtained as a function of the compactness M/R, which is consistent with the universal relations in the literature. The moment of inertia of the 1.338 M⊙ pulsar PSR J0737-3039A, I_1.338M⊙, is predicted to be 1.356 × 10⁴⁵, 1.381 × 10⁴⁵, and 1.407 × 10⁴⁵ g cm² by the RBHF theory in the full Dirac space with NN interactions Bonn A, B, and C, respectively. Finally, the quadrupole moment of the neutron star is calculated under the slow-rotation and small-tidal-deformation approximation. The equations of state constructed by the RBHF theory in the full Dirac space, together with those from the projection method and the momentum-independence approximation, conform to the universal I-Love-Q relations as well. By combining the tidal deformability from GW170817
I. INTRODUCTION
Neutron stars are among the most compact objects in the universe; their central densities can reach as high as 5 to 10 times the saturation density of nuclear matter, ρ₀ ≈ 0.16 fm⁻³ [1], which is far beyond what can be achieved in terrestrial laboratories. Therefore, neutron stars are ideal laboratories for studying ultra-dense matter, and they have established close connections among nuclear physics, particle physics, and astrophysics. Astrophysical observations of the global properties of neutron stars provide important constraints on the equation of state (EoS) of dense matter [2][3][4][5], which is the only ingredient needed to unveil the structure of neutron stars theoretically. The high-precision mass measurements of massive neutron stars constitute one of the most stringent astrophysical constraints on the nuclear EoS today, such as PSR J1614-2230 (1.928 ± 0.017 M⊙) [6,7], PSR J0348+0432 (2.01 ± 0.04 M⊙) [8], and PSR J0740+6620 (2.08 ± 0.07 M⊙) [9,10]. Recently, the Neutron Star Interior Composition Explorer (NICER) mission has reported two independent Bayesian parameter estimations of the mass and equatorial radius of the millisecond pulsar PSR J0030+0451 as 1.34 (+0.15/−0.16) M⊙ and 12.71 (+1.14/−1.19) km [11], as well as 1.44 (+0.15/−0.14) M⊙ and 13.02 (+1.24/−1.06) km [12]. In combination with constraints from radio timing, gravitational wave (GW) observations, and nuclear physics experiments, these posterior distributions have been used to infer the properties of the dense-matter EoS (see Ref. [13] and references therein). Moreover, two independent Bayesian estimations of the radius of the massive millisecond pulsar PSR J0740+6620 have also been reported [13,14]. Another unique probe of the properties of dense matter comes from the recent observation of GW signals emitted from a binary neutron star merger, GW170817 [15]. The tidal deformability, which denotes the mass quadrupole moment response of a neutron star to the strong external gravitational field induced by its companion [16][17][18][19][20], can be inferred from the GW signals.
The limits on the tidal deformability have been widely used to constrain the neutron star radius [21][22][23][24], the asymmetric nuclear matter EoS [25][26][27], and hence the neutron skin thickness of ²⁰⁸Pb [21]. Besides, as rotating objects, neutron stars have internal structures that are strongly constrained by the moment of inertia, which can be determined from measurements of spin-orbit coupling in double pulsar systems [28]. Such a measurement of the moment of inertia of a neutron star would have crucial implications for constraining the EoS significantly [29] and could be used to distinguish neutron stars from quark stars [30]. Special attention has been attracted by the system PSR J0737-3039 [28,31,32], which is the only currently known double pulsar system. It is hoped that the moment of inertia of the 1.338 M⊙ primary component of this system, PSR J0737-3039A, will eventually be measured to within 10% [33], which could be used to impose new constraints on the EoS [34]. The global properties of neutron stars, like masses, radii, tidal deformabilities, and moments of inertia, are highly sensitive to the EoS of neutron star matter [35][36][37]. Nevertheless, it has been shown [30,38] that, for slowly rotating neutron stars, there exist universal relations between the moment of inertia I, the tidal deformability Λ (or Love number), and the quadrupole moment Q of neutron stars, the so-called I-Love-Q relations, which are approximately independent of the internal composition and the EoS of neutron star matter. These universal relations have attracted wide attention (see Ref. [39] for a review). Although the reason for this universal behavior is not yet well understood [40,41], attempts have been made to combine the universal relations with GW detections to infer neutron star properties [42,43]. The robustness of the I-Love-Q relations has been extensively studied with EoSs constructed from a variety of nuclear models (see Refs. [39,44] and references therein), including the relativistic Brueckner-Hartree-Fock (RBHF) theory [45][46][47][48][49][50]. The RBHF theory has played an important role in the long history of the ab initio understanding of the properties of dense nuclear matter from realistic nucleon-nucleon (NN) interactions since the 1980s [51,52]. In the RBHF theory, the single-particle motion of the nucleon in nuclear matter is described by the Dirac equation, where the medium effects are absorbed into the single-particle potentials. In principle, the scalar and vector components of the single-particle potentials should be determined in the full Dirac space [53], i.e., by considering the positive-energy states (PESs) and negative-energy states (NESs) simultaneously. However, to avoid the difficulties induced by NESs, RBHF calculations have primarily been performed in the Dirac space without NESs [45][46][47][48][49][54]. Recently, a self-consistent RBHF calculation in the full Dirac space has been achieved for symmetric nuclear matter (SNM) [55,56] and asymmetric nuclear matter (ANM) [57]. By decomposing the matrix elements of the single-particle potential operator in the full Dirac space, the momentum-dependent scalar and vector components of the single-particle potentials are determined uniquely [55]. The long-standing controversy about the isospin dependence of the effective Dirac mass in relativistic ab initio calculations of ANM has also been clarified [57].
The RBHF theory in the full Dirac space has been applied to neutron stars [37,57], where the mass, radius, and tidal deformability are calculated with the realistic NN interactions Bonn A, B, and C [58]. The maximum mass of a neutron star is found to be less than 2.4 M⊙, and the radius of a 1.4 M⊙ neutron star is predicted to be about 12 km, consistent with the astrophysical observations of massive neutron stars and the simultaneous mass-radius estimations by NICER [12]. The tidal deformabilities for a 1.4 M⊙ neutron star are predicted as 376, 473, and 459 for the three parametrizations of the NN interaction, respectively, which all lie in the region Λ_1.4M⊙ = 190 (+390/−120) inferred from the revised analysis by the LIGO and Virgo collaborations [59]. In this work, we employ the RBHF theory in the full Dirac space to study other global properties of neutron stars, including the gravitational redshift, moment of inertia, and quadrupole moment under the slow-rotation and small-tidal-deformation approximation. The main focus will be the relation between the moment of inertia and the compactness parameter, as well as the universal I-Love-Q relations. This paper is organized as follows. In Sec. II, the theoretical framework of the RBHF theory and the structure equations for neutron star properties are briefly described. The obtained results and discussions are presented in Sec. III. Finally, a summary is given in Sec. IV.
A. The relativistic Brueckner-Hartree-Fock theory
In the RBHF calculations, one of the most important procedures is the self-consistent determination of the single-particle potential operator U of the nucleons, which is generally divided into scalar and vector components [60]:
U(p) = U_S(p) + γ⁰U_0(p) + γ·p̂ U_V(p).
Here p̂ = p/p is the unit vector parallel to the momentum p. The quantities U_S(p), U_0(p), and U_V(p) are the scalar potential, the timelike part of the vector potential, and the spacelike part of the vector potential, respectively. In principle, the scalar and vector components of the single-particle potentials can only be determined uniquely in the full Dirac space. However, to avoid the numerical difficulties in the full Dirac space, different approximations have been proposed to extract the single-particle potentials in the Dirac space without NESs. The momentum-independence approximation [46] assumes that the single-particle potentials are independent of the momentum and that the spacelike component of the vector potential U_V is negligible. The scalar potential U_S and the timelike part of the vector potential U_0 are then extracted from the single-particle potential energies at two selected momenta. In the projection method [49], the effective NN interaction G matrix, which is obtained by solving the in-medium scattering equation, is projected onto a complete set of five Lorentz invariant amplitudes, from which the single-particle potentials are calculated analytically. However, the choice of these Lorentz invariant amplitudes is not unique. Only by decomposing the matrix elements of U in the full Dirac space can the Lorentz structure and momentum dependence of the single-particle potentials be uniquely determined [53]. The theoretical framework of the RBHF theory in the full Dirac space has been described in detail in Ref. [55] for SNM and Ref. [37] for ANM. In this work this method is used to construct the EoS of neutron star matter, which is regarded as beta-equilibrium nuclear matter consisting of protons, neutrons, electrons, and muons [61].
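Before the stellar structure equations can be solved, the beta-equilibrium composition must be found at each baryon density. A minimal sketch of that solve is given below; the nucleon chemical potentials are crude placeholder parametrizations standing in for interpolated RBHF results, so only the solution logic (μ_n = μ_p + μ_e, μ_μ = μ_e, and charge neutrality n_p = n_e + n_μ) is meant literally.

```python
import numpy as np
from scipy.optimize import brentq

hbarc, m_e, m_mu = 197.327, 0.511, 105.658   # MeV fm, MeV, MeV

# Placeholder nucleon chemical potentials (MeV); a real calculation would
# interpolate these from the RBHF tables instead.
def mu_n(n_b, x_p): return 940.0 + 300.0 * n_b + 120.0 * (1 - 2 * x_p)
def mu_p(n_b, x_p): return 938.0 + 300.0 * n_b - 120.0 * (1 - 2 * x_p)

def lepton_density(mu, m):
    """Number density (fm^-3) of a free relativistic lepton gas at chemical
    potential mu (MeV) and mass m (MeV)."""
    return max(mu**2 - m**2, 0.0) ** 1.5 / (3 * np.pi**2 * hbarc**3)

def charge_balance(x_p, n_b):
    """Zero when n_p = n_e + n_mu, with mu_e = mu_mu = mu_n - mu_p."""
    mu_l = mu_n(n_b, x_p) - mu_p(n_b, x_p)
    return x_p * n_b - lepton_density(mu_l, m_e) - lepton_density(mu_l, m_mu)

n_b = 0.32                                    # fm^-3, an example density
x_p = brentq(charge_balance, 1e-4, 0.3, args=(n_b,))
print(f"proton fraction at n_b = {n_b} fm^-3: x_p = {x_p:.3f}")
```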
Using the relativistic Bonn A potential [58], the RBHF theory in the full Dirac space for nuclear matter is applicable for densities in the range 0.08-0.57 fm⁻³. For lower densities in the crust of the neutron star, the EoS introduced with the Baym-Bethe-Pethick (BBP) [62] and Baym-Pethick-Sutherland (BPS) models [63] is used. For higher densities, we follow the strategy proposed in Ref. [64] and applied in Refs. [57,65], where the neutron-star-matter EoS above a critical density ρc = 0.57 fm⁻³ is replaced with the maximally stiff or causal one, which predicts the most rapid increase of pressure with energy density without violating the causality limit.
B. Mass, radius, gravitational redshift, and tidal deformability
The stable configurations of a cold, spherically symmetric, and nonrotating neutron star can be obtained from the Tolman-Oppenheimer-Volkov (TOV) equations [66,67]. Adopting natural units G = c = 1, the TOV equations are given by
dP(r)/dr = −[E(r) + P(r)][M(r) + 4πr³P(r)] / {r[r − 2M(r)]},   dM(r)/dr = 4πr²E(r),
where P(r) is the pressure at neutron star radius r, M(r) is the total neutron star mass inside a sphere of radius r, and E(r) is the total energy density. These differential equations can be solved numerically with a given central pressure Pc and M(0) = 0. The quantity R for which P(R) = 0 denotes the radius of the neutron star, and M(R) is its mass. The gravitational redshift, which relates the mass of the neutron star to its radius, is defined as
z = [1 − 2M(R)/R]^(−1/2) − 1.
Since the radius of a neutron star is harder to observe than its mass, simultaneous measurements of the mass and the gravitational redshift would provide a clear radius determination. The tidal deformability is defined as
Λ = (2/3) k₂ C⁻⁵,
where C = M/R is the compactness parameter. The second Love number k₂ [17,68] is calculated by
k₂ = (8C⁵/5)(1 − 2C)²[2 + 2C(y_R − 1) − y_R] × {2C[6 − 3y_R + 3C(5y_R − 8)] + 4C³[13 − 11y_R + C(3y_R − 2) + 2C²(1 + y_R)] + 3(1 − 2C)²[2 − y_R + 2C(y_R − 1)] ln(1 − 2C)}⁻¹,
where y_R = y(R) is the solution of the following nonlinear, first-order differential equation:
r dy(r)/dr + y(r)² + y(r)F(r) + r²Q(r) = 0.
Here the two functions F(r) and Q(r) depend on the known mass, radius, pressure, and energy density profiles of the star:
F(r) = [1 − 4πr²(E(r) − P(r))] [1 − 2M(r)/r]⁻¹,
Q(r) = 4π[5E(r) + 9P(r) + (E(r) + P(r))/c_s²(r) − 6/(4πr²)] [1 − 2M(r)/r]⁻¹ − 4[(M(r) + 4πr³P(r)) / (r²(1 − 2M(r)/r))]²,
with c_s² = dP/dE the squared speed of sound. The differential equation (6) for k₂ can be solved together with the TOV equations and the initial condition y(0) = 2.
C. The moment of inertia
The moment of inertia is calculated under the slow-rotation approximation pioneered by Hartle and Thorne [69,70], where the frequency Ω of a uniformly rotating neutron star is far smaller than the Kepler frequency at the equator, Ω ≪ Ω_max ≈ (M/R³)^(1/2). In the slow-rotation approximation, the moment of inertia of a uniformly rotating, axially symmetric neutron star is given by the following expression [71]:
I = (8π/3) ∫₀ᴿ r⁴ [E(r) + P(r)] e^(−ν(r)) [1 − 2M(r)/r]^(−1/2) (ω̄(r)/Ω) dr.
The quantity ν(r) is a radially dependent metric function, defined by
dν(r)/dr = [M(r) + 4πr³P(r)] / {r[r − 2M(r)]},   with e^(2ν(R)) = 1 − 2M(R)/R.
The frame-dragging angular velocity ω̄ is usually obtained through the dimensionless relative frequency w̄ ≡ ω̄/Ω, which satisfies the following second-order differential equation:
(1/r⁴) d/dr [r⁴ j(r) dw̄(r)/dr] + (4/r) (dj(r)/dr) w̄(r) = 0,
where j(r) = e^(−ν(r)) [1 − 2M(r)/r]^(1/2) for r ≤ R. The relative frequency w̄(r) is subject to the following two boundary conditions:
dw̄/dr |_(r=0) = 0,   w̄(R) + (R/3) dw̄/dr |_(r=R) = 1.
It should be noted that under the slow-rotation approximation the moment of inertia does not depend on the stellar frequency Ω.
D. The quadrupole moment
It has been shown [30,38] that there exist universal relations between the moment of inertia, the Love number, and the quadrupole moment of neutron stars. Physically, the moment of inertia quantifies how fast a neutron star can spin for a fixed angular momentum, the quadrupole moment describes how much a neutron star is deformed away from sphericity due to rotation, and the Love number characterizes how easily a neutron star can be deformed by an external tidal field.
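The whole pipeline of Secs. B reduces to integrating a few coupled ODEs. The sketch below integrates the TOV equations together with the tidal equation for y(r) and then evaluates z, k₂ and Λ. A simple polytrope stands in for the tabulated RBHF EoS, and the central density is an arbitrary example (geometrized units G = c = 1, lengths in km), so this is a structural sketch under stated assumptions rather than a reproduction of the paper's code.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical polytropic EoS, P = K * E**Gamma, standing in for the RBHF tables.
K, Gamma = 100.0, 2.0
def pressure(E): return K * E**Gamma
def energy(P):   return (P / K) ** (1.0 / Gamma)

def rhs(r, s):
    """TOV equations coupled with the tidal equation for y(r)."""
    P, M, y = s
    E = energy(max(P, 0.0))
    dPdr = -(E + P) * (M + 4*np.pi*r**3*P) / (r * (r - 2*M))
    dMdr = 4*np.pi*r**2*E
    cs2 = K * Gamma * E**(Gamma - 1)                 # c_s^2 = dP/dE
    f = (1 - 4*np.pi*r**2*(E - P)) / (1 - 2*M/r)     # F(r) from the text
    q = (4*np.pi*(5*E + 9*P + (E + P)/cs2 - 6/(4*np.pi*r**2)) / (1 - 2*M/r)
         - 4*((M + 4*np.pi*r**3*P) / (r**2*(1 - 2*M/r)))**2)
    dydr = -(y**2 + y*f + r**2*q) / r
    return [dPdr, dMdr, dydr]

def k2(C, y):
    """Second tidal Love number (Hinderer's expression)."""
    return (8/5 * C**5 * (1 - 2*C)**2 * (2 + 2*C*(y - 1) - y)
            / (2*C*(6 - 3*y + 3*C*(5*y - 8))
               + 4*C**3*(13 - 11*y + C*(3*y - 2) + 2*C**2*(1 + y))
               + 3*(1 - 2*C)**2*(2 - y + 2*C*(y - 1))*np.log(1 - 2*C)))

Pc = pressure(1e-3)                          # example central energy density
r0 = 1e-4
s0 = [Pc, 4/3*np.pi*r0**3*energy(Pc), 2.0]   # initial condition y(0) = 2
hit = lambda r, s: s[0] - 1e-12*Pc           # stop at the surface, P -> 0
hit.terminal, hit.direction = True, -1
sol = solve_ivp(rhs, (r0, 1e3), s0, events=hit, rtol=1e-8, atol=1e-12)
R, M, yR = sol.t[-1], sol.y[1, -1], sol.y[2, -1]
C = M / R
print("z =", (1 - 2*C)**-0.5 - 1, "  Lambda =", 2/3 * k2(C, yR) * C**-5)
```

The moment of inertia follows the same pattern: the ODE for w̄(r) is integrated alongside the TOV equations and rescaled once so that the surface boundary condition is satisfied.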
These quantities can be computed by numerically solving for the interior and exterior gravitational field of a neutron star in the slow-rotation [69,70] and small-tidal-deformation [17,68] approximations. In this work the quadrupole moment is calculated by following the detailed instructions described in Ref. [38]. In order to investigate the universal I-Love-Q relations, the following dimensionless quantities are introduced:
Ī ≡ I/M³,   Q̄ ≡ −Q M/J²,   (15)
where J is the angular momentum of the star.
III. RESULTS AND DISCUSSIONS
The clear one-to-one correspondence between gravitational redshift and mass established in the left panel of Fig. 1 can be used to infer the mass of an isolated neutron star when an observation of the gravitational redshift is provided. In Fig. 2 we display the ratio of the moment of inertia I to MR² as a function of the compactness parameter M/R obtained by the RBHF theory in the full Dirac space with NN interactions Bonn A, B, and C. Lattimer et al. [29] have shown that, in the absence of phase transitions and other effects that strongly soften the EoS at supra-nuclear densities, there is a relatively unique relation between the quantity I/MR² and M/R:
I/MR² ≃ 0.237 [1 + 4.2 (M/M⊙)(km/R) + 90 (M/M⊙)⁴(km/R)⁴].
This relation is shown as the purple band in Fig. 2. It is found that our results are consistent with the universal relation obtained in Ref. [29] in the range M/R > 0.08 M⊙/km. The relation inferred from Bayesian posterior probability distributions by Lim et al. [35] is also shown, as the orange dashed line in Fig. 2. It can be seen that our results are very close to those obtained by Lim et al., especially for neutron stars with small compactness. Although the ratio of the neutron star moment of inertia I to MR² is a nearly universal function of the compactness parameter M/R, the moment of inertia itself depends sensitively on the neutron star's internal structure. It has been suggested [29] that a measurement accuracy of 10% for I is sufficient to place strong constraints on the EoS. In Tab. II, we show the moments of inertia I_1.338M⊙ and radii R_1.338M⊙ for PSR J0737-3039A predicted by the RBHF theory in the full Dirac space with the three parametrizations of the NN interaction. The results obtained by the projection method [49,74] and the momentum-independence approximation [46] are also shown, together with the value predicted from the tidal deformability in GW170817 combined with universal relations [42] and that inferred from Bayesian analysis (95% credibility) [35]; the last column of the table is the corresponding radius. The RBHF theory in the full Dirac space leads to the minimum values compared to the approximations in the Dirac space without NESs. This is understandable, since the RBHF theory in the full Dirac space gives the minimum neutron star radius for the fixed canonical mass: starting from Bonn A, the radii R_1.4M⊙ of a 1.4 M⊙ neutron star from these three models are 11.98, 12.38, and 12.35 km, respectively [37]. The moment of inertia for PSR J0737-3039A predicted by the RBHF theory in the full Dirac space with Bonn A is 1.356 × 10⁴⁵ g cm², which is very close to the most probable value 1.36 × 10⁴⁵ g cm² obtained from Bayesian analysis (95% credibility) [35]. The result from the non-relativistic ab initio variational calculations [76] is also shown in Tab. II, where the Argonne v18 interaction (AV18) [77] is used, together with the relativistic boost corrections to the two-nucleon interaction as well as three-nucleon interactions modeled with the Urbana force [78]. The non-relativistic ab initio calculation leads to a moment of inertia smaller than what we obtain, similar to the case for the radius. In Ref.
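Once Ī, Λ and Q̄ are tabulated along a sequence of stellar models, checking the universal relations is just a matter of evaluating the quartic-in-log fit. The sketch below does this for the Ī-Love relation; the coefficients are the values commonly quoted from Yagi and Yunes [38,39] and are illustrative here — the paper's own Tab. III values should be substituted.

```python
import numpy as np

# Quartic-in-log universal fit:
# ln y = a + b ln x + c (ln x)^2 + d (ln x)^3 + e (ln x)^4.
# Coefficients: commonly quoted Yagi-Yunes I-Love values (illustrative).
C_ILOVE = dict(a=1.47, b=0.0817, c=0.0149, d=2.87e-4, e=-3.64e-5)

def universal_fit(x, a, b, c, d, e):
    """Evaluate the fit y(x) for a dimensionless observable pair."""
    lx = np.log(x)
    return np.exp(a + b*lx + c*lx**2 + d*lx**3 + e*lx**4)

lam = 400.0                              # dimensionless tidal deformability
ibar = universal_fit(lam, **C_ILOVE)     # predicted Ibar = I / M^3
print(f"Ibar({lam:.0f}) = {ibar:.2f}")
# Relative deviation of a computed EoS point from the fit:
# dev = abs(1 - ibar_data / universal_fit(lam_data, **C_ILOVE))
```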
[42], by using well-known universal relations among neutron star observables, the reported 90% credible bound on the tidal deformability from GW170817, Λ_1.4M⊙ = 190 (+390/−120) [59], was translated into a direct constraint on the moment of inertia of PSR J0737-3039A. It can be seen that the results of the three RBHF methods are consistent with this constraint. Let us now confront the EoSs obtained by the relativistic ab initio calculations, i.e., the RBHF theory in the full Dirac space, the projection method, and the momentum-independence approximation with Bonn potentials, with the universal I-Love-Q relations. The I-Love and Q-Love relations and the I-Q relations are shown in the top panels of Fig. 3 and Fig. 4, respectively. The dimensionless moment of inertia Ī and dimensionless quadrupole moment Q̄ are defined in Eq. (15). The single parameter along each curve is the mass or compactness, which increases to the left of the plots. As in Ref. [39], we only show data with the mass of an isolated, non-rotating configuration in the range 1 M⊙ < M < M_max, with M_max representing the maximum mass of such a configuration. One observes that the universal relations hold very well. Since the relations are insensitive to the EoS, one can construct a single fit (black solid curves) given by [38,39]
ln y = a + b ln x + c (ln x)² + d (ln x)³ + e (ln x)⁴,
where the coefficients are listed in Tab. III. Our results are also consistent with the fit of Ref. [42], where the I-Love relation is obtained by using a large set of candidate neutron star EoSs based on relativistic mean-field and Skyrme-Hartree-Fock theory.
IV. SUMMARY
In summary, the RBHF theory in the full Dirac space has been employed to study the gravitational redshift, moment of inertia, and quadrupole moment of neutron stars under the slow-rotation and small-tidal-deformation approximation. The one-to-one correspondence between gravitational redshift and mass is established and used to infer the masses of isolated neutron stars in combination with gravitational redshift measurements. The ratio of the moment of inertia I to MR² as a function of the compactness M/R is obtained, which is consistent with the universal relations shown by Lattimer et al. [29] and that from the Bayesian posterior probability distributions of Lim et al. [35]. Using the NN interactions Bonn A, B, and C, the moment of inertia of the 1.338 M⊙ pulsar PSR J0737-3039A is predicted to be 1.356 × 10⁴⁵, 1.381 × 10⁴⁵, and 1.407 × 10⁴⁵ g cm², respectively.
2022-06-20T01:15:40.522Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "c8eb94bf764bb5454f65faf5a32827a2bc023d1c", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "c8eb94bf764bb5454f65faf5a32827a2bc023d1c", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
208549081
pes2o/s2orc
v3-fos-license
Familial intellectual disability as a result of a derivative chromosome 22 originating from a balanced translocation (3;22) in a four-generation family
Background: A balanced reciprocal translocation is usually an exchange of two terminal segments from different chromosomes without phenotypic effect on the carrier, while leading to an increased risk of generating unbalanced gametes. Here we describe a four-generation family in Shandong province, China, with at least three patients sharing severe intellectual disability and developmental delay resulting from a derivative chromosome 22 originating from a balanced translocation (3;22) involving chromosomes 3q28q29 and 22q13.3.
Methods: The proband and his relatives were examined using karyotyping, chromosome microarray analysis, fluorescence in situ hybridization and real-time qPCR.
Results: The proband, a 17-month-old boy, presented with severe intellectual disability, developmental delay, specific facial features and a special posture of the hands. Pedigree analysis showed that there were at least three affected patients. The proband and the other two living patients manifested similar phenotypes and were identified as having an identical abnormal cytogenetic result, an unbalanced translocation with a 9.0 Mb duplication at 3q28q29 and a 1.7 Mb microdeletion at 22q13.3, by karyotyping and chromosome microarray analysis. His father and five other relatives had a balanced translocation of 3q and 22q. Fluorescence in situ hybridization and real-time qPCR definitively validated these results.
Conclusions: The abnormal phenotypes of the proband and the two living affected members across four generations of the family confirmed the 3q duplication and 22q13.3 deletion inherited from the familial balanced translocation. This is the first report of a familial balanced reciprocal translocation involving chromosomes 3q28q29 and 22q13.3 segregating through four generations.
Background
Balanced reciprocal translocation, the most common chromosomal rearrangement in humans, is usually an exchange of two terminal segments from different chromosomes without loss of genetic material and occurs in 0.16%-0.20% (1/625-1/500) of live births [1][2][3]. Almost all balanced translocations have no phenotypic effect on the carrier but lead to an increased risk of generating unbalanced gametes. Among the significant hazards are unfavorable pregnancy outcomes such as recurrent miscarriages, stillbirths, early newborn deaths, or offspring with birth defects, due to the different forms of unbalanced gametes produced during the meiotic segregation of chromosomes. Meiotic segregation of a reciprocal translocation produces gametes with a variety of combinations of normal and translocated chromosomes; the partial chromosome complements of 32 possible zygotes could be produced by the union of gametes from a translocation carrier parent and a non-carrier parent [4]. Here, we describe a four-generation Chinese family with six individuals carrying a karyotypically balanced chromosomal translocation t(3;22)(q28;q13) and manifesting a normal phenotype, while three patients with severe intellectual disability and developmental delay carry the 3q28q29 duplication and 22q13.33 deletion. Conventional cytogenetic analysis combined with chromosome microarray identified the submicroscopic imbalances, deciphering the etiology of these patients in the family.
We reviewed the literature on partial trisomy 3q associated with 3q duplication syndrome [5] and on 22q13.3 microdeletion syndrome [6], and discuss the genotype-phenotype correlation related to this case.
Clinical description
The proband, a 17-month-old boy, is the first child of a couple of unrelated healthy parents. His mother had previously had one abortion (see Fig. 1 for the pedigree chart of the family) without obvious inducing factors. The boy's gestation period was normal. He was born at 40 weeks of gestation with a weight of 3.2 kg and a length of 50 cm. His head circumference was 34 cm and his Apgar score was 10. His hearing test was normal. The boy showed normal development at birth. From 6 months of age, he gradually displayed developmental retardation. At the age of 17 months, his mental and motor development was significantly delayed. He weighed 12 kg and was 80 cm tall; his head circumference was 44 cm. He showed facial dysmorphism with protruding forehead, bushy eyebrows, big eyes, hypertelorism, big cup ears, low nasal bridge, downturned corners of the mouth, pointed jaw, and apathy, as well as a special posture of the hands, including ulnar deviation of both hands at the state of relaxation, the middle finger held straight while the other fingers bend, strong proximal finger bellies, and grasping only with the thumb and middle finger. His development was tested with the Gesell Developmental Observation-Revised (GDO-R), demonstrating extremely severe developmental delay for adaptability (score: 22 points), moderate delay for gross motor (score: 41 points) and fine motor (score: 51 points), and severe delay for language (score: 38 points) and personal-social interaction (score: 36 points). At presentation, he could occasionally and unconsciously mumble "mama, baba", but could neither walk nor follow instructions. He made little eye contact, with only casual eye tracking. Other examinations, including magnetic resonance imaging of the brain, electroencephalogram (EEG), and cardiac and abdominal ultrasound, were all normal. A total of 22 family members across four generations were investigated. Two additional family members (III:3 and III:7) were found to manifest the same phenotypes as the proband. The oldest living patient (III:7) was a 21-year-old male presenting severe intellectual disability, speech disorder, motor retardation, specific facial features and a special posture of the hands.
G-banding karyotyping
Peripheral blood leukocytes from the proband and other family members were stimulated with phytohemagglutinin. Routine cytogenetic analysis by G-banding techniques at the 400-band level of resolution was performed using imaging software, according to the International System for Human Cytogenetic Nomenclature (ISCN, 2016).
Fluorescence in situ hybridization (FISH) analysis
To verify the balanced translocation found by karyotyping and to validate the obligate carriers in the family, subtelomeric FISH studies were performed using Agilent SureFISH probes (Agilent, Beijing, China) for 3p26.2 (SureFISH 3p26.2 CNTN4 RD, Orange Red) and 3q29 (SureFISH 3q29 WDR53 211 kb, Green), and for 22q13.33 (SureFISH 22q13.33 SHANK3, Green) and 22CEP (SureFISH 22CEP, Orange Red), according to the manufacturer's procedure. Selected subtelomeric probes were used on carriers and healthy family members to identify balanced carriers. The 22CEP probe is not a centromere-specific probe but a locus-specific, centromere-near probe at 22q11.
Chromosome microarray analysis (CMA)
Chromosome microarray analysis was performed for the proband using the Affymetrix CytoScan HD array (Affymetrix, Santa Clara, CA), and data were analyzed with the Chromosome Analysis Suite (ChAS) software (Affymetrix, Santa Clara, CA) using the following filtering criteria: deletions > 5 kb (a minimum of five markers) and duplications > 10 kb (a minimum of 10 markers). DNA digestion, ligation, fragmentation, labeling, hybridization, staining and scanning were performed following Affymetrix's protocol. The Database of Genomic Variants (GRCh37/hg19), OMIM, DECIPHER and ISCA were used to evaluate the array data and analyze the genotype-phenotype correlation.
Real-time quantitative PCR validation
To verify the chr22q13.3 microdeletion in the patients of the family, a pair of primers was designed to target the deleted gene SHANK3 (chr22:50674415-50733298) using the online primer design tool Primer3 (http://primer3.ut.ee/); the primers were synthesized by Shanghai Invitrogen Biotechnology Company (Shanghai, China). Assays were carried out in accordance with the manufacturer's recommendations on the 7500 Real-Time PCR system (Applied Biosystems, Foster City, California). Copy number variations were determined based on the ratio of deletion-fragment copies to reference gene (GAPDH) copies in the samples. Genomic DNA samples from a normal male and a normal female individual were used simultaneously as two control samples. Each qPCR was carried out in triplicate with the SYBR Premix Ex Taq II PCR reagent kit (TaKaRa Bio, Dalian, China) according to the manufacturer's protocol.
Results
Karyotyping
Fifty metaphase cells were examined for the proband and the other family members. An apparently abnormal karyotype was identified in the proband (IV:5) and two living relatives (III:3 and III:7) as der(22)t(3;22).
FISH
FISH was performed for predicted carriers and healthy members of the family. The results showed that the 22q13.33 signal (red) separated from the 22CEP signal and translocated onto 3q in the proband's father (III:8) and in five other members (I:2, II:1, II:3, II:6 and III:2), indicating that these six family members were obligate carriers of the balanced translocation of 3q and 22q (Fig. 3a, b). The abnormal unbalanced translocation in the proband was inherited from his balanced-translocation-carrier father.
Real-time quantitative PCR
Real-time quantitative PCR performed on the family and on two control samples (healthy male and female individuals) showed that the proband and the living patient (III:7) had a deletion of the SHANK3 gene in 22q13.33, while the proband's parents were the same as the controls (Fig. 4c).
Discussion
Intellectual disability (ID) is a common, variable and heterogeneous manifestation of central nervous system dysfunction, affecting 1-3% of the population [6]. Unfortunately, a specific etiology can be identified in less than one-half of ID cases [7,8]. In this study, we described a familial balanced reciprocal translocation involving chromosomes 3q and 22q segregating through four generations, in which six family members (I:2, II:1, II:3, II:6, III:2 and III:8) carry the balanced translocation. The proband was first referred to our hospital for his severe physical and mental retardation, and then at least three patients with the same phenotypes were found in his four-generation family by means of the pedigree investigation, which highlighted the potential chromosomal abnormalities in the family.
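The copy-number call from the qPCR ratio can be sketched with the comparative-Ct method: a heterozygous SHANK3 deletion should give a target/reference ratio near 0.5 once normalized to a diploid control. The Ct values below are illustrative placeholders, not the study's data.

```python
import numpy as np

def copy_number_ratio(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """SHANK3/GAPDH ratio in a sample, normalized to a diploid control (2^-ddCt)."""
    ddct = (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** (-ddct)

# Triplicate Ct values (illustrative); ratio ~1 = two copies, ~0.5 = one copy.
ratio = copy_number_ratio(
    ct_target=np.mean([27.8, 27.9, 27.7]),   # SHANK3, patient
    ct_ref=np.mean([25.0, 25.1, 24.9]),      # GAPDH, patient
    ct_target_ctrl=26.8,                     # SHANK3, diploid control
    ct_ref_ctrl=25.0,                        # GAPDH, diploid control
)
print(f"relative SHANK3 copy number: {ratio:.2f}")
```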
By using G-banding karyotyping analysis, an abnormal long arm of chromosome 22 was found in the proband. To study the etiology of the proband, CMA was applied and showed the derivative chromosome 22 with a 9.0 Mb duplication of 3q28q29 and a 1.7 Mb microdeletion of 22q13.33. It is known that the subtelomeric regions, which are among the gene-rich regions of the genome, are particularly prone to recombination and are therefore often involved in chromosomal rearrangements; subtle rearrangements at the telomere regions may cause unexplained ID [9], and many of these subtelomeric deletions or duplications are now recognized as clinically recognizable phenotypes [9,10]. According to the literature, duplications of 3q26.3-3q29, the minimal critical region, can cause 3q duplication syndrome [11], and the 22q13.33 microdeletion is associated with 22q13.3 deletion syndrome (Phelan-McDermid syndrome, PMS) [12]. Arıkan et al. [13] summarized the most common abnormal features of 3q duplication syndrome, such as facial dysmorphism (hypertrichosis, prominent eyelashes, bushy eyebrows, broad nose with anteverted nares and depressed nasal bridge, hypertelorism, epicanthic folds, long philtrum, micrognathia, low anterior hairline, malformed auricles), limb anomalies (rhizomelic shortening of the limbs, hypoplasia of the phalanges, camptodactyly and clinodactyly), congenital heart defects (septal defects), renal malformations (polycystic kidneys or dysplasia), seizures, brain malformations, and so on. However, partial trisomy 3q cases with duplications of different segments show significant differences from each other. Approximately 60-75% of cases with distal 3q duplication have a concomitant deletion of another chromosomal segment through an unbalanced translocation, and the deletion of that chromosomal segment can contribute to the phenotype, as in the present case [14,15]. Our case has characteristic features of 3q duplication syndrome, such as the facial features (protruding forehead, bushy eyebrows, hypertrichosis, low nasal bridge, and malformed auricles), the special posture of the hands (camptodactyly and clinodactyly) and the severe developmental delay (mental, motor, and language). It has been reported that EPHB3, CLDN1 and CLDN16, located at 3q26.31-q29, are important in 3q duplication syndrome [11]. Our case has a 9.0-Mb duplication of 3q28q29 encompassing CLDN1 and CLDN16. CLDN1 (OMIM 603718) and CLDN16 (OMIM 603959), located at 3q28, encode Claudin 1 and Claudin 16, respectively, which are epithelial or endothelial cell-to-cell adhesion tight junction proteins. Loss-of-function mutations in Claudin 1 can result in neonatal ichthyosis-sclerosing cholangitis syndrome [16,17], while mutations in CLDN16 can cause familial hypomagnesemia with hypercalciuria and nephrocalcinosis (FHHNC), a rare autosomal recessive renal disease [18]. The detailed mechanism by which haploinsufficiency of CLDN1 and CLDN16 causes these disorders remains to be elucidated. It is known that duplication of the 3q28 fragment encompassing CLDN16 is associated with multiple congenital abnormalities including coarctation of the aorta, atrial septal defect (ASD) and ventricular septal defect (VSD), hypertrichosis and umbilical hernia/omphalocele [19]; our patients presented the phenotype of hypertrichosis, in which CLDN16 might play an important role.
In addition, the present case was identified as having a 1.7 Mb 22q subtelomeric deletion associated with 22q13.3 deletion syndrome, also known as Phelan-McDermid syndrome (PMS) [20]. PMS is characterized by developmental delay, absent or impaired speech, neonatal hypotonia, autistic traits and mild dysmorphic features. Shank3, encoded by the SHANK3 gene (also known as PROSAP2; OMIM 606230), is a postsynaptic scaffolding protein with key roles in spine shape/maturation, localization of glutamate receptors, and growth cone motility [21]. Mutations of the SHANK3 gene have been considered responsible for the neurological features of the PMS phenotype [22]. The association between deletion size and phenotype has expanded the genomic region of interest in PMS: small deletions are mainly related to autism spectrum disorders, whereas large deletions are prone to cause severe phenotypes. In the DECIPHER database, all deletions referred to PMS, ranging in size from 100 kb to over 9 Mb, contain SHANK3; 75% of PMS cases carry simple terminal deletions, while approximately 25% of cases comprise translocations in the 22q13 region, ring chromosome 22 and mosaics [23,24]. Our case had a 1.7 Mb deletion including 38 RefSeq genes, and the affected family members presented PMS phenotypes: severe developmental delay, hypotonia, speech/language delay, and facial dysmorphism. We deduce that the large-fragment variations and the genes involved in the unbalanced translocation of 3q and 22q contributed to the severe phenotypes.
Conclusions
In this study, we presented the molecular cytogenetic characterization of a 3q28q29 duplication and 22q13.33 microdeletion in a proband with severe intellectual disability and developmental delay. This is the first report of a familial reciprocal translocation t(3;22)(q28;q13.3) segregating through four generations. Three patients carrying the unbalanced condition with der(22)t(3;22) were found. It has been reported that a cross-like quadrivalent configuration in gametocytes of reciprocal translocation carriers is usually observed [3,25]. The meiotic segregation patterns of the quadrivalent are alternate, adjacent-1, adjacent-2 and 3:1. We deduce that the unbalanced state in the patients likely arose from an adjacent-1 segregation, in which der(22)t(3;22) was rearranged. Alternatively, chromosome pairs 3 and 22 may have been arranged as bivalents instead of a quadrivalent (as is usual for a translocation) in meiosis; however, the homologous chromosomes would then fail to pair along up to 9.0 Mb of the 3q subtelomere, a great hazard for such a long unmatched fragment in meiosis. Meanwhile, most research has shown that the actual proportion of normal and balanced-translocation zygotes among the offspring of reciprocal translocation carriers is much higher than the theoretical 2:32. It is also interesting that, up to now, only male carriers of the balanced translocation have produced offspring with der(22) in this family. All of the above need to be further validated by examining the pairing configurations occurring at the pachytene substage. We reviewed the literature on 3q duplication syndrome and 22q13.3 microdeletion syndrome and discussed the genotype-phenotype correlation in this case, suggesting that prevention of recurrent intellectual disability in this family can be achieved through carrier screening and prenatal genetic diagnosis.
Availability of data and materials
All data and materials are available in the article.
Authors' contributions
KZ, YL and ZG participated in the study design. YH and YY made the clinical diagnosis. KZ and YZ carried out the karyotyping analysis. YW, HZ, RD and YL performed the molecular genetic studies. KZ, YL and ZG contributed reagents, materials and analysis tools. KZ, RD and YL drafted the manuscript. All authors read and approved the final manuscript.
Ethics approval and consent to participate
The work was approved by the Medical Ethics Committee of Qilu Children's Hospital of Shandong University. Informed consent was obtained from the proband's parents and other members of the family for publication of this case report and any accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal.
Consent for publication
The proband's parents and his family members all consented to publication in this journal.
2018-02-20T23:58:24.096Z
2018-02-20T00:00:00.000
{ "year": 2018, "sha1": "0b8248ffe5bda42fbf3e4091f32dbd9d7d630ee5", "oa_license": "CCBY", "oa_url": "https://molecularcytogenetics.biomedcentral.com/track/pdf/10.1186/s13039-017-0349-x", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2435ad40d3e0e789a1faca6bba4cb7e53396a585", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
225944721
pes2o/s2orc
v3-fos-license
Correction of English Translation Accuracy Based on a Poisson Log-linear Model
In conventional machine translation methods, a pipelined sequential operation is used to perform part-of-speech identification and syntactic analysis on the raw corpus to obtain the syntactic structure of the English language; this introduces iterative transfer errors between translation tasks and reduces the accuracy of structured instances, resulting in reduced accuracy in English language and literature translation. In this paper, a Poisson log-linear model that stores the corresponding bilingual corpus as Chinese-English dependency-tree-to-string instances is designed to implement dependency-structured processing on the source-language end and to ensure that English translations are further proofread through the data-oriented translation model over the Chinese-English bilingual correspondences. The experimental results show that translations with high accuracy can be obtained with the proposed method, which is highly accurate and stable.
Introduction
Machine translation is a key area of natural language processing with high application value [1][2]. Case-based machine translation is an empirical English language and literature translation strategy that does not require complex deep-level grammatical and semantic analysis, and it can improve the efficiency of English language translation [3][4]. However, the instance-based machine translation method places high requirements on the quality of the instance library. Conventional machine translation methods use pipelined sequential operations to implement part-of-speech identification and syntactic analysis on the raw corpus to obtain the syntactic structure of the English language, which causes iterative transmission of errors between translation tasks and reduces the accuracy of structured instances [5,6]. To solve this problem, we studied methods for machine translation accuracy in English language and literature and designed and implemented a machine translation system based on Chinese-English dependency-tree-to-string instances, which improved the accuracy of English machine translation.
Dependency-tree-to-string model
The dependency-tree-to-string model is <D, S, A>, where <D, S> is a translation pair, D represents the dependency tree of the source language, S represents the target word string of the source language, and A describes the alignment relationship between D and S. An instance of the word alignment relationship based on the dependency-tree-to-string bilingual alignment model is shown in Figure 1. Figure 1 includes the words and part-of-speech features in each tree. The label under each word indicates the part of speech corresponding to the vocabulary item, such as NN for nouns, VV for verbs, and JJ for adjectives. Lines between words describe the dependencies between them. At the bottom of the instance is the English string sequence S corresponding to the Chinese sentence. The upper and lower dashed lines describe the alignment relationship between Chinese word nodes and English words.
Lexical semantic similarity in the Poisson log-linear model
The conceptual similarity of words can be described by the similarity of the meanings of their concepts, and the similarity of the meanings of two concepts is calculated using formula (2):
Sim(p1, p2) = α / (d + α),   (2)
where α represents a controllable parameter and d represents the path distance between the two sememes in the sememe tree; its value is non-negative.
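Formula (2) is easy to make concrete. The sketch below implements the path-distance similarity; the value of α is an assumption for illustration (it is a tunable parameter, not a value given in the paper).

```python
ALPHA = 1.6  # controllable parameter alpha; the value is an assumption

def similarity(d: int, alpha: float = ALPHA) -> float:
    """Sim(p1, p2) = alpha / (d + alpha) for sememes at path distance d >= 0."""
    return alpha / (d + alpha)

print(similarity(0))  # identical sememes -> 1.0
print(similarity(4))  # more distant sememes -> lower similarity
```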
The Poisson log-linear model uses a multi-feature decision framework. For a given source sentence f, a candidate translation e is formed. The maximum-entropy translation model is shown in equation (3):
e* = argmax_e Σ_{m=1}^{M} λ_m h_m(e, f),   (3)
where the h_m(e, f) are feature functions and the λ_m are their weights. The Poisson log-linear model is highly scalable: corresponding features can be set for different target requirements, and a variety of linguistic methods can be applied to machine translation. Feature functions such as forward and backward translation probabilities and target language models are the main features used in machine translation systems. Based on the actual requirements of the translation system, feature functions and corresponding weights are set, and the optimal translation, i.e., the generated translation with the highest score, is obtained according to formula (3).
Implementation of the machine translation system
The Sato & Nagao method is used to describe the dependency mechanism. The source-language dependency trees of the dependency-tree-to-string-aligned instances are formalized. A matching-description method is used to detect instance fragments in the instance database and to process input sentences for similar-instance detection. Matching expressions can be modified in three ways: replacement, filtering, or addition. In the target word string, which carries no dependency-tree structural layer, the corresponding translation expression changes accordingly. The following shows the structure of the source-language dependency-tree-to-string instance D3 and instance D5 in the instance library: D3
In the instance database, identifiers such as e21 and e51 are labeled in word order, and the target word string S is labeled with "e" in the prescript. For the instance sentence "She bought an English book", integrating the source-language instances D3 and D5 yields [c21, [r, c23 [c51]]] as one of the corresponding matching expressions. The target translation of the input sentence obtained through the target matching expression is: "I buy a politics book". According to the Poisson log-linear model in this paper, the feature functions used are: 1) Forward and reverse translation probabilities. When the number of words is the same, the more words the candidate sentence shares with a translation instance, the more accurate the translation produced with this feature function. 2) Language model. The quality of the generated translation is measured by this function, which improves the fluency of the translation. In this paper, the language model of the target language is used to find the probability of translation fragments in the target language.
Experimental settings
The experimental corpus is the Chinese-English news corpus used in the official evaluation of CWMT 2018. About 420,000 English-Chinese parallel sentence pairs were collected from it and used as the initial corpus of the bilingual instance database. The test set of the official CWMT 2018 evaluation is used as the test set; the experimental corpus is shown in Table 1.
Experimental results and analysis
In order to test the effectiveness of the proposed system, the experiment was performed on the corpus of Table 1. A comparative analysis of the translation results of our system, a semantic language-based machine translation system, and an open-source statistical machine translation system is shown in Table 2. BLEU in Table 2 compares n-gram fragments of the translation under evaluation with the reference translation.
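The decision rule of equation (3) amounts to scoring each candidate translation by a weighted sum of its feature values and keeping the best one. The sketch below shows that mechanics; the feature names, values and weights are placeholders, not the paper's trained parameters.

```python
# Log-linear candidate scoring for equation (3); all numbers are placeholders.
def score(features: dict, weights: dict) -> float:
    """score(e, f) = sum_m lambda_m * h_m(e, f)."""
    return sum(weights[name] * value for name, value in features.items())

candidates = {
    "translation A": {"log_p_fwd": -4.1, "log_p_rev": -3.8, "log_lm": -6.2},
    "translation B": {"log_p_fwd": -3.6, "log_p_rev": -4.0, "log_lm": -5.5},
}
weights = {"log_p_fwd": 1.0, "log_p_rev": 0.6, "log_lm": 0.8}

best = max(candidates, key=lambda e: score(candidates[e], weights))
print(best, score(candidates[best], weights))
```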
The higher the number of matching fragments calculated, the better the quality of the translation under evaluation. NIST is another measurement standard for translation quality assessment; it evaluates the quality of translations per unit quantity, and the higher the value, the better the translation quality. Analysis of Table 2 shows that the BLEU and NIST values of the proposed system are higher than those of the other two systems, which indicates that the proposed machine translation system has better performance. Hence, it is an effective method for English language and literature translation. Partial translations were collected from the translation results of the three translation systems and analyzed. Table 3 shows the translations obtained for the test sentence "The information industry shows a rapid development trend". The differences between the three translation systems in rendering "rapid development" were analyzed, as shown in Table 3. The open-source statistical machine translation system translates it as "fast change", and the semantic language-based machine translation system translates it as "keeping the momentum going", which deviates strongly from the raw wording and does not conform to English grammar and semantics. Although the translation of this phrase by our system is inconsistent with the reference translation, its semantics meet the requirements with high accuracy.
Table 3. Translations obtained by different translation systems
Raw: Information industry shows the rapid development
Reference translation: The information industry is developing rapidly
Machine translation system based on semantic language: The information industry is keeping the momentum going
Open-source statistical machine translation system: The information industry is developing rapidly
System of this paper: The information industry is a high-speed development situation
Table 4 and Table 5 present the translation results of our system and of the semantic language-based machine translation system from English to Chinese and from Chinese to English, respectively. The first column in the two tables is the average number of translation results per sentence for each system. The average for our system is smaller than that of the semantic language-based machine translation system, indicating that our system produces fewer inaccurate results. The second column in the two tables analyzes the recall rate of accurate translations in the translation results, i.e., the proportion of accurate translations; it can be seen that the recall rate of our system is higher. Analysis of the 3rd and 4th columns in the two tables shows that the correct-translation rates of the first and of the first two translation results of our system are 8-9 and 11-13 percentage points higher, respectively, than those of the semantic language-based machine translation system. Comprehensive analysis of these results suggests that our system systematically improves the accuracy of translation results and achieves high accuracy in English language and literature translation.
Conclusions
In this paper, methods for machine translation accuracy in English language and literature were studied, and a machine translation system based on the Poisson log-linear model was created and implemented to accomplish accurate translation of English language and literature.
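For reference, the BLEU comparison used in Table 2 can be sketched as modified n-gram precision with a brevity penalty. This is a simplified illustration with crude smoothing, not the official evaluation script.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Count the n-grams of a token sequence."""
    return Counter(tuple(tokens[i:i+n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU sketch: geometric mean of modified n-gram
    precisions times a brevity penalty."""
    c, r = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand, ref = ngrams(c, n), ngrams(r, n)
        overlap = sum(min(count, ref[g]) for g, count in cand.items())
        total = max(sum(cand.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)  # crude smoothing
    bp = 1.0 if len(c) > len(r) else math.exp(1 - len(r) / max(len(c), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

print(bleu("the information industry is developing rapidly",
           "the information industry is developing rapidly"))  # ~1.0
```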
2020-06-25T09:06:51.857Z
2020-04-01T00:00:00.000
{ "year": 2020, "sha1": "c85c6d68cab69bbe30e504fceedb61b42491bc92", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1533/2/022049", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "68c93c8daf41c47c606515a3bd1dcb10aec874fe", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Mathematics" ] }
236256720
pes2o/s2orc
v3-fos-license
A Scoping Review Protocol of Malaria Vectors in Malaysia
Introduction: Malaria is still a public health threat. From 2010 to 2017, a total of 33,181 malaria cases were recorded in Malaysia. Thus, effective interventions and key entomological information are vital for interrupting or preventing malaria transmission, and the availability of malaria vector information is desperately needed. The objective of this study is to establish a protocol for reviewing new and potential malaria vectors, alongside the existing malaria vectors in Malaysia, for human and zoonotic infection.
Methods and analysis: A scoping review will be conducted based on Arksey and O'Malley's methodology on four electronic databases: Scopus, PubMed, Google Scholar and Science Direct. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) will be used as a systematic approach. All relevant findings will be managed with Mendeley software and the Microsoft Excel program.
Ethics and dissemination: Ethical approval is not required for secondary analysis of data. Study findings will be submitted for peer-reviewed publication.
Introduction
Malaria is still a public health threat. It is responsible for 17% of the global burden of parasitic and infectious diseases, causing over a million deaths and considerable mortality and morbidity worldwide [1]. The WHO recorded a total of 219 million malaria cases and 435,000 malaria deaths around the world [2]. Malaysia, in turn, recorded 33,181 malaria cases from 2010 to 2017 [3]. Malaria is caused by four Plasmodium species, i.e., Plasmodium malariae, P. ovale, P. vivax and P. falciparum; however, studies by White [4] and Cox-Singh & Singh [5] confirmed another Plasmodium species responsible for zoonotic malaria, Plasmodium knowlesi, as a fifth species. Malaria is a vector-borne disease transmitted by Anopheles mosquitoes [6]. In 2007, the WHO recorded 4,500 species of mosquitoes worldwide, in 34 genera of the family Culicidae, order Diptera, class Insecta, phylum Arthropoda [6]. It is interesting to note that only 70 anopheline species have been confirmed as malaria vectors around the world [1]. Malaysia has recorded 434 species of mosquitoes [7]. In 1997, Rahman et al. documented 75 species of Anopheles recorded in Malaysia [8]. A previous study recorded nine species of Anopheles established as malaria vectors in Malaysia, namely Anopheles balabacensis, An. maculatus, An. campestris, An. sundaicus, An. letifer, An. donaldi, An. dirus, An. leucosphyrus and An. flavirostris [8]. It is interesting to investigate potential malaria vectors in keeping with new technologies for species-complex identification; a potential malaria vector is defined by the vectorial capacity of a vector population to transmit malaria [9]. In accordance with this, Malaysia is working towards eliminating malaria and preventing its re-establishment [2]. Therefore, effective entomological surveillance is among the most important tools for interrupting malaria transmission. This can be achieved by designing targeted control interventions based on the behavior of the malaria vector. For example, Anopheles balabacensis rests outdoors after feeding [10]; therefore, vector control activities need to be applied and strengthened on house walls, tree trunks, bushes or any other possible resting places. Thus, this review will serve as a resource on malaria vectors in Malaysia for entomologists, malaria personnel and practitioners.
Consequently, entomological information accelerates the process of malaria elimination by enhancing the impact and efficacy of the interventions needed [2]. The objectives of this study are to review malaria vectors as well as potential malaria vectors in Malaysia.
Methods and Analysis
Protocol design
This study will broadly cover the subject area and the nature of research activity on the topic in accordance with Arksey and O'Malley's framework: (1) identifying the research question; (2) identifying relevant studies; (3) selecting studies; (4) charting the data; (5) collating, summarizing and reporting the results; and (6) consulting with relevant stakeholders. This scoping review will adhere to the 22-item Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) checklist [13]. Articles will be identified using medical subject headings and keyword combinations in titles, abstracts and keywords (Appendix), following a systematic approach to searching, screening, review and data extraction. The search will not be restricted by year or language. All selected search results will be imported into Mendeley software and Microsoft Excel spreadsheets (Microsoft Corporation, USA) for reference management and to manage duplications, as sketched after this section.
Stage 3: Study selection
All entomological research conducted in Malaysia will be included in this study, including previous reports such as longitudinal studies, cross-sectional studies, observational studies and descriptive reports. Firstly, inclusion and exclusion criteria will be used to determine the eligibility of the articles based on the title, as the screening part of the review process; any title indicating that the research was conducted outside Malaysia will be removed. Secondly, titles and abstracts will be selected based on the eligibility criteria, and only abstracts that fulfill the inclusion criteria will be further analyzed. Full articles of the selected abstracts will be reviewed and included in this study if considered significant and relevant.
Stage 4: Charting the data
The significant study characteristics of the published research literature will be extracted with a standardized data extraction framework (Table 1), developed to guide the extraction and charting of the data from the articles. It will consist of the standard bibliographical information (title, author, journal, year of publication, language, location of the study and study objectives); additional information such as the type of study, the primary outcome and other valuable information will be included as well. Together, these will provide significant overall information about each study and facilitate data analysis.
Stage 5: Collating, summarizing, and reporting the results
This scoping review is intended to present an overview of the study area, in contrast to a systematic review, where meta-synthesis reporting is required. Thus, the information gathered will be reported based on the selection criteria. The findings of this study will summarize all data and information from the relevant articles and will emphasize the scope of malaria vectors in Malaysia. Furthermore, gaps in research targeting specific areas will be identified and characterized. This study will use the PRISMA-ScR reporting guidelines to accurately report the review search results.
Stage 6: Consulting with relevant stakeholders
There is a need for entomological information regarding malaria vectors in Malaysia because of the insufficient data available. This information is valuable for planning effective interventions as well as for accelerating malaria elimination certification.
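The duplicate-management step referenced above (Mendeley plus Excel) can equally be sketched programmatically: normalize titles exported from the four databases and drop exact matches. The file and column names below are hypothetical placeholders, not part of the protocol.

```python
import re
import pandas as pd

def norm(title: str) -> str:
    """Lower-case a title and strip punctuation so near-identical exports match."""
    return re.sub(r"[^a-z0-9 ]", "", title.lower()).strip()

# Hypothetical CSV exports, one per database searched.
files = ["scopus.csv", "pubmed.csv", "gscholar.csv", "sciencedirect.csv"]
records = pd.concat((pd.read_csv(f) for f in files), ignore_index=True)
records["key"] = records["Title"].map(norm)
deduplicated = records.drop_duplicates(subset="key")
deduplicated.to_csv("screening_set.csv", index=False)
```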
Relevant stakeholders, for example the Entomology and Pest Sector, Disease Control Division, Ministry of Health, have been consulted, and a series of discussions will be held as this is an ongoing study. The outcomes of the discussions will provide insights into the current entomological situation in malaria risk areas, the malaria situation in Malaysia, and ongoing operational research.

Declarations

ETHICS AND DISSEMINATION

Ethical approval is not required as there is no primary data collection. An article detailing the findings of the scoping review will be submitted to a scientific journal for publication. Study findings will be disseminated via open-access publications in peer-reviewed journals, presentations to stakeholders, relevant meetings, conferences and Continuous Medical Education at the department level, as well as in future seminars and workshops. It will contribute to the dossier of the Malaria Elimination Program of the Ministry of Health Malaysia.

Supplementary file: Appendixformalariavectorscoping.docx
2021-07-26T00:06:03.935Z
2021-06-09T00:00:00.000
{ "year": 2021, "sha1": "24f8caa1cbd970ef901e97927c6b1b397fe1393f", "oa_license": "CCBY", "oa_url": "https://www.researchsquare.com/article/rs-581690/v1.pdf?c=1631898708000", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "1e9c4e0c3294b7abe3ff071ec457206176847300", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Computer Science" ] }
10366892
pes2o/s2orc
v3-fos-license
Resiliency Training in Indian Children: A Pilot Investigation of the Penn Resiliency Program

This paper examines the effectiveness of the Penn Resiliency Program (PRP) in an urban Indian setting. The PRP is a program for preventing depression in early adolescence and has proved successful in changing children's attributional style for life events. While the program has been successful in preventing symptoms of depression in Western populations, the current study explored whether this program could be effective with an Indian sample. The aim of the current study was twofold: first, to study the attributional style of early adolescents in India and identify negative effects (if any); and second, to gain insights into using the PRP as a tool to change explanatory styles in Indian children. A total of 58 children participated in the study (intervention group n = 29 and control group n = 29). An analysis of covariance comparing post-test scores on the Children's Attributional Style Questionnaire (CASQ), while controlling for baseline scores, indicated that children in the intervention group exhibited a significant reduction in pessimistic explanatory style and an increase in optimistic orientation compared to children in the control group. This indicates that the program was effective in changing negative attribution styles among upper-class Indian school children. Future work may look into the longer-term impact of the program, as well as further considerations for adapting the program to a middle-class population.

Introduction

Depression is a debilitating disease; hence, preventing its onset is beneficial at the individual, familial and societal levels. In India, a growing population, poor levels of literacy in rural areas, and varying social support, amongst a myriad of other cultural factors, are observed to be key predictors of the rising number of cases of depression [1]. In the southern city of Chennai alone (where the current study is situated), the prevalence of depression in a sample of middle-aged men and women is estimated at around 15% [2]. These studies also found a disproportionately higher incidence of depression in women than in men. In part, the discriminatory treatment of women accounts for many social and emotional problems in low-income countries [3,4]. Reviews of epidemiological studies indicate a comparatively lower incidence of mental health problems in low-income countries like India, attributing this to protective factors associated with family, cultural and religious values [3,5]. However, there are concerns about the reliability of these data, as methodological difficulties in adapting measurement tools [3] and gross underreporting of suicide attempts, abuse and gender violence (proven to be strong predictors of anxiety and mood disorders) exist [6].

Further, adolescence is marked by significant physiological changes, as well as an interaction of psychosocial influences from the child's home, school and social environment. Adolescents in India are reported to be at risk for a number of behavioural and emotional problems [7]. The recent rise in the number of school-based intervention programs and government-mandated posts of school counsellors are all indicators of the need to support the psychological and emotional needs of school-age children. Several risk factors have been associated with poor mental health in adolescents, including stressful family environments, coercive sexual encounters, gender discrimination and poor social support [4].
Expectedly, the presence of mental disorders such as anxiety and emotional and mood disturbances affects wellbeing by increasing the risk of suicide attempts in adolescents [6]. Adolescent girls are particularly more vulnerable to depression [8] than their male counterparts, as they experience more parental pressures and restrictions with regard to lifestyle choices. The protective factors of social and familial ties therefore appear to be at odds with the gender-discriminatory attitudes rooted in traditional Indian culture with regard to adolescent girls. The sheer magnitude of the population, the lack of resources, and cultural factors such as family obligations, poor literacy, and the taboos associated with discussing mental health problems place an enormous burden on mental health services [7]. Research on child and adolescent mental health in India has been brought into focus in the last decade, and the aforementioned studies report on the epidemiology and associated risk factors of depression. However, managing psychological problems in India is complicated by the cultural and social associations with seeking help [9]. Given the complex issues surrounding this, previous research indicates the need for community-based health promotion programs with generic goals [3] targeting awareness of gender discrimination and infant mortality as well as mental health, where there is a current dearth of literature. In their evaluation of mental health services in low-income countries (including India) [3], the authors argue that the key to providing these services is to focus on capacity-building efforts, such as the empowerment of women and the strengthening of adolescents and their families. Similar approaches have previously been suggested for adolescents in the form of integrated "adolescent health services" targeting the educational, psychosocial, emotional and physical needs of adolescents within schools [10]. South India is very much rooted in traditional culture; as in other parts of India, the diagnosis of a mental disorder, or seeking help and treatment, is influenced by fear of discord in family and social relations. Hence, the inoculation-like idea of preventing poor health, or of promoting good health and well-being, appears desirable [9].

Depression, Resiliency and the Penn Resiliency Program

Learned helplessness is a state wherein persistent negative events in an individual's life lead him/her to experience a loss of control over the consequences and expectancy of negative events [11]. This has been associated with the onset of depression, arising from a condition of hopelessness [11,12]. According to the attributional theory of learned helplessness [11], depressive thoughts and depression are a result of persistent negative attributions to life events. Three attributional dimensions, namely internal-external, stable-unstable and global-specific, were identified in explaining an individual's susceptibility to depression. According to this explanation, individuals who attribute bad events to internal, stable and global causes are said to have a negative or pessimistic explanatory style [12]. A pessimistic style of thinking has been associated with depressive symptoms and a predisposition to depression [13,14]. We note here that the terms "attributional style" and "explanatory style" have been used to refer to the same mechanism [15]. From this point in the paper, we will use the term "explanatory style".
Studies have found that children at risk for depression (and those showing depressive symptomatology), in contrast to their peers, make more negative attributions and tend to attribute negative events to personal and permanent causes and positive events to external and temporary ones [15][16][17]. Resiliency is the process of coping with adversity or with the adverse consequences of negative events [18]. Specifically, individual "protective factors", or specific capabilities, are associated with coping flexibly with adversity [19]. Individuals thus use a repertoire of skills (drawing on past experiences and environmental factors) to face the challenges of a given situation. When they are persistently unable to cope with a situation, they are faced with a sense of "helplessness" and become vulnerable to depression [20].

The Penn Resiliency Program (PRP) is an intervention targeted at reducing risk factors for depression and promoting resiliency in early adolescence [20,21]. It was developed to prevent depression in children and adolescents and has proved successful in changing children's pessimistic explanatory style [22]. When appraising their performance, children with a pessimistic explanatory style have distorted perceptions and tend to pay more attention to the negative features of events [23][24][25]. Cognitive behaviour therapy interventions involve correcting distorted perceptions by analysing self-talk and teaching adaptive skills like social problem-solving [26]. The PRP consists of two components: the cognitive and the problem-solving [21]. The first teaches children to identify negative beliefs and examine the validity of those beliefs, and helps children develop cognitive flexibility when confronted with negative thoughts. In the second part of the program, children are taught to resolve interpersonal conflicts, communicate assertively, and avoid the extremes of aggression and passivity. The program is administered through a series of role-plays, cartoons, games and homework assignments that teach and reinforce these concepts [20].

There is currently no literature linking explanatory style and depression in Indian children. However, some insightful cross-cultural findings exist. In a study of the cultural influence on explanatory styles in undergraduates [27], Chinese students showed a more pessimistic style of responding to events in comparison to White Americans and Chinese-Americans. They tended to attribute positive events to external causes and negative events to internal causes. Some of these findings were related to socio-cultural values held by specific groups (e.g., being modest or self-effacing), as well as to coming from an individualist or collectivist culture. Further, when the PRP was adapted for children in China [28], researchers found it to be successful in reducing symptoms of depression in children, as well as in predicting (and reducing) the risk of pessimistic explanatory styles in vulnerable (at-risk) children. However, some parts had to be adapted to suit the cultural context; for example, assertiveness was not encouraged in Chinese culture. We noticed similar concerns in our study with an Indian school sample, given that Indian culture is also collectivist.

Rationale and Need for Prevention

Interventions to tackle mental health tend to be underreported in the literature with respect to upper-class Indian children, even though the prevalence of mental health problems is said to be comparatively higher in upper- and middle-class Indian families [29].
The World Health Organisation's (WHO) life-skills training program [30] is among the few recognised programs that aim at improving self-esteem, assertiveness and social skills in schools through group sessions. These have been popularised in rural areas and lower-income schools in urban areas, though largely unreported. Recent changes in lifestyle choices and Westernisation indicate that adolescents in urban Indian cities face issues very similar to those of a Western population. In Goa [31], researchers found that adolescents are increasingly in conflict with traditional family value systems through modern lifestyle choices, such as dating and partying with friends. Furthermore, private education in India is largely in English, making the idea of using an existing prevention program from a Western population feasible. One of the motivating factors for our study is that children from upper-class Indian families have exposure and English skills comparable with those of children in developed countries. With this knowledge, we were confident that children would be able to relate to aspects of the PRP in its original format before adapting it, as in its implementation in China [28]. Using an established program would give us an indication of what issues arise when adapting a program to a specific cultural context.

The aim of the current study was twofold: first, to study the explanatory style of early adolescents in India and identify negative effects (if any); and second, to gain insights into using the PRP (imported from the U.S.) as a tool to change explanatory style in an urban, upper-class Indian setting. While the program has been successful in both preventing and reducing symptoms of depression in Western populations [22,32], the current study examined whether this program would be effective as a preventive tool in a cross-cultural sample. As the United States (where the PRP was originally developed) and India differ on a number of social, cultural, economic, and demographic dimensions, the relevance and feasibility of importing the PRP in its original format for an Indian population was considered. However, before adapting the program for Indian students, a necessary first step was to administer the program in its original format. We also discuss what aspects of the program worked and provide insights for future preventive programs.

Procedure

The researchers sought approval to conduct the study directly from the Principal and Headmistress of the primary school, in consultation with the school counselor. Prior to the study's commencement, parents of both groups were informed about the study by the Headmistress during a parent-teacher meeting at school. No objections were raised to the study, and consent was obtained. Next, participating children were informed about the study by their class teachers. The PRP lessons were held twice a week over Term 3 (January-March) of the school year, during school hours and on the school premises. We chose to work during school hours to avoid children returning home late or missing afternoon extracurricular activities. A typical school day is broken down into 45-minute blocks, with a half-hour snack break in the middle of the day. Two blocks of 45 minutes each were used to conduct the PRP. The intervention group was further divided into two sub-groups, with 15 children in one group and 14 in the other. Each sub-group was led by one of the researchers.
The principal researcher holds a doctorate in Developmental Psychology, and the co-researcher had a Master's in Psychology at the time. The intervention groups received 22 hours of instruction in the PRP over three months. The attendance in the two intervention sub-groups was 95% and 96%. The researchers studied the PRP program manual closely and reviewed each lesson together in great detail. They had copies of the lesson and homework for each session and practiced by rehearsing the first two lessons in advance of meeting with the students. The leaders also met after each lesson to discuss how each activity was received by the students. These steps were taken to ensure that the group leaders adhered closely to the original program and to minimize differences that might arise from the groups being led by different leaders.

Sample

The sample comprised 58 children studying in two sections of Grade V in a relatively affluent school in Chennai, India. Since its development, the PRP has targeted the prevention of depression in early adolescence (10-14 years) [21]. The aim is to equip children with psychosocial skills that will help them cope with the impending physical, social and emotional changes of adolescence [20]. In this school, Grades II to V are regarded as Junior School. In every Indian school, grades are divided into manageable groups called "sections", such as "section A", "section B" and so on. Typically, there is no rationale for assigning a child to a particular section: children are randomly assigned to sections, which are therefore representative of a range of academic abilities. The classification of children into sections is primarily to maintain gender and student-teacher ratios. This particular school had only two sections, A and B; the former served as the control and the latter as the intervention group. English was the medium of instruction in the school. Refer to Table 1 for details of the sample.

In urban India, middle- and upper-class children tend to attend mainly private schools. Private institutions use different curricula, but the most widely used and well-recognised are the CBSE (Central Board of Secondary Education), the ICSE (Indian Certificate of Secondary Education) and State-based boards (varying by State). The school selected for the program followed the ICSE curriculum, a more academically challenging program than the ones offered by the State-based Indian schools. The school catered to children from higher-socioeconomic families, and hence the children were likely to have been exposed to Western culture and able to relate to the original material of the PRP. Children from the selected school are likely to have imbibed aspects of Western culture from a number of sources: watching Western programs on television, Hollywood movies and Disney cartoons, travelling overseas, and having family living in and visiting from other parts of the world. The impact of students' family background is elaborated on in the Discussion section.

Measures

Prior to the intervention, all 58 children were administered the Children's Attributional Style Questionnaire (CASQ) [33]. The 48-item questionnaire developed by Kaslow et al. [33] comprises multiple-choice items and yields scores for a child's explanatory style for good (24 items) and bad (24 items) life events along three subscale dimensions: permanent vs. temporary, pervasive vs. specific, and personal vs. other. Composite scores of "total good" (labelled TG) and "total bad" (labelled TB) are obtained by summing the three individual subscale scores for good and bad events, respectively.
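As a concrete illustration of this scoring, the sketch below computes the TG and TB composites from item-level responses. It is only a sketch: the item-to-subscale key shown is hypothetical (the real key accompanies the published instrument); only the arithmetic, summing three subscale scores per event type, follows the description above.

```python
import pandas as pd

# Hypothetical key: subscale name -> list of item columns. The real CASQ key
# assigns each of the 48 items to one of six subscales (3 dimensions x
# good/bad events); the assignment below is illustrative only.
KEY = {
    "good_permanent": ["item_1", "item_2"], "good_pervasive": ["item_3", "item_4"],
    "good_personal":  ["item_5", "item_6"], "bad_permanent":  ["item_7", "item_8"],
    "bad_pervasive":  ["item_9", "item_10"], "bad_personal":  ["item_11", "item_12"],
}

def casq_composites(responses: pd.DataFrame, key: dict) -> pd.DataFrame:
    """Sum item scores (0/1 per keyed alternative) into subscales, then into
    the TG (total good) and TB (total bad) composites described above."""
    scores = pd.DataFrame(index=responses.index)
    for subscale, items in key.items():
        scores[subscale] = responses[items].sum(axis=1)
    scores["TG"] = scores[[s for s in key if s.startswith("good_")]].sum(axis=1)
    scores["TB"] = scores[[s for s in key if s.startswith("bad_")]].sum(axis=1)
    return scores

# Toy usage: two children, twelve illustrative items.
responses = pd.DataFrame([[1] * 12, [0, 1] * 6],
                         columns=[f"item_{i}" for i in range(1, 13)])
print(casq_composites(responses, KEY)[["TG", "TB"]])
```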
Categorical descriptions of "optimistic-almost invulnerable to depression", "average-somewhat depressive" and "pessimistic-at marked risk for depression" are also assigned to children (based on ranges of scores), using separate norms for boys and girls. In a study of 96 elementary school children in the U.S., the reliability of the CASQ composite subscales over a six-month follow-up was consistent, with Cronbach alpha coefficients of 0.71 and 0.66 for total good and total bad, respectively [13]. Children who attribute negative events to temporary, specific and external factors are classified as optimistic. Likewise, those who attribute positive events to permanent, pervasive and personal factors are also classified as optimistic. Pessimistic children tend to attribute positive events to temporary, external factors, and when faced with negative events they tend to engage in self-blame and to generalise setbacks.

The rationale behind using the CASQ, as opposed to other instruments, was that we did not carry out a screening study with an intention to treat depressive symptoms in vulnerable children. The aims of the study were to identify and change negative explanatory styles in children and to pilot-test the PRP as a preventive program in Indian children. In this sense, we used the program from the viewpoint of "prevention", i.e., to provide intervention where the disorder had not been identified, thereby reducing the risk associated with depression [20]. We were in fact surprised to note that many children in Grade V were showing signs of strong negativity in responding to events. This is consistent with the underlying assumption in the development of the PRP that pessimistic thinking can develop as early as pre-adolescence, between 10 and 12 years [20,22].

The questionnaire was read aloud to the children by the researchers, item by item, and the children circled their responses. This procedure was followed preceding the intervention and after the program. A week after the last PRP session, both sections, comprising a total of 58 children, were administered the CASQ again. Scoring of the scales was done by the co-researcher, and statistical analysis was done using SPSS v20.

Statistical Methods

To measure the effect of the intervention (PRP) on explanatory style, we used an ANCOVA controlling for baseline scores across the control and intervention groups. The pre-test TB and TG scores were used as covariates. At baseline, the children in the control and intervention groups did not show significant differences in their composite TG and TB scores.

Results

Our main goal was to see whether the PRP would have an effect on changing the pessimistic explanatory style of children in the intervention group. The composite TB and TG scores on the CASQ indicate whether a child's explanatory style is optimistic or pessimistic: a higher TB score indicates a pessimistic explanatory style, and a higher TG score indicates an optimistic explanatory style. In Figure 1a we observe that, following the PRP, the intervention group showed a distinct lowering of their mean score for negative events (TB score), while the control group showed an increase. For total good events, we notice only a slight increase in both the intervention and control groups. Children's scores on the composite good and bad scales also provide categorical descriptions of explanatory style and vulnerability to depression (refer to Section 2.3).
A pre- and post-test comparison of the children in the intervention group (see Figure 2a,b) indicated that a noticeable percentage showed a reduced pessimistic explanatory style. In the descriptive analysis comparing scores on the TB subscale, we observed a doubling in the number of children classified as "optimistic" or "invulnerable to depression" (7 at pre-test and 14 at post-test). For the TG score, we did not notice such a marked difference. However, the post-test scores indicate that the direction of the scores was positive (i.e., fewer children classified as pessimistic). An ANCOVA predicting explanatory style for negative events (CASQ, Total Bad) revealed a significant effect of condition, F(1, 57) = 6.10, p < 0.05 (R² = 0.32). The two groups did not differ in their positive explanatory style (see Table 2). Therefore, we see that following the intervention, children in the intervention group experienced a lowering of their TB score. In their experience of good events (TG), children in the program showed a small increase in their mean score.

Previous work indicates that at pre-adolescence, boys tend to have a more pessimistic explanatory style than girls [28]. However, we found that girls in our study had slightly higher TB scores than the boys (see Table 3). We infer that this may be due to cultural differences affecting Indian children: gendered expectations of girls are reinforced quite early. However, girls (at pre-intervention) started with a higher TG score, which was reduced post-intervention. This shows that explanatory style did not remain consistent over the measurement periods. One explanation could be that children's responses to life events are still taking shape, so they may be inconsistent in their appraisal of events. Further follow-up as children move into middle adolescence is required to see whether these styles are sustained or change over time.

Notes: (a) Tests of difference were performed to study the effects of the intervention across gender. (b) Levels of significance. * Not significant.
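The ANCOVA reported above can be reproduced in outline as follows. This is a hedged sketch, not the authors' SPSS analysis: the data frame, column names, and toy numbers are placeholders, and only the model structure (post-test score regressed on the pre-test covariate plus group) mirrors the description.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Placeholder data: post-test TB regressed on pre-test TB (covariate) + group.
df = pd.DataFrame({
    "tb_pre":  [10, 12, 9, 11, 13, 8, 10, 12, 11, 9],
    "tb_post": [8, 9, 7, 9, 10, 9, 11, 13, 12, 10],
    "group":   ["intervention"] * 5 + ["control"] * 5,
})

model = smf.ols("tb_post ~ tb_pre + C(group)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # F-test for the group effect (cf. F(1, 57))
print(f"R^2 = {model.rsquared:.2f}")    # cf. the reported R^2 = 0.32
```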
Discussion

Despite cultural differences, the PRP was effective in reducing pessimistic thinking and promoting an optimistic explanatory style among school children in India. Explanatory style is, however, difficult to interpret, as the validity of the CASQ as a predictor of depression has not been established. There are also cross-cultural differences in attribution styles not previously explored in the Indian population. For example, Americans are more likely to attribute success to inherent abilities and failure to external factors, while Japanese and Chinese tend to do the reverse [27,34]. In our observation, Indian students reflect patterns of attribution similar to those of their Asian counterparts. In our sessions, children were less likely to attribute negative events to external factors. This is something that is culturally encouraged in India, as children are taught to avoid confrontation and externalising blame.

Another aspect that warrants discussion is the higher TB score of the pre-adolescent girls in our study. This finding may point to an important cultural feature of negative explanatory style in Indian girls. In the Indian population, girls cope with distinctive stressful factors, such as harmonizing academic pursuits with parental expectations and early marriage. In many cases, the transition into adulthood for Indian girls may happen earlier; therefore, psychosocial support may also be required earlier.

One aspect that seemed culturally unfamiliar to our sample was assertiveness. Children tended to think that being passive was more appropriate than being assertive and "getting your way". In fact, some children equated being assertive with being aggressive. However, with sufficient modeling and explanation of how assertiveness differs from aggressiveness, children were able to exhibit assertiveness. It is important to note here that assertiveness is not a trait that is explicitly taught to Indian children. However, studies show that passiveness in Indian adolescents is associated with increasing susceptibility to problem behaviours like substance abuse, behavioural misconduct and peer pressure [4,6,31]. This has led mental health professionals and guidance counsellors in India to place more emphasis on including assertiveness training as a part of life-skills training and psychosocial intervention programs. However, assertiveness is not encouraged with elders, parents and teachers, and any further work on this may need to specify clearly to Indian children where it is applicable. In other Asian cultures, such as China, where the PRP was adapted, it was similarly found that assertiveness needed to be toned down to the cultural context [28].

Further, children's conflicts with traditional Indian family values (including valuing elders' opinions over their own) seem to create stressful home environments that predispose children to anxiety and other mental health issues [10,31]. Long-term follow-up at specific intervals may be required to see whether any of these children are at risk or have developed depressive symptoms. While we did not see any significant gender differences in the intervention group, Indian adolescents (particularly girls) are at risk of psychological problems from stressors like academic pressures, pressure from peers and parental expectations as they progress into adolescence. This is supported by data from India showing that depression increases with age in adolescence, with the incidence of depressive symptoms rising steeply from middle to late adolescence [35,36]. Further follow-up may also help in observing whether explanatory styles remain fairly consistent over time.

A caveat has to be noted. Children in the control group did not receive a placebo intervention. Thus, the extra adult attention of two outsiders engaging the children in the intervention group may have contributed to the result. It should also be noted that children in this sample were from relatively affluent backgrounds: in this school, several children were admitted after their families returned from spending considerable time abroad and hence were likely to have been exposed to aspects of Western culture. Further, non-traditional lifestyles among adolescents are fairly common in major Indian cities. In this light, the concerns and issues of children in this school were similar to those of Western populations. Issues such as bullying, interactions with the opposite sex, and overcoming peer pressure were familiar to the children in this school. Given this context, we saw the assertiveness aspect as an essential part to include when adapting the program to an urban Indian population. Similar concerns about addressing the psychosocial issues of adolescents have been reported in other Indian cities, like Goa and Chandigarh [7]. Hence, prevention programs in urban Indian cities may call for a different cultural adaptation than other Indian populations.
Whether this program will be effective in a school catering to children from middle-class backgrounds in India remains to be studied. Another feature of the program noted by the researchers was that the language and vocabulary were above average for an Indian state-school curriculum. In India, very few state-based school curricula cater to such a high level of English proficiency. The current sample showed no difficulty comprehending the various aspects of the program; one reason is that the school curriculum was at a higher level than that of state-board schools. It is difficult to generalise the linguistic and cultural appropriateness of the program to the general Indian population, as the sample chosen to participate was a very selective group. Students of other school curricula may require a more simplified version of the lessons in order to fully comprehend and internalise them. Hence, a program for urban middle-class or rural school populations may warrant significant modifications.

Other qualitative observations were made regarding the implementation of the program in India. While it is not possible to provide detailed descriptions of these qualitative findings, we provide some key insights that may encourage future work with similar populations. Our experience in the sessions suggested that the program might work better in smaller groups. While the researchers worked with 15 children in each intervention group, we felt the ideal size would be around 6-8, because children did not always get a chance to speak up and the quiet ones were left out. With regard to the program structure, the children seemed to prefer some activities over others. For example, the completion of homework lessons was inconsistent, with some children handing them in regularly and others only occasionally. In general, homework is perceived quite negatively within the Indian context; hence, future work on such interventions may choose to avoid assigning too many written homework assignments. We also found that some ice-breakers or introductory activities would be a useful addition to the program, serving as a gentle introduction to the program and the facilitators.

Conclusions and Future Considerations

Overall, this study found that the Penn Resiliency Program, developed for Western children, was effective in changing the negative explanatory style of upper-middle-class Indian children. However, further research is required to see whether the results of this study generalise to other segments of the Indian population. Largely, we found that the children were very enthusiastic and responsive. Despite the researchers being outsiders, the children were able to warm up to them halfway through the program. Some female students spoke about problems relating to their families and siblings and shared how the program homework was helping them. While the boys did not take this initiative, they participated actively in all aspects of the program in both groups. Similar results were found in other implementations of the PRP [32], indicating that the program meets the needs of pre-adolescent girls. This is indeed a promising finding, as the scope for the PRP, or an adaptation of it, to be applied in all-girls settings in India (given the unique cultural context) is certainly crucial and may be explored in future work. Furthermore, the longer-term impact of the intervention may be studied through longitudinal follow-ups.
Previous follow-up studies indicate sustained optimistic attitudes and an increasingly optimistic outlook in children who underwent training in the PRP, as compared to their peers who did not receive training [2]. Another aspect that we did not consider for this pilot, and which may have potential for future work in India, is a parent training program. Other investigations of the PRP discuss the usefulness of including parent and teacher involvement in the program [20,37]. This may be a component to explore further in the Indian setting. The current research and prior work seem to indicate that problems in Indian adolescents may be aggravated by strained relationships with family members (siblings and parents) and by child-rearing practices [7]. In fact, while conducting the study, a number of examples in the homework exercises indicated that children had conflicts with siblings and parents. However, several protective factors of family support have also proved to be a deterrent to developing mental health problems [5,31], which could encourage future work on identifying and nurturing these factors and on discouraging the factors that cause dissonance in parent-child relationships. In addition, the validity of the CASQ as a predictor of depression has not been studied in the Indian context. Differences in the explanatory styles of girls and boys may be further studied at follow-up, to see whether these styles remain consistent or fluctuate during adolescence. Further application of the PRP with larger groups of children and across more schools may also be pursued. Thus, further studies of attribution styles as predictors of depression in the Indian context are required. Moreover, indigenous programs and assessment tools may be developed that factor in local cultural beliefs, patterns and preferences where the program cannot be applied in its original format.
2016-03-22T00:56:01.885Z
2014-04-01T00:00:00.000
{ "year": 2014, "sha1": "3111aa55f72fa039b4397f39476967679bfb1764", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1660-4601/11/4/4125/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "78a03bdba1b2ba4bbf22f08beeabe6cb8a15fb8d", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine", "Psychology" ] }
52274975
pes2o/s2orc
v3-fos-license
Distilled Wasserstein Learning for Word Embedding and Topic Modeling

We propose a novel Wasserstein method with a distillation mechanism, yielding joint learning of word embeddings and topics. The proposed method is based on the fact that the Euclidean distance between word embeddings may be employed as the underlying distance in the Wasserstein topic model. The word distributions of topics, their optimal transports to the word distributions of documents, and the embeddings of words are learned in a unified framework. When learning the topic model, we leverage a distilled underlying distance matrix to update the topic distributions and smoothly calculate the corresponding optimal transports. Such a strategy provides the updating of word embeddings with robust guidance, improving algorithmic convergence. As an application, we focus on patient admission records, in which the proposed method embeds the codes of diseases and procedures and learns the topics of admissions, obtaining superior performance on clinically-meaningful disease network construction, mortality prediction as a function of admission codes, and procedure recommendation.

Introduction

Word embedding and topic modeling play important roles in natural language processing (NLP), as well as in other applications with textual and sequential data. Many modern embedding methods [30,33,28] assume that words can be represented and predicted by contextual (surrounding) words. Accordingly, the word embeddings are learned to inherit those relationships. Topic modeling methods [8], in contrast, typically represent documents by the distribution of words, or other "bag-of-words" techniques [17,24], ignoring the order and semantic relationships among words. The distinction between how word order is (or is not) accounted for when learning topics and word embeddings manifests a potential methodological gap or mismatch.

This gap is important when considering clinical-admission analysis, the motivating application of this paper. Patient admissions in hospitals are recorded using codes from the International Classification of Diseases (ICD). For each admission, one may observe a sequence of ICD codes corresponding to certain kinds of diseases and procedures, and each code is treated as a "word." To reveal the characteristics of the admissions and the relationships between different diseases/procedures, we seek to model the "topics" of admissions and also learn an embedding for each ICD code. However, while we want embeddings of similar diseases/procedures to be nearby in the embedding space, learning the embedding vectors based on the surrounding ICD codes of a given patient admission is less relevant, as there is often a diversity in the observed codes for a given admission, and the code order may hold less meaning. Take the MIMIC-III dataset [25] as an example. The ICD codes in each patient's admission are ranked according to a manually-defined priority, and adjacent codes are often not clinically correlated with each other. Therefore, we desire a model that jointly learns topics and word embeddings and that, for both, does not consider the word (ICD code) order. Interestingly, even in the context of traditional NLP tasks, it has been recognized recently that effective word embeddings may be learned without considering word order [37], although that work didn't consider topic modeling or our motivating application.

Figure 1: Consider two admissions with mild and severe diabetes, which are represented by two distributions of diseases (associated with ICD codes) in red and orange, respectively.
They are two dots in the Wasserstein ambient space, corresponding to two weighted barycenters of Wasserstein topics (the colored stars). The optimal transport matrix between these two admissions is built on the distance between disease embeddings in the Euclidean latent space. A large value in the matrix (the dark blue elements) indicates that it is easy to transfer diabetes to a complication like nephropathy, whose embedding is a short distance away (short blue arrows).

Although some works have applied word embeddings to represent ICD codes and related clinical data [11,22], they ignore the fact that the clinical relationships among the diseases/procedures in an admission may not be well approximated by their neighboring relationships in the sequential record. Most existing works either treat word embeddings as auxiliary features for learning topic models [15] or use topics as labels for supervised embedding [28]. Prior attempts at learning topics and word embeddings jointly [38] have fallen short relative to these two empirical strategies.

We seek to fill the aforementioned gap, while applying the proposed methodology to clinical-admission analysis. As shown in Fig. 1, the proposed method is based on a Wasserstein-distance model, in which (i) the Euclidean distance between ICD code embeddings serves as the underlying distance (also referred to as the cost) of the Wasserstein distance between the distributions of the codes corresponding to different admissions [26]; and (ii) the topics are "vertices" of a geometry in the Wasserstein space, and the admissions are the "barycenters" of the geometry with different weights [36]. When learning this model, the embeddings and the topics are inferred jointly. A novel learning strategy based on the idea of model distillation [20,29] is proposed, improving the convergence and the performance of the learning algorithm.

The proposed method unifies word embedding and topic modeling in a framework of Wasserstein learning. Based on this model, we can calculate the optimal transport between different admissions and explain the transport by the distances between ICD code embeddings. Accordingly, the admissions of patients become more interpretable and predictable. Experimental results show that our approach is superior to previous state-of-the-art methods on various tasks, including predicting admission type, predicting mortality for a given admission, and procedure recommendation.

A Wasserstein Topic Model Based on Euclidean Word Embeddings

Assume that we have M documents and a corpus of N words, e.g., admission records and the dictionary of ICD codes, respectively. These documents can be represented by Y = [y_m] ∈ R^{N×M}, where y_m ∈ Σ^N, m ∈ {1, ..., M}, is the distribution of the words in the m-th document, and Σ^N is an N-dimensional simplex. These distributions can be represented by some basis (i.e., topics), denoted B = [b_k] ∈ R^{N×K}, where b_k ∈ Σ^N is the k-th base distribution. The word embeddings can be formulated as X = [x_n] ∈ R^{D×N}, where x_n, the embedding of the n-th word, n ∈ {1, ..., N}, is obtained by a model, i.e., x_n = g_θ(w_n), with parameters θ and a predefined representation w_n of the word (e.g., w_n may be a one-hot vector for each word). The distance between two word embeddings is denoted d_{nn'} = d(x_n, x_{n'}), and generally it is assumed to be Euclidean.
These distances can be formulated as a parametric distance matrix D_θ = [d_{nn'}] ∈ R^{N×N}. Denote the space of the word distributions as the ambient space and that of their embeddings as the latent space. We aim to model and learn the topics in the ambient space and the embeddings in the latent space in a unified framework. We show that recent developments in Wasserstein learning provide an attractive solution to achieve this aim.

Revisiting topic models from a geometric viewpoint

Traditional topic models [8] often decompose the distribution of words conditioned on the observed document into two factors: the distribution of words conditioned on a certain topic, and the distribution of topics conditioned on the document. Mathematically, this corresponds to a low-rank factorization of Y, i.e., Y = BΛ, where B = [b_k] contains the word distributions of the different topics and Λ = [λ_m] ∈ R^{K×M}, λ_m = [λ_{km}] ∈ Σ^K, contains the topic distributions of the different documents. Given B and λ_m, y_m can be equivalently written as

  y_m = Σ_{k=1}^{K} λ_{km} b_k,   (1)

where λ_{km} is the probability of topic k given document m. From a geometric viewpoint, the {b_k} in (1) can be viewed as vertices of a geometry, whose "weights" are λ_m. Then, y_m is the weighted barycenter of the geometry in the Euclidean space. Following this viewpoint, we can extend (1) to another metric space, i.e.,

  y_m = ȳ_{d²}(B, λ_m) = argmin_{y∈Σ^N} Σ_{k=1}^{K} λ_{km} d²(y, b_k),   (2)

where ȳ_{d²}(B, λ_m) is the barycenter of the geometry, with vertices B and weights λ_m, in the space with metric d.

Wasserstein topic model

When the distance d in (2) is the Wasserstein distance, we obtain a Wasserstein topic model, which has a natural and explicit connection with word embeddings. Mathematically, let (Ω, d) be an arbitrary space with metric d, and let P(Ω) be the set of Borel probability measures on Ω.

Definition 2.1. For p ∈ [1, ∞) and probability measures u and v in P(Ω), their p-order Wasserstein distance [40] is

  W_p(u, v) = ( inf_{π∈Π(u,v)} ∫_{Ω×Ω} d^p(x, y) dπ(x, y) )^{1/p},

where Π(u, v) is the set of all probability measures on Ω × Ω with u and v as marginals.

Definition 2.2. The p-order weighted Fréchet mean in the Wasserstein space (also called the Wasserstein barycenter) [1] of measures {b_k} with weights λ = [λ_k] ∈ Σ^K is

  ȳ_{W_p^p}(B, λ) = argmin_{u∈P(Ω)} Σ_{k=1}^{K} λ_k W_p^p(u, b_k).

When Ω is a discrete state space, i.e., {1, ..., N}, the Wasserstein distance is also called the optimal transport (OT) distance [36]. More specifically, the Wasserstein distance with p = 2 corresponds to the solution of the discretized Monge-Kantorovich problem:

  W_2^2(u, v; D) = min_{T∈Π(u,v)} Tr(T^⊤ D),   (3)

where u and v are two distributions over the discrete states and D ∈ R^{N×N} is the underlying distance matrix, whose elements measure the distance between different states. Here Π(u, v) = {T | T1 = u, T^⊤1 = v}, and Tr(·) denotes the matrix trace. The matrix T is called the optimal transport matrix when the minimum in (3) is achieved. Applying the discrete Wasserstein distance in (3) to (2), we obtain our Wasserstein topic model, i.e.,

  y_m = ȳ_{W_2^2}(B, λ_m; D_θ) = argmin_{y∈Σ^N} Σ_{k=1}^{K} λ_{km} W_2^2(y, b_k; D_θ).   (4)

In this model, the discrete states correspond to the words in the corpus, and the distance between different words is calculated as the Euclidean distance between their embeddings. In this manner, we establish the connection between the word embeddings and the topic model: the distance between different topics (and different documents) is given by the optimal transport between their word distributions, built on the embedding-based underlying distance. For any two word embeddings, the more similar they are, the smaller the underlying distance, and the more easily mass can be transferred between them. In the learning phase (as shown in the following section), we can learn the embeddings and the topic model jointly.
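To make (3) concrete, the following minimal sketch computes an entropic (Sinkhorn) approximation of the OT distance between two word distributions, with the cost matrix built from embeddings exactly as in the model above. It is an illustration using arbitrary random stand-ins for the embeddings and distributions, not the authors' implementation.

```python
import numpy as np

def sinkhorn(u, v, D, eps=0.1, n_iter=200):
    """Entropic OT: approximately solve min_T Tr(T^T D) + eps*Tr(T^T ln T)
    over Pi(u, v) by alternating matrix scalings; returns the plan T."""
    K = np.exp(-D / eps)
    a = np.ones_like(u)
    for _ in range(n_iter):
        b = v / (K.T @ a)
        a = u / (K @ b)
    return a[:, None] * K * b[None, :]

rng = np.random.default_rng(0)
N, dim = 20, 5
X = rng.normal(size=(dim, N))                              # stand-in embeddings x_n
D = np.linalg.norm(X[:, :, None] - X[:, None, :], axis=0)  # d_{nn'} = ||x_n - x_{n'}||_2
u = rng.dirichlet(np.ones(N))                              # one word distribution
v = rng.dirichlet(np.ones(N))                              # another word distribution
T = sinkhorn(u, v, D)
print("approximate OT distance Tr(T^T D):", float(np.sum(T * D)))
```

With the plan T in hand, Tr(T^⊤ D) approximates the distance in (3), and the entries of T show how probability mass moves between individual words.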
This model is especially suitable for clinical admission analysis. As discussed above, we care not only about the clustering structure of admissions (the relative proportion by which each topic is manifested in an admission), but also about the mechanism, or tendency, of their transfers at the level of disease. As shown in Fig. 1, using our model we can calculate the Wasserstein distance between different admissions at the level of disease and obtain the optimal transport from one admission to another explicitly. The hierarchical architecture of our model helps represent each admission by its topics, which are the typical diseases/procedures (ICD codes) appearing in a class of admissions.

Wasserstein Learning with Model Distillation

Given the word-document matrix Y and a predefined number of topics K, we wish to jointly learn the basis B, the weight matrix Λ, and the model g_θ of word embeddings. This learning problem can be formulated as

  min_{B,Λ,θ} Σ_{m=1}^{M} L(y_m, ȳ_{W_2^2}(B, λ_m; D_θ)).   (5)

Here, D_θ = [d_{nn'}] with elements d_{nn'} = ||g_θ(w_n) − g_θ(w_{n'})||_2. The loss function L(·,·) measures the difference between y_m and its estimate ȳ_{W_2^2}(B, λ_m; D_θ). We solve this problem by alternating optimization. In each iteration, we first learn the basis B and the weights Λ given the current parameters θ; we then learn the new parameters θ based on the updated B and Λ.

Updating word embeddings to enhance the clustering structure

Suppose that we have obtained updated B and Λ. Given the current D_θ, we denote the optimal transport between document y_m and topic b_k as T_{km}. Accordingly, the Wasserstein distance between y_m and b_k is Tr(T_{km}^⊤ D_θ). Recall from the topic model in (4) that each document y_m is represented as the weighted barycenter of B in the Wasserstein space, and the weights λ_m = [λ_{km}] represent the closeness between the barycenter and the different bases (topics). To enhance the clustering structure of the documents, we update θ by minimizing the Wasserstein distance between the documents and their closest topics. Consequently, documents belonging to different clusters will be far away from each other. The corresponding objective function is

  min_θ Σ_{m=1}^{M} Tr(T_{k_m m}^⊤ D_θ),   (6)

where T_{k_m m} is the optimal transport between y_m and its closest base b_{k_m}. The aggregation of these transports is given by T̄ = Σ_m T_{k_m m} = [t̄_{nn'}], and X_θ = [x_{n,θ}] are the word embeddings. Considering the symmetry of D_θ, we can replace t̄_{nn'} in (6) with (t̄_{nn'} + t̄_{n'n})/2, so that the objective function can be written as Tr(T̃^⊤ D_θ), where T̃ is the symmetrized aggregate transport. To avoid trivial solutions like X_θ = 0, we add a smoothness regularizer and update θ by optimizing the following problem:

  min_θ Tr(T̃^⊤ D_θ) + β ||θ − θ_c||_2^2,   (7)

where θ_c denotes the current parameters and β controls the significance of the regularizer. Similar to Laplacian Eigenmaps [6], the aggregated optimal transport T̃ works as a similarity measure between the proposed embeddings. However, instead of requiring the solution of (7) to be the eigenvectors of the corresponding graph Laplacian L, we enhance the stability of the update by ensuring that the new θ is close to the current one.
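The sketch below runs the update (6)-(7) on synthetic data to show the mechanics. It is illustrative only: the embedding matrix X is treated directly as the parameters θ, squared Euclidean distances are used so that the gradient takes a simple Laplacian form, and the aggregated transport is a random stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)
N, dim, beta, lr = 20, 5, 0.01, 0.1
X_c = rng.normal(size=(N, dim))                # current embeddings (theta_c)
T_bar = rng.dirichlet(np.ones(N), size=N) / N  # stand-in aggregated transport

S = (T_bar + T_bar.T) / 2                      # symmetrize, as in the text
L = np.diag(S.sum(axis=1)) - S                 # graph Laplacian of S

# Objective: sum_{n,n'} s_{nn'} ||x_n - x_{n'}||^2 + beta * ||X - X_c||_F^2
# = 2 Tr(X^T L X) + beta * ||X - X_c||_F^2; gradient = 4 L X + 2 beta (X - X_c).
X = X_c.copy()
for _ in range(100):
    X -= lr * (4 * L @ X + 2 * beta * (X - X_c))
```

Words between which much transport mass flows are pulled together, while the β term keeps the update anchored to the current embeddings; without it, the objective would collapse all embeddings to a single point (the trivial solution noted above).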
Updating topic models based on the distilled underlying distance

Given the updated word embeddings and the corresponding underlying distance D_θ, we wish to further update the basis B and the weights Λ. The problem is formulated as a Wasserstein dictionary-learning problem, as proposed in [36]. Following the same strategy as [36], we rewrite {λ_m} and {b_k} as

  λ_{km}(A) = exp(α_{km}) / Σ_{k'} exp(α_{k'm}),  b_{nk}(R) = exp(γ_{nk}) / Σ_{n'} exp(γ_{n'k}),   (8)

where A = [α_{km}] and R = [γ_{nk}] are new parameters. Based on (8), the normalization of {λ_m} and {b_k} is met naturally, and we can reformulate (5) as an unconstrained optimization problem, i.e.,

  min_{A,R} Σ_{m=1}^{M} L(y_m, ȳ_{W_2^2}(B(R), λ_m(A); D_θ)).   (9)

Different from [36], we introduce a model distillation method to improve the convergence of our model. The key idea is that the model with the current underlying distance D_θ works as a "teacher," while the proposed model with new basis and weights is regarded as a "student." Through D_θ, the teacher provides the student with guidance for its updating. We find that if we use the current underlying distance D_θ to calculate the basis B and weights Λ, we encounter a serious "vanishing gradient" problem when solving (7) in the next iteration: because Tr(T_{k_m m}^⊤ D_θ) in (6) is already optimal under the current underlying distance and the new B and Λ, it is difficult to further update D_θ. Inspired by recent model distillation methods [20,29,34], we use a smoothed underlying distance matrix to solve the "vanishing gradient" problem when updating B and Λ. In particular, the barycenter in (9) is replaced by a Sinkhorn barycenter with the smoothed underlying distance, i.e., ȳ_{S_ε}(B(R), λ_m(A); D_θ^τ), where (·)^τ, 0 < τ < 1, is an element-wise power function of a matrix. The Sinkhorn distance S_ε is defined as

  S_ε(u, v; D) = min_{T∈Π(u,v)} Tr(T^⊤ D) + ε Tr(T^⊤ ln T),

where ln(·) calculates the element-wise logarithm of a matrix. The parameter τ works as the reciprocal of the "temperature" in the smoothed softmax layer of the original distillation method [20,29]. The principle of our distilled learning method is that when updating B and Λ, the smoothed underlying distance provides "weak" guidance. Consequently, the student (i.e., the proposed new model with updated B and Λ) does not completely rely on information from the teacher (i.e., the underlying distance obtained in the previous iteration) and tends to explore new basis and weights. In summary, the optimization problem for learning the Wasserstein topic model is

  min_{A,R} Σ_{m=1}^{M} L(y_m, ȳ_{S_ε}(B(R), λ_m(A); D_θ^τ)),   (10)

which can be solved under the same algorithmic framework as that in [36]. Our algorithm is shown in Algorithm 1. The details of the algorithm and the influence of our distilled learning strategy on the convergence of the algorithm are given in the Supplementary Material.

Algorithm 1: Distilled Wasserstein Learning
1: Input: Documents Y. The number of topics K. The distillation parameter τ. The learning rate ρ.
2: Output: The parameters θ, basis B, and weights Λ.
3: For each epoch:
4:   For each batch of documents:
5:     Calculate the Sinkhorn gradient with distillation and update A and R by (10).
6:   Update θ by solving (7).

Note that our method is compatible with existing techniques; it can work as a fine-tuning method when the underlying distance is initialized from predefined embeddings. When the topic of each document is given, k_m in (6) is predefined and the proposed method can work in a supervised way.
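The effect of the element-wise power D_θ^τ can be seen in a toy comparison of transport plans under the sharp and distilled costs. The sketch below is self-contained and uses arbitrary stand-ins for the cost and marginals; it is not the paper's implementation.

```python
import numpy as np

def sinkhorn(u, v, D, eps=0.1, n_iter=200):
    K = np.exp(-D / eps)                 # entropic kernel
    a = np.ones_like(u)
    for _ in range(n_iter):
        b = v / (K.T @ a)
        a = u / (K @ b)
    return a[:, None] * K * b[None, :]

rng = np.random.default_rng(1)
N, tau = 10, 0.5
D = rng.uniform(0.5, 2.0, size=(N, N))
D = (D + D.T) / 2
np.fill_diagonal(D, 0.0)                 # a symmetric stand-in cost D_theta
u, v = rng.dirichlet(np.ones(N)), rng.dirichlet(np.ones(N))

T_teacher = sinkhorn(u, v, D)            # plan under the sharp (teacher) cost
T_student = sinkhorn(u, v, D ** tau)     # plan under the distilled cost D^tau
print("max plan difference:", float(np.abs(T_teacher - T_student).max()))
```

Raising the entries of D_θ to a power 0 < τ < 1 compresses their range, so the topic-side update is only weakly tied to the teacher's optimal transports, which is what keeps the gradient of (7) from vanishing at the next iteration.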
Related Work

Word embedding, topic modeling, and their application to clinical data. Traditional topic models, like latent Dirichlet allocation (LDA) [8] and its variants, rely on the "bag-of-words" representation of documents. Word embedding [30] provides another choice, representing documents as the fusion of the embeddings [27]. Recently, many new word embedding techniques have been proposed, e.g., GloVe [33] and the linear ensemble embedding in [32], which achieve encouraging performance on word and document representation. Some works try to combine word embedding and topic modeling. As discussed above, they either use word embeddings as features for topic models [38,15] or regard topics as labels when learning embeddings [41,28]. A unified framework for learning topics and word embeddings was still absent prior to this paper.

Focusing on clinical data analysis, word embedding and topic modeling have been applied to many tasks. Considering ICD code assignment as an example, many methods have been proposed to estimate ICD codes from clinical records [39,5,31,22], aiming to accelerate diagnoses. Other tasks, like clustering clinical data and predicting treatments, can also be addressed with NLP techniques [4,19,11].

Wasserstein learning and its application in NLP. The Wasserstein distance has proven useful in distribution estimation [9], alignment [44] and clustering [1,43,14], avoiding over-smoothed intermediate interpolation results. It can also be used as a loss function when learning generative models [12,3]. The main bottleneck in applying Wasserstein learning is its high computational complexity. This problem has been greatly eased since the Sinkhorn distance was proposed in [13]. Based on the Sinkhorn distance, we can apply iterative Bregman projection [7] to approximate the Wasserstein distance with near-linear time complexity [2]. Many more complicated models have been proposed based on the Sinkhorn distance [16,36]. Focusing on NLP tasks, the methods in [26,21] use the same framework as ours, computing underlying distances based on word embeddings and measuring the distance between documents in the Wasserstein space. However, the work in [26] does not update the pretrained embeddings, while the model in [21] lacks a hierarchical architecture for topic modeling.

Model distillation. As a kind of transfer-learning technique, model distillation was originally proposed to learn a simple model (the student) under the guidance of a complicated model (the teacher) [20]. When learning the target distilled model, a regularizer based on the smoothed outputs of the complicated model is imposed. Essentially, the distilled complicated model provides the target model with some privileged information [29]. This idea has been widely used in many applications, e.g., textual data modeling [23], healthcare data analysis [10], and image classification [18]. Beyond transfer learning, the idea of model distillation has been extended to control the learning process of neural networks [34,35,42]. To the best of our knowledge, our work is the first attempt to combine model distillation with Wasserstein learning.

Experiments

To demonstrate the feasibility and the superiority of our distilled Wasserstein learning (DWL) method, we apply it to the analysis of patient admission records and compare it with state-of-the-art methods. We consider a subset of the MIMIC-III dataset [25], containing 11,086 patient admissions covering 56 diseases and 25 procedures; each admission is represented as a sequence of ICD codes of the diseases and procedures. Using the different methods, we learn the embeddings of the ICD codes and the topics of the admissions and test them on three tasks: mortality prediction, admission-type prediction, and procedure recommendation. For all the methods, in each task we use 50% of the admissions for training, 25% for validation, and the remaining 25% for testing. For our method, the embeddings are obtained by a linear projection of one-hot representations of the ICD codes, similar to Word2Vec [30] and Doc2Vec [27], and the loss function L is the squared loss. The hyperparameters of our method are set via cross-validation: the batch size s = 256, β = 0.01, ε = 0.01, the number of topics K = 8, the embedding dimension D = 50, and the learning rate ρ = 0.05.
The number of epochs I is set to 5 when the embeddings are initialized by Word2Vec, and to 50 when training from scratch. The distillation parameter is set empirically to τ = 0.5; its influence on the learning results is shown in the Supplementary Material.

Admission classification and procedure recommendation

The admissions of patients often have a clustering structure. According to their seriousness, admissions are categorized into four classes in the MIMIC-III dataset: elective, emergency, urgent and newborn. Additionally, diseases and procedures may lead to mortality, and the admissions can be clustered according to whether the patients die during their admissions. Even if learned in an unsupervised way, the proposed embeddings should reflect the clustering structure of the admissions to some degree. We test our DWL method on the prediction of admission type and mortality. We can either represent the admissions by the distributions of their codes and calculate the Wasserstein distance between them, or represent them by the average pooling of the code embeddings and calculate the Euclidean distance between them. A simple KNN classifier can be applied under these two metrics, and we consider K = 1 and K = 5. We compare the proposed method with the following baselines: (i) bag-of-words-based methods like TF-IDF [17] and LDA [8]; (ii) word/document embedding methods like Word2Vec [30], GloVe [33], and Doc2Vec [27]; and (iii) the Wasserstein-distance-based method in [26].

We tested the various methods in 20 trials. In each trial, we trained the different models on a subset of the training admissions, tested them on the same testing set, and calculated the averaged results and their 90% confidence intervals. The classification accuracies of the various methods are shown in Table 1. Our DWL method is superior to its competitors on classification accuracy. Besides this encouraging result, we observe two interesting and important phenomena. First, for our DWL method, the model trained from scratch has performance comparable to the model fine-tuned from Word2Vec's embeddings, which means that our method is robust to initialization when exploring the clustering structure of admissions. Second, compared with measuring the Wasserstein distance between documents, representing the documents by the average pooling of embeddings and measuring their Euclidean distance obtains comparable results. Because measuring Euclidean distance has much lower complexity than measuring Wasserstein distance, this implies that although our DWL method is time-consuming in the training phase, the trained models can easily be deployed on large-scale data in the testing phase.

The third task is recommending procedures according to the diseases in the admissions. In our framework, this task can be solved by establishing a bipartite graph between diseases and procedures based on the Euclidean distance between their embeddings. The proposed embeddings should reflect the clinical relationships between procedures and diseases, such that procedures are assigned to diseases a short distance away. For the m-th admission, we recommend a list of procedures of length L, denoted E_m, based on its diseases, and evaluate the recommendations against the ground-truth list of procedures, denoted T_m. In particular, given {E_m, T_m}, we calculate the top-L precision, recall and F1-score as follows:

  P_m = |E_m ∩ T_m| / L,  R_m = |E_m ∩ T_m| / |T_m|,  F1_m = 2 P_m R_m / (P_m + R_m).
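A direct implementation of these three metrics (a sketch using made-up ICD codes for illustration):

```python
def top_l_metrics(recommended, truth, L):
    """Top-L precision, recall, and F1 for one admission, as defined above."""
    hits = len(set(recommended[:L]) & set(truth))
    p = hits / L
    r = hits / len(truth) if truth else 0.0
    f1 = 2 * p * r / (p + r) if (p + r) > 0 else 0.0
    return p, r, f1

# Toy example: 3 recommended procedures, 2 in the ground truth, 1 hit.
print(top_l_metrics(["p_8872", "p_3615", "p_9904"], ["p_8872", "p_9604"], L=3))
```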
Table 2 shows the performance of the various methods with L = 1, 3, 5. We find that although our DWL method is not as good as Word2Vec when the model is trained from scratch, which may be caused by the much smaller number of epochs we executed, it indeed outperforms the other methods when the model is fine-tuned from Word2Vec. Figure 2. KNN graph of the learned ICD code embeddings; an enlarged version is shown in the Supplementary Material. The ICD codes related to diseases carry the prefix "d" and their nodes are blue, while those related to procedures carry the prefix "p" and their nodes are orange. (b-d) Three enlarged subgraphs corresponding to the red frames in (a). In each subfigure, the nodes/dots in blue are diseases while the nodes/dots in orange are procedures. Rationality Analysis To verify the rationality of our learning result, in Fig. 2 we visualize the KNN graph of diseases and procedures. We can find that the diseases in Fig. 2(a) have an obvious clustering structure while the procedures are dispersed according to their connections with matched diseases. Furthermore, the three typical subgraphs in Fig. 2 can be interpreted from a clinical viewpoint. Figure 2(b) clusters cardiovascular diseases like hypotension (d_4589, d_45829) and hyperosmolality (d_2762) with their common procedure, i.e., diagnostic ultrasound of the heart (p_8872). Figure 2(c) clusters coronary artery bypass (p_3615) with typical postoperative responses like hyperpotassemia (d_2767), cardiac complications (d_9971) and congestive heart failure (d_4280). Figure 2(d) clusters chronic pulmonary heart disease (d_4168) with its common procedures like cardiac catheterization (p_3772) and abdominal drainage (p_5491), and the procedures are connected with potential complications like septic shock (d_78552). The rationality of our learning result can also be demonstrated by the topics shown in Table 3. According to the top-3 ICD codes, some topics have obvious clinical interpretations. Specifically, topic 1 is about kidney disease and its complications and procedures; topics 2 and 5 are about serious cardiovascular diseases; topic 4 is about diabetes and its cardiovascular complications and procedures; topic 6 is about the diseases and procedures of neonates. We show the map between ICD codes and the corresponding diseases/procedures in the Supplementary Material. Conclusion and Future Work We have proposed a novel method to jointly learn Euclidean word embeddings and a Wasserstein topic model in a unified framework. An alternating optimization method was applied to iteratively update the topics, their weights, and the embeddings of words. We introduced a simple but effective model distillation method to improve the performance of the learning algorithm. Testing on clinical admission records, our method shows superiority over other competitive models on various tasks. Currently, the proposed learning method shows potential for more-traditional textual data analysis (documents), but its computational complexity is still too high for large-scale document applications (because the vocabulary for real documents is typically much larger than the number of ICD codes considered here in the motivating hospital-admissions application). In the future, we plan to further accelerate the learning method, e.g., by replacing the Sinkhorn-based updating procedure with variants like the Greenkhorn-based updating method [2]. It should be noted that although we set the distillation parameter empirically, as [20,29] did, we give a reasonable range: τ should be smaller than 1 (to achieve distillation) and larger than 0.25 (to avoid oversmoothness). We will study the setting of the parameter in our future work.
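The τ range above can be read through a simple smoothing lens. One common way to realize distillation-style smoothing is to raise the teacher's output probabilities to the power τ and renormalize, so that τ < 1 flattens the distribution and a very small τ pushes it toward uniform (over-smoothing). Whether this matches the paper's exact formulation is an assumption, so treat the sketch as illustrative only.

```python
import numpy as np

def smooth_distribution(p, tau=0.5):
    """Power-smoothing of a teacher distribution: tau = 1 leaves p unchanged,
    tau -> 0 drives it toward uniform. The bounds 0.25 < tau < 1 quoted in
    the text then trade distillation strength against over-smoothness.
    (Assumed form, not necessarily the paper's exact recipe.)"""
    q = np.asarray(p, dtype=float) ** tau
    return q / q.sum()

p = np.array([0.7, 0.2, 0.1])
for tau in (1.0, 0.5, 0.25):
    print(tau, smooth_distribution(p, tau).round(3))
```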
Sentiment analysis on the Twitter dataset Besides the MIMIC-III dataset, we compared our method against the Wasserstein-distance-based method [26] on sentiment analysis using the Twitter dataset from that paper. Our method obtains comparable results, i.e., a 28.92 ± 0.14% testing error, which is slightly lower than that in [26]. The enlarged graph of ICD codes Fig. 2(a) in the paper is enlarged and shown below for better visual effect. The map between ICD codes and diseases/procedures is attached as well.
2018-09-12T23:10:23.000Z
2018-09-12T00:00:00.000
{ "year": 2018, "sha1": "a5bc217e205b70b03b512bc7c716f7493d270fad", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "e520328ce3bc120985e3829031fcd5a75f406b7e", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
257468971
pes2o/s2orc
v3-fos-license
The Potent G-Quadruplex-Binding Compound QN-302 Downregulates S100P Gene Expression in Cells and in an In Vivo Model of Pancreatic Cancer The naphthalene diimide compound QN-302, designed to bind to G-quadruplex DNA sequences within the promoter regions of cancer-related genes, has high anti-proliferative activity in pancreatic cancer cell lines and anti-tumor activity in several experimental models for the disease. We show here that QN-302 also causes downregulation of the expression of the S100P gene and the S100P protein in cells and in vivo. This protein is well established as being involved in key proliferation and motility pathways in several human cancers and has been identified as a potential biomarker in pancreatic cancer. The S100P gene contains 60 putative quadruplex-forming sequences, one of which is in the promoter region, 48 nucleotides upstream from the transcription start site. We report biophysical and molecular modeling studies showing that this sequence forms a highly stable G-quadruplex in vitro, which is further stabilized by QN-302. We also report transcriptome analyses showing that S100P expression is highly upregulated in tissues from human pancreatic cancer tumors, compared to normal pancreas material. The extent of upregulation is dependent on the degree of differentiation of tumor cells, with the most poorly differentiated, from more advanced disease, having the highest level of S100P expression. The experimental drug QN-302 is currently in pre-IND development (as of Q1 2023), and its ability to downregulate S100P protein expression supports a role for this protein as a marker of therapeutic response in pancreatic cancer. These results are also consistent with the hypothesis that the S100P promoter G-quadruplex is a potential therapeutic target in pancreatic cancer at the transcriptional level for QN-302. Introduction Pancreatic cancer, of which the most common form by far is pancreatic ductal adenocarcinoma (PDAC), is one of the most intractable of all human cancers [1][2][3]. Early stages of the disease are largely asymptomatic, so presentation is commonly at stages 2-4. Surgery is possible in only a small percentage of cases, and therapeutic intervention by single-agent or combination chemotherapy only rarely produces an increase in life expectancy beyond 1-3 years [4]. This dismal overall picture has not significantly changed in 30 years. Even though many small-molecule experimental drugs and immunotherapies have reached the clinical trial stage, remarkably few have had a significantly greater effect on survival than gemcitabine, for many years the (palliative) standard of care/drug of choice in the clinic [5][6][7][8]. To date, no targeted therapy has received clinical approval in PDAC, although the new generation of KRAS G12D inhibitors may have promise [9][10][11][12][13].
Genomic studies of PDAC have demonstrated the complexity and heterogeneity of the disease [14][15][16][17], which are major factors hindering effective precision medicine approaches to treatment [18,19] and the development of clinically useable biomarkers [20]. We have adopted an approach to the development of an effective small-molecule therapy for PDAC based on targeting the prevalence of discrete "signal" quadruplex sequences within cancer genes, as described below. The compound QN-302 (Figure 1A), a tetra-substituted naphthalene diimide derivative, has previously been disclosed [21] to have single-digit nM anti-proliferative activity in a panel of human pancreatic ductal adenocarcinoma (PDAC) cell lines and significant anti-tumor activity in the MIA PaCa-2 xenograft model for PDAC, with a 91% reduction in tumor volume relative to the vehicle control arm (p = 0.008) for twice-weekly dosing over a four-week period. A statistically significant increase in survival for treated animals (p = 0.016) was also observed in the KPC genetically engineered mouse model for PDAC. QN-302 has good bioavailability at therapeutic doses and is currently in advanced pre-clinical development with Qualigen Therapeutics Inc. It has also recently (Jan 2023) been granted Orphan Drug Designation status by the FDA in the USA for the treatment of pancreatic cancer. Figure 1. (A) The structure of QN-302 (…[lmn][3,8]phenanthroline-1,3,6,8(2H,7H)-tetraone). (B) DNA sequence in the promoter region of the S100P gene found to form a G-quadruplex. The G4 sequence itself is bounded within the red box and the individual G-tracts are highlighted in red. The proposed mode of action of QN-302 involves high-affinity (ca 1 nM) binding to and stabilization of G-quadruplex (G4)-forming sequences [22]. G4s are formed by the folding into higher-order structures of short repetitive DNA and RNA guanine-rich sequences [23][24][25]. These are over-represented in the promoter regions of many cancer-related and proliferative genes [26][27][28][29][30]. This stabilization is believed to inhibit transcription factor binding and the progression of RNA polymerase, and thus directly results in downregulation of gene expression at the transcriptional level. Transcriptome (RNA-seq) analyses of the effects of QN-302 [21] and the related compound CM03 [31] in MIA PaCa-2 cells have confirmed this hypothesis and have revealed a pattern of susceptible genes involved in cancer-associated pathways. This has also confirmed that the downregulated genes are over-represented with G4 sequences in their promoters. Notable G4-containing genes downregulated by QN-302 are in the mTOR, axon guidance, VEGF, insulin and Wnt/β-catenin pathways, which are implicated in PDAC disease and progression. The major gene changes in gemcitabine-treated cells have also been mapped and shown to generally not affect these G4-containing genes sensitive to QN-302 [21,32].
Consequently, PDAC cells with induced gemcitabine resistance retain their sensitivity to these G4 ligands. We report here on a further analysis of the transcriptomic data in PDAC cells, which revealed that the S100P gene is prominent among those most downregulated by QN-302 treatment. Other in vitro and in vivo studies described here have revealed it to be a gene that is knocked down by QN-302 not only in PDAC cells but also in a PDAC xenograft model. The expression of this gene, which codes for a small (10.4 kDa) calcium-binding protein [33,34], has been found to be highly upregulated in 70.4% of a cohort of 176 human PDAC patients [35], and correlates with disease status. S100P induces the MAPK/ERK as well as the PI3K/AKT growth-promoting pathways, and S100P knock-out leads to P53-mediated cancer cell death. It has been proposed as a plausible biomarker for diagnostic purposes and possibly also as a therapeutic target in PDAC, as well as in colorectal cancer [33,34,[36][37][38][39]. Thus, the central question addressed in this study has been whether QN-302 treatment in cells and in vivo has effects on the S100P gene and its expressed protein that support its suitability as a prognostic biomarker for this drug, although the mechanistic details are yet to be fully established. RNA-seq Analysis of RNA from Cell-Based Studies RNA-seq analysis of the effects of 24 h exposure of QN-302 on MIA PaCa-2 cells has previously shown that the mRNA levels of prominent cancer-pathway quadruplex-containing genes such as GLI1, MAPK11 and BCL-2 were downregulated by 1-2 fold [21]. Table 1 lists several other PDAC-related genes, selected from the list of 229 genes on the cancer genetics web site [40], as also having significantly downregulated expression in this RNA-seq data set. The genes S100P and CX3CL1 are the most prominent in this set, and the former was selected for further study in view of its well-established upregulation role in PDAC (see below), which was also supported by the patient-derived transcriptome data presented in Table 1 and below. The downregulation of S100P expression, by 3.23 log2 fold (89.8% downregulation relative to controls), is within the top 0.1% cohort of gene changes in the complete MIA PaCa-2 transcriptome. Figure 2A-C show the results of quantitation of mRNA and protein expression for S100P for weekly and bi-weekly dosing using the MIA PaCa-2 xenograft model. In each case, as for the vehicle control arm, data are available for three mice. S100P protein expression was downregulated by 60% (p < 0.05) for weekly dosing and by 75% (p < 0.01) for twice-weekly dosing (Figure 2A,B), implying a dose-dependent effect. Similar reductions in mRNA levels were also observed (Figure 2C).
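For readers cross-checking the fold-change arithmetic: a log2 fold change of −3.23 corresponds to roughly 1 − 2^(−3.23) ≈ 0.89, i.e., close to the ~90% downregulation quoted above (the exact percentage depends on the normalized counts, so small differences are expected). A one-line helper:

```python
def log2fc_to_percent_down(log2fc):
    """Percent downregulation implied by a (negative) log2 fold change."""
    return (1.0 - 2.0 ** (-abs(log2fc))) * 100.0

print(round(log2fc_to_percent_down(-3.23), 1))  # ~89.3%
```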
Figure 2. (A) Western blots of S100P protein and GAPDH control, for protein extracted from the results of the xenograft experiment with MIA PaCa-2 implanted tumors and a vehicle control arm, at day 28 of the study. The number at the top of each individual column represents an individual mouse. QW: once-weekly dosing. BiW: twice-weekly dosing. (B) Quantitation of the Western blot data. Statistical significances are indicated by * p < 0.05, ** p < 0.01. (C) Quantitation of the qPCR data for the S100P gene in the treated vs. vehicle control animals. Statistical significances are indicated by * p < 0.05, ** p < 0.01 (one-way ANOVA test). RNA-seq Analysis of Human PDAC Tumor Tissues To identify direct PDAC-related targets and/or potential biomarkers for QN-302, RNA-seq was performed on a set of four PDAC patient samples: two from male patients diagnosed with poorly differentiated adenocarcinoma with invasion and two from male patients diagnosed with moderately differentiated ductal adenocarcinoma with invasion. Table 2 shows a list of PDAC patient samples with their diagnoses, obtained from the University of Liverpool, four of which were chosen for RNA-seq. Normal pancreatic samples from three healthy male individuals, age-matched to the PDAC patients, were obtained commercially from OriGene and used in the differential gene expression analysis (Table 2). The full RNA-seq data sets determined in this analysis are shown in Table S1. Figure 3A shows the numbers of differentially expressed genes (DEGs) in poorly and moderately differentiated PDAC, divided into four subsets as indicated. To identify the upregulated genes in PDAC which could be targeted by QN-302, DEGs with log2FC ≥ 1 and FDR < 0.05 (Strong UP) in poorly and moderately differentiated PDAC were intersected with DEGs with log2FC ≤ −1 and FDR < 0.05 (Strong DOWN) in the QN-302-dosed cell data. The Venn diagram shows the shared and unshared numbers of genes between the three conditions (Figure 3B). There are eight genes common to all three conditions, which are involved in PDAC and/or other cancers. The genes TSPAN1, KRT16 and S100P are highly upregulated in PDAC and downregulated by QN-302, all of which could be considered as potential therapeutic biomarkers. Since S100P has previously been extensively studied as a potential biomarker in PDAC diagnosis (see Discussion), we focused on it in this study, while the other genes may be considered in a future study. KEGG pathway enrichment analysis shows similar top-affected signaling pathways between poorly and moderately differentiated PDAC, such as ECM-receptor interaction, focal adhesion, and axon guidance (Figure 4 and Table S1). Interestingly, two of these signaling pathways are also targeted for downregulation by QN-302 [21]: 1. the axon guidance pathway, common to both PDAC stages; and 2. the Rap1 pathway, in poorly differentiated PDAC.
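The intersection logic just described is easy to express in code. Below is a minimal pandas sketch with toy tables standing in for the real DEG lists; the gene names are real, but all log2FC/FDR values here are hypothetical placeholders.

```python
import pandas as pd

def strong_set(df, up=True):
    """Genes with |log2FC| >= 1 and FDR < 0.05, in the requested direction."""
    mask = (df["log2FC"] >= 1) if up else (df["log2FC"] <= -1)
    return set(df.loc[mask & (df["FDR"] < 0.05), "gene"])

# Toy tables standing in for the real DEG lists (hypothetical values).
poor = pd.DataFrame({"gene": ["S100P", "TSPAN1", "GAPDH"],
                     "log2FC": [4.1, 2.3, 0.1], "FDR": [1e-6, 1e-4, 0.9]})
mod  = pd.DataFrame({"gene": ["S100P", "TSPAN1", "KRT16"],
                     "log2FC": [3.0, 1.5, 2.2], "FDR": [1e-5, 0.01, 1e-3]})
qn   = pd.DataFrame({"gene": ["S100P", "TSPAN1", "KRT16"],
                     "log2FC": [-3.23, -1.4, -1.1], "FDR": [1e-8, 1e-3, 0.02]})

# Strong UP in both tumor grades, Strong DOWN after QN-302 dosing.
shared = strong_set(poor) & strong_set(mod) & strong_set(qn, up=False)
print(shared)  # candidate biomarkers upregulated in PDAC and hit by QN-302
```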
Bioinformatics Analyses The QGRS Mapper program located 28 putative quadruplex sequences in the coding strand and 32 in the template strand. The majority of these have low G-scores, i.e., a reduced likelihood of forming stable quadruplexes. However, of these a total of six have plausible stable G4 sequences (Table 3). All bar one occur in intronic regions. The exception occurs within the S100P promoter, 48 nucleotides upstream from the transcription start site. The same sequence also occurs in the highly homologous mouse S100P gene (ENSG00000163993). This putative quadruplex sequence on the template strand, and its C-rich complement on the coding strand, were used for subsequent biophysical evaluations. Biophysical Studies All experiments reported here were performed with the putative G-quadruplex promoter sequence from S100P, as detailed above. UV thermal difference spectroscopy was initially performed to characterize the structure formed in 10 mM lithium cacodylate, 100 mM potassium chloride buffer at pH 7.0 (Figure 5A). A TDS with positive peaks at 240 and 275 nm and a negative peak at 295 nm is consistent with a G-quadruplex structure [41]. CD spectroscopy in the presence of 100 mM KCl, NaCl or LiCl gave spectra of the same general form (Figure 5B); the positive bands at 210 and 265 nm and the negative band at 245 nm are consistent with a predominantly parallel G-quadruplex structure [42], while a shoulder at 295 nm also indicates a small proportion that is antiparallel. CD melting experiments on S100P in buffer containing 10 mM lithium cacodylate and 100 mM KCl at pH 7.0 gave a transition at 73 °C, indicating that the G-quadruplex structure formed would be highly stable under physiological conditions. Adding 10 µM (1 eq) of QN-302 to the S100P G4 resulted in a ΔTm of 7.4 ± 0.2 °C, and 20 µM (2 eq) and 50 µM (5 eq) had correspondingly higher ΔTm values of 17.0 ± 0.1 and 20.0 ± 1.3 °C, respectively, where at 50 µM the melting was at the limit of what can be accurately measured under these experimental conditions (Figure 5C). The largest increase in melting temperature was achieved with 2:1 ligand:DNA stoichiometry, and there was not much further increase in melting temperature on addition of five equivalents of ligand. These data (Figure 5C) indicate that QN-302 has a strong stabilising effect on the G-quadruplex structure formed. Table 3. Top predicted quadruplex sequences in the S100P gene, as found using the QGRS Mapper program (https://bioinformatics.ramapo.edu/QGRS/index.php), accessed on 12 June 2022. The G4 score, as defined in [43], is a measure of the likelihood of the sequence forming a stable quadruplex under physiological conditions. Only those sequences with a G-score >35 are listed here. G-tracts are underlined. The highlighted sequence, in the S100P promoter, is discussed further below.
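For orientation, a QGRS-style search can be approximated with a regular expression for four G-tracts (three or more G's each) separated by short loops. The sketch below uses a 1-7 nt loop window for the minimal motif, whereas the QGRS Mapper search in this work allowed loops of up to 12 nt; applied to the S100P promoter sequence used in the biophysical studies, it recovers the four G-tracts and the single-nucleotide T loop discussed later.

```python
import re

# Minimal putative-G4 motif: four runs of >=3 G's with loops of 1-7 nt.
G4_MOTIF = re.compile(r"(G{3,})(\w{1,7}?)(G{3,})(\w{1,7}?)(G{3,})(\w{1,7}?)(G{3,})")

s100p_pqs = "AGGGTGGGACAGTGGGGTTGGGA"   # promoter sequence from the Methods
m = G4_MOTIF.search(s100p_pqs)
if m:
    print("putative G4:", m.group(0))
    print("loops:", m.group(2), m.group(4), m.group(6))  # T, ACAGT, TT
```

Note that a regex only flags candidate sequences; the G-score ranking and, ultimately, the biophysical experiments above decide whether a stable quadruplex actually forms.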
Molecular Modeling Docking studies onto the parallel G-quadruplex determined that the core of the QN-302 molecule lies in an energetically favorable position at the center of the terminal G-quartets and is positioned directly above the central electronegative channel of the quadruplex. The QN-302 side chains access the four grooves, and the protonated nitrogen atoms in the morpholino and the pyrrolidine side chains have electrostatic interactions with the phosphate backbone or the N3 atoms in the guanines within the top quartet. The benzene ring in the benzyl-pyrrolidine side chain acts as an extension of the planar naphthalene diimide chromophore core and makes π-stacking interactions with the imidazole ring in the guanines. Very similar interactions are also observed in the generated complex between QN-302 and the G4-duplex junction (Figure 6A-D). The only difference between the pure quadruplex and the junction structure is that in the latter one of the morpholino groups has interactions with the N3 atom from the duplex DNA nucleotide at the junction. It is notable that QN-302 exploits the same interactions at the 3′ terminal quartet (as in the telomeric G4) as well as on the 5′ end at the G4-duplex junction. Discussion The results presented here show that expression of the S100P gene at the mRNA and protein levels in a PDAC cell line and in a PDAC xenograft model is highly downregulated by the quadruplex-binding compound QN-302. We also show that S100P mRNA is over-expressed in tumors from human PDAC patients. This is consistent with the concept that the extent of enhanced expression correlates with disease progression (and prognosis) for moderately and poorly differentiated PDAC human tumors, albeit for the small tumor sample size in the present study. This result is in accord with numerous other studies [33][34][35][36][37][38][39] and reflects the role that appears to be played by S100P in PDAC, where it promotes proliferation, tumorigenesis, invasion and progression [33,34]. We conclude that the data support previous studies (for example, refs [36][37][38][39]) showing that S100P is a viable biomarker for PDAC. Monitoring of S100P levels may also be a useful predictor of tumor response to treatment with the experimental drug QN-302.
The potential advantages of S100P as a biomarker in PDAC diagnosis have been extensively documented [33][34][35][36][37]39], comparing favorably to the CA19-9 antigen, which can produce both false positive and false negative indications of disease [44,45]. Evidence is also presented that the putative quadruplex sequence in the promoter region of the S100P gene [46] folds into a highly stable G-quadruplex in solution conditions of physiological potassium ion concentration. Circular dichroism analysis indicates that the quadruplex has a parallel topology that is retained on binding QN-302. This compound imparts further stability to this quadruplex, as shown by an increase in melting temperature. We suggest that the molecular model presented here is representative of a more generalized parallel quadruplex-ligand complex structure, in that the similarity of the mode of QN-302 binding to that found in naphthalene diimide quadruplex complex crystal structures [47,48] gives plausibility to the model, even in the absence of detailed experimental structural data on this S100P quadruplex or its QN-302 complex. It is notable that the modelled geometry of the duplex-quadruplex hybrid complex [49] is remarkably close to that recently reported by NMR methods [50]. The S100P quadruplex sequence contains three putative loop regions, of which one, comprising the single nucleotide T, has the most frequent occurrence in quadruplex loops [51], where its geometry is normally only consistent with a parallel quadruplex topology. The biophysical data are thus consistent with (but do not prove) the hypothesis that these changes in S100P expression are a consequence of QN-302 binding to and stabilizing this promoter G-quadruplex. Definitive proof of a direct causal relationship, as compared to correlative evidence, must await further studies. There are many reports in the literature of analogous downregulatory effects on the transcription of other promoter quadruplex-containing genes that have been ascribed to quadruplex binding, notably for the c-MYC [52,53], h-TERT [54,55] and c-KIT [56,57] oncogenes. Transcriptional suppression by QN-302-induced G-quadruplex stabilization may be due in part to physical blocking of the movement of RNA polymerase along the template strand, and in part to competition with normal transcription factor binding at this promoter site, as has been demonstrated for the ligand pyridostatin [58]. The S100P promoter quadruplex box is known to be a binding site for the SP/KLF transcription factors [46]. Small-molecule [59] and antibody [60] approaches successfully targeting S100P have previously been reported. For example, S100P knockdown by siRNA resulted in apoptosis in endometrial epithelial cells [61]. It is hoped that future studies, beyond the scope of this manuscript, will establish the causal basis of the relationship between QN-302 binding to the S100P promoter G4 and its in vitro/in vivo effects. Elevated S100P expression has been found in several other cancers, including colorectal [38], lung [62,63] (where it correlates with the activity of the oncogenic PI3K/AKT signaling pathway), and gall bladder cancers [64]. Evidence for S100P being a viable anticancer target is also provided by an aptamer approach in colorectal cancer, using an aptamer with high affinity to and selectivity for the S100P protein. This aptamer has shown high activity in cells and in a xenograft model for this disease [65]. Materials and Methods QN-302 (>98% purity, as judged by LCMS) was used as the free base.
Its synthesis and purification have been previously described [21]. RNA-seq Analysis of RNA from Cell-Based Studies These studies have been previously reported in detail [21], and the process of determining changes in transcription upon exposure of MIA PaCa-2 cells to QN-302 has been fully described. The RNA-seq data are available in the GEO public functional genomics data repository, as GSE151741 (https://www.ncbi.nlm.nih.gov/geo/), accessed on 12 June 2022. RNA-seq Analysis of Tumor Material from Xenograft Studies This employed the MIA PaCa-2 xenograft model for PDAC. All animal experiments in this section were performed at AXIS BIO Discovery Services Northern Ireland, in accordance with the UK Home Office Animals Scientific Procedures Act 1986 and the United Kingdom Co-ordinating Committee on Cancer Research Guidelines for the Welfare and Use of Animals in Cancer Research, and with the approval of the AXISBIO Animal Ethics Committee. Mice had access to food and water ad libitum. For the therapy studies [21], female athymic nude mice (2-3 months old, weighing 20-25 g) were injected subcutaneously with 10^7 MIA PaCa-2 cells in Matrigel in the right flank. When the tumors were established (approximately 13 days, mean size 0.05 cm^3), the mice were randomly assigned into treatment groups with eight mice per group. Compound QN-302 was administered IV, in sterile PBS (pH 6), plus a few drops of 0.1 mM HCl if needed to ensure complete solubilization, on a twice-weekly basis, for 28 days. The vehicle control used was saline only, also with twice-weekly dosing. Tumor size was measured 3 times weekly by caliper using the π-based ellipsoid volume formula (length × width × height × π/6; see the short sketch below), and the mice were also weighed at the same time. Animals were examined daily for any signs of distress or toxicity from the treatments. Results of the therapy experiments are detailed elsewhere [21]. Quantification of expressed S100P protein and mRNA within tumor tissue was undertaken, using tissue taken from three control and three treated animals sacrificed at day 28 of the xenograft experiment. Western blotting. Frozen tumor tissue was thawed and lysed in RIPA (radio-immunoprecipitation assay) buffer (CST#9806) supplemented with phosphatase and protease inhibitors (CST#5872) and PMSF protease inhibitor (CST#8553). Lysate was isolated and total protein quantified for each sample using a commercially available assay kit. 30 µg of total protein was loaded onto a 4-20% polyacrylamide gel and electrophoresed. Separated protein was transferred to a PVDF membrane and blocked in 5% BSA. Primary antibodies were diluted 1:1000 in 5% BSA and incubated at 4 °C overnight with constant rolling. Membranes were washed in TBS-T (a mixture of tris-buffered saline and polysorbate 20) before being incubated with secondary antibody (diluted 1:3000 in 5% BSA) at room temperature for 1 h. Following washes in TBS-T, bands were observed following addition of SignalFire Plus ECL reagent (CST#12630). Images were taken using a GeneGnome imaging system (Syngene, UK). Densitometric analysis of the bands was carried out using Genesys image analysis software (part of the GeneGnome system). Statistical significances were analyzed using a one-way ANOVA test with a Bonferroni correction, where * p < 0.05 and ** p < 0.01.
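As referenced above, the caliper-based ellipsoid volume formula is straightforward; a tiny sketch with an illustrative measurement:

```python
import math

def ellipsoid_volume(length_cm, width_cm, height_cm):
    """Pi-based ellipsoid formula used for caliper measurements:
    V = length x width x height x pi / 6."""
    return length_cm * width_cm * height_cm * math.pi / 6

# A hypothetical 0.6 x 0.5 x 0.4 cm tumor: ~0.063 cm^3
print(round(ellipsoid_volume(0.6, 0.5, 0.4), 3))
```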
qPCR. RNA was isolated from frozen tumor tissue using a GeneJET RNA isolation kit (Thermo Fisher; K0731). Isolated RNA was subjected to a 1-step RT-qPCR process using a Superscript III Platinum kit (Thermo Fisher, Waltham, MA, USA; 11736051). Reactions were set up in 96-well format in the LightCycler 480 apparatus and run according to the manufacturer's instructions. HPRT (hypoxanthine-guanine phosphoribosyltransferase) was included as a reference gene. Reactions for every gene were set up in triplicate, including non-template controls. For analysis, CT (cycle threshold) values were used, and ΔCT, ΔΔCT, and RQ (relative expression) values were calculated according to the 2^-ΔΔCT method. Statistical significances were analyzed using Student's t test (GraphPad Inc., San Diego, CA, USA), where * p < 0.05 and ** p < 0.01. RNA-seq Analysis of Human PDAC Tumor Tissues Transcriptome analyses were performed on a supplied panel of RNA samples, comprising RNA from a set of poorly differentiated as well as from more highly differentiated human pancreatic tumors, and the results were compared with normal pancreas expression data. The RNA-seq data are available in the GEO public functional genomics data repository (https://www.ncbi.nlm.nih.gov/geo/), as accessions GSE226307 and GSM7071068-GSM7071074, deposited on 28 February 2023. The normal pancreas samples were obtained commercially from OriGene Technologies GmbH, Germany (Cat #: CR560569, CR562915, CR561640), and the averaged gene expression data of the three samples were used in the analysis. The PDAC RNA samples were obtained from pancreatic tumor tissue with the Maxwell® RSC SimplyRNA Tissue Kit (Promega Ltd., Loughborough, UK, Cat # AS1340). In brief, the tissue was snap frozen in the operating theatre following tumor removal and then homogenized in chilled 1-thioglycerol before adding 200 µL to a Maxwell RSC Cartridge and running on a Maxwell RSC instrument according to the manufacturer's instructions. The samples were obtained with ethical approval from the NRES Committee North-West-Liverpool Central, MREC 07/H1005/87. RNA quality (RIN > 7.0) was checked with an Agilent 2100 Bioanalyzer RNA 6000 Nano Chip, and RNA concentration was quantified using a Qubit® fluorometer (ThermoFisher, Waltham, MA, USA) and Qubit® RNA HS Assay Kit (ThermoFisher, cat #: Q32852). RNA-seq libraries were then generated using the KAPA mRNA HyperPrep Kit for Illumina® following the manufacturer's instructions and sequenced using an Illumina NextSeq 500 instrument (undertaken at the UCL Genomics Facility). The sequence data were demultiplexed and converted to FASTQ files using Illumina's bcl2fastq Conversion Software (v2.19). Adapter contamination and poor-quality sequences were removed from the FASTQ files using the program Trimmomatic (v0.36) [66]. FASTQ files were then mapped to the human reference genome GRCh38 using the RNA-seq aligner STAR (v2.5b: https://github.com/alexdobin/STAR), accessed on 12 February 2020. The JE suite [67] was used to estimate duplication levels, deduplicate using unique molecular identifiers, and mark the reads that were the result of PCR amplification. Reads per transcript were then counted using the program featureCounts [68] (v1.4.6p5), followed by normalization, modeling and differential expression analysis using the SARTools (v1.3.2) package [69], accessed on 14 February 2020.
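The 2^-ΔΔCT calculation mentioned for the qPCR analysis above is compact enough to show directly; the CT values below are made-up illustrations (S100P as target, HPRT as reference):

```python
def relative_expression(ct_target_treated, ct_ref_treated,
                        ct_target_control, ct_ref_control):
    """RQ by the 2^-ddCT method: dCT = CT(target) - CT(reference),
    ddCT = dCT(treated) - dCT(control), RQ = 2**(-ddCT)."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    return 2.0 ** -(d_ct_treated - d_ct_control)

# Hypothetical CTs: treated dCT = 8.0, control dCT = 6.0 -> RQ = 0.25,
# i.e. a 75% reduction relative to the vehicle control.
print(relative_expression(26.0, 18.0, 24.0, 18.0))
```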
Bioinformatics Analyses The DNA sequence of the human S100P gene was extracted from the ENSEMBL genome browser, release 108 (https://www.ensembl.org/, accessed on 12 June 2022), as entry ENSG00000163993. The complete gene sequence contains 4493 nucleotides. The program QGRS Mapper [45] (https://bioinformatics.ramapo.edu/QGRS/index.php), accessed on 12 June 2022, was used to locate putative quadruplex sequences in both the sense (coding) and antisense (template) strands of the complete S100P gene. A cut-off of a maximum loop size of 12 nucleotides was used in the searches. Biophysical Studies The putative G-quadruplex-forming sequence (PQS) from the S100P promoter, as found from the bioinformatics analyses, 5′-d[AGGGTGGGACAGTGGGGTTGGGA]-3′, was purchased from Eurogentec UK and was supplied RP-HPLC purified and as a dry solid. The DNA was initially dissolved as a stock solution in purified water (1.77 mM); further dilutions were carried out in the respective buffer. Samples were thermally annealed in a heat block at 95 °C for 5 minutes and cooled slowly to room temperature overnight. A stock solution of QN-302 was prepared in buffer, with some drops of 0.1 M HCl to aid solubilization. Data were analyzed using the OriginLab package (https://www.originlab.com/). Thermal difference spectra (TDS) were recorded on a Jasco V-750 UV-Vis spectrometer. The oligonucleotide sample was diluted to 5 µM in 10 mM lithium cacodylate, 100 mM KCl buffer at pH 7. After annealing, the sample (250 µL) was transferred to a quartz 10 mm cuvette and stoppered to reduce evaporation. The absorbance at wavelengths between 230 and 320 nm was measured for the folded structure at 4 °C and the unfolded structure at 95 °C. The sample was equilibrated for 5 to 10 min at each of the two temperatures before recording the absorbance. To calculate the TDS, the spectrum of the folded structure was subtracted from the unfolded-structure spectrum. The resulting spectrum was zero-corrected at 320 nm. Data were analyzed using OriginLab. Circular dichroism (CD) experiments were recorded on a Jasco J-1500 spectropolarimeter using a 1 mm path length quartz cuvette. To characterize and examine the effect of different cations on the S100P sequence, CD spectra were recorded in the presence of Na+, Li+ and K+ cations. In each case, the 10 µM DNA sample was thermally annealed in 10 mM lithium cacodylate with 100 mM of NaCl, LiCl or KCl, respectively, at pH 7 (total volume: 100 µL). The scans were recorded at room temperature between 200 and 320 nm. The data pitch was set to 0.5 nm, and measurements were taken at a scanning speed of 200 nm/min, with a data integration time of 1 s and a bandwidth of 1 nm. Each spectrum was the average of four scans. Samples containing only buffer were also scanned to allow for blank subtraction. The spectrum was zero-corrected at 320 nm. Circular dichroism spectroscopy was used to measure any ligand-induced effects on the stability of the DNA structure. DNA samples were thermally annealed in 10 mM lithium cacodylate, 100 mM KCl, pH 7, at 10 µM, using 100 µL per sample. Melting experiments were performed in the presence and absence of 1 and 5 ligand equivalents (10 µM and 50 µM QN-302) while heating the sample at a rate of 1 °C/min from 5 to 95 °C and measuring at 5 °C intervals. The temperature at which 50% of the thermal denaturation had taken place (Tm) was calculated by using OriginLab data analysis software to plot normalized ellipticity against temperature. These data were fitted with sigmoidal dose-response curves to give the Tm values. Final values are given as the average and standard deviation of two repeats.
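The sigmoidal fit used to extract Tm can be reproduced with a standard Boltzmann curve; the sketch below generates mock normalized-ellipticity data with Tm = 73 °C (the value reported for the unliganded quadruplex) and recovers it with scipy. This illustrates the fitting procedure only, not the authors' OriginLab workflow.

```python
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(T, Tm, dT):
    """Normalized ellipticity vs. temperature for a two-state melt."""
    return 1.0 / (1.0 + np.exp((T - Tm) / dT))

rng = np.random.default_rng(0)
T = np.arange(5.0, 100.0, 5.0)                                  # 5-95 C, 5 C steps
theta = boltzmann(T, 73.0, 4.0) + rng.normal(0, 0.01, T.size)   # mock melt data
popt, _ = curve_fit(boltzmann, T, theta, p0=[70.0, 5.0])
print(f"fitted Tm = {popt[0]:.1f} C")                           # ~73 C
```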
Molecular Modeling The crystal structures of the parallel-stranded quadruplex [71] formed from the human telomeric sequence and its complex with the earlier-generation naphthalene diimide compound MM41 [47] (PDB entry 3UYH) were used as models to assess how QN-302 interacts with parallel-topology quadruplexes in general. To generate QN-302, the piperazine side chains in MM41 were replaced with propyl-pyrrolidine and benzyl-pyrrolidine, while retaining the morpholino side chains. The coordinates of QN-302 were used to generate a grid, which extended 10 Å around the ligand and encompassed the terminal quartet and the loops of the quadruplex structure. QN-302 was docked using MolSoft ICM 3.9-3a software (https://www.molsoft.com/). A similar protocol was adopted to dock QN-302 at the junction of a previously generated model for a parallel quadruplex-duplex DNA hybrid [49]. Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/molecules28062452/s1, Table S1, comprising RNA-seq data for expression of the genes from human PDAC compared to those from normal pancreas. Data Availability Statement: Transcriptome data have been deposited and are available from the GEO public functional genomics data repository (https://www.ncbi.nlm.nih.gov/geo/), as GSE151741 (cell-based studies) and GSE226507, GSM7071068-1074 (for patient-derived material). Details of the molecular models are available from the authors on reasonable request. Conflicts of Interest: S. Neidle is a paid consultant and Advisory Board member of Qualigen Inc. T. Arshad is a paid employee of Qualigen. J. Worthington is a paid director and shareholder of AXIS Bioservices.
2023-03-12T15:49:12.789Z
2023-03-01T00:00:00.000
{ "year": 2023, "sha1": "b7e23a94a0973e0948a2a9c8f32822e7cd6aa413", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1420-3049/28/6/2452/pdf?version=1678199655", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "307e3c60c84c1ea9aad23aaa265f2e841166ee48", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
139447645
pes2o/s2orc
v3-fos-license
The Influence of the Asymmetric Metal Electrodes on the Self-assembly of Single-walled Carbon Nanotubes by AC Dielectrophoresis There are two procedures in the single-walled carbon nanotube (SWCNT) DEP (dielectrophoresis) assembly process: the DEP step itself and the suspension removal. This paper focuses on the effects of the suspension-removal procedure on the assembly efficiency. We found that the interaction energy between the SWCNTs and the electrode metal plays a major role in preventing the arrived SWCNTs from detaching from the electrodes when the suspension is blown off, which in turn affects the DEP assembly yield of SWCNTs. Three metals whose interaction energies with SWCNTs follow the order Ti>Hf>Al were chosen as the electrode materials, and SWCNT DEP assembly experiments were carried out. Our results show that the assembly yields exhibit the same order as the interaction energies, Ti>Hf>Al, which supports our deduction. Introduction After the past two decades of intensive research on their intrinsic features and diverse application potential, carbon nanotubes (CNTs) have progressed toward practical applications, especially CNT-based electronic devices, including transistors, chemical and biological sensors, diodes, and so forth. Because of their wide application in photovoltaic and betavoltaic microcells, infrared detectors and microwave rectifiers, SWCNT-based diodes have attracted immense research attention. They mainly have three structures: doped p-n junctions, split gates, and Schottky barriers (SB). Among them, the SB diode with an asymmetric configuration of "high-work-function (h) metal/SWCNT/low-work-function (l) metal" has the advantage of very simple fabrication and is doping-free, which avoids the failure induced by dopant diffusion at high temperatures or dopant freeze-out at extremely low temperatures. For the widespread application of SWCNT-based devices, one of the crucial obstacles of the state of the art is how to make the large-scale assembly of SWCNTs precise, efficient and compatible with conventional micro-fabrication technologies. Although chemical vapor deposition (CVD) can achieve direct growth of SWCNTs on substrates [1], the high growth temperature makes it incompatible with current complementary metal-oxide-semiconductor (CMOS) technologies [1]. Post-synthesis assembly techniques are a promising alternative to the CVD technique owing to their very simple set-up and operation at room temperature [2][3][4]. DEP is advantageous over other post-processing techniques because it allows control of the position and density of the assembled SWCNTs between pre-fabricated electrodes and involves no post-etching or transfer-printing processes, which may introduce defects in the SWCNTs and degrade their electrical properties. Many studies have investigated the dependence of the SWCNT assembly yield on the assembly conditions, including AC bias voltage and frequency [5][6], assembly time [7], electrode geometry [8] and solution concentration [9]. In our SWCNT assembly experiments with asymmetric-work-function metal electrodes, different electrode metals result in greatly differing assembly yields. However, to date no report has addressed the influence of the electrode metal on the yield, either in theory or in experiment.
In this paper, we qualitatively analyzed the interaction forces on the SWCNTs that occur throughout the DEP process, especially taking into account the suspension-removal procedure. We found that the interaction force between the SWCNTs and the metal is the main factor preventing the arrived SWCNTs from leaving the electrodes during the suspension-removal procedure. We then experimentally examined the influence of different electrode metals on the DEP assembly of SWCNTs, and the results support our qualitative deduction. THEORY AND CONSIDERATION The conventional DEP assembly process can be divided into two steps, i.e., SWCNT self-assembly and suspension blow-off. In the first step, SWCNTs dispersed in the suspension move toward, reach and finally bridge the two opposite tips of the electrodes. There are mainly three kinds of forces acting on the SWCNTs during this procedure: the DEP force, the hydrodynamic forces of electro-thermal flow and AC electro-osmosis flow, and adhesive forces, as shown in Figure 1. The first two are long-range forces, which determine the moving direction and speed of the SWCNTs in the suspension [4][5][6][7][8]. The z-components of these forces push the SWCNTs toward the tips of the electrodes, and the x- and y-components align the SWCNTs with the gap direction. When the two ends of a SWCNT approach the electrode surface to within nanometer-scale distances, the DEP and hydrodynamic forces decline abruptly because of the reduction of the electric field, and short-range adhesive forces such as the van der Waals force become dominant (Figure 1(b)). Then the second step starts, i.e., the solution, with those SWCNTs left in it, should be blown off by nitrogen gas as fast and as cleanly as possible. It is unavoidable that some of the assembled SWCNTs will be removed at the same time. Just before the blow-off, although the two ends of a SWCNT are in contact with the metal, the major part of each arrived SWCNT is still floating in the suspension over the gap area, at a distance from the substrate approximately equal to the thickness of the electrode, about 100 nm in this paper. Therefore, the adhesive force between the arrived SWCNTs and the electrode metal plays an essential role in fixing the SWCNTs against the blowing force, especially at the starting moment. It can thus be deduced that the electrode metal should impact the SWCNT assembly yield. In Figure 1, FDEPx and FDEPz are the x- and z-direction components of the DEP force, and F_AF is the contacting force between the metal and the SWCNTs. There are two categories of interaction energy between the contacting materials. One is physisorption, which brings about relatively weak adhesive forces, i.e., van der Waals forces. The contacting force between SWCNTs and Al is of this kind. In addition, the strong oxidation of Al in air further weakens its binding affinity with SWCNTs [11]. The other is chemisorption caused by covalent bonds, which is much stronger than the former. For SWCNTs, it is generated by the sp2→sp3 transition of the metal and adjacent C atoms, as for SWCNTs-Pt and SWCNTs-Ti [11]. It has been reported that the interaction energies of transition metals increase with the number of vacancies in the d orbital. If the number of vacancies is the same, the closer the d orbital is to the nucleus, the higher the interaction energy [14]. We can infer that the 3d metal Ti possesses a higher interaction energy than the 5d metal Hf, even though they have the same number of vacancies in the d orbital. Therefore, the descending order of the adhesive forces between SWCNTs and the different metals is Ti>Hf>Al, and these were chosen as the electrode materials in our subsequent experiments.
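The long-range DEP force mentioned above scales with the real part of the Clausius-Mossotti factor, Re[K(ω)], where for a spherical particle K = (ε*p − ε*m)/(ε*p + 2ε*m) with complex permittivities ε* = ε − jσ/ω; metallic SWCNTs give a strongly positive Re[K], hence attraction toward the high-field electrode tips. Below is a minimal sketch of this frequency dependence; the spherical form only illustrates the trend (elongated nanotubes change the prefactor and depolarization terms), and all material parameters are illustrative guesses, not measured values.

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def clausius_mossotti(f, eps_p, sigma_p, eps_m, sigma_m):
    """Re[K(w)] for a spherical particle; positive values mean positive DEP
    (motion toward the high-field electrode tips)."""
    w = 2 * np.pi * f
    ep = eps_p * EPS0 - 1j * sigma_p / w   # complex particle permittivity
    em = eps_m * EPS0 - 1j * sigma_m / w   # complex medium permittivity
    return ((ep - em) / (ep + 2 * em)).real

f = np.array([1e6, 5e6, 1e7])              # the frequencies used in this work
# Illustrative metallic-tube-like particle in a low-conductivity solvent:
print(clausius_mossotti(f, 1e4, 1e4, 37.0, 2e-4))  # ~ +1 across this band
```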
STRUCTURE AND EXPERIMENTAL The tip-to-tip finger-electrode pairs (Figure 2), where the SWCNTs are to be bridged, are 30 μm in length and 3 μm in width, with a 2 μm gap between opposed finger tips and a 5 μm lateral interval between fingers; these dimensions were optimized in our previous experimental study [9]. Because of our emphasis on asymmetric-work-function metals, Pt was chosen as the h-metal and Ti, Hf and Al as the l-metals. The Ti/Pt (20 nm/150 nm) and the 170 nm thick Ti, Hf and Al were respectively sputtered and patterned by lift-off processes in turn on an n-type <100> silicon wafer with 300 nm of thermal oxide. SWCNT powder of high purity (>90%) was dispersed in N,N-dimethylformamide solvent at a concentration of 0.5 μg/ml by ultrasonic dispersion for 2 hours. An AC voltage was applied, and then a drop (~3 μl) of the SWCNT suspension was dripped onto the sample by capillary tube. After several tens of seconds it was blow-dried using nitrogen gas. Different DEP conditions were also investigated in three groups: (1) the applied voltage magnitude, (2) the AC frequency (1, 5 and 10 MHz) and (3) the assembly time (see Figure 3). RESULTS AND DISCUSSION We counted the number of SWCNTs assembled between each pair of electrodes by high-resolution scanning electron microscopy (NanoSEM). 360 pairs of electrodes were used to estimate the yield for every condition. The total-assembly yield rate (P) is the ratio of the number of electrode pairs assembled with SWCNTs to 360, the single-assembly yield rate (Ps) is the ratio of the number of electrode pairs bridged by only one bundle of SWCNTs to 360, and the multi-assembly yield rate (Pm) is that of electrode pairs bridged by more than one bundle of SWCNTs (see the short counting sketch below). Figure 3 shows the yields versus voltage magnitude (a), frequency (b) and assembly time (c). Both Figure 3(a) and 3(c) show that P increased with both the voltage magnitude and the DEP assembly time, because more SWCNTs can be transported to the gap and assembled there, which is in agreement with other experimental and theoretical work [4][5][6][7][8][9]. As we know, in the range of 1 to 10 MHz, the total number of SWCNTs assembled depends very little on the frequency [6]. However, we found that the P measured at 1 MHz is significantly higher than those at 5 MHz and 10 MHz (Figure 3(b)). Figure 4 shows SEM images of assembled SWCNTs on Pt-Ti at the three different frequencies. It can be seen that there were many tangled SWCNTs, with large particles, between the gaps of the electrodes at 1 MHz, while the assembled SWCNTs were highly oriented between the gaps, with few particles, at 5 MHz and 10 MHz. Therefore, it is evident that these large particles at 1 MHz pin the arrived SWCNTs on the electrodes and help them resist being blown off by the nitrogen gas. Figure 5 shows the Pm of the three types of devices, Pt-Ti, Pt-Hf and Pt-Al, under different assembly parameters. The Pm also displays the same order as P, i.e., Pm(Pt-Ti) > Pm(Pt-Hf) > Pm(Pt-Al). We observe that the slopes of Pm also increase significantly with an increase in the interaction energy between the metal and the SWCNTs. We will explain this phenomenon later. As for the Ps shown in Figure 6, the data basically conform to the order Ti>Hf>Al, as for P and Pm. However, there are abnormal phenomena under every kind of assembly condition, as shown in Figure 6(a)-(c). One of these is the decrease of the Pt-Ti Ps at 10 V in (a) and in (c), where the Ps of Pt-Ti is lower than those of Pt-Hf and even Pt-Al.
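As referenced above, the three yield rates are simple counting ratios over the electrode pairs of each condition (360 per condition in this work); a minimal sketch:

```python
def yield_rates(bundles_per_pair):
    """P, Ps, Pm from the number of SWCNT bundles bridging each pair."""
    n = len(bundles_per_pair)
    p  = sum(b >= 1 for b in bundles_per_pair) / n   # total-assembly yield
    ps = sum(b == 1 for b in bundles_per_pair) / n   # single-assembly yield
    pm = sum(b >= 2 for b in bundles_per_pair) / n   # multi-assembly yield
    return p, ps, pm

# Hypothetical SEM counts for six pairs: (0.667, 0.333, 0.333)
print(yield_rates([0, 1, 2, 1, 0, 3]))
```

By construction P = Ps + Pm, which is the "restriction relationship" invoked in the discussion below: once P saturates, any growth of Pm must come at the expense of Ps.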
It can be found that Ps exhibits an upper limit of about 40%, regardless of the assembly conditions or electrode materials. This is most likely due to the structure and dimensions of the electrodes, which determine the electric-field distribution and its change during the assembly process. Davis et al. [15] reported that the electrode width is the determining factor for the number of SWCNTs assembled. If the electrode width is sufficiently narrow (<200 nm), a second bundle of SWCNTs cannot assemble between the same pair of electrodes, owing to the repulsive force generated by the first assembled SWCNT [13]. Therefore, it can be inferred that the electrode width of 3 μm in this paper, which is much larger than 200 nm, leads to the 40% upper limit of Ps. Because of the relatively weak interaction forces of Hf and Al with SWCNTs, the Ps of the Pt-Hf and Pt-Al electrodes is much lower than 40%, so it can keep increasing with the voltage and assembly time. In contrast, the strongest interaction force of Ti with SWCNTs makes its Ps the first to reach the upper limit of 40%, at Vpp = 5 V, f = 5 MHz, t = 30 s. It then starts to decline dramatically when the voltage is higher than 5 V and when the assembly time is longer than 30 s, resulting in the abnormal, out-of-order data in Figure 6(a) and (c). In addition, when the total yield rate is low, Ps and Pm can increase simultaneously, whereas when the total yield rate is high enough, a decrease in Ps implies an increase in Pm; that is, there is a restriction relationship between Ps and Pm. Therefore, it can be deduced that both the 40% upper limit of Ps and the restriction relationship between Ps and Pm cause the Pm of all three types of electrodes to grow more steeply as the interaction energy between the metal and the SWCNTs increases, as discussed in the last paragraph. There are two other types of abnormal phenomena, in our Figure 5(b) and Figure 6(b). One is that the Ps of Pt-Ti and Pt-Hf at 1 MHz is smaller than at 5 MHz and 10 MHz, whereas the behavior of P and Pm is the opposite. The other is that the Ps of Pt-Al at 1 MHz keeps the same trend as Pm and P, contrary to that of Pt-Ti and Pt-Hf. Both phenomena can be explained by comprehensive consideration of all the factors proposed previously, including the pinning effect at 1 MHz, the 40% upper limit of Ps and, when the total yield rate is high enough, the restriction relationship between Ps and Pm under the different interaction energies between SWCNTs and Al, Hf and Ti. The former phenomenon can be explained by the relatively large interaction energies of Hf and Ti with SWCNTs: the Pm is enhanced so much by the pinning effect that the Ps decreases even though the total assembly yield increases. The latter arises because the interaction energy between Al and SWCNTs is so weak that the pinning effect helps Ps increase as well as Pm, so all three kinds of yield for Pt-Al are enhanced at 1 MHz. Conclusion In conclusion, SWCNTs were self-assembled on three different metal electrode systems, Pt-Ti, Pt-Hf and Pt-Al, by the AC DEP technique, and experiments show that the total, multi-bundle and single-bundle assembly yields all display the same order as the interaction energies, Ti>Hf>Al, under different assembly conditions.
This demonstrates that the interaction energy between the SWCNTs and the electrode metal is one of the crucial factors in the DEP assembly of SWCNTs, since it plays a role in preventing the arrived SWCNTs from detaching from the electrodes when the suspension is blown off, as expected. Therefore, the weaker the interaction energy of the electrode metal with the SWCNTs, the higher the voltage and the longer the time required to obtain a high SWCNT assembly yield. As for Ps, attention should also be paid to its upper limit if the electrodes are not narrow enough.
2019-04-30T13:07:54.618Z
2018-07-01T00:00:00.000
{ "year": 2018, "sha1": "50860e63fa92ff724de5a5b1f7e8d0f5843878b7", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/382/2/022019", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "f5b10ae1513632200f83ac90fd30a3e3d59d2c59", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Materials Science" ] }
271686489
pes2o/s2orc
v3-fos-license
Survival After Newly-Diagnosed High-Grade Glioma Surgery: What Can We Learn From the French National Healthcare Database? Background This study aimed to assess the overall survival (OS) of patients after high-grade glioma (HGG) resection and to search for associated prognostic factors. Methods A random sample of ad hoc cases was extracted from the French medico-administrative national database, Système National des Données de Santé (SNDS). We solely considered the patients who received chemoradiotherapy with temozolomide (TMZ/RT) after HGG surgery. Statistical survival methods were implemented. Results A total of 1,438 patients who had HGG resection at 58 different institutions between 2008 and 2019 were identified. Of these, 34.8% were female, and the median age at HGG resection was 63.2 years (interquartile range [IQR], 55.6-69.4 years). Median OS was 1.69 years (95% confidence interval [CI], 1.63-1.76), i.e., 20.4 months. Median age at death was 65.5 years (IQR, 58.5-71.8). OS at 1, 2, and 5 years was 78.5% (95% CI, 76.4-80.7), 40.3% (95% CI, 37.9-43), and 11.8% (95% CI, 10.2-13.6), respectively. In the adjusted Cox regression, female gender (HR=0.71; 95% CI, 0.63-0.79; p<0.001), age at HGG surgery (HR=1.02; 95% CI, 1.02-1.03; p<0.001), TMZ treatment over 6 months after HGG surgery (HR=0.36; 95% CI, 0.32-0.4; p<0.001), bevacizumab (HR=1.22; 95% CI, 1.09-1.37; p<0.001), and redo surgery (HR=0.79; 95% CI, 0.67-0.93; p=0.005) remained significantly associated with the outcome. Conclusion The SNDS is a reliable source for studying the outcome of HGG patients. OS is better in younger patients, in females, and in those who complete concomitant chemoradiotherapy. Redo surgery for HGG recurrence was also associated with prolonged survival. INTRODUCTION High-grade gliomas (HGG) are the most common primary malignant central nervous system (CNS) tumors, comprising mainly the glioblastoma multiforme (GBM) subtype (World Health Organisation [WHO] grade 4) [1,2]. Anaplastic astrocytomas (AA) were previously classified as WHO grade 3 based on a greater degree of cellularity, nuclear pleomorphism, and mitotic activity as compared to low-grade gliomas (WHO grade 2). However, unlike GBM, which are 20 times more frequent, AA lack vascular proliferation and necrosis on histopathological examination. Nevertheless, both types exhibit parenchymal infiltration and thus remain almost incurable [2,3]. Other much less frequent subtypes such as anaplastic oligodendroglioma (AO), anaplastic oligoastrocytoma, and malignant glioneuronal tumors also fall into the category of HGG. Previous classifications of brain tumors based solely on histopathological criteria were limited by diagnostic discrepancies and variability in outcome and response to therapies. In France, the standard of care for newly diagnosed GBM includes maximal safe resection, concurrent temozolomide (TMZ) during radiotherapy (RT), and adjuvant TMZ for six or more 28-day cycles [4]. Grade 3 gliomas can be either treated like GBM or with combined chemotherapy using procarbazine, lomustine, and vincristine (PCV) or with TMZ after surgical resection, followed then by RT. AOs are also responsive to PCV chemotherapy, especially when harboring the 1p19q codeletion.
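As context for the statistics reported in the abstract above (median OS from Kaplan-Meier estimates, adjusted hazard ratios from Cox regression), the following is a minimal sketch of that kind of analysis in Python with the lifelines package; the table, its column names and all values are hypothetical stand-ins, not SNDS data.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Hypothetical per-patient table: follow-up (years), death flag, covariates.
df = pd.DataFrame({
    "years":  [0.8, 1.7, 2.4, 1.1, 3.0, 0.5, 2.1, 1.4],
    "death":  [1, 1, 0, 1, 0, 1, 1, 0],
    "female": [0, 1, 1, 0, 1, 0, 1, 0],
    "age":    [66, 58, 61, 70, 55, 68, 63, 59],
})

km = KaplanMeierFitter().fit(df["years"], df["death"])
print(km.median_survival_time_)          # median overall survival estimate

cph = CoxPHFitter().fit(df, duration_col="years", event_col="death")
print(cph.hazard_ratios_)                # adjusted HRs, as in the abstract
```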
criteria in order to group tumors into better defined entities. For the first time, adult- and pediatric-type gliomas were classified separately based on differences in molecular pathogenesis and prognosis. Furthermore, the previous broad category of adult-type diffuse gliomas was consolidated into three types: astrocytoma, isocitrate dehydrogenase (IDH) mutant; oligodendroglioma, IDH mutant and 1p/19q codeleted; and GBM, IDH wild type. These major changes were driven by IDH mutation status and included the restriction of the diagnosis of GBM to tumors that are IDH wild type; the reclassification of tumors previously diagnosed as IDH-mutated GBM as astrocytomas, IDH mutated, grade 4; and the requirement for the presence of IDH mutations to classify tumors as astrocytomas or oligodendrogliomas. These changes will likely improve dedicated treatment efficacy and, hence, the homogeneity of outcomes. Administrative medical databases are massive repositories of healthcare data collected for various purposes, with a constant and often ongoing collection process. They frequently encompass a whole nation, a region, or a scheme, ensuring high statistical power. In that respect, the French nationwide healthcare database, the Système National des Données de Santé (SNDS), is a great opportunity to carry out comprehensive health studies at the country level [5]. In France, to date, no one has ever attempted to assess the outcome of HGG patients using the SNDS. The aim of this study was to assess the overall survival of patients after HGG resection and to search for associated prognostic factors using information collected and available in the SNDS. Clinical material and population selection We performed a cross-sectional and longitudinal nationwide observational retrospective study using the SNDS. The SNDS database links claims with hospital discharge summaries and the national death registry, using pseudonymization of the unique national identifier. It now covers 99% of the French population, over 66 million persons, from birth to death, making it one of the world's largest continuous homogeneous claims databases. The database includes demographic data, date and cause of death, long-term disease registration for full reimbursement, outpatient reimbursed healthcare encounters such as physician or paramedical visits (e.g., nursing, physiotherapy), medicines prescribed, medical devices, and lab tests with costs; all private and public hospitalizations with primary, linked, and associated ICD-10 (International Classification of Diseases, 10th Revision) diagnoses, procedures, duration, and cost coding, as well as most very expensive drugs. The power of the database is correspondingly great, and its representativeness is guaranteed. As such, over 3,000 variables are spread across around 500 tables. For this study, we used many variables, such as date of birth, sex, previous neurosurgical procedure, past medical history of neoplasm, age at surgery, anatomical location of the tumor, delay between HGG resection and chemotherapy start, chemotherapy (molecule(s), dose, duration, number of courses), delay between HGG resection and RT start, number of RT fractions, duration of the RT, anti-epileptic treatment (molecule, dose, duration), redo neurosurgical procedure, and date of death. A random sample of patients treated for a malignant brain tumor was extracted from the SNDS and provided to us for research purposes. The period of selection of patients operated on for an HGG extended from January 1, 2008, up to December 31, 2017. Patients were
then followed up until December 31, 2020. An algorithm combining two variables to get appropriate cases was used: the first variable was the type of the surgical procedure identified by the French Common Classification of Medical Acts (CCAM), which describes precisely all medical and surgical interventions (AAFA002: Exérèse de tumeur intraparenchymateuse du cerveau, par craniotomie; resection of an intracerebral tumor by craniotomy) [6][7][8]. The second variable taken into account was the main diagnosis of malignant cerebral tumor according to the ICD-10 code C71.x: malignant neoplasm of brain. As such, metastases (C79.3, secondary malignant neoplasm of brain and cerebral meninges) or other types of brain tumors were not taken into account. In this study, we solely considered newly diagnosed HGG resection. The patients who solely had a brain biopsy were not considered. However, the patients who had a brain biopsy followed by HGG surgery were included. To ensure that no low-grade gliomas were represented in our study, a complex selection process was applied to our initial population in order to keep only HGG patients (Fig. 1). The Mortality-Related Morbidity Index (MRMI), predictive of all-cause mortality, was used to assess the severity of the patient's global health state, including numerous comorbidities [9]. This weighted index summarizes the association between a set of conditions identified through algorithms using SNDS data and each outcome [10]. The MRMI index has been validated against the most commonly used morbidity indices [9]. Chemotherapy and other medications such as anti-epileptic drugs were retrieved across the databases using ad hoc "Unité Commune de Dispensation" (UCD) or "Code Identifiant de Présentation" (CIP) codes. Aware that coding rules are somewhat inconsistent, we applied a stepwise selection algorithm to the initial population, as our goal was to target mainly GBM patients (Fig. 1). As the standard treatment after HGG resection is chemoradiotherapy with temozolomide (TMZ/RT), we solely considered the patients who received this therapy after surgery. All HGG patients who at least completed the RT and initiated the TMZ adjuvant phase were included. Statistical analysis Continuous variables were reported as means and standard deviations, or medians and interquartile ranges (IQR) for non-Gaussian distributions. Categorical variables were reported as frequencies and proportions. Survival statistics were based on time to death, which was measured from the first date of HGG surgery to the date of the last follow-up or death. We used the Kaplan-Meier method to estimate the OS and the Mantel-Cox log-rank test to compare survival curves. Cox proportional hazards regressions were used to identify predictors of death and to estimate hazard ratios (HR) with their 95% confidence intervals (95% CI). All tests were two-sided, and statistical significance was defined with an alpha level of 0.05 (p<0.05). Data extraction and processing were achieved with SAS Enterprise Guide version 8.3 (SAS Institute Inc., Cary, NC, USA), and analyses were performed with the R program. Compliance with ethical standards This study was conducted according to the ethical guidelines for epidemiological research in accordance with the ethical standards of the Helsinki Declaration (2008). It was also approved by the French Data Protection Authority (Commission nationale de l'informatique et des libertés), an independent national ethical committee, authorization number: DR-2021-352.
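For readers wishing to reproduce this workflow, the sketch below shows the same sequence of analyses (Kaplan-Meier estimate, Mantel-Cox log-rank test, and multivariable Cox regression) in Python with the lifelines library; the study itself used SAS and R, and all file and column names here (time_years, dead, female, and the treatment flags) are illustrative placeholders rather than actual SNDS variables.

```python
# Illustrative sketch only: the study used SAS/R; all names are hypothetical.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("hgg_cohort.csv")  # hypothetical per-patient extract
# time_years: years from HGG surgery to death or last follow-up; dead: 1 if died

# Kaplan-Meier estimate of overall survival
km = KaplanMeierFitter().fit(df["time_years"], df["dead"], label="all patients")
print(km.median_survival_time_)  # would print ~1.69 years for this cohort

# Mantel-Cox log-rank test comparing survival curves, e.g. by gender
f, m = df[df["female"] == 1], df[df["female"] == 0]
print(logrank_test(f["time_years"], m["time_years"],
                   event_observed_A=f["dead"],
                   event_observed_B=m["dead"]).p_value)

# Multivariable Cox proportional hazards model yielding adjusted hazard ratios
cph = CoxPHFitter().fit(
    df[["time_years", "dead", "female", "age_at_surgery",
        "tmz_over_6m", "bevacizumab", "redo_surgery"]],
    duration_col="time_years", event_col="dead")
print(cph.hazard_ratios_)  # e.g. HR around 0.71 for female gender in the paper
```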
Population description A total of 1,438 patients who had HGG resection between 2008 and 2017 were selected. Among them, 34.8% were female, and median age at HGG resection was 63.2 years (IQR 55.6-69.4) (Table 1). Females were significantly older at surgery (64.3 years) compared to males (62.4 years) (p=0.003). According to the MRMI index, males had a significantly higher mortality risk at baseline compared to females (p=0.008) (Table 1). Additionally, 51.8% of the patients had at least one seizure, and 73.6% used to take an anti-epileptic medication over 3 months, of which levetiracetam was the most often prescribed (65.3%). The median follow-up time was 7.4 years (95% CI 5.4-8.8). DISCUSSION HGG remains one of the least treatable cancers. The current standard therapy for HGG, represented by maximal surgical resection combined with chemo- and radiotherapy, offers only a palliative treatment, since the median OS is less than 2 years [4,11]. We report herein on the outcome of a sample of 1,438 patients who had HGG resection. The present study, one of the largest on outcomes after HGG resection in France, may serve as a reference for future research. The restricted mean survival time (RMST) is determined by measuring the area under the Kaplan-Meier survival curve. It can be defined as the average event-free survival time, ranging from 0 up to a specific prespecified important time point that reflects a clinically relevant temporal horizon, such as 5 years. As the median survival time is insensitive to outliers, it is expected to be much shorter than the mean survival time in the presence of many long-term survivors. Although the median survival time is easy to understand, it describes only the outcome at a single time point, i.e., the length of follow-up at which half of the patients have died. The RMST can be seen as an improvement on the median because it can be computed with no exceptions (i.e., irrespectively of the number of events that have occurred) and, more importantly, it examines the entire shape of the survival curve (from time 0 to the last time-point of the follow-up) and therefore takes into account the presence of long-term survivors [19]. Comparatively, in our study, the 5-year RMST was 2.16±0.04 years. Even if prolonged survival of HGG patients has been reported, it is nonetheless a rare eventuality. In our study, 125 patients (8.7%) were found to be alive at data analysis, and for these alive patients, the median survival was 5.8 years (IQR 4.6-7.8). Most of these long-term survivors likely had an AA and not a GBM. OS of GBM, including all cases, is nowadays around 12 months [16]. However, many factors influence patients' outcomes. Predicting factors Unsurprisingly, age at surgery was one of the predictors of outcome, but so was gender. Despite the fact that female patients were older at surgery, they demonstrated a significantly better OS. We confirm herein that women have a survival advantage in HGG patients who have received standard-of-care treatment [20]. Of the numerous factors associated with the OS, HGG subtype is one of the strongest. Despite GBM patients constituting the vast majority of this sample, it was not possible to precisely identify AA and AO patients, whose better outcome increased the global OS in this study. However, AO is preferentially treated with the PCV regimen. Our stepwise selection algorithm, which includes solely the patients who received concomitant TMZ and RT within the 100 postoperative days, mostly targets newly diagnosed HGG.
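Since the RMST is simply the area under the Kaplan-Meier curve up to a chosen horizon tau, it can be computed directly from the fitted estimator. The minimal sketch below (Python/lifelines, with the same hypothetical column names as the previous sketch) illustrates the 5-year RMST mentioned above; the same call with t=10.0, applied separately per treatment arm, would reproduce the kind of 10-year arm-wise comparison reported later for bevacizumab.

```python
# Illustrative sketch: RMST = area under the Kaplan-Meier curve up to tau.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.utils import restricted_mean_survival_time

df = pd.read_csv("hgg_cohort.csv")  # hypothetical extract, as before
km = KaplanMeierFitter().fit(df["time_years"], df["dead"])
print(restricted_mean_survival_time(km, t=5.0))  # paper reports 2.16±0.04 years
```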
The patients who received TMZ for 6 months or more demonstrated a better outcome. The DNA repair enzyme O(6)-methylguanine-DNA methyltransferase (MGMT) antagonizes the genotoxic effects of alkylating agents. MGMT promoter methylation is the key mechanism of MGMT gene silencing and predicts a favorable outcome in HGG patients who are exposed to alkylating chemotherapy [21]. MGMT promoter methylation status is not only prognostic but also predictive of a better response to chemotherapeutic agents in GBM, such as TMZ or carmustine (Bis-ChloroethylNitrosoUrea, BCNU). On the contrary, GBM patients with unmethylated MGMT promoters have limited survival benefits from TMZ [22]. The better OS of patients who received TMZ over 6 months may reflect this molecular feature of the GBM cells. There is much debate regarding the use of bevacizumab in HGG patients. Bevacizumab slows tumor growth but does not affect OS of newly diagnosed GBM patients, nor of those presenting a recurrence [23]. In our study, those who received bevacizumab had a reduced OS (HR=1.22, 95% CI 1.09-1.37, p<0.001). Under the proportional hazards assumption, crossing of the survival curves is impossible. Thus, in a study where the patient groups do not differ between the treatments, a crossing of the survival curves implies a violation of the proportional hazards assumption. As discussed previously, the RMST has been recommended as an alternative measure to overcome some of the limitations of proportional hazards modeling. As such, Fig. 3C presents the plot of the 10-year restricted mean survival time for patients who received bevacizumab (arm=1, RMST=2.28 years) vs. those who did not (arm=0, RMST=2.96 years) (p<0.001). This finding is, however, hard to interpret, as this anti-angiogenic therapy may have been given in combination with chemotherapy. However, it likely reflects the fact that patients receiving bevacizumab are those with rapidly progressing tumors causing symptomatic edema, and who consequently have worse survival. HGG nearly always recurs, often in the vicinity of the original tumor site. Few treatment options are then available at recurrence. There is much evidence that the extent of resection for newly diagnosed HGG increases OS. Whilst the role of initial aggressive resection has become standard practice, its implication for recurrent GBM is still controversial. With surgical progress and adjuvant treatment modalities, many patients are now surviving to recurrence in good functional status. The indications for redo surgery include, among others, individuals with tumor mass effect or radiographic evidence of progression with or without new neurological deficit. In a review investigating reoperation for recurrent HGG, Hervey-Jumper and Berger [24] found that 29 studies among 31 showed a survival benefit or an improved functional status. In Sacko et al. [25], the median OS of patients who underwent repeat resection for HGG recurrence was significantly better than that of those who did not, with 23 months (95% CI 20.20-28.85) vs. 14.6 months (95% CI 12.63-16.81), respectively (p<0.05). In France, redo craniotomy for recurrent GBM is performed in around 9% of cases, a rate lower than in North American series (13%-31%) [11,24,26]. OS after a second resection for HGG recurrence varies greatly, from one month up to over one year [24,27]. No meta-analysis of OS rates has ever been published; however, Montemurro et al.
[27] in their "concise overview of the current literature" found a median OS of 9.7 months after recurrent HGG surgery. Some comparative studies have suggested a possible survival advantage with re-operation, within the context of being able to select suitable candidates for reoperation [28]. There is no agreement about the best way to manage recurrent HGG, given that no treatment has ever been shown to be more beneficial than another [29]. The management of recurrent HGG is thus based on expert guidelines. Treatment decisions usually require multi-disciplinary discussion on a case-by-case basis to determine the optimal option. Strengths and limitations The strengths of the SNDS lie both in the large number of patients and in the comprehensive data available from every hospital in France. The database's representativeness is nearly perfect, as it includes the whole country's population of nearly 68 million inhabitants, constituting one of the largest healthcare databases in the world [5]. Evaluation of patients before the start of the database's coverage is therefore not possible. These data were not initially collected for research purposes, and they may therefore be subject to random or systematic measurement errors, which can have consequences when defining study populations, events, and covariates. Compiled from various institutions, its accuracy is limited by inconsistencies in data collection and recording. Moreover, important variables such as the quality of resection or histopathological details are not recorded in the SNDS. The retrospective nature of this study, together with the lack of clarity regarding treatment rationales and non-homogeneous management strategies without random assignment, needs to be considered when evaluating the results. The most significant limitation of our study was the lack of histological diagnoses, which made it impossible to assess the OS by glioma subtype. Without knowing the exact tumor types, caution should be taken whilst attributing the observed survival differences solely to the presented factors rather than to the inherent biology and prognosis of different glioma subtypes. Using the ICD-10 code C71, we assumed that we could extract mainly data on astrocytic, oligodendroglial, ependymal, and other neuroepithelial tumors. Primary CNS lymphomas were not included, as they are specified by a different code (C83/C85). We also hypothesized that no malignant meningeal tumors (C70) or metastases (C79.3) were included in the analysis. Patients with malignant tumors of the sellar region were also excluded. Moreover, we applied a strict selection process to exclude unwanted brain neoplasms as much as possible. Nonetheless, our median survival of 1.69 years (95% CI 1.63-1.76) is greater compared to usual findings. This likely betrays the presence of borderline or benign primary brain tumors, such as low-grade gliomas, within our population. Despite these limitations, the SNDS is an invaluable tool to assess HGG patients' outcomes. It offers an incomparable means to explore associations with other pathologies, medications, or combined surgical treatments, which could not be assessed before. Moreover, use of these databases is less expensive than conducting specific surveys in dedicated populations. However, SNDS data extraction and analysis is a complex task that requires dedicated training, coding expertise, and special authorizations. We estimated that around 3,000 cases of HGG are operated on each year in France. However, solely a random sample of patients treated for a malignant brain tumor was extracted from the SNDS and
provided to us for research purposes. The SNDS is a huge medical database covering over 95% of the 68 million inhabitants with a 20-year follow-up. We cannot directly access this complex database. However, to assess its usefulness for studying the outcome of HGG patients, a random sample of 3,834 patients was provided to us to perform this pilot study, whose results are described in the present paper. Perspectives The main weakness of the present work is the absence of verified HGG histopathology. Our study was made with healthcare data extracted from the French administrative medical database, which does not record precise histopathological diagnoses. The French Brain Tumour Database (FBTDB) is an original, nationwide, surgical-based system for registration of histological cases of primary CNS tumor (PCNST) [2,3,30]. The FBTDB is hosted by the Hérault Tumours Registry, which is part of Francim, a network grouping the French cancer registries. This collection process is a one-time registration, with no information regarding the treatment, the follow-up, or the outcomes. Tumor characteristics and patients' demographic details are registered once. Patient information is not updated, and solely new cases of brain tumors enrich the database over time. Our project is thus to merge the FBTDB with the SNDS to assess HGG patients' outcomes according to each subtype. Conclusion The SNDS is a reliable source to study the outcome of HGG patients. OS is better in younger patients, females, and those who complete concomitant chemoradiotherapy. Additionally, redo surgery for HGG recurrence was associated with prolonged survival. Fig. 2. Kaplan-Meier curves for overall survival (OS). A: OS from date of birth. B: OS from high-grade glioma surgery. C: OS by gender. D: OS by age categories. E: OS by categories of comorbidity index. F: OS by tumor location. Fig. 3. Kaplan-Meier curves for overall survival (OS) by treatment. A: OS by temozolomide duration over 6 months. B: OS by lomustine. C: OS by bevacizumab, with restricted mean survival time plot. D: OS by redo surgery for HGG recurrence. Table 1. Characteristics of the 1,438 patients with HGG resection. Table 2. Univariable Cox regression of overall survival. Table 3. Multivariable Cox regression for overall survival.
2024-08-04T15:14:13.334Z
2024-07-01T00:00:00.000
{ "year": 2024, "sha1": "9b23c2c53b52b974b1de3bccfa03f4843d51158f", "oa_license": "CCBYNC", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "502870b0cb3d08b4fd861f4aaa944d940e642c9a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
207814690
pes2o/s2orc
v3-fos-license
Routine clinical care for chronic immune thrombocytopenia purpura in Denmark, 2009–2015 Objectives: To describe routine treatment and clinical characteristics of patients with chronic ITP (cITP). Methods: We used data from Danish nationwide registers and medical records to examine routine clinical care, including splenectomy and medical treatment, of Danish patients with chronic immune thrombocytopenia (cITP, defined as two or more ITP diagnoses at least 6 months apart), i.e. treatment initiation before cITP diagnosis and treatment initiation within one year post-diagnosis for treatment-naïve patients. Results: Nearly half of all 964 cITP patients diagnosed during 2009–2015 initiated treatment between initial ITP diagnosis and chronic onset; 43% received glucocorticoids, 12% received IVIG and 18% received rituximab. Within one year post-diagnosis, 9.2% of previously untreated patients commenced therapy, most often corticosteroids and rituximab. Discussion: Our results are in line with findings of recent studies from other countries. Conclusion: We found that corticosteroids, IVIG, and rituximab are common first-choice ITP drugs. Bleeding events occurred in nearly one third of treated patients in the year before cITP diagnosis and in 5% of the treatment-naïve patients. A substantial number of patients do not need treatment during the first 6–12 months. However, some of these patients will subsequently need treatment as the disease may worsen, indicating the need for continuous follow-up of these patients. Introduction Immune thrombocytopenic purpura (ITP) is an autoimmune disease characterized by platelet destruction and decreased production [1]. However, patients with similarly low platelet counts may present with a spectrum of symptoms, ranging from being asymptomatic to severe bleeding diathesis [2]. Severe bleeding events occur in about 10% of adult ITP patients [3]. Corticosteroids alone or in combination with intravenous immunoglobulin (IVIG) are considered the first-line therapies in ITP, whereas splenectomy, immunosuppressive treatment, and thrombopoietin receptor agonists (TPO-RA) are used in patients failing steroids [4]. Treatment is recommended for patients with bleeding symptoms or at high risk of bleeding. In adults, initiation of treatment is recommended if platelet counts fall below 30 × 10⁹/L [4,5]. In the current study, we described the routine treatment and clinical characteristics of patients with chronic ITP (cITP) in the year after chronicity. Setting and data sources This population-based cohort study was conducted in Denmark. The Danish National Health Service provides universal tax-supported access to hospitals, free of charge, for the entire population [6]. The Danish National Patient Registry (DNPR) contributes data to the Nordic Country Patient Registry for Romiplostim (NCPRR) in Denmark [7]. The DNPR has recorded information on all non-psychiatric inpatient hospitalizations since 1977 and on outpatient and emergency room visits since 1995. Data have been coded using the International Classification of Diseases, Eighth Revision (ICD-8) through 1993 and the Tenth Revision (ICD-10) thereafter [8]. The Danish Civil Registration System records information on migration and vital status for the entire population using the unique civil registration number assigned to each Danish resident [9]. The registries can be linked unambiguously using the patient civil registration number, which is recorded at every hospital contact.
This number additionally allows for identification of patient medical records for abstraction of clinical information. Study population All Danish patients aged 18 years or older with incident cITP, defined as two or more ITP diagnoses at least 6 months apart between 1 April 2009 and 31 December 2015, were identified from the NCPRR. To avoid including patients with prevalent ITP, we excluded patients who had an ITP diagnosis between 1 January 1996 and 1 April 2009. We also excluded patients with secondary ITP, defined as ITP associated with other diseases and conditions, such as systemic lupus erythematosus (SLE), human immunodeficiency virus (HIV) infection, hepatitis C virus infection, liver cirrhosis, or hematological malignancies, diagnosed within 5 years before ITP. The NCPRR also contains data manually abstracted from electronic or paper medical records at all hospital departments treating ITP patients, including inpatient units, outpatient specialist clinics, and emergency departments. Medical records were reviewed to confirm the ITP diagnosis. Bleeding events were abstracted from medical records by type and month of event. We only included serious bleeding events requiring hospital contact. Laboratory data included abnormal hemoglobin and white blood cell counts recorded in medical records as laboratory tests taken within 28 days of each other. Anemia was defined as a hemoglobin level below 8.1 mmol/L (13 g/dl) for males and below 7.4 mmol/L (12 g/dl) for females. Leukocytosis was defined as a total leukocyte count above 10 × 10⁹/L. Follow-up and treatment The date of a second ITP diagnosis coding occurring more than 6 months after a first ITP diagnosis was considered the cITP diagnosis date. Patients were followed for 12 months after their cITP diagnosis date, with censoring at emigration or death. Data on treatments were abstracted from medical records, except for splenectomy, which was defined by surgical codes recorded in the DNPR. Treatments before and after each patient's cITP diagnosis date included prescriptions for corticosteroids (prednisolone or other oral glucocorticoid, dexamethasone, or high-dose methylprednisolone), rituximab, intravenous immunoglobulin (IVIG), danazol, azathioprine, cyclophosphamide, dapsone, mycophenolate (mycophenolate mofetil), TPO-RA (eltrombopag or romiplostim), supportive treatment (including tranexamic acid and platelet transfusion), and other treatments including vinca alkaloids, cyclosporine, desmopressin, and mercaptopurine. Statistical analysis We calculated the distribution of patients by age group on the cITP diagnosis date, gender, Charlson comorbidity index score, prevalent comorbidities on the cITP diagnosis date, nadir platelet count within 90 days before the cITP diagnosis date, and the percentage of patients who experienced ≥1 bleeding-related hospital contact within one year before their cITP diagnosis date. We then computed the overall proportions of patients treated with ITP drugs, splenectomy, and supportive treatment between their initial ITP diagnosis date and their cITP diagnosis date. We also calculated the cumulative proportion of patients who were treatment-naïve on their cITP diagnosis date and initiated therapy during the subsequent 12 months, accounting for death as a competing risk. In a sensitivity analysis, we changed the definition of cITP to two or more ITP diagnoses at least 12 months apart, in accordance with the most recent guideline [5].
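As an illustration of how the register-based case definition and the competing-risk estimate described above can be operationalized, the sketch below (Python with pandas and lifelines; all variable and column names are hypothetical, and 6 months is approximated as 183 days) derives each patient's cITP diagnosis date as the first ITP coding recorded more than 6 months after their first ITP diagnosis, and then estimates the cumulative proportion initiating therapy with death treated as a competing risk via an Aalen-Johansen estimator. For the sensitivity analysis, the same function with a 365-day threshold yields the ≥12-month definition.

```python
# Illustrative sketch; not the registry's own code. Names are hypothetical.
import pandas as pd
from lifelines import AalenJohansenFitter

def citp_dates(dx: pd.DataFrame) -> pd.Series:
    """cITP diagnosis date per patient: the first ITP coding recorded more
    than 6 months (approximated here as 183 days) after the first ITP
    diagnosis. Patients who never meet the criterion are absent from the
    result. dx: one row per ITP coding, columns ["patient_id", "dx_date"]."""
    first = dx.groupby("patient_id")["dx_date"].transform("min")
    later = dx[dx["dx_date"] > first + pd.Timedelta(days=183)]
    return later.groupby("patient_id")["dx_date"].min()

# Cumulative incidence of treatment initiation with death as competing risk.
# durations: months from cITP date to first treatment, death, or censoring;
# events: 0 = censored, 1 = treatment initiated, 2 = died untreated.
durations = [3.0, 12.0, 7.5, 11.0, 5.0]   # toy values for illustration
events = [1, 0, 2, 0, 1]                  # toy values for illustration
aj = AalenJohansenFitter().fit(durations, events, event_of_interest=1)
print(aj.cumulative_density_)  # the study reports ~9.2% at 12 months
```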
Results Among the 495 patients who were not treated prior to their cITP diagnosis date, 6.5% had a nadir platelet count within 90 days before their cITP diagnosis date of <30 × 10⁹/L; 13.3% had a count of 30-50 × 10⁹/L; 44% had a count of 50-150 × 10⁹/L; and 7.1% had a count >150 × 10⁹/L. Values were unavailable for 29% of patients. For those treated prior to their cITP diagnosis date, corresponding proportions were 19%, 9.8%, 24%, and 34%, respectively, with values unavailable for 14%. Bleeding events within one year before the cITP diagnosis date were reported in 32% of treated patients and in 5.3% of patients who were not treated before their cITP diagnosis date (Table 1). Changing the definition of cITP to ≥2 ITP diagnoses separated by >12 months reduced the cohort to 848 patients. The proportion of patients treated between their ITP and cITP diagnosis dates increased from 49% to 55%, and a bleeding-related hospital contact in the year before cITP diagnosis decreased from 5.3% to 1.8%. Discussion We found that 51% of all Danish patients diagnosed with cITP were treatment-naïve at the time of chronicity. In the 12 months after cITP diagnosis, 9.2% of previously untreated patients commenced therapy. More patients initiated corticosteroid or rituximab treatment than underwent splenectomy or initiated IVIG or TPO-RA treatment. The strengths of our study included its use of data from routine clinical practice. In the NCPRR, data are abstracted from healthcare registries enriched with medical record data serving a national population with uniform tax-supported access to healthcare. This eliminates selection bias stemming from selective inclusion of specific hospitals, insurance plans, or age groups. Furthermore, the data in these registries provide almost complete follow-up, thereby reducing information bias [8]. A weakness was the lack of information on treatment lines and treatment compliance, and the restriction to treatments within one year of cITP diagnosis. We relied on data from routine clinical care, in which platelet count may not always be measured regularly in stable patients. While most patients had a platelet count within 90 days before study inclusion, 21.5% of patients had no such measurement in that 90-day period (and almost 30% of the treatment-naïve patients). When we included all measurements within one year of cITP, the proportion of patients without a measurement dropped to 4.1% (and 1.7% for treated patients). Although we would have virtually complete data on platelet count if we extended the look-back period, we consider the 90-day window the most clinically relevant with regard to subsequent treatment. Our results are consistent with recommended treatment for cITP [5], and also in line with findings of recent studies from other countries. For example, Lee et al. reported that 31% of 10,814 Korean patients with primary ITP received treatment during 2010-2014 [10]. Data from the United Kingdom Immune Thrombocytopenia Registry showed that 80% of all patients were treated at any time during the course of their disease, most often with prednisolone (70%) and IVIG (13%) in the period 1990-2015 [11], and that most patients receiving romiplostim received it ≥1 year after first ITP diagnosis [12]. Similar findings from Sweden showed that 65% of 587 patients with cITP, defined as two ITP hospital-based diagnoses at least 12 months apart, were treated during the 2009-2014 study period, most often with corticosteroids and IVIG (personal communication, 2017).
We found that corticosteroids, IVIG, and rituximab are common first-choice ITP drugs [13]. In this large Danish population-based cohort study, only half of the cITP patients received ITP medication between ITP and cITP diagnosis, and the chance of initiating treatment in previously untreated patients within the first year of cITP was only 9%. Corticosteroids, IVIG, and rituximab were the most common first-choice ITP drugs. Bleeding events occurred in nearly one third of treated patients in the year before cITP diagnosis and in 5% of the treatment-naïve patients. A substantial number of patients do not need treatment during the first 6-12 months. However, some of these patients will subsequently need treatment as the disease may worsen, indicating the need for continuous follow-up of these patients.
2019-11-02T13:06:32.016Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "473cd6b518d4ecbacb656838c000840297bd58ee", "oa_license": "CCBY", "oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/16078454.2019.1685739?needAccess=true", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "e490d0818e9e5287d9fd4583c84fb001efc9345a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
267566408
pes2o/s2orc
v3-fos-license
Preliminary results of automatic cotton crops mapping using remote sensing data. The paper presents the results of application of a method for automatic generation of a representative and unbiased set for in-season cotton crop mapping, based on a crop simulation model previously parameterized using ground truth and satellite data. The method provided confident mapping of cotton fields without using actual ground-truth information or a-priori information about their in-season phenology. Overall mapping accuracy, calculated using relevant ground truth data for cotton fields, reached 95.6%. Consideration of time series of NDVI values as a model of phase characteristics allowed using relatively simple criteria to identify typical representatives of the selected crop on the basis of analysis of their seasonal phenology, and made it possible to build a reference sample for modeling and further classification. Introduction The cotton sector plays an important role in the economy of the Republic of Uzbekistan. The reforms implemented by the government in the cotton sector have become an important element of the country's planned development and its transition to a market economy [1]. The Republic of Uzbekistan has implemented comprehensive large-scale measures to improve the efficiency of the production process of seed cotton and to introduce highly effective technological process management systems that improve the properties of cotton products [2]. Timely information on crop mapping is of paramount importance for operational assessment of crop condition, crop rotation control, and yield forecasting. The use of remote sensing data has obvious advantages over traditional ground inventory methods, due to the prompt receipt of information, as well as the objectivity and spatial detail of assessments. Automatic methods of processing and analysis of satellite data allow minimizing material and labor costs, ensuring independence, timeliness, and repeatability of results. At the same time, this method is of particular relevance for the mapping of large areas of agricultural land in cotton-textile and agro-clusters in the Republic of Uzbekistan [3]. In particular, the largest cotton and textile cluster in the Republic of Uzbekistan, "Buxoro Agroklaster" LLC, has about 100 thousand hectares of acreage, 65,000 hectares of which are located in the Bukhara region. In the 2022 season, about 41,350 hectares of these areas, located in the Bukhara region, were allocated to cotton and 22,500 hectares to winter wheat; the remaining crops accounted for about 1,150 hectares [3]. At the same time, for the cotton-textile clusters it is important, first of all, to have operational data on the cultivation of the main raw material, cotton, both on their own lands and on contracted farmers' lands. In this regard, special attention was paid during this work to the possibility of mapping cotton crops.
Effective management of agricultural production processes, and in particular the mapping of crop areas over huge territories, is not possible without the use of automated remote sensing programs. Taking the above into account, since 2020 "Buxoro Agroklaster" LLC, with the assistance of "Cotton Research and Innovation Center" LLC and within the framework of a state grant from the Ministry of Higher Education, Science and Innovative Development of the Republic of Uzbekistan, has been developing a remote sensing system for cotton, wheat, and other agricultural crops. The "Agro Smart Map Uz" software package is being implemented [4]. The purpose of this software package is to develop and implement a new digital remote monitoring system for generating primary accounting data based on the digitalization of the agricultural sector and the automation of accounting processes, which together will reflect agricultural activities in such aspects as an inventory of agricultural land with the creation of a map of fields and crop rotations, agrochemical (agricultural chemical investigation) surveys and monitoring of the green mass index (NDVI), agroecological surveys (Scouting), analysis of weather conditions (Meteo), precision farming with differential application of seed, mineral fertilizers, and plant protection products (PPP), etc., as well as monitoring the movement of equipment and planning and auditing agrotechnical measures actually performed, with the formation of analytical data [5]. The scientific significance of the results of the ongoing research lies in the development of a single web platform that will allow, on the basis of information coming from remote sensing modules and from stationary and mobile devices, the formation of historical databases for each field on the readings of weather stations, annual crop rotations, NDVI indices and plant development, the condition of the soil and its fertilization with nutrients, the movement of equipment and material resources, and the planned and actually completed field work. The specified platform will also be equipped with a module, for the first time in practice, capable of generating statistical data in the context of administrative-territorial divisions (ATD: region, district, settlement), agricultural enterprises, and farms [5]. Modern methods of remote in-season crop mapping often involve the use of a-priori information about the timing of sowing, stages of development, and harvesting [6,7,8], or use stable differences in the seasonal dynamics of remotely measured plant characteristics [9]. Almost always, the advantages of time series of satellite images, which have proven to be effective for solving these problems, are used [6,7,8,9,10]. The use of a training sample describing the spatial and thematic variability of the characteristics of objects on the Earth's surface is a necessary condition for ensuring the required level of reliability of their mapping based on parametric and nonparametric classifiers. Obtaining timely and spatially distributed information on crop mapping from independent sources over large areas, as well as the accumulation of such data based on ground surveys or expert analysis of satellite images, is associated with significant organizational difficulties and financial and time costs, and is difficult to accomplish.
The paper presents a method for automatic generation of a representative and unbiased training set for cotton crop mapping, based on a crop simulation model previously parameterized using ground truth and satellite data for the 2022 crop season. The method provided confident mapping of cotton fields without using actual ground-truth information or a-priori information about their in-season phenology. Materials and methods Ground data were presented by the results of ground surveys of 108 fields of "Garden Buxoro Agroklaster" LLC (Figure 1), covering the 2022 season. Data included, in addition to the boundaries of the fields, the names of crops, the dates of onset of the main phenological phases, the dates of application of mineral or organic fertilizers, soil type, and other data. Bukhara region is a predominantly agricultural region, where almost all crops that are most important in terms of national gross harvest volumes are represented. The territory is "homogeneous from the point of view of GAES global agrostratification" (Fischer et al., 2012), which means minimal differences in soil and climatic conditions and agricultural practices in the study region. Two crops prevailed in the studied fields: 69 contours were occupied by cotton and 13 by winter wheat (Figure 1). The remaining studied fields were distributed as follows: 12 fields under fallow, 2 fields under corn, 2 fields under melon, 1 field under potatoes, 1 field under soybeans, and 6 fields under orchards (apple, apricot). These crops were not considered due to their insignificant number. Thus, two crops were studied in this work, cotton and winter wheat, whose total sown area amounts to more than 90% of the entire studied sown area of the region. Seasonal series of NDVI values derived from Sentinel-2A/B (MSI) high-spatial-resolution satellite images were used as mapping features for crop classification based on the created sample. Seasonal time series of NDVI values for cotton and winter wheat (excluding the placement of repeat crops) are presented in Figure 2. It is clear from the presented data that the normalized difference vegetation index (NDVI) values of cotton and winter wheat differ significantly from each other. This allows crops to be mapped with crop-specific models that include ranges of maximum and minimum NDVI values. The model was parameterized using ground and remote sensing information for the 2022 season for 108 fields within a single farm for cotton and wheat. To assess the capability of the proposed approach, the models were used only within the boundaries of the surveyed fields with ground-based crop information, where a comparison between the classes of the generated sample and the actual crop was possible.
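As a concrete reference for the mapping feature used here, NDVI is computed per pixel from the red and near-infrared reflectances; for Sentinel-2 MSI these are bands B04 and B08 at 10 m resolution. The sketch below (Python/NumPy; the band arrays are assumed to have already been read from the imagery, e.g. with a raster library) is a minimal illustration, with the per-field seasonal series then obtained by averaging NDVI over each field polygon for every acquisition date.

```python
# Minimal NDVI sketch; `red` and `nir` are reflectance arrays assumed to be
# read from Sentinel-2 bands B04 (red) and B08 (near-infrared).
import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """NDVI = (NIR - Red) / (NIR + Red), with values in [-1, 1]."""
    red = red.astype("float64")
    nir = nir.astype("float64")
    denom = nir + red
    # Guard against division by zero over no-data pixels
    return np.where(denom > 0, (nir - red) / denom, np.nan)
```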
At the next stage, once the model parameters for each of the considered crops were established, it became possible to simulate the process of phenological development of plant structural units and their accumulation of green biomass, taking into account the agrometeorological features of the current growing season. At the same time, it is necessary to have information about the sowing dates of the modeled crops. To establish whether a given field of agricultural vegetation belongs to one of the modeled crop classes, the NDVI measurements of that field were compared with the corresponding time series of model-predicted values. Examples of the correspondence of different seasonal time series of satellite and model NDVI values for five randomly selected cotton fields are presented in Figure 3. Model-based mapping results Regional crop mapping was performed based on the reference set obtained and described above, as well as on seasonal time series of high-spatial-resolution Sentinel-2A (MSI) multispectral satellite images. The satellite dataset used contained measurements of plant NDVI, which is used to estimate green plant biomass and is quite informative for crop mapping. The overall mapping accuracy was calculated based on the 69 fields under cotton. During mapping, 66 fields were recognized without errors, 1 field was not recognized, and 2 fields were mistakenly recognized (Table 1). Discussion Analysis of the mapping results shows that the proposed method, taking into account the predominance of cotton and wheat crop areas, can be successfully used for in-season mapping of agricultural crops based on satellite data and simulation of seasonal plant phenology. Small errors occur, and become larger, where there are sown areas of crops that have similar vegetative phenology. However, the absolute NDVI values of the main crops, cotton and winter wheat, are markedly different (see Figure 2), which makes their successful mapping possible; using additional parameters could further improve the method under the traditional conditions of predominance of wheat and cotton crops. With a hypothetical complete separation of these classes, the overall mapping accuracy could increase up to 99%. It should be noted that when mapping a particular crop, the phases of vegetative development are also an important factor. For example, when mapping Upland cotton varieties (Gossypium hirsutum), the vegetation phases may occur earlier than in Pima cotton (Gossypium barbadense), which in turn may lead to some deviations or distortions. At the same time, higher accuracy of crop mapping with reference to the phenology phases gives more reliable crop maps. This will allow information to be collected promptly from large areas under crops. Obtaining operational information on in-season crop mapping is in the highest demand. In this regard, it is promising to study the possibility of early construction of a reference set taking the vegetation phases into account.
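The paper does not state the exact similarity criterion used when comparing a field's measured NDVI series with the model-predicted curves, so the sketch below (Python/NumPy) uses root-mean-square error as one plausible choice: the field is assigned to the crop whose simulated curve is closest. The final line checks the reported accuracy against the error counts of Table 1.

```python
# Illustrative classifier: assign each field to the crop whose model-predicted
# NDVI curve best matches the observed series. RMSE is an assumed criterion
# (the paper does not specify its distance measure), and both series are
# assumed to be sampled on the same acquisition dates.
import numpy as np

def classify_field(observed: np.ndarray, model_curves: dict) -> str:
    def rmse(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.sqrt(np.mean((a - b) ** 2)))
    return min(model_curves, key=lambda crop: rmse(observed, model_curves[crop]))

# Consistency check with the reported results: 66 of 69 cotton fields correct
print(66 / 69)  # ~0.9565, i.e. the ~95.6% overall accuracy reported above
```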
Additional efforts are planned to be directed at changing the working conditions of the method in order to provide more rapid assessments, as well as at investigating the impact of these changes on the quality of the model and the mapping accuracy. Despite the fact that the study region is homogeneous from the point of view of the GAES global agrostratification, further work on large heterogeneous territories will require localized parameterization of the model, carried out using local ground survey data or expert interpretation of high-spatial-resolution satellite images and other auxiliary data. In addition, for the automatic application of the proposed approach over large areas, reliable methods of independent and timely determination of sowing and harvesting dates are needed, which can be based on the use of satellite, meteorological, and model indicators or their combinations. Conclusion Due to its relative simplicity and versatility, the proposed method may be promising for the development of low-cost and operational technologies for in-season crop mapping based on remotely sensed data, including over large areas. Its application requires a limited set of remote and ground data, necessary at the stages of simulation model tuning. Consideration of time series of model NDVI values as benchmarks of phase characteristics allows us to use simple criteria to map typical representatives of selected crops based on the analysis of the seasonal dynamics of spectral-reflectance characteristics and to construct a sufficiently accurate reference set for further mapping. This solution can be universal for mapping a wide range of crops over large areas, where the main problem is the impossibility of timely and simultaneous acquisition of reference data for the current growing season by other methods. We would also like to note that operational crop mapping can contribute to the effective use of the "Forecast of food balance of the country", which will allow the "Food Security" development strategy of the Republic of Uzbekistan to be adjusted while increasing the export-import potential of the country. Fig. 1. The research region, the boundaries of the fields with ground data, and the location of crops in the 2022 season. Fig. 2. Examples of the values predicted by the model and the NDVI values obtained on the basis of remote observations for the 2022 season. Fig. 3. Examples of correspondence of in-season time series of satellite and model NDVI values for five randomly selected cotton fields. The research was carried out under the grant of the Ministry of Higher Education, Science and Innovative Development of the Republic of Uzbekistan No. IZ-202010156 "Development of remote sensing system for cotton, wheat and other agricultural crops" with the use of resources of the Center for Digitalization of Agriculture under the Ministry of Agriculture of the Republic of Uzbekistan. Table 1. Error matrix of classification results based on the reference set obtained from the cotton model.
2024-02-09T16:13:32.489Z
2024-01-01T00:00:00.000
{ "year": 2024, "sha1": "c2951dabbbb85efd679de5d8dbfcc1d59086f156", "oa_license": "CCBY", "oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2024/16/e3sconf_agritech-ix2023_04009.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "3b33762ebcd1b58140763c13b2ba5f5043e7cf08", "s2fieldsofstudy": [ "Agricultural and Food Sciences", "Environmental Science" ], "extfieldsofstudy": [] }